Network Working Group                                         L. Dunbar
Internet Draft                                                Futurewei
Intended status: Informational                              B. Sarikaya
Expires: October 1, 2020                            Denpel Informatique
                                                          B. Khasnabish
                                                            Independent
                                                             T. Herbert
                                                                  Intel
                                                             S. Dikshit
                                                              Aruba-HPE
                                                          April 1, 2020

    Virtual Machine Mobility Solutions for L2 and L3 Overlay Networks
                         draft-ietf-nvo3-vmm-14

Abstract

   This document describes virtual machine mobility solutions commonly
   used in data centers built with overlay-based networks. It describes
   the solutions and the impact of moving VMs (or applications) from
   one rack to another when the racks are connected by overlay
   networks.

   For Layer 2, the solution is based on using an NVA (Network
   Virtualization Authority) - NVE (Network Virtualization Edge)
   protocol to update ARP (Address Resolution Protocol) tables or
   neighbor cache entries after a VM (virtual machine) moves from an
   Old NVE to a New NVE. For Layer 3, it is based on address and
   connection migration after the move.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on October 1, 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Conventions used in this document..............................4
   3. Requirements...................................................5
   4. Overview of the VM Mobility Solutions..........................6
      4.1. Inter-VNs and External Communication......................6
      4.2. VM Migration in Layer 2 Network...........................6
      4.3. VM Migration in Layer-3 Network...........................8
      4.4. Address and Connection Management in VM Migration.........9
   5. Handling Packets in Flight....................................10
   6. Moving Local State of VM......................................11
   7. Handling of Hot, Warm and Cold VM Mobility....................11
   8. Other Options.................................................12
   9. VM Lifecycle Management.......................................13
   10. Security Considerations......................................13
   11. IANA Considerations..........................................14
   12. Acknowledgments..............................................14
   13. Change Log...................................................14
   14. References...................................................14
      14.1. Normative References....................................15
      14.2. Informative References..................................16

1. Introduction

   This document describes how overlay-based data center network
   solutions support multitenancy and VM (Virtual Machine) mobility.
   Being able to move VMs dynamically, from one server to another,
   makes dynamic load balancing and work distribution possible.
   Therefore, dynamic VM mobility is highly desirable for large-scale
   multi-tenant DCs.

   This document is strictly within the DCVPN, as defined by the NVO3
   framework [RFC7365]. The intent is to describe Layer 2 and Layer 3
   network behavior when VMs are moved from one NVE to another. This
   document assumes that VM moves are initiated by the VM management
   system, i.e., they are planned moves. How and when to move VMs are
   out of the scope of this document. [RFC7666] describes the MIB for
   VMs controlled by a hypervisor.
   The impact of VM mobility on higher-layer protocols and applications
   is outside the scope of this document.

   Many large DCs (Data Centers), especially Cloud DCs, host tasks (or
   workloads) for multiple tenants. A tenant can be a department of one
   organization or an organization. There are communications among
   tasks belonging to one tenant and communications among tasks
   belonging to different tenants or with external entities.

   Server virtualization, which is used in almost all of today's data
   centers, enables many VMs to run on a single physical computer or
   server sharing the processor/memory/storage. Network connectivity
   among VMs is provided by the network virtualization edge (NVE)
   [RFC8014]. It is highly desirable [RFC7364] to allow VMs to be moved
   dynamically (live, hot, or cold move) from one server to another for
   dynamic load balancing or optimized work distribution.

   There are many challenges and requirements related to VM mobility in
   large data centers, including dynamically attaching VMs to and
   detaching VMs from Virtual Network Edges (VNEs). In addition,
   retaining IP addresses after a move is a key requirement [RFC7364].
   Such a requirement is needed in order to maintain existing transport
   connections.

   In traditional Layer-3 based networks, retaining IP addresses after
   a move is generally not recommended, because frequent moves cause IP
   address fragmentation, which introduces complexity in IP address
   management.

   In view of the many VM mobility schemes that exist today, there is a
   desire to document comprehensive VM mobility solutions that cover
   both IPv4 and IPv6. Large data center networks can be organized as
   one large Layer-2 network geographically distributed across several
   buildings or cities, or as Layer-3 networks with a large number of
   host routes that cannot be aggregated because VMs move frequently
   from one location to another without changing their IP addresses.
   The connectivity across Layer 2 boundaries can be achieved by the
   network virtualization edge (NVE) functioning as a Layer 3 gateway
   routing across bridging domains, such as in Warehouse Scale
   Computers (WSCs).

2. Conventions used in this document

   This document uses the terminology defined in [RFC7364]. In
   addition, we make the following definitions:

   VM:            Virtual Machine

   Tasks:         A task is a program instantiated or running on a
                  virtual machine or container. Tasks in virtual
                  machines or containers can be migrated from one
                  server to another. We use task, workload, and
                  virtual machine interchangeably in this document.

   Hot VM Mobility: A given VM could be moved from one server to
                  another in running state.

   Warm VM Mobility: In case of warm VM mobility, the VM states are
                  mirrored to the secondary server (or domain) at
                  predefined (configurable) regular intervals. This
                  reduces the overhead and complexity, but it may also
                  lead to a situation in which the two servers do not
                  contain exactly the same data (state information).

   Cold VM Mobility: A given VM could be moved from one server to
                  another in stopped or suspended state.

   Old NVE:       refers to the NVE to which packets for the VM were
                  forwarded before the migration.

   New NVE:       refers to the NVE to which the VM is attached after
                  the migration.

   Packets in flight: refers to packets that were sent by
                  correspondents still holding the old ARP or neighbor
                  cache entry and are therefore received by the Old NVE
                  during or after the VM or task migration.
   End user clients: Users of VMs in diskless systems or systems not
                  using configuration files are called end user
                  clients.

   Cloud DC:      Third-party data centers that host applications,
                  tasks, or workloads owned by different organizations
                  or tenants.

3. Requirements

   This section states requirements on data center network virtual
   machine mobility.

   The data center network should support both IPv4 and IPv6 VM
   mobility.

   Virtual machine (VM) mobility should not require changing VMs' IP
   addresses after the move.

   There is "Hot Migration", with the transport service continuing, and
   "Cold Migration", with the transport service restarted, i.e., the
   running task is stopped on the Old NVE, moved to the New NVE, and
   then restarted. Not all DCs support "Hot Migration". DCs that only
   support Cold Migration should make their customers aware of the
   potential service interruption during the Cold Migration.

   VM mobility solutions/procedures should minimize triangular routing
   except for handling packets in flight.

   VM mobility solutions/procedures should not need to use tunneling
   except for handling packets in flight.

4. Overview of the VM Mobility Solutions

   Layer 2 and Layer 3 mobility solutions are described respectively in
   the following sections.

4.1. Inter-VNs and External Communication

   Inter-VN (Virtual Network) communication refers to communication
   among tenants (or hosts) belonging to different VNs. Those tenants
   can be attached to NVEs co-located in the same data center or in
   different data centers. When a VM communicates with an external
   entity, the VM is effectively communicating with a peer in a
   different network or a globally reachable host.

   This document assumes that inter-VN communication and communication
   with external entities go through NVO3 Gateway functionality as
   described in Section 5.3 of RFC 8014 [RFC8014]. NVO3 Gateways relay
   traffic onto and off of a virtual network, enabling communication
   both across different VNs and with external entities.

   NVO3 Gateway functionality enforces appropriate policies to control
   communication among VNs and with external entities (e.g., hosts).

   Moving a VM to a new NVE may move the VM away from the NVO3
   Gateway(s) used by the VM's traffic; e.g., some traffic may be
   better handled by an NVO3 Gateway that is closer to the new NVE than
   the NVO3 Gateway that was used before the VM move. If NVO3 Gateway
   changes are not possible for some reason, then the VM's traffic can
   continue to use the prior NVO3 Gateway(s), which may have some
   drawbacks, e.g., longer network paths.

4.2. VM Migration in Layer 2 Network

   In a Layer-2 based approach, a VM moving to another NVE does not
   change its IP address. But because this VM is now under a new NVE,
   NVEs that were previously communicating with it may continue sending
   their packets to the Old NVE. Therefore, the Address Resolution
   Protocol (ARP) cache in IPv4 [RFC0826] or the neighbor cache in IPv6
   [RFC4861] in the NVEs with attached VMs communicating with the VM
   being moved needs to be updated promptly. If the VM being moved
   communicates with external entities, the NVO3 Gateway needs to be
   notified of the new NVE to which the VM has moved.

   In IPv4, immediately after the move the VM should send a gratuitous
   ARP request message containing its IPv4 address and Layer 2 MAC
   address under its new NVE. Upon receiving this message, the New NVE
   can update its ARP cache. The New NVE should send a notification of
   the newly attached VM to the central directory [RFC7067] embedded in
   the NVA to update the mapping of the IPv4 address and MAC address of
   the moving VM along with the New NVE address. An NVE-to-NVA protocol
   is used for this purpose [RFC8014]. The Old NVE, after a VM has
   moved away, should send an ARP scan to all its attached VMs to
   refresh its ARP cache.
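   As an informal illustration of this exchange, the Python sketch
   below shows a New NVE refreshing its ARP cache and notifying the NVA
   when a gratuitous ARP arrives from a newly attached VM. The
   update_mapping() call and the data structures are hypothetical
   placeholders; this document does not define the NVE-to-NVA protocol.

      # Sketch (assumed interfaces): New NVE behavior on receiving a
      # gratuitous ARP from a VM that has just moved in.
      from dataclasses import dataclass

      @dataclass
      class GratuitousArp:
          sender_ip: str    # IPv4 address announced by the VM
          sender_mac: str   # Layer 2 MAC address of the VM

      class NewNve:
          def __init__(self, nve_address, nva):
              self.nve_address = nve_address  # this NVE's underlay address
              self.nva = nva                  # handle to the NVA (hypothetical)
              self.arp_cache = {}             # IP -> MAC of attached VMs

          def on_gratuitous_arp(self, pkt: GratuitousArp):
              # Refresh the local ARP cache for the newly attached VM.
              self.arp_cache[pkt.sender_ip] = pkt.sender_mac
              # Tell the NVA's central directory [RFC7067] that this
              # VM's IPv4/MAC addresses now map to this NVE.
              self.nva.update_mapping(vm_ip=pkt.sender_ip,
                                      vm_mac=pkt.sender_mac,
                                      nve=self.nve_address)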
   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC0903], is not used by
   VMs if the VM already knows its IPv4 address (the most common
   scenario). Next, we describe a case where RARP is used.

   There are some vendor deployments (diskless systems or systems
   without configuration files) wherein the VM's user, i.e., the end-
   user client, asks for the same MAC address upon migration. This can
   be achieved by the client sending a RARP request message that
   carries the MAC address and asks for an IP address allocation. The
   server, in this case the New NVE, needs to communicate with the NVA,
   just as in the gratuitous ARP case, to ensure that the same IPv4
   address is assigned to the VM. The NVA uses the MAC address as the
   key to search its ARP cache for the corresponding IP address and
   informs the New NVE, which in turn sends a RARP reply message. This
   completes IP address assignment to the migrating VM.

   Other NVEs with attached VMs, or the NVO3 Gateway with external
   entities, communicating with this VM may still have the old ARP
   entry. To avoid old ARP entries being used by other NVEs, the Old
   NVE, upon discovering that a VM is detached, should send a
   notification to all other NVEs and its NVO3 Gateway to time out the
   ARP cache entry for the VM [RFC8171]. When an NVE (including the Old
   NVE) receives a packet or an ARP request destined to a VM (its MAC
   or IP address) that is not in the NVE's ARP cache, the NVE should
   send a query to the NVA's Directory Service to get the NVE address
   associated with the VM. This is how the Old NVE learns where to
   tunnel in-flight packets so that they reach the New NVE without
   packet loss.

   When the VM's address is IPv6, the operation is similar:

   In IPv6, after the move, the VM immediately sends an unsolicited
   neighbor advertisement message containing its IPv6 address and
   Layer-2 MAC address to its New NVE. This message is sent to the IPv6
   Solicited-Node Multicast Address corresponding to the target
   address, which is the VM's IPv6 address. The NVE receiving this
   message should send a request to update the VM's neighbor cache
   entry in the central directory of the NVA. The NVA's neighbor cache
   entry should include the IPv6 address of the VM, the MAC address of
   the VM, and the NVE IPv6 address. An NVE-to-NVA protocol is used for
   this purpose [RFC8014].

   To avoid other NVEs communicating with this VM using the old
   neighbor cache entry, the Old NVE upon discovering that a VM has
   been moved, or the VM management system that initiates the VM move,
   should send a notification to all NVEs to time out the ND cache
   entry for the VM being moved. When an ND cache entry for such a VM
   times out, its corresponding NVE should send a query to the NVA for
   an update.
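   The control flow in this section can be summarized by the following
   sketch of an NVA that learns a VM's new location and notifies
   interested NVEs. The directory structures and the push/timeout calls
   are assumptions made for illustration; they stand in for an
   NVE-to-NVA protocol [RFC8014] and the directory assistance
   mechanisms of [RFC8171], not a defined interface.

      # Sketch (assumed interfaces): NVA behavior when it learns that a
      # VM has moved to a new NVE.
      class Nva:
          def __init__(self):
              self.directory = {}   # vm_ip -> (vm_mac, nve)
              self.interested = {}  # vm_ip -> set of NVEs talking to it

          def vm_moved(self, vm_ip, vm_mac, new_nve):
              old_entry = self.directory.get(vm_ip)
              self.directory[vm_ip] = (vm_mac, new_nve)
              for nve in self.interested.get(vm_ip, set()):
                  # Ask each NVE communicating with this VM to time out
                  # its stale ARP (IPv4) or neighbor cache (IPv6) entry
                  # and learn the new mapping.
                  nve.timeout_cache_entry(vm_ip)
                  nve.push_mapping(vm_ip, vm_mac, new_nve)
              if old_entry is not None:
                  old_mac, old_nve = old_entry
                  # Let the Old NVE know where to tunnel packets in
                  # flight (see Section 5).
                  old_nve.set_forwarding(vm_ip, new_nve)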
By this 322 design, the NVE becomes the default route for all hosts (tasks) 323 within the subnet. But this design requires IP address of a host 324 (task) to change after the move to comply with the prefixes of the 325 IP address under the new NVE. 327 A VM migration in Layer 3 Network solution is to allow IP 328 addresses staying the same after moving to different locations. 329 The Identifier Locator Addressing or ILA [I-D.herbert-intarea-ila] 330 is one of such solutions. 332 Because broadcasting is not available in Layer-3 based networks, 333 multicast of neighbor solicitations in IPv6 and ARP for IPv4 would 334 need to be emulated. Scalability of the multicast (such as IPv6 ND 335 and IPv4 ARP) can become problematic because the hosts belonging 336 to one subnet (or one VLAN) can span across many NVEs. Sending 337 broadcast traffic to all NVEs can cause unnecessary traffic in the 338 DCN if the hosts belonging to one subnet are only attached to a 339 very small number of NVEs. It is preferable to have a directory 340 [RFC7067] or NVA to manage the updates to an NVE of the potential 341 other NVEs a specific subnet may be attached and get periodic 342 reports from an NVE of all the subnets being attached/detached, as 343 described by RFC8171. 345 Hot VM Migration in Layer 3 involves coordination among many 346 entities, such as VM management system and NVA. Cold task 347 migration, which is a common practice in many data centers, 348 involves the following steps: 350 - Stop running the task. 351 - Package the runtime state of the job. 352 - Send the runtime state of the task to the New NVE where the 353 task is to run. 354 - Instantiate the task's state on the new machine. 355 - Start the tasks for the task continuing from the point at which 356 it was stopped. 358 RFC7666 has the more detailed description of the State Machine of 359 VMs controlled by Hypervisor 361 4.4. Address and Connection Management in VM Migration 363 Since the VM attached to the New NVE needs to be assigned with the 364 same address as VM attached to the Old NVE, extra processing or 365 configuration is needed, such as: 367 - Configure IPv4/v6 address on the target VM/NVE. 368 - Suspend use of the address on the old NVE. This includes the 369 old NVE sending query to NVA upon receiving packets destined 370 towards the VM being moved away. If there is no response from 371 NVA for the new NVE for the VM, the old NVE can only drop the 372 packets. Referring to the VM State Machine described in 373 RFC7666. 374 - Trigger NVA to push the new NVE-VM mapping to other NVEs which 375 have the attached VMs communicating with the VM being moved. 377 Connection management for the applications running on the VM being 378 moved involves reestablishing existing TCP connections in the new 379 place. 381 The simplest course of action is to drop all TCP connections to 382 the applications running on the VM during a migration. If the 383 migrations are relatively rare events in a data center, impact is 384 relatively small when TCP connections are automatically closed in 385 the network stack during a migration event. If the applications 386 running are known to handle this gracefully (i.e. reopen dropped 387 connections) then this approach may be viable. 389 More involved approach to connection migration entails a proxy to 390 the application (or the application itself) to pause the 391 connection, package connection state and send to target, 392 instantiate connection state in the peer stack, and restarting the 393 connection. 
5. Handling Packets in Flight

   The Old NVE may receive packets from the VM's ongoing
   communications. These packets should not be lost; they should be
   sent to the New NVE to be delivered to the VM. The steps involved in
   handling packets in flight are as follows:

   Preparation Step: It takes some time, possibly a few seconds, for a
   VM to move from its Old NVE to a New NVE. During this period, a
   tunnel needs to be established so that the Old NVE can forward
   packets to the New NVE. The Old NVE gets the New NVE address from
   its NVA, assuming that the NVA is notified when a VM is moved from
   one NVE to another. Which entity manages the VM move and how the NVA
   is notified of the move are out of the scope of this document. The
   Old NVE can store the New NVE address for the VM with a timer. When
   the timer expires, the entry for the New NVE for the VM can be
   deleted.

   Tunnel Establishment - IPv6: In-flight packets are tunneled to the
   New NVE using an encapsulation protocol such as VXLAN [RFC7348] in
   IPv6.

   Tunnel Establishment - IPv4: In-flight packets are tunneled to the
   New NVE using an encapsulation protocol such as VXLAN [RFC7348] in
   IPv4.

   Tunneling Packets - IPv6: IPv6 packets received for the migrating VM
   are encapsulated in an IPv6 header at the Old NVE. The New NVE
   decapsulates the packet and sends the IPv6 packet to the migrating
   VM.

   Tunneling Packets - IPv4: IPv4 packets received for the migrating VM
   are encapsulated in an IPv4 header at the Old NVE. The New NVE
   decapsulates the packet and sends the IPv4 packet to the migrating
   VM.

   Stop Tunneling Packets: Tunneling stops when the timer for storing
   the New NVE address for the VM expires. The timer should be long
   enough for all other NVEs that need to communicate with the VM to
   get their NVE-VM cache entries updated.
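   The steps above can be illustrated with the following sketch of Old
   NVE behavior. The NVA lookup and the encapsulation call are
   hypothetical placeholders for the deployment's directory service and
   overlay encapsulation (e.g., VXLAN [RFC7348]); the forwarding
   lifetime is a local policy assumption.

      # Sketch (assumed interfaces): Old NVE tunneling of packets in
      # flight toward the New NVE, with a timer-limited forwarding
      # entry.
      import time

      FORWARDING_LIFETIME = 60.0  # seconds; should exceed the time
                                  # needed for other NVEs to update
                                  # their NVE-VM cache entries

      class OldNve:
          def __init__(self, nva, underlay):
              self.nva = nva            # directory service (hypothetical)
              self.underlay = underlay  # encap/send interface (hypothetical)
              self.moved_vms = {}       # vm_ip -> (new_nve, expiry)

          def on_packet_for_detached_vm(self, vm_ip, packet):
              entry = self.moved_vms.get(vm_ip)
              if entry is None or entry[1] < time.monotonic():
                  # No valid cached location: query the NVA's directory.
                  new_nve = self.nva.lookup(vm_ip)
                  if new_nve is None:
                      return            # no mapping found: drop the packet
                  entry = (new_nve, time.monotonic() + FORWARDING_LIFETIME)
                  self.moved_vms[vm_ip] = entry
              # Tunnel the in-flight packet to the New NVE; IPv4 and
              # IPv6 differ only in the outer header address family.
              self.underlay.encapsulate_and_send(entry[0], packet)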
6. Moving Local State of VM

   In addition to the VM mobility related signaling (VM Mobility
   Registration Request/Reply), the VM state needs to be transferred to
   the New NVE. The state includes its memory and file system if the VM
   cannot access the memory and the file system after moving to the New
   NVE.

   The mechanism for transferring VM state and file system is out of
   the scope of this document. Refer to [RFC7666] for detailed
   information.

7. Handling of Hot, Warm and Cold VM Mobility

   Both Cold and Warm VM mobility (or migration) refer to the VM being
   completely shut down at the Old NVE before being restarted at the
   New NVE. Therefore, all transport services to the VM are restarted.

   In this document, all VM mobility is initiated by the VM management
   system. Cold VM mobility only exchanges the needed state between the
   Old NVE and the New NVE after the VM attached to the Old NVE is
   completely shut down. There is a time delay before the new VM is
   launched. The cold mobility option can be used for non-critical
   applications and services that can tolerate interrupted TCP
   connections.

   Warm VM mobility refers to having the functional components under
   the new NVE receive the running status of the VM at frequent
   intervals, so that it takes less time to launch the VM under the new
   NVE, and other NVEs that communicate with the VM can be notified of
   the VM move more promptly. The duration of the interval determines
   the effectiveness (or benefit) of Warm VM mobility. The larger the
   duration, the less effective the Warm VM mobility option becomes.

   For Hot VM Mobility, once a VM moves to a New NVE, the VM IP address
   does not change, and the VM should be able to continue to receive
   packets to its address(es). The VM needs to send a gratuitous
   Address Resolution message or an unsolicited Neighbor Advertisement
   message upstream after each move.

   Upon starting at the New NVE, the VM should send an ARP or Neighbor
   Discovery message. Cold VM mobility also allows the Old NVE and all
   communicating NVEs to time out the ARP/neighbor cache entries of the
   VM. It is necessary for the NVA to push the updated ARP/neighbor
   cache entry to NVEs or for NVEs to pull the updated ARP/neighbor
   cache entry from the NVA.

8. Other Options

   Hot VM mobility enables uninterrupted running of the application or
   workload instantiated on the VM when the VM's running conditions
   change, such as utilization overload, changes in hardware running
   condition, or others. Hot, Warm, and Cold mobility are planned
   activities that are managed by the VM management system.

   For unexpected events, such as an unexpected failure, a VM might
   need to move to a new NVE, which is called Hot VM Failover in this
   document. For Hot VM Failover, there are redundant primary and
   secondary VMs whose states are synchronized by means that are
   outside the scope of this draft. If the VM in the primary NVE fails,
   there is no need to actively move the VM to the secondary NVE
   because the VM in the secondary NVE can immediately pick up the
   processing.

   The VM Failover to the new NVE is transparent to the peers that
   communicate with this VM. This can be achieved by having both the
   active VM and the standby VM share the same TCP port and the same IP
   address, and by using distributed load balancing functionality that
   controls which VM responds to each service request. In the absence
   of a failure, the new VM can start providing service while the
   sender (peer) still continues to receive ACKs from the old VM. If
   the situation (loading condition of the primary responding VM)
   changes, the secondary responding VM may start providing service to
   the sender (peers). On failure, the sender (peer) may have to retry
   the request, so this structure is limited to requests that can be
   safely retried.

   If load balancing functionality is not used, the VM Failover can be
   made transparent to the sender (peers) without relying on request
   retry by using techniques described in Section 4 that do not depend
   on the primary VM or its associated NVE doing anything after the
   failure. This restriction is necessary because a failure that
   affects the primary VM may also cause its associated NVE to fail
   (e.g., if the NVE is located in the hypervisor that hosts the
   primary VM and the underlying physical server fails, both the
   primary VM and the hypervisor that contains the NVE fail as a
   consequence).

   The Hot VM Failover option is the costliest mechanism, and hence
   this option is utilized only for mission-critical applications and
   services.
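   The distributed load balancing behavior described in this section
   can be sketched as follows; the health signaling and the state
   synchronization between the primary and secondary VMs are assumed to
   exist and are outside the scope of this document.

      # Sketch: choose which of two VM instances (sharing the same IP
      # address and TCP port) answers each service request.
      class FailoverBalancer:
          def __init__(self, primary, secondary):
              self.instances = [primary, secondary]  # preference order
              self.failed = set()

          def mark_failed(self, instance):
              self.failed.add(instance)

          def select_backend(self):
              # Pick the most-preferred instance not known to have
              # failed; requests are assumed to be safely retryable, so
              # a request lost at the moment of failure can be re-sent.
              for instance in self.instances:
                  if instance not in self.failed:
                      return instance
              raise RuntimeError("no healthy instance for this service")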
9. VM Lifecycle Management

   VM lifecycle management is a complicated task that is beyond the
   scope of this document. Not only does it involve monitoring server
   utilization and balancing the distribution of workloads, it also
   needs to seamlessly manage VM migration from one server to another.

10. Security Considerations

   Security threats for the data and control planes of overlay networks
   are discussed in [RFC8014]. ARP (IPv4) and ND (IPv6) are not secure,
   especially if they can be sent gratuitously across tenant boundaries
   in a multi-tenant environment.

   In overlay data center networks, ARP and ND messages can be used to
   mount address spoofing attacks from untrusted VMs and/or other
   untrusted sources. Examples of untrusted VMs include VMs running
   third-party applications (i.e., applications not written by the
   tenant who controls the VM). Those untrusted VMs can send falsified
   ARP (IPv4) and ND (IPv6) messages, causing the NVE, NVO3 Gateway,
   and NVA to be overwhelmed and unable to perform legitimate
   functions. An attacker can intercept, modify, or even drop in-
   transit ARP/ND messages intended for other VNs and initiate DDoS
   attacks against other VMs attached to the same NVE. A simple black-
   hole attack can be mounted by sending a falsified ARP/ND message to
   indicate that the victim's IP address has moved to the attacker's
   VM. That technique can also be used to mount man-in-the-middle
   attacks with some more effort to ensure that the intercepted traffic
   is eventually delivered to the victim.

   The locator-identifier mechanism given as an example (ILA) doesn't
   include secure binding; it doesn't discuss how to securely bind the
   new locator to the identifier.

   Because of those threats, the VM management system needs to apply
   stronger security mechanisms when adding a VM to an NVE. Some
   tenants may have requirements that prohibit their VMs from being
   attached to NVEs shared with other tenants. Some data centers deploy
   additional functionality in their NVO3 Gateways to mitigate ARP/ND
   threats, e.g., periodically sending each Gateway's ARP/ND cache
   contents to the NVA or another central control system to check for
   ARP/ND cache entries that are not consistent with the locations of
   VMs and their IP addresses indicated by the VM management system.

11. IANA Considerations

   This document makes no request to IANA.

12. Acknowledgments

   The authors are grateful to Bob Briscoe, David Black, Dave R.
   Worley, Qiang Zu, and Andrew Malis for helpful comments.

13. Change Log

   . submitted version -00 as a working group draft after adoption

   . submitted version -01 with these changes: references are updated,
   o added packets in flight definition to Section 2

   . submitted version -02 with updated address.

   . submitted version -03 to fix the nits.

   . submitted version -04 in reference to the WG Last Call comments.

   . Submitted versions -05, -06, -07, -08, -09, -10, -11, and -12 to
     address IETF LC comments from the TSV area.
14. References

14.1. Normative References

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol: Or
              Converting Network Protocol Addresses to 48.bit Ethernet
              Address for Transmission on Ethernet Hardware", STD 37,
              RFC 826, DOI 10.17487/RFC0826, November 1982.

   [RFC0903]  Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A
              Reverse Address Resolution Protocol", STD 38, RFC 903,
              DOI 10.17487/RFC0903, June 1984.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2629]  Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
              DOI 10.17487/RFC2629, June 1999.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007.

   [RFC7067]  Dunbar, L., Eastlake, D., Perlman, R., and I. Gashinsky,
              "Directory Assistance Problem and High-Level Design
              Proposal", RFC 7067, November 2013.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
              L., Sridhar, T., Bursell, M., and C. Wright, "Virtual
              eXtensible Local Area Network (VXLAN): A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014.

   [RFC7364]  Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
              Kreeger, L., and M. Napierala, "Problem Statement:
              Overlays for Network Virtualization", RFC 7364, DOI
              10.17487/RFC7364, October 2014.

   [RFC7666]  Asai, H., et al., "Management Information Base for
              Virtual Machines Controlled by a Hypervisor", RFC 7666,
              October 2015.

   [RFC8014]  Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
              Narten, "An Architecture for Data-Center Network
              Virtualization over Layer 3 (NVO3)", RFC 8014, DOI
              10.17487/RFC8014, December 2016.

   [RFC8171]  Eastlake, D., Dunbar, L., Perlman, R., and Y. Li, "Edge
              Directory Assistance Mechanisms", RFC 8171, June 2017.

14.2. Informative References

   [I-D.herbert-intarea-ila]
              Herbert, T. and P. Lapukhov, "Identifier-locator
              addressing for IPv6", draft-herbert-intarea-ila-04 (work
              in progress), March 2017.

   [RFC7365]  Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for Data Center (DC) Network
              Virtualization", RFC 7365, DOI 10.17487/RFC7365, October
              2014.

Authors' Addresses

   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Behcet Sarikaya
   Denpel Informatique
   Email: sarikaya@ieee.org

   Bhumip Khasnabish
   Independent
   Email: vumip1@gmail.com

   Tom Herbert
   Intel
   Email: tom@herbertland.com

   Saumya Dikshit
   Aruba-HPE
   Bangalore, India
   Email: saumya.dikshit@hpe.com