Network Working Group                                          L. Dunbar
Internet Draft                                                 Futurewei
Intended status: Informational                               B. Sarikaya
Expires: May 18, 2020                                Denpel Informatique
                                                           B. Khasnabish
                                                             Independent
                                                              T. Herbert
                                                                   Intel
                                                              S. Dikshit
                                                               Aruba-HPE
                                                       November 18, 2019

   Virtual Machine Mobility Solutions for L2 and L3 Overlay Networks
                         draft-ietf-nvo3-vmm-06

Abstract

   This document discusses Virtual Machine (VM) mobility solutions
   commonly used in overlay-based Data Center (DC) networks.  The
   objective is to describe the solutions and their impact when VMs
   (and applications) move from one rack to another across an overlay
   network.

   For Layer 2 networks, the solutions rely on an NVA (Network
   Virtualization Authority) to NVE (Network Virtualization Edge)
   protocol to update ARP (Address Resolution Protocol) tables or
   neighbor cache entries after a VM moves from an Old NVE to a New
   NVE.  For Layer 3, the solutions rely on migration of the address
   and of the connections after the move.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on May 18, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Requirements
   4. Overview of the VM Mobility Solutions
      4.1. VM Migration in Layer-2 Network
      4.2. Task Migration in Layer-3 Network
         4.2.1. Address and Connection Migration in Task Migration
   5. Handling Packets in Flight
   6. Moving Local State of VM
   7. Handling of Hot, Warm and Cold VM Mobility
   8. VM Operation
   9. Security Considerations
   10. IANA Considerations
   11. Acknowledgments
   12. Change Log
   13. References
      13.1. Normative References
      13.2. Informative References

1. Introduction

   This document describes overlay-based DC networking solutions that
   support multi-tenancy and VM mobility.  Many large DCs, especially
   Cloud DCs, host tasks (or workloads) for multiple tenants.  A
   tenant can be an organization or a department of an organization.
   Tasks belonging to one tenant communicate among themselves, and
   tasks belonging to different tenants communicate with each other or
   with external entities.

   Server virtualization, which is used in almost all of today's DCs,
   enables many VMs to run on a single physical server while sharing
   its processor, memory, and storage.  Network connectivity among VMs
   is provided by the Network Virtualization Edge (NVE) [RFC8014].  It
   is highly desirable [RFC7364] to allow VMs to move dynamically
   (live, hot, or cold move) from one server to another for dynamic
   load balancing or optimized workload distribution.

   There are many challenges and requirements related to VM mobility
   in large data centers, including dynamically attaching/detaching
   VMs to/from Network Virtualization Edges (NVEs).
   In addition, retaining IP addresses after a move is a key
   requirement [RFC7364]; it is needed in order to maintain existing
   transport connections.

   In traditional Layer-3 based networks, retaining IP addresses after
   a move is generally not recommended, because frequent moves cause
   fragmented IP address allocations, which complicates IP address
   management.

   In view of the many VM mobility schemes that exist today, there is
   a need to document comprehensive VM mobility solutions that cover
   both IPv4 and IPv6.  Large DC networks can be organized as (a) one
   large Layer-2 network geographically distributed across buildings
   or cities, or (b) Layer-3 networks carrying a large number of host
   routes that cannot be aggregated, because VMs move frequently from
   one location to another without changing their IP addresses.

   Connectivity across Layer-2 boundaries can be achieved by the NVE
   functioning as a Layer-3 gateway that routes across bridging
   domains, such as in Warehouse Scale Computers (WSC).

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   This document uses the terminology defined in [RFC7364] and
   [RFC8014].  In addition, we make the following definitions:

   VM:  Virtual Machine.

   Task:  A task is a program instantiated or running on a virtual
        machine or container.  Tasks in virtual machines or containers
        can be migrated from one server to another.  We use task,
        workload, and virtual machine interchangeably in this
        document.

   Hot VM Mobility:  A given VM can be moved from one server to
        another while in the running state.

   Warm VM Mobility:  The VM states are mirrored to a secondary server
        (or domain) at predefined (configurable) regular intervals.
        This reduces overhead and complexity, but it may also lead to
        a situation where the two servers do not contain exactly the
        same data (state information).

   Cold VM Mobility:  A given VM can be moved from one server to
        another while in the stopped or suspended state.

   Old NVE:  The NVE to which packets were forwarded before the
        migration.

   New NVE:  The NVE serving the VM after the migration.

   Packets in flight:  Packets sent by correspondents that still hold
        the old ARP or neighbor cache entry, and that are therefore
        received by the Old NVE after the VM or task has migrated.

   Users of VMs in diskless systems, or in systems that do not use
   configuration files, are called end-user clients.

   Cloud DC:  Third-party DCs that host applications, tasks, or
        workloads owned by different organizations or tenants.

3. Requirements

   This section states VM mobility requirements on DC networks.

   DC networks should support both IPv4 and IPv6 VM mobility.

   VM mobility should not require VMs to change their IP addresses
   after a move.

   There is "Hot Migration", in which transport service continuity is
   maintained, and "Cold Migration", in which the transport service
   needs to be restarted, i.e., execution of the tasks is stopped on
   the Old NVE, moved to the New NVE, and then the task is restarted.

   VM mobility solutions/procedures should minimize triangular routing
   except while handling packets in flight.
   VM mobility solutions/procedures should not need to use tunneling
   except while handling packets in flight.

4. Overview of the VM Mobility Solutions

   Layer-2 and Layer-3 mobility solutions are described in the
   following sections.

4.1. VM Migration in Layer-2 Network

   The ability to move VMs dynamically from one server to another
   enables dynamic load balancing and workload distribution, so this
   scheme is highly desirable in large-scale multi-tenant DCs.

   In a Layer-2 based VM migration approach, a VM moving to another
   server does not change its IP address.  But since this VM is now
   under a new NVE, NVEs that were previously communicating with it
   will continue sending their packets to the Old NVE.  To solve this
   problem, the ARP cache in IPv4 [RFC0826] or the neighbor cache in
   IPv6 [RFC4861] in the NVEs needs to be updated promptly.  All NVEs
   need to change their cache entries, associating the VM's Layer-2 or
   Medium Access Control (MAC) address with the new NVE's IP address,
   as soon as the VM moves.  Such a change enables all NVEs to
   encapsulate outgoing MAC frames with the current target NVE IP
   address.  It may take some time to refresh the ARP/ND caches after
   a VM has moved to a New NVE.  During this period, a tunnel is
   needed for the Old NVE to forward packets destined to the VM now
   under the New NVE.

   In the case of IPv4, immediately after the move, the VM should send
   a gratuitous ARP request message containing its IPv4 address and
   Layer-2 MAC address to its new NVE.  This message's destination
   address is the broadcast address.  Upon receiving this message,
   both the old and new NVEs should update the VM's ARP entry in the
   central directory at the NVA so that the mapping records the IPv4
   address and MAC address of the moving VM along with the new NVE's
   IPv4 address.  An NVE-to-NVA protocol is used for this purpose
   [RFC8014].

   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC0903], is not used by
   VMs because a VM already knows its IPv4 address.  Next, we describe
   a case where RARP is used.

   There are some vendor deployments (e.g., diskless systems or
   systems without configuration files) where the VM's user, i.e., the
   end-user client, asks for the same MAC address upon migration.
   This can be achieved by the client sending a RARP request message
   carrying the MAC address and asking for an IP address allocation.
   The server, in this case the new NVE, needs to communicate with the
   NVA, just as in the gratuitous ARP case, to ensure that the same
   IPv4 address is assigned to the VM.  The NVA uses the MAC address
   as the key to search its ARP cache for the corresponding IP address
   and returns it to the new NVE, which in turn sends the RARP reply
   message.  This completes IP address assignment to the migrating VM.

   Other NVEs communicating with this VM could still have the old ARP
   entry.  If any VMs behind those NVEs need to communicate with the
   VM attached to the new NVE, the old ARP entries might be used, and
   the packets would be delivered to the old NVE.  The old NVE MUST
   tunnel these in-flight packets to the new NVE.

   When the ARP entry for such a VM times out, its corresponding NVE
   should access the NVA for an update.
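   As an illustration of the gratuitous ARP step described above, the
   following sketch builds such a frame in Python using only the
   standard library.  It is a minimal, illustrative example rather
   than part of any specified protocol machinery; the MAC and IPv4
   addresses are hypothetical, and actually transmitting the frame
   would additionally require a raw (AF_PACKET) socket on the server.

      import struct

      def build_gratuitous_arp(vm_mac: bytes, vm_ipv4: bytes) -> bytes:
          # Gratuitous ARP request: sender and target protocol
          # addresses are both the VM's own IPv4 address, and the
          # Ethernet destination is the broadcast address.
          broadcast = b"\xff" * 6
          eth = broadcast + vm_mac + struct.pack("!H", 0x0806)
          arp = struct.pack(
              "!HHBBH6s4s6s4s",
              1,            # hardware type: Ethernet
              0x0800,       # protocol type: IPv4
              6, 4,         # hardware/protocol address lengths
              1,            # opcode: request
              vm_mac, vm_ipv4,        # sender MAC / IPv4
              b"\x00" * 6, vm_ipv4,   # target MAC (ignored) / IPv4
          )
          return eth + arp

      # Hypothetical VM: MAC 02:00:00:aa:bb:cc, IPv4 192.0.2.10
      frame = build_gratuitous_arp(bytes.fromhex("020000aabbcc"),
                                   bytes([192, 0, 2, 10]))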
   IPv6 operation is slightly different:

   In IPv6, immediately after the move, the VM sends an unsolicited
   Neighbor Advertisement message containing its IPv6 address and
   Layer-2 MAC address to its new NVE.  Per [RFC4861], this message is
   sent to the all-nodes multicast address.  The NVE receiving this
   message should send a request to the NVA to update the VM's
   neighbor cache entry in the central directory.  The NVA's neighbor
   cache entry should include the IPv6 address of the VM, the MAC
   address of the VM, and the new NVE's IPv6 address.  An NVE-to-NVA
   protocol is used for this purpose [RFC8014].

   Other NVEs communicating with this VM might still use the old
   neighbor cache entry.  If any VM behind those NVEs needs to
   communicate with the VM attached to the new NVE, it could use the
   old neighbor cache entry, and the packets would be delivered to the
   old NVE.  The old NVE MUST tunnel these in-flight packets to the
   new NVE.

   When the neighbor cache entry for such a VM times out, its
   corresponding NVE should access the NVA for an update.

4.2. Task Migration in Layer-3 Network

   Layer-2 based DC networks quickly become prohibitive because
   ARP/neighbor caches do not scale.  Scaling can be accomplished
   seamlessly in Layer-3 data center networks by giving each virtual
   network an IP subnet and a default route that points to its NVE.
   This avoids the explosion of ARP/neighbor cache entries in VMs and
   NVEs (just one ARP/neighbor cache entry for the default route), and
   there is no need to carry an Ethernet header in the encapsulation
   [RFC7348], which saves at least 16 bytes.

   Even though the terms VM and task are used interchangeably in this
   document, the term task is used in the context of Layer-3 migration
   mainly to emphasize that the entity being moved is a task
   instantiated on a VM or in a container.

   Traditional Layer-3 based DC networks require the IP address of a
   task to change after a move, because IP address prefixes usually
   reflect location.  It is necessary to have an IP-based VM migration
   solution that allows IP addresses to stay the same after VMs move
   to different locations.  Identifier Locator Addressing (ILA)
   [I-D.herbert-nvo3-ila] is one such solution.

   Because broadcast is not available in Layer-3 based networks,
   multicast of neighbor solicitations in IPv6 would need to be
   emulated.

   Cold task migration, which is a common practice in many data
   centers, involves the following steps:

   -  Stop running the task.
   -  Package the runtime state of the task.
   -  Send the runtime state of the task to the New NVE where the task
      is to run.
   -  Instantiate the task's state on the new machine.
   -  Start the task, continuing it from the point at which it was
      stopped.

   Address migration and connection migration in moving tasks or VMs
   are addressed next.

4.2.1. Address and Connection Migration in Task Migration

   Address migration is achieved as follows; a toy sketch of the
   mapping push in the last step appears after this list:

   -  Configure the IPv4/IPv6 address on the target task.
   -  Suspend use of the address on the old task.  This includes
      handling established connections.  A state may be installed to
      drop packets, or to send ICMPv4 or ICMPv6 Destination
      Unreachable messages, when packets to the migrated address are
      received.
   -  Push the new mapping to communicating VMs.  They learn of the
      new mapping via a control plane, either by participating in a
      protocol for mapping propagation or by getting the new mapping
      from a central database such as the Domain Name System (DNS).
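   The sketch below models the "push the new mapping" step: a central
   directory (playing the NVA role) keeps identifier-to-locator
   mappings and pushes every change to subscribed NVEs.  This is a
   minimal illustration under assumed names; the class, the versioning
   scheme, and the addresses are all hypothetical, not part of any
   specified protocol.

      from dataclasses import dataclass

      @dataclass
      class Mapping:
          vm_ip: str    # identifier: stable across moves
          nve_ip: str   # locator: changes when the task moves
          version: int  # increases on each move; stale pushes lose

      class MappingDirectory:
          # Toy central directory holding identifier-to-locator
          # mappings and pushing updates to subscribed NVEs.
          def __init__(self):
              self.table = {}        # vm_ip -> Mapping
              self.subscribers = []  # callables invoked on updates

          def update(self, vm_ip: str, new_nve_ip: str):
              old = self.table.get(vm_ip)
              version = old.version + 1 if old else 1
              mapping = Mapping(vm_ip, new_nve_ip, version)
              self.table[vm_ip] = mapping
              for push in self.subscribers:
                  push(mapping)  # push the new mapping to each NVE

      # Hypothetical task 2001:db8::10 moving between two NVEs.
      directory = MappingDirectory()
      directory.subscribers.append(lambda m: print("push:", m))
      directory.update("2001:db8::10", "203.0.113.1")  # before move
      directory.update("2001:db8::10", "203.0.113.2")  # after move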
   Connection migration involves reestablishing the task's existing
   TCP connections in the new place.

   The simplest course of action is to drop all TCP connections to the
   VM across a migration.  If migrations are relatively rare events in
   a data center, the impact is relatively small when TCP connections
   are automatically closed in the network stack during a migration
   event.  If the applications running on the VM are known to handle
   this gracefully (i.e., to reopen dropped connections), then this
   approach may be viable.

   A more involved approach to connection migration entails pausing
   the connection, packaging the connection state and sending it to
   the target, instantiating the connection state in the stack at the
   target, and restarting the connection.  From the time the
   connection is paused to the time it is running again in the new
   stack, packets received for the connection could be silently
   dropped.  For some period of time, the old stack will need to keep
   a record of the migrated connection.  If it receives a packet, it
   can either silently drop the packet or forward it to the new
   location, as described in Section 5.

5. Handling Packets in Flight

   The Old NVE may receive packets from the VM's ongoing
   communications.  These packets should not be lost; they should be
   sent to the New NVE to be delivered to the VM.  The steps involved
   in handling packets in flight are as follows, and a small model of
   this behavior is sketched after this list:

   Preparation Step:  It takes some time, possibly a few seconds, for
      a VM to move from its Old NVE to a New NVE.  During this period,
      a tunnel needs to be established so that the Old NVE can forward
      packets to the New NVE.  The Old NVE obtains the New NVE's
      address from the NVA in the request to move the VM.  The Old NVE
      can store the New NVE's address for the VM with a timer; when
      the timer expires, the entry can be deleted.

   Tunnel Establishment - IPv6:  In-flight packets are tunneled to the
      New NVE using an encapsulation protocol such as VXLAN in IPv6.

   Tunnel Establishment - IPv4:  In-flight packets are tunneled to the
      New NVE using an encapsulation protocol such as VXLAN in IPv4.

   Tunneling Packets - IPv6:  IPv6 packets received for the migrating
      VM are encapsulated in an IPv6 header at the Old NVE.  The New
      NVE decapsulates the packet and sends the IPv6 packet to the
      migrating VM.

   Tunneling Packets - IPv4:  IPv4 packets received for the migrating
      VM are encapsulated in an IPv4 header at the Old NVE.  The New
      NVE decapsulates the packet and sends the IPv4 packet to the
      migrating VM.

   Stop Tunneling Packets:  Tunneling stops when the timer holding the
      New NVE's address for the VM expires.  The timer should be long
      enough for all other NVEs that need to communicate with the VM
      to get their NVE-VM cache entries updated.
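   The following sketch models the Old NVE behavior above: a
   forwarding entry with a timer is installed when the VM moves, in-
   flight packets are re-encapsulated toward the New NVE while the
   timer runs, and the entry is removed afterwards.  It is an
   illustrative model only; the class and helper names, the lifetime
   value, and the addresses are assumptions, and real encapsulation
   (e.g., VXLAN) is reduced to a stub.

      import time

      TUNNEL_LIFETIME = 30.0  # seconds; should exceed the worst-case
                              # time for peers to refresh their caches

      def tunnel_to(nve_ip: str, packet: bytes):
          # Stub: would encapsulate (e.g., VXLAN) toward nve_ip.
          print(f"tunneled {len(packet)} bytes to {nve_ip}")

      class OldNveForwarder:
          def __init__(self):
              self.moved = {}  # vm_ip -> (new_nve_ip, expiry)

          def vm_moved(self, vm_ip: str, new_nve_ip: str):
              # New NVE address learned from the NVA in the move
              # request (Preparation Step).
              expiry = time.monotonic() + TUNNEL_LIFETIME
              self.moved[vm_ip] = (new_nve_ip, expiry)

          def handle_packet(self, dst_vm_ip: str, packet: bytes):
              entry = self.moved.get(dst_vm_ip)
              if entry is None:
                  return False  # VM not moved; deliver locally
              new_nve_ip, expiry = entry
              if time.monotonic() > expiry:
                  del self.moved[dst_vm_ip]  # Stop Tunneling Packets
                  return False
              tunnel_to(new_nve_ip, packet)  # Tunneling Packets
              return True

      fwd = OldNveForwarder()
      fwd.vm_moved("192.0.2.10", "203.0.113.2")
      fwd.handle_packet("192.0.2.10", b"in-flight payload")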
6. Moving Local State of VM

   In addition to the VM mobility related signaling (VM Mobility
   Registration Request/Reply), the VM state needs to be transferred
   to the New NVE.  The state includes the VM's memory and file system
   if the VM cannot access them after moving to the New NVE.  The Old
   NVE opens a TCP connection with the New NVE over which the VM's
   memory state is transferred.

   The file system or local storage is more complicated to transfer.
   The transfer should ensure consistency, i.e., the VM at the New NVE
   should find the same file system it had at the Old NVE.  Pre-
   copying is a commonly used technique for transferring the file
   system.  First the whole disk image is transferred while the VM
   continues to run.  After the VM has moved, any changes in the file
   system are packaged together and sent to the New NVE's hypervisor,
   which applies these changes to the file system locally at the
   destination.

7. Handling of Hot, Warm and Cold VM Mobility

   Both Cold and Warm VM mobility (migration) refer to the VM being
   completely shut down at the Old NVE before being restarted at the
   New NVE.  Therefore, all transport services to the VM need to be
   restarted.

   Upon starting at the New NVE, the VM should send an ARP or Neighbor
   Discovery message.  Cold VM mobility also allows the Old NVE and
   all communicating NVEs to time out their ARP/neighbor cache entries
   for the VM.  It is then necessary for the NVA to push the updated
   ARP/neighbor cache entry to the NVEs, or for the NVEs to pull the
   updated entry from the NVA.

   Cold VM mobility can be facilitated by a cold standby entity that
   receives scheduled backup information.  The cold standby entity can
   be a VM or another form factor, which is beyond the scope of this
   document.  The cold mobility option can be used for non-critical
   applications and services that can tolerate interrupted TCP
   connections.

   Warm VM mobility means that the backup entities receive backup
   information at more frequent intervals.  The duration of the
   interval determines the warmth of the option: the longer the
   interval, the less warm (and hence the colder) the Warm VM mobility
   option becomes.

   There is also a Hot Standby option in addition to Hot Mobility, in
   which VMs are present on both the primary and secondary NVEs.  They
   have identical information and can provide services simultaneously,
   as in a load-sharing mode of operation.  If the VM on the primary
   NVE fails, there is no need to actively move the VM to the
   secondary NVE, because the VM on the secondary NVE already contains
   identical information.  The Hot Standby option is the costliest
   mechanism, and hence it is utilized only for mission-critical
   applications and services.  Regarding TCP connections under the Hot
   Standby option, one approach is to open and maintain TCP
   connections to two different VMs at the same time.  The least
   loaded VM responds first and starts providing service, while the
   sender (origin) still receives ACKs from the more heavily loaded
   (secondary) VM but chooses not to use its service.  If the loading
   condition of the primary responding VM changes, the secondary VM
   may start providing service to the sender (origin).

8. VM Operation

   Once a VM moves to a new NVE, the VM's IP address does not change,
   and the VM should be able to continue to receive packets to its
   address(es).

   The VM needs to send a gratuitous Address Resolution message or an
   unsolicited Neighbor Advertisement message upstream after each
   move.

   VM lifecycle management is a complicated task, which is beyond the
   scope of this document.  Not only does it involve monitoring server
   utilization and balancing the distribution of workloads, it also
   requires seamless management of VM migration from one server to
   another.
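   As a companion to the earlier gratuitous ARP sketch, the following
   Python fragment shows how the unsolicited Neighbor Advertisement
   mentioned above could be constructed with the standard library
   alone.  It is an illustrative sketch; the addresses are
   hypothetical, and transmitting the message would additionally
   require a raw ICMPv6 socket with appropriate privileges.

      import socket
      import struct

      def icmpv6_checksum(src: bytes, dst: bytes, msg: bytes) -> int:
          # Checksum over the IPv6 pseudo-header plus the message.
          pseudo = (src + dst + struct.pack("!I", len(msg))
                    + b"\x00\x00\x00\x3a")  # next header = 58
          data = pseudo + msg
          if len(data) % 2:
              data += b"\x00"
          total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
          while total >> 16:
              total = (total & 0xffff) + (total >> 16)
          return ~total & 0xffff

      def build_unsolicited_na(vm_ipv6: str, vm_mac: bytes) -> bytes:
          # Unsolicited NA: Override flag set, Solicited flag clear,
          # carrying a Target Link-Layer Address option; sent to the
          # all-nodes multicast address.
          src = socket.inet_pton(socket.AF_INET6, vm_ipv6)
          dst = socket.inet_pton(socket.AF_INET6, "ff02::1")
          flags = 0x20000000                       # O=1, S=0, R=0
          tll = struct.pack("!BB", 2, 1) + vm_mac  # option type 2
          msg = struct.pack("!BBHI", 136, 0, 0, flags) + src + tll
          csum = icmpv6_checksum(src, dst, msg)
          return msg[:2] + struct.pack("!H", csum) + msg[4:]

      # Hypothetical VM: 2001:db8::10 / MAC 02:00:00:aa:bb:cc
      na = build_unsolicited_na("2001:db8::10",
                                bytes.fromhex("020000aabbcc"))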
9. Security Considerations

   Security threats for the data and control planes of overlay
   networks are discussed in [RFC8014].  Several issues in a multi-
   tenant environment create problems.  In Layer-2 based overlay DC
   networks, the lack of security in VXLAN and the possible corruption
   of the VNI can lead to delivery of information to the wrong tenant.

   Also, ARP in IPv4 and ND in IPv6 are not secure, especially if the
   gratuitous versions are accepted.  When these are carried over a
   UDP encapsulation, as in VXLAN, the problem gets worse, since it is
   trivial for a non-trusted entity to spoof UDP packets.

   In Layer-3 based overlay data center networks, the problem of
   address spoofing may arise.  An NVE may have untrusted tasks
   attached to it.  This usually happens when the VMs (tasks) run
   third-party applications.  This requires the use of stronger
   security mechanisms.

10. IANA Considerations

   This document makes no request to IANA.

11. Acknowledgments

   The authors are grateful to Bob Briscoe, David Black, Dave R.
   Worley, Qiang Zu, and Andrew Malis for helpful comments.

12. Change Log

   o  Submitted version -00 as a working group draft after adoption.

   o  Submitted version -01 with these changes: updated the
      references; added the definition of packets in flight to
      Section 2.

   o  Submitted version -02 with an updated address.

   o  Submitted version -03 to fix the nits.

   o  Submitted version -04 to address the WG Last Call comments.

   o  Submitted version -05 to address IETF LC comments from the TSV
      area.

13. References

13.1. Normative References

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol:
              Or Converting Network Protocol Addresses to 48.bit
              Ethernet Address for Transmission on Ethernet Hardware",
              STD 37, RFC 826, DOI 10.17487/RFC0826, November 1982.

   [RFC0903]  Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A
              Reverse Address Resolution Protocol", STD 38, RFC 903,
              DOI 10.17487/RFC0903, June 1984.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P.,
              Kreeger, L., Sridhar, T., Bursell, M., and C. Wright,
              "Virtual eXtensible Local Area Network (VXLAN): A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348,
              August 2014.

   [RFC7364]  Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
              Kreeger, L., and M. Napierala, "Problem Statement:
              Overlays for Network Virtualization", RFC 7364,
              DOI 10.17487/RFC7364, October 2014.

   [RFC8014]  Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
              Narten, "An Architecture for Data-Center Network
              Virtualization over Layer 3 (NVO3)", RFC 8014,
              DOI 10.17487/RFC8014, December 2016.

13.2. Informative References

   [I-D.herbert-nvo3-ila]
              Herbert, T. and P. Lapukhov, "Identifier-locator
              addressing for IPv6", draft-herbert-nvo3-ila-04 (work in
              progress), March 2017.
Authors' Addresses

   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Behcet Sarikaya
   Denpel Informatique
   Email: sarikaya@ieee.org

   Bhumip Khasnabish
   Independent
   Email: vumip1@gmail.com

   Tom Herbert
   Intel
   Email: tom@herbertland.com

   Saumya Dikshit
   Aruba-HPE
   Bangalore, India
   Email: saumya.dikshit@hpe.com