Network Working Group                                         L. Dunbar
Internet Draft                                                Futurewei
Intended status: Informational                              B. Sarikaya
Expires: February 22, 2020                          Denpel Informatique
                                                          B. Khasnabish
                                                            Independent
                                                             T. Herbert
                                                                  Intel
                                                             S. Dikshit
                                                              Aruba-HPE
                                                         August 22, 2019

   Virtual Machine Mobility Solutions for L2 and L3 Overlay Networks
                         draft-ietf-nvo3-vmm-05

Abstract

   This document describes virtual machine mobility solutions commonly
   used in data centers built with overlay networks.  It describes the
   solutions and the impact of moving VMs (or applications) from one
   rack to another when the racks are connected by overlay networks.

   For Layer 2, the solution uses an NVA (Network Virtualization
   Authority) to NVE (Network Virtualization Edge) protocol to update
   ARP (Address Resolution Protocol) tables or neighbor cache entries
   after a VM (virtual machine) moves from the Old NVE to the New NVE.
   For Layer 3, the solution is based on address and connection
   migration after the move.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may not be
   modified, and derivative works of it may not be created, except to
   publish it as an RFC and to translate it into languages other than
   English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on February 22, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Requirements
   4. Overview of the VM Mobility Solutions
      4.1. VM Migration in Layer 2 Network
      4.2. Task Migration in Layer-3 Network
           4.2.1. Address and Connection Migration in Task Migration
   5. Handling Packets in Flight
   6. Moving Local State of VM
   7. Handling of Hot, Warm and Cold VM Mobility
   8. VM Operation
   9. Security Considerations
   10. IANA Considerations
   11. Acknowledgments
   12. Change Log
   13. References
      13.1. Normative References
      13.2. Informative References

1. Introduction

   This document describes solutions for overlay-based data center
   networks that support multitenancy and VM (Virtual Machine)
   mobility.  Many large data centers (DCs), especially cloud DCs,
   host tasks (or workloads) for multiple tenants, where a tenant can
   be a department of one organization or a separate organization.
   There is communication among tasks belonging to one tenant, and
   there is communication among tasks belonging to different tenants
   or with external entities.

   Server virtualization, which is used in almost all of today's data
   centers, enables many VMs to run on a single physical computer or
   compute server while sharing its processor, memory, and storage.
   Network connectivity among VMs is provided by the network
   virtualization edge (NVE) [RFC8014].  It is highly desirable
   [RFC7364] to allow VMs to be moved dynamically (live, hot, or cold
   move) from one server to another for dynamic load balancing or
   optimized work distribution.

   There are many challenges and requirements related to VM mobility
   in large data centers, including dynamically attaching VMs to, and
   detaching VMs from, Network Virtualization Edges (NVEs).  Retaining
   IP addresses after a move is a key requirement [RFC7364].
   Such a requirement is needed in order to maintain existing
   transport connections.

   In traditional Layer-3 based networks, retaining IP addresses after
   a move is generally not recommended, because frequent moves lead to
   non-aggregatable IP addresses (a.k.a. fragmented IP addresses),
   which introduces complexity in IP address management.

   In view of the many VM mobility schemes that exist today, there is
   a desire to document comprehensive VM mobility solutions that cover
   both IPv4 and IPv6.  A large data center network can be organized
   as one large Layer-2 network geographically distributed across
   several buildings or cities, or as a Layer-3 network with a large
   number of host routes that cannot be aggregated because hosts
   frequently move from one location to another without changing
   their IP addresses.  The connectivity across Layer-2 boundaries can
   be achieved by the network virtualization edge (NVE) functioning as
   a Layer-3 gateway that routes across bridging domains, as in
   Warehouse Scale Computers (WSCs).

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119
   [RFC2119].

   This document uses the terminology defined in [RFC7364] and
   [RFC8014].  In addition, we make the following definitions:

   VM:             Virtual Machine.

   Task:           A program instantiated or running on a virtual
                   machine or container.  Tasks in virtual machines or
                   containers can be migrated from one server to
                   another.  We use task, workload, and virtual
                   machine interchangeably in this document.

   Hot VM Mobility:  A given VM is moved from one server to another
                   while in the running state.

   Warm VM Mobility:  The VM state is mirrored to a secondary server
                   (or domain) at predefined (configurable) regular
                   intervals.  This reduces overhead and complexity,
                   but it may also lead to situations where the two
                   servers do not contain exactly the same data (state
                   information).

   Cold VM Mobility:  A given VM is moved from one server to another
                   while in the stopped or suspended state.

   Old NVE:        The NVE to which packets for the VM were forwarded
                   before the migration.

   New NVE:        The NVE serving the VM after the migration.

   Packets in flight:  Packets received by the Old NVE that were sent
                   by correspondents holding an old ARP or neighbor
                   cache entry from before the VM or task migration.

   End user clients:  Users of VMs in diskless systems or in systems
                   not using configuration files.

   Cloud DC:       A third-party data center that hosts applications,
                   tasks, or workloads owned by different
                   organizations or tenants.

3. Requirements

   This section states the requirements on data center network
   virtual machine mobility.

   The data center network should support both IPv4 and IPv6 VM
   mobility.

   VM mobility should not require VMs to change their IP addresses
   after the move.

   There is "Hot Migration", in which the transport services continue
   to run, and there is "Cold Migration", in which the transport
   services are restarted, i.e., the task running on the Old NVE is
   stopped, moved to the New NVE, and then restarted, as described in
   Section 4.2 (Task Migration).

   VM mobility solutions/procedures should minimize triangular
   routing, except for handling packets in flight.
   VM mobility solutions/procedures should not need to use tunneling,
   except for handling packets in flight.

4. Overview of the VM Mobility Solutions

   Layer-2 and Layer-3 mobility solutions are described in the
   following sections.

4.1. VM Migration in Layer 2 Network

   Being able to move VMs dynamically from one server to another
   enables dynamic load balancing and optimized work distribution.
   Therefore, it is highly desirable for large-scale multi-tenant
   data centers.

   In a Layer-2 based approach, a VM that moves to another server does
   not change its IP address.  However, because the VM is now under a
   New NVE, the NVEs that previously communicated with it will
   continue to send their packets to the Old NVE.  To solve this
   problem, the Address Resolution Protocol (ARP) cache in IPv4
   [RFC0826] or the neighbor cache in IPv6 [RFC4861] in those NVEs
   needs to be updated: the NVEs need to change their cache entries so
   that the VM's Layer-2 or Media Access Control (MAC) address is
   associated with the New NVE's IP address.  Such a change enables
   the NVEs to encapsulate outgoing MAC frames with the current
   target NVE address.  It may take some time to refresh the ARP/ND
   caches after a VM has moved to a New NVE.  During this period, a
   tunnel is needed so that the Old NVE can forward packets destined
   to the VM to the New NVE.

   In IPv4, immediately after the move the VM should send a gratuitous
   ARP request message containing its IPv4 address and Layer-2 MAC
   address under its New NVE.  This message's destination address is
   the broadcast address, so the Old NVE also receives it.  Both the
   Old and New NVEs should update the VM's ARP entry in the central
   directory at the NVA, so that the mapping records the IPv4 address
   and MAC address of the moving VM along with the New NVE's IPv4
   address.  An NVE-to-NVA protocol is used for this purpose
   [RFC8014].

   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC0903], is not used by
   VMs in general, because a moving VM already knows its IPv4 address.
   Next, we describe a case where RARP is used.

   There are some vendor deployments (diskless systems or systems
   without configuration files) in which the VM users, i.e., end user
   clients, ask for the same MAC address upon migration.  This can be
   achieved by the client sending a RARP request message that carries
   the old MAC address, looking for an IP address allocation.  The
   server, in this case the New NVE, needs to communicate with the
   NVA, just as in the gratuitous ARP case, to ensure that the same
   IPv4 address is assigned to the VM.  The NVA uses the MAC address
   as the key to search its ARP cache for the corresponding IP
   address, and informs the New NVE, which in turn sends the RARP
   reply message.  This completes the IP address assignment to the
   migrating VM.

   Other NVEs communicating with this VM could still have the old ARP
   entry.  If any VM attached to those NVEs needs to communicate with
   the VM attached to the New NVE, the old ARP entry might be used,
   and the packets would be delivered to the Old NVE.  The Old NVE
   MUST tunnel these in-flight packets to the New NVE.

   When the ARP entry for such a VM times out, its corresponding NVE
   should access the NVA for an update.
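   The following Python sketch (using the scapy packet library)
   illustrates the kind of gratuitous ARP request described above.
   It is an illustration only, not part of any protocol
   specification; the addresses, the MAC value, and the interface
   name are hypothetical examples.

      from scapy.all import ARP, Ether, sendp

      VM_IP = "192.0.2.10"          # example IPv4 address (RFC 5737)
      VM_MAC = "00:00:5e:00:53:01"  # example MAC address

      # Gratuitous ARP request: the sender and target protocol
      # addresses are both the VM's own IPv4 address, and the frame
      # is sent to the Layer-2 broadcast address.
      garp = (Ether(src=VM_MAC, dst="ff:ff:ff:ff:ff:ff") /
              ARP(op=1, hwsrc=VM_MAC, psrc=VM_IP,
                  hwdst="00:00:00:00:00:00", pdst=VM_IP))
      sendp(garp, iface="eth0")     # hypothetical interface name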
   IPv6 operation is slightly different.  In IPv6, immediately after
   the move the VM sends an unsolicited neighbor advertisement message
   containing its IPv6 address and Layer-2 MAC address under its New
   NVE.  This message is sent to the IPv6 Solicited-Node Multicast
   Address corresponding to the target address, which is the VM's
   IPv6 address.  The NVE receiving this message should send a request
   to update the VM's neighbor cache entry in the central directory
   of the NVA.  The NVA's neighbor cache entry should include the IPv6
   address of the VM, the MAC address of the VM, and the New NVE's
   IPv6 address.  An NVE-to-NVA protocol is used for this purpose
   [RFC8014].

   Other NVEs communicating with this VM might still use the old
   neighbor cache entry.  If any VM attached to those NVEs needs to
   communicate with the VM attached to the New NVE, it could use the
   old neighbor cache entry, and the packets would be delivered to
   the Old NVE.  The Old NVE MUST tunnel these in-flight packets to
   the New NVE.

   When the neighbor cache entry for such a VM times out, its
   corresponding NVE should access the NVA for an update.
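   The following Python sketch (again using scapy) illustrates the
   kind of unsolicited Neighbor Advertisement described above.  It is
   an illustration only; the addresses, the MAC value, and the
   interface name are hypothetical examples.

      from ipaddress import IPv6Address
      from scapy.all import (IPv6, ICMPv6ND_NA,
                             ICMPv6NDOptDstLLAddr, send)

      VM_IP6 = "2001:db8::10"       # example IPv6 address (RFC 3849)
      VM_MAC = "00:00:5e:00:53:01"  # example MAC address

      # Solicited-node multicast address: ff02::1:ff00:0/104 plus
      # the low-order 24 bits of the target (the VM's own address).
      low24 = int(IPv6Address(VM_IP6)) & 0xFFFFFF
      snma = str(IPv6Address(int(IPv6Address("ff02::1:ff00:0"))
                             | low24))

      # Unsolicited Neighbor Advertisement with the Override flag
      # set, carrying a Target Link-Layer Address option with the
      # VM's MAC address.
      na = (IPv6(src=VM_IP6, dst=snma) /
            ICMPv6ND_NA(R=0, S=0, O=1, tgt=VM_IP6) /
            ICMPv6NDOptDstLLAddr(lladdr=VM_MAC))
      send(na, iface="eth0")        # hypothetical interface name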
4.2. Task Migration in Layer-3 Network

   Layer-2 based data center networks quickly become prohibitive,
   because ARP/neighbor caches do not scale.  Scaling can be achieved
   seamlessly in Layer-3 data center networks by giving each virtual
   network an IP subnet and a default route that points to its NVE.
   There is then no explosion of ARP/neighbor cache entries in VMs
   and NVEs (just one ARP/neighbor cache entry for the default
   route), and there is no need to carry an Ethernet header in the
   encapsulation [RFC7348], which saves at least 16 bytes.

   Even though the terms VM and Task are used interchangeably in this
   document, the term Task is used in the context of Layer-3
   migration mainly to put a slight emphasis on moving an entity
   (Task) that is instantiated on a VM or a container.

   Traditional Layer-3 based data center networks require the IP
   address of a task to change after a move, because the prefix of
   the IP address usually reflects the location.  It is therefore
   necessary to have an IP-based VM migration solution that allows IP
   addresses to stay the same after moving to a different location.
   Identifier-Locator Addressing (ILA) [I-D.herbert-nvo3-ila] is one
   such solution.

   Because broadcast is not available in Layer-3 based networks, the
   multicast of neighbor solicitations in IPv6 would need to be
   emulated.

   Cold task migration, which is a common practice in many data
   centers, involves the following steps:

   -  Stop running the task.
   -  Package the runtime state of the task.
   -  Send the runtime state of the task to the New NVE where the
      task is to run.
   -  Instantiate the task's state on the new machine.
   -  Restart the task, continuing from the point at which it was
      stopped.

   Address migration and connection migration in moving tasks or VMs
   are addressed next.

4.2.1. Address and Connection Migration in Task Migration

   Address migration is achieved as follows:

   -  Configure the IPv4/IPv6 address on the target task.
   -  Suspend use of the address on the old task.  This includes
      handling established connections.  A state may be established
      to drop packets, or to send ICMPv4 or ICMPv6 Destination
      Unreachable messages, when packets to the migrated address are
      received.
   -  Push the new mapping to communicating VMs.  Communicating VMs
      will learn of the new mapping via the control plane, either by
      participating in a protocol for mapping propagation or by
      getting the new mapping from a central database such as the
      Domain Name System (DNS).

   Connection migration involves reestablishing the existing TCP
   connections of the task in the new place.

   The simplest course of action is to drop TCP connections across a
   migration.  If migrations are relatively rare events, it is
   conceivable that TCP connections could be automatically closed in
   the network stack during a migration event.  If the applications
   running are known to handle this gracefully (i.e., to reopen
   dropped connections), then this may be viable.

   A more involved approach to connection migration entails pausing
   the connection, packaging the connection state and sending it to
   the target, instantiating the connection state in the peer stack,
   and restarting the connection.  From the time the connection is
   paused to the time it is running again in the new stack, packets
   received for the connection could be silently dropped.  For some
   period of time, the old stack will need to keep a record of the
   migrated connection.  If it receives a packet, it can either
   silently drop the packet or forward it to the new location,
   similarly to the procedure in Section 5.

5. Handling Packets in Flight

   The Old NVE may receive packets from the VM's ongoing
   communications.  These packets should not be lost; they should be
   sent to the New NVE to be delivered to the VM.  The steps involved
   in handling packets in flight are as follows:

   Preparation Step:  It takes some time, possibly a few seconds, for
   a VM to move from its Old NVE to a New NVE.  During this period, a
   tunnel needs to be established so that the Old NVE can forward
   packets to the New NVE.  The Old NVE gets the New NVE's address
   from the NVA in the request to move the VM.  The Old NVE can store
   the New NVE's address for the VM with a timer.  When the timer
   expires, the entry for the New NVE for the VM can be deleted.

   Tunnel Establishment - IPv6:  In-flight packets are tunneled to
   the New NVE using an encapsulation protocol such as VXLAN in IPv6.

   Tunnel Establishment - IPv4:  In-flight packets are tunneled to
   the New NVE using an encapsulation protocol such as VXLAN in IPv4.

   Tunneling Packets - IPv6:  IPv6 packets received for the migrating
   VM are encapsulated in an IPv6 header at the Old NVE.  The New NVE
   decapsulates the packet and sends the IPv6 packet to the migrating
   VM.

   Tunneling Packets - IPv4:  IPv4 packets received for the migrating
   VM are encapsulated in an IPv4 header at the Old NVE.  The New NVE
   decapsulates the packet and sends the IPv4 packet to the migrating
   VM.

   Stop Tunneling Packets:  Tunneling stops when the Old NVE no
   longer receives packets destined to the VM that has just moved to
   the New NVE.  The timer for storing the New NVE's address for the
   VM should be long enough for all other NVEs that need to
   communicate with the VM to get their NVE-VM cache entries updated.
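   The following Python sketch illustrates the redirect state that an
   Old NVE could keep for a moved VM, with a timer as described
   above.  It is only a sketch of the bookkeeping, under the
   assumption of hypothetical tunnel_send() and deliver_locally()
   functions that stand in for the VXLAN encapsulation and local
   delivery paths.

      import time

      REDIRECT_TIMEOUT = 300.0  # seconds; deployment-specific value
      redirects = {}            # vm_address -> (new_nve_addr, expiry)

      def vm_moved(vm_addr, new_nve_addr):
          # Called when the NVA's request to move the VM tells the
          # Old NVE where the VM is going; start the redirect timer.
          redirects[vm_addr] = (new_nve_addr,
                                time.monotonic() + REDIRECT_TIMEOUT)

      def handle_packet(vm_addr, packet, tunnel_send,
                        deliver_locally):
          # In-flight packets for a moved VM are tunneled to the New
          # NVE (e.g., VXLAN encapsulation) until the timer expires.
          entry = redirects.get(vm_addr)
          if entry is not None:
              new_nve_addr, expiry = entry
              if time.monotonic() < expiry:
                  tunnel_send(new_nve_addr, packet)
                  return
              del redirects[vm_addr]  # timer expired; drop the entry
          deliver_locally(vm_addr, packet)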
6. Moving Local State of VM

   In addition to the VM mobility related signaling (VM Mobility
   Registration Request/Reply), the VM state needs to be transferred
   to the New NVE.  The state includes the VM's memory and its file
   system, if the VM cannot access that memory and file system after
   moving to the New NVE.  The Old NVE opens a TCP connection with
   the New NVE over which the VM's memory state is transferred.

   The file system, or local storage, is more complicated to
   transfer.  The transfer should ensure consistency, i.e., the VM at
   the New NVE should find the same file system it had at the Old
   NVE.  Pre-copying is a commonly used technique for transferring
   the file system.  First, the whole disk image is transferred while
   the VM continues to run.  After the VM has moved, any changes in
   the file system are packaged together and sent to the New NVE's
   hypervisor, which applies these changes to the local file system
   at the destination.
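   The following Python sketch outlines the pre-copy sequence
   described above.  It is an illustration only; read_disk_image(),
   collect_changes(), send_to_new_nve(), and move_vm() are
   hypothetical placeholders for hypervisor- and transport-specific
   mechanisms.

      def precopy_filesystem(read_disk_image, collect_changes,
                             send_to_new_nve, move_vm):
          # Phase 1: transfer the whole disk image while the VM
          # continues to run at the Old NVE.
          send_to_new_nve(read_disk_image())

          # Phase 2: move the VM, then package the file system
          # changes made since Phase 1 and send them to the New NVE
          # hypervisor, which applies them at the destination.
          move_vm()
          send_to_new_nve(collect_changes())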
7. Handling of Hot, Warm and Cold VM Mobility

   Both Cold and Warm VM mobility (or migration) refer to the VM
   being completely shut down at the Old NVE before being restarted
   at the New NVE.  Therefore, all transport services to the VM are
   restarted.

   Upon starting at the New NVE, the VM should send an ARP or
   Neighbor Discovery message.  Cold VM mobility also allows the Old
   NVE and all communicating NVEs to time out the ARP/neighbor cache
   entries of the VM.  It is necessary for the NVA to push the
   updated ARP/neighbor cache entry to the NVEs, or for the NVEs to
   pull the updated ARP/neighbor cache entry from the NVA.

   Cold VM mobility can be facilitated by a cold standby entity
   receiving scheduled backup information.  The cold standby entity
   can be a VM or another form factor, which is beyond the scope of
   this document.  The cold mobility option can be used for
   non-critical applications and services that can tolerate
   interrupted TCP connections.

   Warm VM mobility refers to the backup entities receiving backup
   information at more frequent intervals.  The duration of the
   interval determines the warmth of the option: the longer the
   interval, the less warm (and hence the colder) the Warm VM
   mobility option becomes.

   There is also a Hot Standby option in addition to Hot Mobility.
   With Hot Standby, there are VMs in both the primary and secondary
   NVEs; they hold identical information and can provide services
   simultaneously, as in a load-sharing mode of operation.  If a VM
   in the primary NVE fails, there is no need to actively move the VM
   to the secondary NVE, because the VM in the secondary NVE already
   contains identical information.  The Hot Standby option is the
   most costly mechanism, and hence it is utilized only for
   mission-critical applications and services.  Regarding TCP
   connections in the Hot Standby option, one approach is to
   establish and maintain TCP connections to two different VMs at the
   same time.  The least-loaded VM responds first and starts
   providing the service, while the sender (origin) still continues
   to receive ACKs from the more heavily loaded (secondary) VM but
   chooses not to use the service of that secondary responding VM.
   If the situation (the loading condition of the primary responding
   VM) changes, the secondary responding VM may start providing
   service to the sender (origin).

8. VM Operation

   Once a VM moves to a New NVE, the VM's IP address does not change,
   and the VM should be able to continue to receive packets to its
   address(es).

   The VM needs to send a gratuitous Address Resolution message or an
   unsolicited Neighbor Advertisement message upstream after each
   move.

   VM lifecycle management is a complicated task, which is beyond the
   scope of this document.  Not only does it involve monitoring
   server utilization, balanced distribution of workloads, and so on,
   but it also needs to manage VM migration from one server to
   another seamlessly.

9. Security Considerations

   Security threats for the data and control planes of overlay
   networks are discussed in [RFC8014].  There are several issues in
   a multi-tenant environment that create problems.  In Layer-2 based
   overlay data center networks, because of the lack of security in
   VXLAN, corruption of the VNI can lead to delivery to the wrong
   tenant.  Also, ARP in IPv4 and ND in IPv6 are not secure,
   especially if gratuitous versions are accepted.  When these are
   carried over a UDP encapsulation such as VXLAN, the problem is
   worse, since it is trivial for a non-trusted entity to spoof UDP
   packets.

   In Layer-3 based overlay data center networks, the problem of
   address spoofing may arise.  An NVE may have untrusted tasks
   attached, as usually happens when the VMs (tasks) run third-party
   applications.  This requires the use of stronger security
   mechanisms.

10. IANA Considerations

   This document makes no request to IANA.

11. Acknowledgments

   The authors are grateful to Bob Briscoe, David Black, Dave R.
   Worley, Qiang Zu, and Andrew Malis for helpful comments.

12. Change Log

   o  Submitted version -00 as a working group draft after adoption.

   o  Submitted version -01 with these changes: references are
      updated; added the packets-in-flight definition to Section 2.

   o  Submitted version -02 with an updated address.

   o  Submitted version -03 to fix the nits.

   o  Submitted version -04 in response to the WG Last Call comments.

   o  Submitted version -05 to address IETF LC comments from the TSV
      area.

13. References

13.1. Normative References

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol:
              Or Converting Network Protocol Addresses to 48.bit
              Ethernet Address for Transmission on Ethernet
              Hardware", STD 37, RFC 826, DOI 10.17487/RFC0826,
              November 1982, <https://www.rfc-editor.org/info/rfc826>.

   [RFC0903]  Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A
              Reverse Address Resolution Protocol", STD 38, RFC 903,
              DOI 10.17487/RFC0903, June 1984,
              <https://www.rfc-editor.org/info/rfc903>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007,
              <https://www.rfc-editor.org/info/rfc4861>.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P.,
              Kreeger, L., Sridhar, T., Bursell, M., and C. Wright,
              "Virtual eXtensible Local Area Network (VXLAN): A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348,
              August 2014, <https://www.rfc-editor.org/info/rfc7348>.

   [RFC7364]  Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
              Kreeger, L., and M. Napierala, "Problem Statement:
              Overlays for Network Virtualization", RFC 7364,
              DOI 10.17487/RFC7364, October 2014,
              <https://www.rfc-editor.org/info/rfc7364>.

   [RFC8014]  Black, D., Hudson, J., Kreeger, L., Lasserre, M., and
              T. Narten, "An Architecture for Data-Center Network
              Virtualization over Layer 3 (NVO3)", RFC 8014,
              DOI 10.17487/RFC8014, December 2016,
              <https://www.rfc-editor.org/info/rfc8014>.

13.2. Informative References

   [I-D.herbert-nvo3-ila]
              Herbert, T. and P. Lapukhov, "Identifier-locator
              addressing for IPv6", draft-herbert-nvo3-ila-04 (work
              in progress), March 2017.
Authors' Addresses

   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Behcet Sarikaya
   Denpel Informatique
   Email: sarikaya@ieee.org

   Bhumip Khasnabish
   Independent
   55 Madison Avenue, Suite 160
   Morristown, NJ 07960
   Email: vumip1@gmail.com

   Tom Herbert
   Intel
   Email: tom@herbertland.com

   Saumya Dikshit
   Aruba-HPE
   Bangalore, India
   Email: saumya.dikshit@hpe.com