idnits 2.17.1

draft-ietf-nvo3-mcast-framework-03.txt:

Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.

Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.

Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
== There are 1 instance of lines with multicast IPv4 addresses in the document. If these are generic example addresses, they should be changed to use the 233.252.0.x range defined in RFC 5771

Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not match the current year
-- The document date (February 15, 2016) is 2987 days in the past. Is this intentional?

Checking references for intended status: Informational
----------------------------------------------------------------------------
== Missing Reference: 'VXLAN' is mentioned on line 178, but not defined
== Missing Reference: 'FW' is mentioned on line 196, but not defined
== Missing Reference: 'BIER-ARCH' is mentioned on line 424, but not defined
== Unused Reference: 'RFC7348' is defined on line 538, but no explicit reference was found in the text

Summary: 0 errors (**), 0 flaws (~~), 6 warnings (==), 1 comment (--).

Run idnits with the --verbose option for more detailed information about the items above.
--------------------------------------------------------------------------------

NVO3 working group                                           A. Ghanwani
Internet Draft                                                      Dell
Intended status: Informational                                 L. Dunbar
Expires: August 14, 2016                                      M. McBride
                                                                  Huawei
                                                               V. Bannai
                                                                  Google
                                                             R. Krishnan
                                                                    Dell

                                                       February 15, 2016

                   A Framework for Multicast in NVO3
                   draft-ietf-nvo3-mcast-framework-03

Status of this Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.  This document may not be modified, and derivative works of it may not be created, except to publish it as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

   This Internet-Draft will expire on August 14, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Abstract

   This document discusses a framework for supporting multicast traffic in a network that uses Network Virtualization Overlays over Layer 3 (NVO3).  Both infrastructure multicast and application-specific multicast are discussed.  It describes the various mechanisms that can be used for delivering such traffic as well as the data plane and control plane considerations for each of the mechanisms.

Table of Contents

   1. Introduction...................................................3
      1.1. Infrastructure multicast..................................3
      1.2. Application-specific multicast............................3
      1.3. Terminology clarification.................................4
   2. Acronyms.......................................................4
   3. Multicast mechanisms in networks that use NVO3.................5
      3.1. No multicast support......................................5
      3.2. Replication at the source NVE.............................6
      3.3. Replication at a multicast service node...................8
      3.4. IP multicast in the underlay..............................9
      3.5. Other schemes............................................11
   4. Simultaneous use of more than one mechanism...................11
   5. Other issues..................................................11
      5.1. Multicast-agnostic NVEs..................................11
      5.2. Multicast membership management for DC with VMs..........12
   6. Summary.......................................................12
   7. Security Considerations.......................................13
   8. IANA Considerations...........................................13
   9. References....................................................13
      9.1. Normative References.....................................13
      9.2. Informative References...................................13
   10. Acknowledgments..............................................14

1. Introduction

   Network virtualization using Overlays over Layer 3 (NVO3) is a technology that is used to address issues that arise in building large, multitenant data centers that make extensive use of server virtualization [RFC7364].

   This document provides a framework for supporting multicast traffic in a network that uses Network Virtualization using Overlays over Layer 3 (NVO3).  Both infrastructure multicast (ARP/ND, DHCP, mDNS, etc.) and application-specific multicast are considered.  It describes the various mechanisms and considerations that can be used for delivering such traffic in networks that use NVO3.

   The reader is assumed to be familiar with the terminology as defined in the NVO3 Framework document [RFC7365] and the NVO3 Architecture document [NVO3-ARCH].

1.1. Infrastructure multicast

   Infrastructure multicast includes protocols such as ARP/ND, DHCP, and mDNS.  It is possible to provide solutions for these that do not involve multicast in the underlay network.  In the case of ARP/ND, a Network Virtualization Authority (NVA) can be used for distributing the mappings of IP address to MAC address to all Network Virtualization Edges (NVEs).  The NVEs can then trap ARP Request/ND Neighbor Solicitation messages from the Tenant Systems (TSs) that are attached to them and respond to them, thereby eliminating the need for broadcast/multicast of such messages.  In the case of DHCP, the NVE can be configured to forward these messages using a helper function.
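   The following Python sketch is an informative illustration of the ARP suppression behavior described above; it is not part of the NVO3 framework, and all class, function, and variable names (e.g., ArpProxyNve, nva_update) are hypothetical.  It assumes the NVA pushes IP-to-(MAC, NVE) mappings to each NVE, which then answers ARP Requests locally instead of flooding them.

      # Illustrative sketch only: an NVE answering ARP Requests from a
      # mapping table distributed by the NVA.  All names are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class Mapping:
          mac: str    # MAC address of the Tenant System
          nve: str    # IP address of the NVE the TS is attached to

      class ArpProxyNve:
          def __init__(self):
              self.table = {}          # TS IP address -> Mapping

          def nva_update(self, ip, mac, nve):
              """Called when the NVA distributes or refreshes a mapping."""
              self.table[ip] = Mapping(mac, nve)

          def handle_arp_request(self, target_ip):
              """Answer locally if the mapping is known; never flood."""
              entry = self.table.get(target_ip)
              if entry is None:
                  return None          # unknown target; see Section 3.1
              return {"op": "ARP-Reply", "target_ip": target_ip,
                      "target_mac": entry.mac}

      nve = ArpProxyNve()
      nve.nva_update("192.0.2.10", "00:00:5e:00:53:0a", "198.51.100.1")
      print(nve.handle_arp_request("192.0.2.10"))

   In a real deployment, the population and aging of this table would be governed by the NVA's own protocol; the sketch only shows the local lookup that replaces the broadcast.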
   Of course it is possible to support all of these infrastructure multicast protocols natively if the underlay provides multicast transport.  However, even in the presence of multicast transport, it may be beneficial to use the optimizations mentioned above to reduce the amount of such traffic in the network.

1.2. Application-specific multicast

   Application-specific multicast traffic, which may be either Source-Specific Multicast (SSM) or Any-Source Multicast (ASM) [RFC3569], has the following characteristics:

   1. Receiver hosts are expected to subscribe to multicast content using protocols such as IGMP [RFC3376] (IPv4) or MLD (IPv6).  Multicast sources and listeners participate in these protocols using addresses that are in the Tenant System address domain.

   2. The list of multicast listeners for each multicast group is not known in advance.  Therefore, it may not be possible for an NVA to get the list of participants for each multicast group ahead of time.

1.3. Terminology clarification

   In this document, the terms host, tenant system (TS) and virtual machine (VM) are used interchangeably to represent an end station that originates or consumes data packets.

2. Acronyms

   ASM:    Any-Source Multicast

   LISP:   Locator/ID Separation Protocol

   MSN:    Multicast Service Node

   NVA:    Network Virtualization Authority

   NVE:    Network Virtualization Edge

   NVGRE:  Network Virtualization using GRE

   SSM:    Source-Specific Multicast

   STT:    Stateless Transport Tunneling

   TS:     Tenant System

   VM:     Virtual Machine

   VXLAN:  Virtual eXtensible LAN

3. Multicast mechanisms in networks that use NVO3

   In NVO3 environments, traffic between NVEs is transported using an encapsulation such as VXLAN [RFC7348], NVGRE [RFC7637], STT [STT], etc.

   Besides the need to support the Address Resolution Protocol (ARP) and Neighbor Discovery (ND), there are several applications that require the support of multicast and/or broadcast in data centers [DC-MC].  With NVO3, there are many possible ways that multicast may be handled in such networks.  We discuss some of the attributes of the following four methods:

   1. No multicast support.

   2. Replication at the source NVE.

   3. Replication at a multicast service node.

   4. IP multicast in the underlay.

   These mechanisms are briefly mentioned in the NVO3 Framework [RFC7365] and NVO3 Architecture [NVO3-ARCH] documents.  This document attempts to provide more details about the basics of each of these mechanisms and discusses the issues and tradeoffs of each.

   We note that other methods are also possible, such as [EDGE-REP], but we focus on the above four because they are the most common.

3.1. No multicast support

   In this scenario, there is no support whatsoever for multicast traffic when using the overlay.  This method can only work if the following conditions are met:

   1. All of the application traffic in the network is unicast traffic, and the only multicast/broadcast traffic is from ARP/ND protocols.

   2. A Network Virtualization Authority (NVA) is used by the NVEs to determine the mapping of a given Tenant System's MAC/IP address to its NVE.  In other words, there is no data plane learning.  Address resolution requests via ARP/ND that are issued by the Tenant Systems must be resolved by the NVE that they are attached to.

   With this approach, it is not possible to support application-specific multicast.  However, certain multicast/broadcast applications such as DHCP can be supported by use of a helper function in the NVE.
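   As an informative illustration of this mode of operation, the sketch below shows how an NVE might dispose of broadcast/multicast frames received from its attached Tenant Systems: ARP Requests are answered from NVA-distributed state, DHCP messages are relayed as unicast by a helper function, and any other multicast is not supported.  The function and parameter names are hypothetical and not defined by any NVO3 specification.

      # Illustrative sketch only (Section 3.1): handling of
      # broadcast/multicast frames when the overlay provides no
      # multicast service.  All names are hypothetical.

      def handle_bum_frame(frame, arp_table, dhcp_helper_addr, send_unicast):
          if frame["type"] == "ARP-Request":
              # Resolved from NVA-distributed mappings; never flooded.
              mac = arp_table.get(frame["target_ip"])
              return {"op": "ARP-Reply", "target_mac": mac} if mac else None
          if frame["type"] == "DHCP":
              # Helper function: re-sent as unicast to a configured server.
              return send_unicast(dhcp_helper_addr, frame)
          return None    # application-specific multicast is not supported

      table = {"192.0.2.10": "00:00:5e:00:53:0a"}
      print(handle_bum_frame({"type": "ARP-Request", "target_ip": "192.0.2.10"},
                             table, "203.0.113.5",
                             lambda addr, f: ("relayed-to", addr)))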
   The main drawback of this approach, even for unicast traffic, is that it is not possible to initiate communication with a Tenant System for which a mapping to an NVE does not already exist at the NVA.  This is a problem in the case where the NVE is implemented in a physical switch and the Tenant System is a physical end station that has not registered with the NVA.

3.2. Replication at the source NVE

   With this method, the overlay attempts to provide a multicast service without requiring any specific support from the underlay, other than that of a unicast service.  A multicast or broadcast transmission is achieved by replicating the packet at the source NVE and sending a copy to each destination NVE that the multicast packet must be sent to.

   For this mechanism to work, the source NVE must know, a priori, the IP addresses of all destination NVEs that need to receive the packet.  For the purpose of ARP/ND, this would involve knowing the IP addresses of all the NVEs that have Tenant Systems in the virtual network instance (VNI) of the Tenant System that generated the request.  For the support of application-specific multicast traffic, a method similar to the receiver-sites registration for a particular multicast group described in [LISP-Signal-Free] can be used.  The registrations from different receiver sites can be merged at the NVA, which can construct a multicast replication list inclusive of all NVEs to which receivers for a particular multicast group are attached.  The replication list for each specific multicast group is maintained by the NVA.

   The receiver-sites registration is achieved by egress NVEs performing IGMP/MLD snooping to maintain state for which attached Tenant Systems have subscribed to a given IP multicast group.  When the members of a multicast group are outside the NVO3 domain, it is necessary for NVO3 gateways to keep track of the remote members of each multicast group.  The NVEs and NVO3 gateways then communicate the multicast groups that are of interest to the NVA.  If the membership is not communicated to the NVA, and if it is necessary to prevent hosts attached to an NVE that have not subscribed to a multicast group from receiving the multicast traffic, the NVE would need to maintain multicast group membership information.

   In multi-homing environments, i.e., environments in which a TS is attached to more than one NVE, the NVA would be expected to provide information to all of the NVEs under its control about all of the NVEs to which such a TS is attached.  The ingress NVE can then choose any one of those egress NVEs for the data frames destined towards the TS.

   In the absence of IGMP/MLD snooping, the traffic would be delivered to all hosts that are part of the VNI.
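   The following sketch, provided for illustration only, shows how an NVA might merge per-group receiver registrations reported by egress NVEs into a replication list, and how an ingress NVE would then replicate a packet using only unicast tunnels.  All names are hypothetical, and the multicast group address is taken from the 233.252.0.0/24 example range.

      # Illustrative sketch only (Section 3.2): NVA-maintained
      # replication lists and ingress replication.

      from collections import defaultdict

      class Nva:
          def __init__(self):
              # (vni, group) -> set of egress NVE IP addresses
              self.replication_list = defaultdict(set)

          def register_receiver_site(self, vni, group, egress_nve):
              """Egress NVEs report groups learned via IGMP/MLD snooping."""
              self.replication_list[(vni, group)].add(egress_nve)

          def lookup(self, vni, group):
              return self.replication_list.get((vni, group), set())

      def ingress_replicate(nva, vni, group, payload, send_unicast, local_nve):
          """Send one unicast-encapsulated copy per egress NVE."""
          for egress in nva.lookup(vni, group):
              if egress != local_nve:      # do not send a copy to ourselves
                  send_unicast(egress, vni, payload)

      nva = Nva()
      nva.register_receiver_site(100, "233.252.0.1", "198.51.100.2")
      nva.register_receiver_site(100, "233.252.0.1", "198.51.100.3")
      sent = []
      ingress_replicate(nva, 100, "233.252.0.1", b"data",
                        lambda nve, vni, p: sent.append(nve), "198.51.100.1")
      print(sent)    # one copy per egress NVE with receivers for the group

   The per-packet cost of the loop above is what limits this method: the number of copies made by the source NVE grows with the number of egress NVEs in the replication list, as discussed next.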
   This method requires the sending of multiple copies of the same packet, one to each NVE that participates in the VN.  If, for example, a tenant subnet is spread across 50 NVEs, the packet would have to be replicated 50 times at the source NVE.  This also creates an issue for the forwarding performance of the NVE.

   Note that this method is similar to what was used in VPLS [RFC4762] prior to support of MPLS multicast [RFC7117].  While there are some similarities between MPLS VPN and the NVO3 overlay, there are some key differences:

   - The CE-to-PE attachment in VPNs is somewhat static, whereas in a DC that allows VMs to migrate anywhere, the TS attachment to an NVE is much more dynamic.

   - The number of PEs to which a single VPN customer is attached in an MPLS VPN environment is normally far less than the number of NVEs to which a VNI's VMs are attached in a DC.

   When a VPN customer has multiple multicast groups, Multicast VPN [RFC6513] combines all those multicast groups within each VPN client into one single multicast group in the MPLS (or VPN) core.  The result is that messages from any of the multicast groups belonging to one VPN customer will reach all the PE nodes of the client.  In other words, any messages belonging to any multicast groups under customer X will reach all PEs of customer X.  When customer X is attached to only a handful of PEs, the use of this approach does not result in excessive wastage of bandwidth in the provider's network.

   In a DC environment, a typical server/hypervisor-based virtual switch may only support tens of VMs (as of this writing).  A subnet with N VMs may be, in the worst case, spread across N vSwitches.  Using the Multicast VPN approach in such a scenario would require the creation of a multicast group in the core for this VNI to reach all N NVEs.  If only a small percentage of this client's VMs participate in application-specific multicast, a great number of NVEs will receive multicast traffic that is not forwarded to any of their attached VMs, resulting in considerable wastage of bandwidth.

   Therefore, the Multicast VPN solution may not scale in a DC environment with dynamic attachment of virtual networks to NVEs and a larger number of NVEs for each virtual network.

3.3. Replication at a multicast service node

   With this method, all multicast packets would be sent using a unicast tunnel encapsulation from the ingress NVE to a multicast service node (MSN).  The MSN, in turn, would create multiple copies of the packet and would deliver a copy, using a unicast tunnel encapsulation, to each of the NVEs that are part of the multicast group for which the packet is intended.

   This mechanism is similar to that used by the ATM Forum's LAN Emulation (LANE) specification [LANE].

   The following are the possible ways for the MSN to get the membership information for each multicast group:

   - The MSN can obtain this information by snooping the IGMP/MLD messages from the Tenant Systems and/or by sending query messages to the Tenant Systems.  In order for the MSN to snoop the IGMP/MLD messages between the TSs and their corresponding routers, the NVEs that the TSs are attached to have to encapsulate these messages with a special outer header, e.g., with the outer destination address being the MSN.  See Section 5.1 for further detail.

   - The MSN can obtain the membership information from the NVEs that snoop the IGMP/MLD messages.  This can be done by having the MSN communicate with the NVEs, or by having the NVA obtain the information from the NVEs and, in turn, having the MSN communicate with the NVA.
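   As an informative illustration of the second approach, the sketch below shows an MSN that learns group membership from IGMP/MLD reports relayed by the NVEs (or pushed by the NVA on their behalf) and then fans out unicast-encapsulated copies of each packet it receives from an ingress NVE.  All names are hypothetical.

      # Illustrative sketch only (Section 3.3): a multicast service node
      # that learns membership from relayed IGMP/MLD reports and
      # replicates unicast-encapsulated packets toward egress NVEs.

      from collections import defaultdict

      class Msn:
          def __init__(self, send_unicast):
              self.members = defaultdict(set)   # (vni, group) -> egress NVEs
              self.send_unicast = send_unicast

          def igmp_report(self, vni, group, reporting_nve):
              """Membership learned from IGMP/MLD reports relayed by NVEs."""
              self.members[(vni, group)].add(reporting_nve)

          def igmp_leave(self, vni, group, reporting_nve):
              self.members[(vni, group)].discard(reporting_nve)

          def forward(self, vni, group, payload, ingress_nve):
              """The ingress NVE sends a single copy; the MSN replicates."""
              for egress in self.members[(vni, group)] - {ingress_nve}:
                  self.send_unicast(egress, vni, payload)

      copies = []
      msn = Msn(lambda nve, vni, p: copies.append(nve))
      msn.igmp_report(100, "233.252.0.1", "198.51.100.2")
      msn.igmp_report(100, "233.252.0.1", "198.51.100.3")
      msn.forward(100, "233.252.0.1", b"data", ingress_nve="198.51.100.1")
      print(copies)   # the MSN, not the ingress NVE, makes the copies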
   Unlike the method described in Section 3.2, there is no performance impact at the ingress NVE, nor are there any issues with multiple copies of the same packet from the source NVE to the multicast service node.  However, there remain issues with multiple copies of the same packet on links that are common to the paths from the MSN to each of the egress NVEs.  Additional issues that are introduced with this method include the availability of the MSN, methods to scale the services offered by the MSN, and the sub-optimality of the delivery paths.

   Finally, the IP address of the source NVE must be preserved in packet copies created at the multicast service node if data plane learning is in use.  This could create problems if IP source address reverse path forwarding (RPF) checks are in use.

3.4. IP multicast in the underlay

   In this method, the underlay supports IP multicast and the ingress NVE encapsulates the packet with the appropriate IP multicast address in the tunnel encapsulation header for delivery to the desired set of NVEs.  The protocol in the underlay could be any variant of Protocol Independent Multicast (PIM), or a protocol-dependent multicast such as [ISIS-Multicast].

   If an NVE connects to its attached TSs via a Layer 2 network, there are multiple ways for the NVEs to support application-specific multicast:

   - The NVE only supports the basic IGMP/MLD snooping function, letting the TSs' routers handle the application-specific multicast.  This scheme doesn't utilize the underlay IP multicast protocols.

   - The NVE can act as a pseudo multicast router for the directly attached VMs and support proper mapping of IGMP/MLD messages to the messages needed by the underlay IP multicast protocols.

   With this method, there are none of the issues with the method described in Section 3.2.

   With PIM Sparse Mode (PIM-SM), the number of flows required would be (n*g), where n is the number of source NVEs that source packets for the group, and g is the number of groups.  Bidirectional PIM (BIDIR-PIM) would offer better scalability, with the number of flows required being g.

   In the absence of any additional mechanism (e.g., using an NVA for address resolution), for optimal delivery there would have to be a separate group for each tenant, plus a separate group for each multicast address (used for multicast applications) within a tenant.

   Additional considerations are that only the lower 23 bits of the IP address (regardless of whether IPv4 or IPv6 is in use) are mapped to the outer MAC address, and if there is equipment that prunes multicasts at Layer 2, there will be some aliasing.  Finally, a mechanism to efficiently provision such addresses for each group would be required.
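   As an informative illustration of the aliasing mentioned above, the sketch below computes, for IPv4, the Layer 2 multicast MAC address that corresponds to a group address: only the lower 23 bits of the group address are carried in the MAC address, so multiple group addresses map to the same MAC address and Layer 2 pruning cannot distinguish them.  The addresses used are illustrative only.

      # Illustrative sketch only (Section 3.4): mapping an IPv4 multicast
      # group address to its Layer 2 multicast MAC address.  Only the
      # lower 23 bits of the group address are carried in the MAC
      # address, so 32 different group addresses share each MAC address.

      import ipaddress

      def ipv4_group_to_mac(group):
          addr = int(ipaddress.IPv4Address(group))
          low23 = addr & 0x7FFFFF                  # lower 23 bits only
          octets = [0x01, 0x00, 0x5E,
                    (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF]
          return ":".join(f"{o:02x}" for o in octets)

      print(ipv4_group_to_mac("233.252.0.1"))   # 01:00:5e:7c:00:01
      print(ipv4_group_to_mac("224.124.0.1"))   # same MAC address (aliasing)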
   There are additional optimizations which are possible, but they come with their own restrictions.  For example, a set of tenants may be restricted to some subset of NVEs, and they could all share the same outer IP multicast group address.  This, however, introduces a problem of sub-optimal delivery (even if a particular tenant within the group of tenants doesn't have a presence on one of the NVEs which another one does, the former's multicast packets would still be delivered to that NVE).  It also introduces an additional network management burden to optimize which tenants should be part of the same tenant group (based on the NVEs they share), which somewhat dilutes the value proposition of NVO3, namely to completely decouple the overlay and physical network design, allowing complete freedom of placement of VMs anywhere within the data center.

   Multicast schemes such as Bit Indexed Explicit Replication (BIER) [BIER-ARCH] may be able to provide optimizations by allowing the underlay network to provide optimum multicast delivery without requiring routers in the core of the network to maintain per-multicast-group state.

3.5. Other schemes

   There are still other mechanisms that may be used that attempt to combine some of the advantages of the above methods by offering multiple replication points, each with a limited degree of replication [EDGE-REP].  Such schemes offer a trade-off between the amount of replication at an intermediate node (router) versus performing all of the replication at the source NVE or all of the replication at a multicast service node.

4. Simultaneous use of more than one mechanism

   While the mechanisms discussed in the previous section have been discussed individually, it is possible for implementations to rely on more than one of these.  For example, the method of Section 3.1 could be used for minimizing ARP/ND, while at the same time, multicast applications may be supported by one, or a combination, of the other methods.  For small multicast groups, the methods of source NVE replication or the use of a multicast service node may be attractive, while for larger multicast groups, the use of multicast in the underlay may be preferable.

5. Other issues

5.1. Multicast-agnostic NVEs

   Some hypervisor-based NVEs do not process or recognize IGMP/MLD frames; i.e., those NVEs simply encapsulate the IGMP/MLD messages in the same way as they do for regular data frames.

   By default, the TSs' router periodically sends IGMP/MLD query messages to all the hosts in the subnet to trigger the hosts that are interested in the multicast stream to send back IGMP/MLD reports.  In order for the MSN to get the updated multicast group information, the MSN can also send an IGMP/MLD query message comprising a client-specific multicast address, encapsulated in an overlay header, to all the NVEs to which the TSs in the VN are attached.

   However, the MSN may not always be aware of the client-specific multicast addresses.  In order to perform multicast filtering, the MSN has to snoop the IGMP/MLD messages between the TSs and their corresponding routers to maintain the multicast membership.  In order for the MSN to snoop the IGMP/MLD messages between the TSs and their router, the NVA needs to configure the NVEs to send copies of the IGMP/MLD messages to the MSN in addition to the default behavior of sending them to the TSs' routers; e.g., the NVA has to instruct the NVEs to encapsulate frames with a destination address of 224.0.0.2 (the All-Routers multicast address) and deliver copies to both the TSs' router and the MSN.

   This process is similar to the "replication at the source NVE" method described in Section 3.2, except that the NVEs only replicate the message to the TSs' router and the MSN.
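   For illustration only, the sketch below shows the NVE behavior just described: when the NVA has configured an MSN address for the virtual network, control frames addressed to 224.0.0.2 are encapsulated twice, once toward the TSs' router and once toward the MSN.  The function and parameter names are hypothetical.

      # Illustrative sketch only (Section 5.1): an NVE replicating
      # IGMP/MLD control frames to both the TSs' router and the MSN,
      # as instructed by the NVA.

      ALL_ROUTERS = "224.0.0.2"

      def forward_control_frame(frame, router_nve, msn_addr, encapsulate):
          """Copy toward the TSs' router and, if the NVA has configured
          an MSN for this VN, a second copy toward the MSN."""
          copies = []
          if frame["inner_da"] == ALL_ROUTERS:
              copies.append(encapsulate(router_nve, frame))
              if msn_addr is not None:
                  copies.append(encapsulate(msn_addr, frame))
          return copies

      made = forward_control_frame(
          {"inner_da": ALL_ROUTERS, "payload": "IGMP"},
          router_nve="198.51.100.4", msn_addr="198.51.100.9",
          encapsulate=lambda outer, f: (outer, f["payload"]))
      print(made)    # two copies: one to the router's NVE, one to the MSN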
5.2. Multicast membership management for DC with VMs

   For data centers with virtualized servers, VMs can be added, deleted or moved very easily.  When VMs are added, deleted or moved, the set of NVEs to which the VMs are attached changes.

   When a VM is deleted from an NVE or a new VM is added to an NVE, the VM management system should notify the MSN to send the IGMP/MLD query messages to the relevant NVEs, so that the multicast membership can be updated promptly.  Otherwise, if there are changes in the attachment of VMs to NVEs, then for the duration of the configured default time interval that the TSs' routers use for IGMP/MLD queries, multicast data may not reach the VM(s) that moved.

6. Summary

   This document has identified various mechanisms for supporting application-specific multicast in networks that use NVO3.  It highlights the basics of each mechanism and some of the issues with them.  As solutions are developed, the protocols would need to consider the use of these mechanisms, and co-existence may be a consideration.  It also highlights some of the requirements for supporting multicast applications in an NVO3 network.

7. Security Considerations

   This draft does not introduce any new security considerations beyond what may be present in proposed solutions.

8. IANA Considerations

   This document requires no IANA actions.  RFC Editor: Please remove this section before publication.

9. References

9.1. Normative References

   [RFC7365] Lasserre, M. et al., "Framework for Data Center (DC) Network Virtualization", RFC 7365, October 2014.

   [RFC7364] Narten, T. et al., "Problem Statement: Overlays for Network Virtualization", RFC 7364, October 2014.

   [NVO3-ARCH] Narten, T. et al., "An Architecture for Overlay Networks (NVO3)", work in progress.

   [RFC3376] Cain, B. et al., "Internet Group Management Protocol, Version 3", RFC 3376, October 2002.

   [RFC6513] Rosen, E. et al., "Multicast in MPLS/BGP IP VPNs", RFC 6513, February 2012.

9.2. Informative References

   [RFC7348] Mahalingam, M. et al., "Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", RFC 7348, August 2014.

   [RFC7637] Garg, P. and Wang, Y. (Eds.), "NVGRE: Network Virtualization Using Generic Routing Encapsulation", RFC 7637, September 2015.

   [BIER-ARCH] Wijnands, IJ. (Ed.) et al., "Multicast using Bit Index Explicit Replication", work in progress.

   [STT] Davie, B. and Gross, J., "A Stateless Transport Tunneling Protocol for Network Virtualization", work in progress.

   [DC-MC] McBride, M. and Lui, H., "Multicast in the Data Center Overview", work in progress.

   [ISIS-Multicast] Yong, L. et al., "ISIS Protocol Extension for Building Distribution Trees", work in progress.

   [RFC4762] Lasserre, M. and Kompella, V. (Eds.), "Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling", RFC 4762, January 2007.

   [RFC7117] Aggarwal, R. et al., "Multicast in Virtual Private LAN Service (VPLS)", RFC 7117, February 2014.

   [LANE] The ATM Forum, "LAN Emulation over ATM", af-lane-0021.000, January 1995.

   [EDGE-REP] Marques, P. et al., "Edge Multicast Replication for BGP IP VPNs", work in progress.

   [RFC3569] Bhattacharyya, S., Ed., "An Overview of Source-Specific Multicast (SSM)", RFC 3569, July 2003.
   [LISP-Signal-Free] Moreno, V. and Farinacci, D., "Signal-Free LISP Multicast", work in progress.

10. Acknowledgments

   Thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong, Nicolas Bouliane, and Saumya Dikshit for their comments and suggestions.

   This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

   Anoop Ghanwani
   Dell
   Email: anoop@alumni.duke.edu

   Linda Dunbar
   Huawei Technologies
   5340 Legacy Drive, Suite 1750
   Plano, TX 75024, USA
   Phone: (469) 277 5840
   Email: ldunbar@huawei.com

   Mike McBride
   Huawei Technologies
   Email: mmcbride7@gmail.com

   Vinay Bannai
   Google
   Email: vbannai@gmail.com

   Ram Krishnan
   Dell
   Email: ramkri123@gmail.com