Internet Engineering Task Force                          T. Narten, Ed.
Internet-Draft                                                      IBM
Intended status: Informational                             E. Gray, Ed.
Expires: February 01, 2014                                     Ericsson
                                                               D. Black
                                                                    EMC
                                                                L. Fang
                                                             L. Kreeger
                                                                  Cisco
                                                           M. Napierala
                                                                   AT&T
                                                          July 31, 2013

        Problem Statement: Overlays for Network Virtualization
              draft-ietf-nvo3-overlay-problem-statement-04

Abstract

This document describes issues associated with providing multi-tenancy in large data center networks and how these issues may be addressed using an overlay-based network virtualization approach. A key multi-tenancy requirement is traffic isolation, so that one tenant's traffic is not visible to any other tenant. Another requirement is address space isolation, so that different tenants can use the same address space within different virtual networks. Traffic and address space isolation is achieved by assigning one or more virtual networks to each tenant, where traffic within a virtual network can only cross into another virtual network in a controlled fashion (e.g., via a configured router and/or a security gateway). Additional functionality is required to provision virtual networks, associating a virtual machine's network interface(s) with the appropriate virtual network, and maintaining that association as the virtual machine is activated, migrated and/or deactivated. Use of an overlay-based approach enables scalable deployment on large network infrastructures.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts.
The list of current Internet- 46 Drafts is at http://datatracker.ietf.org/drafts/current/. 48 Internet-Drafts are draft documents valid for a maximum of six months 49 and may be updated, replaced, or obsoleted by other documents at any 50 time. It is inappropriate to use Internet-Drafts as reference 51 material or to cite them other than as "work in progress." 53 This Internet-Draft will expire on February 01, 2014. 55 Copyright Notice 57 Copyright (c) 2013 IETF Trust and the persons identified as the 58 document authors. All rights reserved. 60 This document is subject to BCP 78 and the IETF Trust's Legal 61 Provisions Relating to IETF Documents 62 (http://trustee.ietf.org/license-info) in effect on the date of 63 publication of this document. Please review these documents 64 carefully, as they describe your rights and restrictions with respect 65 to this document. Code Components extracted from this document must 66 include Simplified BSD License text as described in Section 4.e of 67 the Trust Legal Provisions and are provided without warranty as 68 described in the Simplified BSD License. 70 Table of Contents 72 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 73 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 74 3. Problem Areas . . . . . . . . . . . . . . . . . . . . . . . . 6 75 3.1. Need For Dynamic Provisioning . . . . . . . . . . . . . . 6 76 3.2. Virtual Machine Mobility Limitations . . . . . . . . . . 6 77 3.3. Inadequate Forwarding Table Sizes . . . . . . . . . . . . 7 78 3.4. Need to Decouple Logical and Physical Configuration . . . 7 79 3.5. Need For Address Separation Between Virtual Networks . . 7 80 3.6. Need For Address Separation Between Virtual 81 Networks and Infrastructure . . . . . . . . . . . . . . . 8 82 3.7. Optimal Forwarding . . . . . . . . . . . . . . . . . . . 8 83 4. Using Network Overlays to Provide Virtual Networks . . . . . 9 84 4.1. Overview of Network Overlays . . . . . . . . . . . . . . 9 85 4.2. Communication Between Virtual and Non-virtualized 86 Networks . . . . . . . . . . . . . . . . . . . . . . . . 11 87 4.3. Communication Between Virtual Networks . . . . . . . . . 12 88 4.4. Overlay Design Characteristics . . . . . . . . . . . . . 12 89 4.5. Control Plane Overlay Networking Work Areas . . . . . . . 13 90 4.6. Data Plane Work Areas . . . . . . . . . . . . . . . . . . 14 91 5. Related IETF and IEEE Work . . . . . . . . . . . . . . . . . 15 92 5.1. BGP/MPLS IP VPNs . . . . . . . . . . . . . . . . . . . . 15 93 5.2. BGP/MPLS Ethernet VPNs . . . . . . . . . . . . . . . . . 15 94 5.3. 802.1 VLANs . . . . . . . . . . . . . . . . . . . . . . . 16 95 5.4. IEEE 802.1aq - Shortest Path Bridging . . . . . . . . . . 16 96 5.5. VDP . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 97 5.6. ARMD . . . . . . . . . . . . . . . . . . . . . . . . . . 17 98 5.7. TRILL . . . . . . . . . . . . . . . . . . . . . . . . . . 17 99 5.8. L2VPNs . . . . . . . . . . . . . . . . . . . . . . . . . 17 100 5.9. Proxy Mobile IP . . . . . . . . . . . . . . . . . . . . . 18 101 5.10. LISP . . . . . . . . . . . . . . . . . . . . . . . . . . 18 102 6. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 103 7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 18 104 8. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 19 105 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 19 106 10. Security Considerations . . . . . . . . . . . . . . . . . . . 19 107 11. References . . . . . . . . . . . . . 
. . . . . . . . . . . . 20 108 11.1. Informative References . . . . . . . . . . . . . . . . . 20 109 11.2. Normative References . . . . . . . . . . . . . . . . . . 21 110 Appendix A. Change Log . . . . . . . . . . . . . . . . . . . . . 21 111 A.1. Changes From -03 to -04 . . . . . . . . . . . . . . . . . 21 112 A.2. Changes From -02 to -03 . . . . . . . . . . . . . . . . . 22 113 A.3. Changes From -01 to -02 . . . . . . . . . . . . . . . . . 22 114 A.4. Changes From -00 to -01 . . . . . . . . . . . . . . . . . 22 115 A.5. Changes from draft-narten-nvo3-overlay-problem- 116 statement-04.txt . . . . . . . . . . . . . . . . . . . . 23 117 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 23 119 1. Introduction 121 Data Centers are increasingly being consolidated and outsourced in an 122 effort to improve the deployment time of applications and reduce 123 operational costs. This coincides with an increasing demand for 124 compute, storage, and network resources from applications. In order 125 to scale compute, storage, and network resources, physical resources 126 are being abstracted from their logical representation, in what is 127 referred to as server, storage, and network virtualization. 128 Virtualization can be implemented in various layers of computer 129 systems or networks. 131 The demand for server virtualization is increasing in data centers. 132 With server virtualization, each physical server supports multiple 133 virtual machines (VMs), each running its own operating system, 134 middleware and applications. Virtualization is a key enabler of 135 workload agility, i.e., allowing any server to host any application 136 and providing the flexibility of adding, shrinking, or moving 137 services within the physical infrastructure. Server virtualization 138 provides numerous benefits, including higher utilization, increased 139 security, reduced user downtime, reduced power usage, etc. 141 Multi-tenant data centers are taking advantage of the benefits of 142 server virtualization to provide a new kind of hosting, a virtual 143 hosted data center. Multi-tenant data centers are ones where 144 individual tenants could belong to a different company (in the case 145 of a public provider) or a different department (in the case of an 146 internal company data center). Each tenant has the expectation of a 147 level of security and privacy separating their resources from those 148 of other tenants. For example, one tenant's traffic must never be 149 exposed to another tenant, except through carefully controlled 150 interfaces, such as a security gateway (e.g., a firewall). 152 To a tenant, virtual data centers are similar to their physical 153 counterparts, consisting of end stations attached to a network, 154 complete with services such as load balancers and firewalls. But 155 unlike a physical data center, tenant systems connect to a virtual 156 network. To tenant systems, a virtual network looks like a normal 157 network (e.g., providing an ethernet or L3 service), except that the 158 only end stations connected to the virtual network are those 159 belonging to a tenant's specific virtual network. 161 A tenant is the administrative entity on whose behalf one or more 162 specific virtual network instances and their associated services 163 (whether virtual or physical) are managed. In a cloud environment, a 164 tenant would correspond to the customer that is using a particular 165 virtual network. 
However, a tenant may also find it useful to create multiple different virtual network instances. Hence, there is a one-to-many mapping between tenants and virtual network instances. A single tenant may operate multiple individual virtual network instances, each associated with a different service.

How a virtual network is implemented does not generally matter to the tenant; what matters is that the service provided (L2 or L3) has the right semantics, performance, etc. It could be implemented via a pure routed network, a pure bridged network, or a combination of bridged and routed networks. A key requirement is that each individual virtual network instance be isolated from other virtual network instances, with traffic crossing from one virtual network to another only when allowed by policy.

For data center virtualization, two key issues must be addressed. First, address space separation between tenants must be supported. Second, it must be possible to place (and migrate) VMs anywhere in the data center, without restricting VM addressing to match the subnet boundaries of the underlying data center network.

This document outlines the problems encountered in scaling the number of isolated virtual networks in a data center. Furthermore, it presents issues associated with managing those virtual networks, for operations such as virtual network creation/deletion and end-node membership changes. Finally, it makes the case that an overlay-based approach has a number of advantages over traditional, non-overlay approaches. The purpose of this document is to identify the set of issues that any solution has to address in building multi-tenant data centers, so that standardized, interoperable implementations can be built.

This document is the problem statement for the "Network Virtualization over L3" (NVO3) Working Group. NVO3 is focused on the construction of overlay networks that operate over an IP (L3) underlay transport network. NVO3 expects to provide both L2 service and IP service to end devices (though perhaps as two different solutions). Some deployments require an L2 service, others an L3 service, and some may require both.

Section 2 provides terminology, Section 3 describes the problem areas in detail, Section 4 describes overlay networks in more detail, Section 5 reviews related work, and Section 6 closes with a summary.

2. Terminology

This document uses the same terminology as [I-D.ietf-nvo3-framework]. In addition, this document uses the following terms.

Overlay Network: A Virtual Network in which the separation of tenants is hidden from the underlying physical infrastructure. That is, the underlying transport network does not need to know about tenancy separation to correctly forward traffic. IEEE 802.1 Provider Backbone Bridging (PBB) [IEEE-802.1Q] is an example of an L2 Overlay Network. PBB uses MAC-in-MAC encapsulation, and the underlying transport network forwards traffic using only the B-MAC and B-VID in the outer header. The underlay transport network is unaware of the tenancy separation provided by, for example, a 24-bit I-SID.
228 C-VLAN: This document refers to C-VLANs as implemented by many 229 routers, i.e., an L2 virtual network identified by a C-VID. An 230 end station (e.g., a VM) in this context that is part of an L2 231 virtual network will effectively belong to a C-VLAN. Within an 232 IEEE 802.1Q-2011 network, other tags may be used as well, but such 233 usage is generally not visible to the end station. Section 5.3 234 provides more details on VLANs defined by [IEEE-802.1Q]. 236 This document uses the phrase "virtual network instance" with its 237 ordinary meaning to represent an instance of a virtual network. Its 238 usage may differ from the VNI acronym defined in the framework 239 document [I-D.ietf-nvo3-framework]. The VNI acronym is not used in 240 this document. 242 3. Problem Areas 244 The following subsections describe aspects of multi-tenant data 245 center networking that pose problems for network infrastructure. 246 Different problem aspects may arise based on the network architecture 247 and scale. 249 3.1. Need For Dynamic Provisioning 251 Some service providers offer services to multiple customers whereby 252 services are dynamic and the resources assigned to support them must 253 be able to change quickly as demand changes. In current systems, it 254 can be difficult to provision resources for individual tenants (e.g., 255 QoS) in such a way that provisioned properties migrate automatically 256 when services are dynamically moved around within the data center to 257 optimize workloads. 259 3.2. Virtual Machine Mobility Limitations 261 A key benefit of server virtualization is virtual machine (VM) 262 mobility. A VM can be migrated from one server to another, live, 263 i.e., while continuing to run and without needing to shut it down and 264 restart it at the new location. A key requirement for live migration 265 is that a VM retain critical network state at its new location, 266 including its IP and MAC address(es). Preservation of MAC addresses 267 may be necessary, for example, when software licenses are bound to 268 MAC addresses. More generally, any change in the VM's MAC addresses 269 resulting from a move would be visible to the VM and thus potentially 270 result in unexpected disruptions. Retaining IP addresses after a 271 move is necessary to prevent existing transport connections (e.g., 272 TCP) from breaking and needing to be restarted. 274 In data center networks, servers are typically assigned IP addresses 275 based on their physical location, for example based on the Top of 276 Rack (ToR) switch for the server rack or the C-VLAN configured to the 277 server. Servers can only move to other locations within the same IP 278 subnet. This constraint is not problematic for physical servers, 279 which move infrequently, but it restricts the placement and movement 280 of VMs within the data center. Any solution for a scalable multi- 281 tenant data center must allow a VM to be placed (or moved) anywhere 282 within the data center, without being constrained by the subnet 283 boundary concerns of the host servers. 285 3.3. Inadequate Forwarding Table Sizes 287 Today's virtualized environments place additional demands on the 288 forwarding tables of forwarding nodes in the physical infrastructure. 289 The core problem is that location independence results in specific 290 end state information being propagated into the forwarding system 291 (e.g., /32 host routes in L3 networks, or MAC addresses in L2 292 networks). 
In L2 networks, for instance, instead of just one address per server, the network infrastructure may have to learn addresses of the individual VMs (which could number in the hundreds per server). This increases the demand on a forwarding node's table capacity compared to non-virtualized environments.

3.4. Need to Decouple Logical and Physical Configuration

Data center operators must be able to achieve high utilization of server and network capacity. For efficient and flexible allocation, operators should be able to spread a virtual network instance across servers in any rack in the data center. It should also be possible to migrate compute workloads to any server anywhere in the network while retaining the workload's addresses.

In networks of many types (e.g., IP subnets, MPLS VPNs, VLANs, etc.), moving servers elsewhere in the network may require expanding the scope of a portion of the network (e.g., subnet, VPN, VLAN, etc.) beyond its original boundaries. While this can be done, it requires potentially complex network configuration changes and may (in some cases, e.g., a VLAN or L2VPN) conflict with the desire to bound the size of broadcast domains. In addition, when VMs migrate, the physical network (e.g., access lists) may need to be reconfigured, which can be time consuming and error prone.

An important use case is cross-pod expansion. A pod typically consists of one or more racks of servers with associated network and storage connectivity. A tenant's virtual network may start off on a pod and, due to expansion, require servers/VMs on other pods, especially when other pods are not fully utilizing all of their resources. This use case requires that virtual networks span multiple pods in order to provide connectivity to all of a tenant's servers/VMs. Such expansion can be difficult to achieve when tenant addressing is tied to the addressing used by the underlay network, or when the expansion requires that the scope of the underlying C-VLAN expand beyond its original pod boundary.

3.5. Need For Address Separation Between Virtual Networks

Individual tenants need control over the addresses they use within a virtual network, but it can be problematic when different tenants want to use the same addresses, or even when the same tenant wants to reuse the same addresses in different virtual networks. Consequently, virtual networks must allow tenants to use whatever addresses they want without concern for what addresses are being used by other tenants or other virtual networks.

3.6. Need For Address Separation Between Virtual Networks and Infrastructure

As in the previous case, a tenant needs to be able to use whatever addresses it wants in a virtual network, independent of what addresses the underlying data center network is using. Tenants (and the underlay infrastructure provider) should be able to use whatever addresses make sense for them, without having to worry about address collisions between addresses used by tenants and those used by the underlay data center network.

3.7. Optimal Forwarding

Another problem area relates to the optimal forwarding of traffic between peers that are not connected to the same virtual network.
Such forwarding happens when a host on a virtual network communicates with a host not on any virtual network (e.g., an Internet host), as well as when a host on a virtual network communicates with a host on a different virtual network. A virtual network may have two (or more) gateways for forwarding traffic onto and off of the virtual network, and the optimal choice of which gateway to use may depend on the set of available paths between the communicating peers. The set of available gateways may not be equally "close" to a given destination. The issue arises both when a VM is initially instantiated on a virtual network and when a VM migrates or is moved to a different location. After a migration, for instance, a VM's best-choice gateway for such traffic may change, i.e., the VM may get better service by switching to the "closer" gateway, and this may improve the utilization of network resources.

IP implementations in network endpoints typically do not distinguish between multiple routers on the same subnet; there may only be a single default gateway in use, and any use of multiple routers usually considers all of them to be one hop away. Routing protocol functionality is constrained by the requirement to cope with these endpoint limitations; for example, VRRP has one router serve as the master to handle all outbound traffic. This problem can be particularly acute when the virtual network spans multiple data centers, as a VM is likely to receive significantly better service when forwarding external traffic through a local router than when using a router at a remote data center.

The optimal forwarding problem applies to both outbound and inbound traffic. For outbound traffic, the choice of outbound router determines the path of outgoing traffic from the VM, which may be sub-optimal after a VM move. For inbound traffic, the location of the VM within its IP subnet is not visible to the routers beyond the virtual network. Thus, the routing infrastructure has no information as to which of the externally visible gateways leading into the virtual network would be the better choice for reaching a particular VM.

The issue is further complicated when middleboxes (e.g., load balancers, firewalls, etc.) must be traversed. Middleboxes may have session state that must be preserved for ongoing communication, and traffic must continue to flow through the middlebox, regardless of which router is "closest".

4. Using Network Overlays to Provide Virtual Networks

Virtual Networks are used to isolate a tenant's traffic from that of other tenants (or even traffic within the same tenant network that requires isolation). There are two main characteristics of virtual networks:

1. Virtual networks isolate the address space used in one virtual network from the address space used by another virtual network. The same network addresses may be used in different virtual networks at the same time. In addition, the address space used by a virtual network is independent of that used by the underlying physical network.

2. Virtual networks limit the scope of packets sent on the virtual network.
Packets sent by Tenant Systems attached to a virtual network are delivered as expected to other Tenant Systems on that virtual network and may exit a virtual network only through controlled exit points, such as a security gateway. Likewise, packets sourced from outside of the virtual network may enter the virtual network only through controlled entry points, such as a security gateway.

4.1. Overview of Network Overlays

To address the problems described in Section 3, a network overlay approach can be used.

The idea behind an overlay is quite straightforward. Each virtual network instance is implemented as an overlay. The original packet is encapsulated by the first-hop network device, called a Network Virtualization Edge (NVE), and tunneled to a remote NVE. The encapsulation identifies the device that will perform the decapsulation (i.e., the egress NVE for the tunneled packet) before delivering the original packet to the endpoint. The rest of the network forwards the packet based on the encapsulation header and can be oblivious to the payload that is carried inside.

Overlays are based on what is commonly known as a "map-and-encap" architecture. When processing and forwarding packets, three distinct and logically separable steps take place:

1. The first-hop overlay device implements a mapping operation that determines where the encapsulated packet should be sent to reach its intended destination VM. Specifically, the mapping function maps the destination address (either L2 or L3) of a packet received from a VM into the corresponding destination address of the egress NVE device. The destination address will be the underlay address of the NVE device doing the decapsulation and is an IP address.

2. Once the mapping has been determined, the ingress overlay NVE device encapsulates the received packet within an overlay header.

3. The final step is to actually forward the (now encapsulated) packet to its destination. The packet is forwarded by the underlay (i.e., the IP network) based entirely on its outer address. Upon receipt at the destination, the egress overlay NVE device decapsulates the original packet and delivers it to the intended recipient VM.

Each of the above steps is logically distinct, though an implementation might combine them for efficiency or other reasons. It should be noted that in L3 BGP/VPN terminology, the above steps are commonly known as "forwarding" or "virtual forwarding".

The first-hop NVE device can be a traditional switch or router, or the virtual switch residing inside a hypervisor. Furthermore, the endpoint can be a VM, or it can be a physical server. Examples of architectures based on network overlays include BGP/MPLS VPNs [RFC4364], TRILL [RFC6325], LISP [RFC6830], and Shortest Path Bridging (SPB) [SPB].

In the data plane, an overlay header provides a place to carry either the virtual network identifier or an identifier that is locally significant to the edge device. In both cases, the identifier in the overlay header specifies which virtual network the data packet belongs to. Since both routed and bridged semantics can be supported by a virtual data center, the original packet carried within the overlay header can be an Ethernet frame or just the IP packet.
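As a purely illustrative sketch of the three steps above (and not part of any NVO3 specification), the following Python fragment shows a toy map-and-encap pipeline: a mapping lookup at the ingress NVE, encapsulation within an overlay header that carries a virtual network identifier, underlay forwarding on the outer address only, and decapsulation at the egress NVE. All names, field layouts, and addresses (NetworkVirtualizationEdge, OverlayPacket, the example VN identifier 5001, etc.) are hypothetical and chosen only for illustration.

   # Toy map-and-encap sketch; hypothetical names, not a real encapsulation.
   from dataclasses import dataclass

   @dataclass
   class OverlayPacket:
       vn_id: int        # virtual network identifier in the overlay header
       egress_nve: str   # underlay IP address of the decapsulating NVE
       payload: bytes    # original tenant frame/packet, carried opaquely

   class NetworkVirtualizationEdge:
       def __init__(self, underlay_ip):
           self.underlay_ip = underlay_ip
           # Step 1 state: (vn_id, tenant destination address) -> egress NVE IP.
           self.mapping_table = {}

       def ingress(self, vn_id, tenant_dst, payload):
           # Step 1: mapping lookup.
           egress_nve = self.mapping_table[(vn_id, tenant_dst)]
           # Step 2: encapsulate the original packet within an overlay header.
           return OverlayPacket(vn_id, egress_nve, payload)

       def egress(self, pkt):
           # Step 3 (receive side): decapsulate and deliver the original
           # payload on the identified virtual network.
           return pkt.vn_id, pkt.payload

   def underlay_forward(pkt):
       # Step 3 (transit): the underlay forwards only on the outer (egress
       # NVE) address and never inspects the tenant payload or VN identifier.
       return pkt.egress_nve

   if __name__ == "__main__":
       nve_a = NetworkVirtualizationEdge("192.0.2.1")
       nve_b = NetworkVirtualizationEdge("192.0.2.2")
       # Tenant MAC 00:00:5e:00:53:01 in virtual network 5001 sits behind NVE B.
       nve_a.mapping_table[(5001, "00:00:5e:00:53:01")] = nve_b.underlay_ip

       encapsulated = nve_a.ingress(5001, "00:00:5e:00:53:01", b"tenant frame")
       assert underlay_forward(encapsulated) == "192.0.2.2"
       print(nve_b.egress(encapsulated))   # (5001, b'tenant frame')

The sketch keeps the three steps as separate functions only to mirror the description above; as noted, an implementation might combine them.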
A key aspect of overlays is the decoupling of the "virtual" MAC and/or IP addresses used by VMs from the physical network infrastructure and the infrastructure IP addresses used by the data center. If a VM changes location, the overlay edge devices simply update their mapping tables to reflect the new location of the VM within the data center's infrastructure space. Because an overlay network is used, a VM can now be located anywhere in the data center that the overlay reaches, without regard to traditional constraints imposed by the underlay network, such as C-VLAN scope or IP subnet scope.

Multi-tenancy is supported by isolating the traffic of one virtual network instance from traffic of another. Traffic from one virtual network instance cannot be delivered to another instance without (conceptually) exiting the instance and entering the other instance via an entity (e.g., a gateway) that has connectivity to both virtual network instances. Without the existence of a gateway entity, tenant traffic remains isolated within each individual virtual network instance.

Overlays are designed to allow a set of VMs to be placed within a single virtual network instance, whether that virtual network provides a bridged network or a routed network.

4.2. Communication Between Virtual and Non-virtualized Networks

Not all communication will be between devices connected to virtualized networks. Devices using overlays will continue to access devices and make use of services on non-virtualized networks, whether in the data center, the public Internet, or at remote/branch campuses. Any virtual network solution must be capable of interoperating with existing routers, VPN services, load balancers, intrusion detection services, firewalls, etc. on external networks.

Communication between devices attached to a virtual network and devices connected to non-virtualized networks is handled architecturally by having specialized gateway devices that receive packets from a virtualized network, decapsulate them, process them as regular (i.e., non-virtualized) traffic, and finally forward them on to their appropriate destination (and vice versa).

A wide range of implementation approaches is possible. Overlay gateway functionality could be combined with other network functionality into a network device that implements the overlay functionality and then forwards traffic between other internal components that implement functionality such as full router service, load balancing, firewall support, VPN gateway, etc.

4.3. Communication Between Virtual Networks

Communication between devices on different virtual networks is handled architecturally by adding specialized interconnect functionality among the otherwise isolated virtual networks. For a virtual network providing an L2 service, such interconnect functionality could be IP forwarding configured as part of the "default gateway" for each virtual network. For a virtual network providing L3 service, the interconnect functionality could be IP forwarding configured as part of routing between IP subnets, or it could be based on configured inter-virtual-network traffic policies.
In both cases, the implementation of the interconnect functionality could be distributed across the NVEs and could be combined with other network functionality (e.g., load balancing, firewall support) that is applied to traffic forwarded between virtual networks.

4.4. Overlay Design Characteristics

Below are some of the characteristics of environments that must be taken into account by the overlay technology.

1. Highly distributed systems: The overlay should work in an environment where there could be many thousands of access switches (e.g., residing within the hypervisors) and many more Tenant Systems (e.g., VMs) connected to them. This leads to a distributed mapping system that puts a low overhead on the overlay tunnel endpoints.

2. Many highly distributed virtual networks with sparse membership: Each virtual network could be highly dispersed inside the data center. Also, along with the expectation of many virtual networks, the number of end systems connected to any one virtual network is expected to be relatively low; therefore, the percentage of NVEs participating in any given virtual network would also be expected to be low. For this reason, efficient delivery of multi-destination traffic within a virtual network instance should be taken into consideration.

3. Highly dynamic Tenant Systems: Tenant Systems connected to virtual networks can be very dynamic, both in terms of creation/deletion/power-on/off and in terms of mobility from one access device to another.

4. Incremental deployability, without necessarily requiring a major upgrade of the entire network: The first-hop device (or end system) that adds and removes the overlay header may require new software and may require new hardware (e.g., for improved performance). But the rest of the network should not need to change just to enable the use of overlays.

5. Work with existing data center network deployments without requiring major changes in operational or other practices: For example, some data centers have not enabled multicast beyond link-local scope. Overlays should be capable of leveraging underlay multicast support where appropriate, but should not require its enablement in order to use an overlay solution.

6. Network infrastructure administered by a single administrative domain: This is consistent with operation within a data center, and not across the Internet.

4.5. Control Plane Overlay Networking Work Areas

There are three specific and separate potential work areas in the control plane protocols needed to realize an overlay solution. The areas correspond to different possible "on-the-wire" protocols, where distinct entities interact with each other.

One area of work concerns the address dissemination protocol an NVE uses to build and maintain the mapping tables it uses to deliver encapsulated packets to their proper destination. One approach is to build mapping tables entirely via learning (as is done in 802.1 networks). Another approach is to use a specialized control plane protocol. While there are some advantages to using or leveraging an existing protocol for maintaining mapping tables, the fact that large numbers of NVEs will likely reside in hypervisors places constraints on the resources (CPU and memory) that can be dedicated to such functions.
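As a purely illustrative sketch (not any actual NVO3 protocol), the following Python fragment contrasts the two approaches just described: an NVE-local mapping cache that can be populated either by data-plane learning or by querying a mapping service, where the mapping service stands in for the back-end authority introduced in the next paragraphs. All class and function names (MappingDirectory, NveMappingTable, learn, resolve) and the example addresses are hypothetical.

   # Toy sketch of NVE mapping-table population: learning vs. directory query.
   class MappingDirectory:
       """Stand-in for a control-plane mapping service; here just a dict."""

       def __init__(self):
           self.entries = {}   # (vn_id, tenant_addr) -> egress NVE underlay IP

       def register(self, vn_id, tenant_addr, nve_ip):
           self.entries[(vn_id, tenant_addr)] = nve_ip

       def withdraw(self, vn_id, tenant_addr):
           # E.g., when a VM detaches, its mapping is no longer valid.
           self.entries.pop((vn_id, tenant_addr), None)

       def resolve(self, vn_id, tenant_addr):
           return self.entries.get((vn_id, tenant_addr))

   class NveMappingTable:
       def __init__(self, directory=None):
           self.cache = {}           # local cache, deliberately kept small
           self.directory = directory

       def learn(self, vn_id, tenant_src, remote_nve_ip):
           # Data-plane learning, as in 802.1 networks: remember which NVE a
           # tenant source address was seen behind.
           self.cache[(vn_id, tenant_src)] = remote_nve_ip

       def lookup(self, vn_id, tenant_dst):
           # Prefer the local cache; otherwise query the mapping service,
           # keeping per-hypervisor CPU and memory overhead low.
           key = (vn_id, tenant_dst)
           if key not in self.cache and self.directory is not None:
               resolved = self.directory.resolve(vn_id, tenant_dst)
               if resolved is not None:
                   self.cache[key] = resolved
           return self.cache.get(key)

   if __name__ == "__main__":
       directory = MappingDirectory()
       directory.register(5001, "00:00:5e:00:53:01", "192.0.2.2")

       table = NveMappingTable(directory)
       table.learn(5001, "00:00:5e:00:53:07", "192.0.2.9")
       print(table.lookup(5001, "00:00:5e:00:53:01"))   # 192.0.2.2 (queried)
       print(table.lookup(5001, "00:00:5e:00:53:07"))   # 192.0.2.9 (learned)

Either population strategy yields the same forwarding behavior at the NVE; the trade-off is where the mapping state is authoritative and how much work each NVE must do to keep it current.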
From an architectural perspective, one can view the address mapping dissemination problem as having two distinct and separable components. The first component consists of a back-end Network Virtualization Authority (NVA) that is responsible for distributing and maintaining the mapping information for the entire overlay system. For this document, we use the term NVA to refer to an entity that supplies answers, without regard to how it knows the answers it is providing. The second component consists of the on-the-wire protocols an NVE uses when interacting with the NVA.

The back-end NVA could provide high performance, high resiliency, failover, etc., and could be implemented in significantly different ways. For example, one model uses a traditional, centralized "directory-based" database, using replicated instances for reliability and failover. A second model involves using and possibly extending an existing routing protocol (e.g., BGP, IS-IS, etc.). To support different architectural models, it is useful to have one standard protocol for the NVE-NVA interaction while allowing different protocols and architectural approaches for the NVA itself. Separating the two allows NVEs to transparently interact with different types of NVAs, i.e., either of the two architectural models described above. Having separate protocols could also allow for a simplified NVE that only interacts with the NVA for the mapping table entries it needs, and it allows the NVA (and its associated protocols) to evolve independently over time with minimal impact on the NVEs.

A third work area considers the attachment and detachment of VMs (or Tenant Systems [I-D.ietf-nvo3-framework], more generally) from a specific virtual network instance. When a VM attaches, the NVE associates the VM with a specific overlay for the purposes of tunneling traffic sourced from or destined to the VM. When a VM disconnects, the NVE should notify the NVA that the Tenant System-to-NVE address mapping is no longer valid. In addition, if this VM was the last remaining member of the virtual network, then the NVE can also terminate any tunnels used to deliver tenant multi-destination packets within the VN to the NVE. In the case where an NVE and hypervisor reside on separate physical devices that are separated by an access network, a standardized protocol may be needed.

In summary, there are three areas of potential work. The first area concerns the implementation of the NVA function itself and any protocols it needs (e.g., if implemented in a distributed fashion). A second area concerns the interaction between the NVA and NVEs. The third work area concerns protocols associated with attaching a VM to and detaching it from a particular virtual network instance. All three work areas are important to the development of scalable, interoperable solutions.

4.6. Data Plane Work Areas

The data plane carries encapsulated packets for Tenant Systems. The data plane encapsulation header carries a VN Context identifier [I-D.ietf-nvo3-framework] for the virtual network to which the data packet belongs. Numerous encapsulation or tunneling protocols already exist that can be leveraged. In the absence of strong and compelling justification, it would not seem necessary or helpful to develop yet another encapsulation format just for NVO3.
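To make the notion of a VN Context identifier in the data plane concrete, the short Python fragment below packs and unpacks a hypothetical fixed-size encapsulation header carrying a 24-bit VN Context value. The field layout is invented purely for illustration; it is not the header format of VXLAN, NVGRE, or any other existing or proposed NVO3 encapsulation.

   # Hypothetical 8-byte overlay header:
   # flags(8) | reserved(24) | vn_ctx(24) | reserved(8)
   import struct

   HEADER_FMT = "!BBHI"   # 1 + 1 + 2 + 4 = 8 bytes on the wire

   def pack_header(vn_context, flags=0x08):
       if not 0 <= vn_context < 2 ** 24:
           raise ValueError("VN Context must fit in 24 bits")
       # The 24-bit VN Context occupies the upper three bytes of the last word.
       return struct.pack(HEADER_FMT, flags, 0, 0, vn_context << 8)

   def unpack_header(header):
       flags, _, _, last_word = struct.unpack(HEADER_FMT, header)
       return flags, last_word >> 8

   if __name__ == "__main__":
       hdr = pack_header(vn_context=5001)
       print(len(hdr), unpack_header(hdr))   # 8 (8, 5001)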
5. Related IETF and IEEE Work

The following subsections discuss related IETF and IEEE work. The items are not meant to provide complete coverage of all IETF and IEEE data-center-related work, nor should the descriptions be considered comprehensive. Each area aims to address particular limitations of today's data center networks. In all areas, scaling is a common theme, as are multi-tenancy and VM mobility. Comparing and evaluating the results and progress of each work area listed is out of scope for this document; the intent of this section is to provide a reference for interested readers. Note that NVO3 is scoped to running over an IP/L3 underlay network.

5.1. BGP/MPLS IP VPNs

BGP/MPLS IP VPNs [RFC4364] support multi-tenancy, VPN traffic isolation, address overlapping, and address separation between tenants and network infrastructure. The BGP/MPLS control plane is used to distribute the VPN labels and tenant IP addresses that identify the tenants (or, to be more specific, the particular VPN/virtual network). Deployment of enterprise L3 VPNs has been shown to scale to thousands of VPNs and millions of VPN prefixes. BGP/MPLS IP VPNs are currently deployed in some large enterprise data centers. A potential limitation for deploying BGP/MPLS IP VPNs in data center environments is the practicality of using BGP in the data center, especially reaching into the servers or hypervisors. There may be workforce skill-set issues, equipment support issues, and potential new scaling challenges. A combination of BGP and lighter-weight IP signaling protocols, e.g., XMPP, has been proposed to extend these solutions into the data center environment [I-D.ietf-l3vpn-end-system], while taking advantage of the built-in VPN features and rich policy support; this is especially useful for inter-tenant connectivity.

5.2. BGP/MPLS Ethernet VPNs

Ethernet Virtual Private Networks (E-VPNs) [I-D.ietf-l2vpn-evpn] provide an emulated L2 service in which each tenant has its own Ethernet network over a common IP or MPLS infrastructure. A BGP/MPLS control plane is used to distribute the tenant MAC addresses and the MPLS labels that identify the tenants and tenant MAC addresses. Within the BGP/MPLS control plane, a 32-bit Ethernet Tag is used to identify the broadcast domains (VLANs) associated with a given L2 VLAN service instance, and these Ethernet Tags are mapped to VLAN IDs understood by the tenant at the service edges. This means that any customer-site VLAN-based limitation is associated with an individual tenant service edge, enabling a much higher level of scalability. Interconnection between tenants is also allowed in a controlled fashion.

VM Mobility [I-D.raggarwa-data-center-mobility] introduces the concept of a combined L2/L3 VPN service in order to support the mobility of individual Virtual Machines (VMs) between data centers connected over a common IP or MPLS infrastructure.

5.3. 802.1 VLANs

VLANs are a well-understood construct in the networking industry, providing an L2 service via a physical network in which tenant forwarding information is part of the physical network infrastructure.
A VLAN is an L2 bridging construct that provides the semantics of virtual networks mentioned above: a MAC address can be kept unique within a VLAN, but it is not necessarily unique across VLANs. Traffic scoped within a VLAN (including broadcast and multicast traffic) can be kept within the VLAN it originates from. Traffic forwarded from one VLAN to another typically involves router (L3) processing. The forwarding table lookup operation may be keyed on {VLAN, MAC address} tuples.

VLANs are a pure L2 bridging construct, and VLAN identifiers are carried along with data frames to allow each forwarding point to know what VLAN the frame belongs to. Various types of VLANs are available today and can be used for network virtualization, even in combination. The C-VLAN, S-VLAN, and B-VLAN IDs [IEEE-802.1Q] are 12 bits. The 24-bit I-SID [SPB] allows the support of more than 16 million virtual networks.

5.4. IEEE 802.1aq - Shortest Path Bridging

Shortest Path Bridging (SPB) [SPB] is an IS-IS-based overlay that operates over L2 Ethernets. SPB supports multi-pathing and addresses a number of shortcomings in the original Ethernet Spanning Tree Protocol. Shortest Path Bridging MAC (SPBM) uses IEEE 802.1ah PBB (MAC-in-MAC) encapsulation and supports a 24-bit I-SID, which can be used to identify virtual network instances. SPBM provides multi-pathing and supports easy virtual network creation or update.

SPBM extends IS-IS in order to perform link-state routing among core SPBM nodes, obviating the need for learning for communication among core SPBM nodes. Learning is still used to build and maintain the mapping tables that edge nodes use to encapsulate Tenant System traffic for transport across the SPBM core.

SPB is compatible with other 802.1 standards and thus allows other features to be leveraged, e.g., the VSI Discovery and Configuration Protocol (VDP), OAM, or scalability solutions.

5.5. VDP

VDP is the Virtual Station Interface (VSI) Discovery and Configuration Protocol specified by IEEE P802.1Qbg [IEEE-802.1Qbg]. VDP is a protocol that supports the association of a VSI with a port. VDP is run between the end system (e.g., a hypervisor) and its adjacent switch, i.e., the device on the edge of the network. VDP is used, for example, to communicate to the switch that a Virtual Machine (Virtual Station) is moving; that is, it is designed to support VM migration.

5.6. ARMD

The Address Resolution for Massive numbers of hosts in the Data center (ARMD) WG examined data center scaling issues with a focus on address resolution and developed a problem statement document [RFC6820]. While an overlay-based approach may address some of the "pain points" that were raised in ARMD (e.g., better support for multi-tenancy), analysis will be needed to understand the scaling tradeoffs of an overlay-based approach compared with existing approaches. On the other hand, existing IP-based approaches such as proxy ARP may help mitigate some concerns.

5.7. TRILL

TRILL is a network protocol that provides an Ethernet L2 service to end systems and is designed to operate over any L2 link type. TRILL establishes forwarding paths using IS-IS routing and encapsulates traffic within its own TRILL header. TRILL, as originally defined, supports only the standard (and limited) 12-bit C-VID identifier.
Work to extend TRILL to support more than 4094 VLANs has recently been completed and is defined in [I-D.ietf-trill-fine-labeling].

5.8. L2VPNs

The IETF has specified a number of approaches for connecting L2 domains together as part of the L2VPN Working Group. That group, however, has historically focused on provider-provisioned L2 VPNs, where the service provider participates in the management and provisioning of the VPN. In addition, much of the target environment for such deployments involves carrying L2 traffic over WANs. Overlay approaches as discussed in this document are intended to be used within data centers, where the overlay network is managed by the data center operator rather than by an outside party. While overlays can run across the Internet as well, they will extend well into the data center itself (e.g., up to and including hypervisors) and include large numbers of machines within the data center.

Other L2VPN approaches, such as L2TP [RFC3931], require significant tunnel state at the encapsulating and decapsulating endpoints. Overlays require less tunnel state than other approaches, which is important to allow overlays to scale to hundreds of thousands of endpoints. It is assumed that smaller switches (i.e., virtual switches in hypervisors or the adjacent devices to which VMs connect) will be part of the overlay network and will be responsible for encapsulating and decapsulating packets.

5.9. Proxy Mobile IP

Proxy Mobile IP [RFC5213] [RFC5844] makes use of the GRE Key Field [RFC5845] [RFC6245], but not in a way that supports multi-tenancy.

5.10. LISP

LISP [RFC6830] essentially provides an IP-over-IP overlay in which the inner addresses are end-station identifiers and the outer IP addresses represent the location of the end station within the core IP network topology. The LISP overlay header includes a 24-bit Instance ID used to support overlapping inner IP addresses.

6. Summary

This document has argued that network virtualization using overlays addresses a number of issues being faced as data centers scale in size. In addition, careful study of current data center problems is needed for development of proper requirements and standard solutions.

This document identified three potential control protocol work areas. The first involves a back-end Network Virtualization Authority and how it learns and distributes the mapping information NVEs use when processing tenant traffic. The second involves the protocol an NVE would use to communicate with the back-end NVA to obtain the mapping information. The third concerns the interactions that take place when a VM attaches to or detaches from a specific virtual network instance.

There are a number of approaches that provide some, if not all, of the desired semantics of virtual networks. Each approach needs to be analyzed in detail to assess how well it satisfies the requirements.

7. Acknowledgments

Helpful comments and improvements to this document have come from Lou Berger, John Drake, Ilango Ganga, Ariel Hendel, Vinit Jain, Petr Lapukhov, Thomas Morin, Benson Schliesser, Qin Wu, Xiaohu Xu, Lucy Yong, and many others on the NVO3 mailing list.

Special thanks to Janos Farkas for his persistence and numerous detailed comments related to the lack of precision in the text relating to IEEE 802.1 technologies.
8. Contributors

Dinesh Dutt and Murari Sridharan were original co-authors of the Internet-Draft that led to the BoF that formed the NVO3 WG. That original draft eventually became the basis for the WG Problem Statement document.

9. IANA Considerations

This memo includes no request to IANA.

10. Security Considerations

Because this document describes the problem space associated with the need for virtualization of networks in complex, large-scale, data-center networks, it does not itself introduce any security risks. However, it is clear that security concerns need to be a consideration of any solutions proposed to address this problem space.

Solutions will need to address both data plane and control plane security concerns.

In the data plane, isolation of virtual network traffic from other virtual networks is a primary concern; for NVO3, this isolation may be based on VN identifiers that are not involved in underlay network packet forwarding between overlay edges (NVEs). This reduces the underlay network's role in isolating virtual networks by comparison to approaches where VN identifiers are involved in packet forwarding (e.g., 802.1 VLANs; see Section 5.3).

In addition to isolation, assurances against spoofing, snooping, transit modification, and denial of service are examples of other important data plane considerations. Some limited environments may even require confidentiality.

In the control plane, the primary security concern is ensuring that an unauthorized party does not compromise the control plane protocol in ways that improperly impact the data plane. Some environments may also be concerned about confidentiality of the control plane.

More generally, denial of service is also a consideration. For example, a tenant on one virtual network could consume excessive network resources in a way that degrades services for other tenants on other virtual networks.

11. References

11.1. Informative References

[I-D.ietf-l2vpn-evpn] Sajassi, A., Aggarwal, R., Henderickx, W., Balus, F., Isaac, A., and J. Uttaro, "BGP MPLS Based Ethernet VPN", draft-ietf-l2vpn-evpn-04 (work in progress), July 2013.

[I-D.ietf-l3vpn-end-system] Marques, P., Fang, L., Pan, P., Shukla, A., Napierala, M., and N. Bitar, "BGP-signaled end-system IP/VPNs", draft-ietf-l3vpn-end-system-01 (work in progress), April 2013.

[I-D.ietf-trill-fine-labeling] Eastlake, D., Zhang, M., Agarwal, P., Perlman, R., and D. Dutt, "TRILL (Transparent Interconnection of Lots of Links): Fine-Grained Labeling", draft-ietf-trill-fine-labeling-07 (work in progress), May 2013.

[I-D.raggarwa-data-center-mobility] Aggarwal, R., Rekhter, Y., Henderickx, W., Shekhar, R., Fang, L., and A. Sajassi, "Data Center Mobility based on E-VPN, BGP/MPLS IP VPN, IP Routing and NHRP", draft-raggarwa-data-center-mobility-05 (work in progress), June 2013.

[IEEE-802.1Q] IEEE Std 802.1Q-2011, "IEEE standard for local and metropolitan area networks: Media access control (MAC) bridges and virtual bridged local area networks", August 2011.

[IEEE-802.1Qbg] IEEE Std 802.1Qbg-2012, "IEEE standard for local and metropolitan area networks: Media access control (MAC) bridges and virtual bridged local area networks -- Amendment 21: Edge virtual bridging", July 2012.
944 [RFC3931] Lau, J., Townsley, M., and I. Goyret, "Layer Two Tunneling 945 Protocol - Version 3 (L2TPv3)", RFC 3931, March 2005. 947 [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private 948 Networks (VPNs)", RFC 4364, February 2006. 950 [RFC5213] Gundavelli, S., Leung, K., Devarapalli, V., Chowdhury, K., 951 and B. Patil, "Proxy Mobile IPv6", RFC 5213, August 2008. 953 [RFC5844] Wakikawa, R. and S. Gundavelli, "IPv4 Support for Proxy 954 Mobile IPv6", RFC 5844, May 2010. 956 [RFC5845] Muhanna, A., Khalil, M., Gundavelli, S., and K. Leung, 957 "Generic Routing Encapsulation (GRE) Key Option for Proxy 958 Mobile IPv6", RFC 5845, June 2010. 960 [RFC6245] Yegani, P., Leung, K., Lior, A., Chowdhury, K., and J. 961 Navali, "Generic Routing Encapsulation (GRE) Key Extension 962 for Mobile IPv4", RFC 6245, May 2011. 964 [RFC6325] Perlman, R., Eastlake, D., Dutt, D., Gai, S., and A. 965 Ghanwani, "Routing Bridges (RBridges): Base Protocol 966 Specification", RFC 6325, July 2011. 968 [RFC6820] Narten, T., Karir, M., and I. Foo, "Address Resolution 969 Problems in Large Data Center Networks", RFC 6820, January 970 2013. 972 [RFC6830] Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, "The 973 Locator/ID Separation Protocol (LISP)", RFC 6830, January 974 2013. 976 [SPB] IEEE 802.1aq, ., "IEEE standard for local and metropolitan 977 area networks: Media access control (MAC) bridges and 978 virtual bridged local area networks -- Amendment 20: 979 Shortest path bridging, ", June 2012. 981 11.2. Normative References 983 [I-D.ietf-nvo3-framework] 984 Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y. 985 Rekhter, "Framework for DC Network Virtualization", draft- 986 ietf-nvo3-framework-03 (work in progress), July 2013. 988 Appendix A. Change Log 990 A.1. Changes From -03 to -04 991 Changes in response to IESG review; use rcsdiff to see changes. 993 A.2. Changes From -02 to -03 995 1. Comments from Janos Farkas, including: 997 * Defined C-VLAN and changed VLAN -> C-VLAN where appropriate. 999 * Improved references to IEEE work. 1001 * Removed Section "Further Work". 1003 2. Improved first paragraph in "Optimal Forwarding" Section (per Qin 1004 Wu). 1006 3. Replaced "oracle" term with Network Virtualization Authority, to 1007 match terminology discussion on list. 1009 4. Reduced number of authors to 6. Still above the usual guideline 1010 of 5, but chairs will ask for exception in this case. 1012 A.3. Changes From -01 to -02 1014 1. Security Considerations changes (Lou Berger) 1016 2. Changes to section on Optimal Forwarding (Xuxiaohu) 1018 3. More wording improvements in L2 details (Janos Farkas) 1020 4. References to ARMD and LISP documents are now RFCs. 1022 A.4. Changes From -00 to -01 1024 1. Numerous editorial and clarity improvements. 1026 2. Picked up updated terminology from the framework document (e.g., 1027 Tenant System). 1029 3. Significant changes regarding IEEE 802.1 Ethernets and VLANs. 1030 All text moved to the Related Work section, where the technology 1031 is summarized. 1033 4. Removed section on Forwarding Table Size limitations. This issue 1034 only occurs in some deployments with L2 bridging, and is not 1035 considered a motivating factor for the NVO3 work. 1037 5. Added paragraph in Introduction that makes clear that NVO3 is 1038 focused on providing both L2 and L3 service to end systems, and 1039 that IP is assumed as the underlay transport in the data center. 1041 6. Added new section (2.6) on Optimal Forwarding. 1043 7. Added a section on Data Plane issues. 
1045 8. Significant improvement to Section describing SPBM. 1047 9. Added sub-section on VDP in "Related Work" 1049 A.5. Changes from draft-narten-nvo3-overlay-problem-statement-04.txt 1051 1. This document has only one substantive change relative to draft- 1052 narten-nvo3-overlay-problem-statement-04.txt. Two sentences were 1053 removed per the discussion that led to WG adoption of this 1054 document. 1056 Authors' Addresses 1058 Thomas Narten (editor) 1059 IBM 1061 Email: narten@us.ibm.com 1063 Eric Gray (editor) 1064 Ericsson 1066 Email: eric.gray@ericsson.com 1068 David Black 1069 EMC 1071 Email: david.black@emc.com 1073 Luyuan Fang 1074 Cisco 1075 111 Wood Avenue South 1076 Iselin, NJ 08830 1077 USA 1079 Email: lufang@cisco.com 1080 Lawrence Kreeger 1081 Cisco 1083 Email: kreeger@cisco.com 1085 Maria Napierala 1086 AT&T 1087 200 Laurel Avenue 1088 Middletown, NJ 07748 1089 USA 1091 Email: mnapierala@att.com