L2VPN Workgroup                                     A. Sajassi (Editor)
INTERNET-DRAFT                                                    Cisco
Intended Status: Standards Track
                                                      J. Drake (Editor)
                                                             Y. Rekhter
                                                             R. Shekhar
                                                          B. Schliesser
                                                                Juniper

                                                            Nabil Bitar
                                                                Verizon

                                                               S. Salam
                                                               K. Patel
                                                                 D. Rao
                                                              S. Thoria
                                                                  Cisco

                                                           Aldrin Isaac
                                                              Bloomberg

                                                           James Uttaro
                                                                   AT&T

                                                                L. Yong
                                                                 Huawei

                                                          W. Henderickx
                                                         Alcatel-Lucent

                                                                 D. Cai
                                                               S. Sinha
                                                                  Cisco

                                                                Wen Lin
                                                          Nischal Sheth
                                                                Juniper

Expires: May 10, 2015                                 November 10, 2014

          A Network Virtualization Overlay Solution using EVPN
                    draft-ietf-bess-evpn-overlay-00

Abstract

   This document describes how EVPN can be used as an NVO solution and
   explores the various tunnel encapsulation options over IP and their
   impact on the EVPN control-plane and procedures.  In particular, the
   following encapsulation options are analyzed: MPLS over GRE, VXLAN,
   and NVGRE.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright and License Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1    Introduction
   2    Specification of Requirements
   3    Terminology
   4    EVPN Features
   5    Encapsulation Options for EVPN Overlays
        5.1   VXLAN/NVGRE Encapsulation
              5.1.1   Virtual Identifiers Scope
                      5.1.1.1   Data Center Interconnect with Gateway
                      5.1.1.2   Data Center Interconnect without
                                Gateway
              5.1.2   Virtual Identifiers to EVI Mapping
                      5.1.2.1   Auto Derivation of RT
              5.1.3   Constructing EVPN BGP Routes
        5.2   MPLS over GRE
   6    EVPN with Multiple Data Plane Encapsulations
   7    NVE Residing in Hypervisor
        7.1   Impact on EVPN BGP Routes & Attributes for VXLAN/NVGRE
              Encapsulation
        7.2   Impact on EVPN Procedures for VXLAN/NVGRE Encapsulation
   8    NVE Residing in ToR Switch
        8.1   EVPN Multi-Homing Features
              8.1.1   Multi-homed Ethernet Segment Auto-Discovery
              8.1.2   Fast Convergence and Mass Withdraw
              8.1.3   Split-Horizon
              8.1.4   Aliasing and Backup-Path
              8.1.5   DF Election
        8.2   Impact on EVPN BGP Routes & Attributes
        8.3   Impact on EVPN Procedures
              8.3.1   Split Horizon
              8.3.2   Aliasing and Backup-Path
   9    Support for Multicast
   10   Inter-AS
   11   Acknowledgement
   12   Security Considerations
   13   IANA Considerations
   14   References
        14.1   Normative References
        14.2   Informative References
   Authors' Addresses

1  Introduction

   In the context of this document, a Network Virtualization Overlay
   (NVO) is a solution to address the requirements of a multi-tenant
   data center, especially one with virtualized hosts, e.g., Virtual
   Machines (VMs).  The key requirements of such a solution, as
   described in [Problem-Statement], are:

   - Isolation of network traffic per tenant

   - Support for a large number of tenants (tens or hundreds of
     thousands)

   - Extending L2 connectivity among different VMs belonging to a given
     tenant segment (subnet) across different PODs within a data center
     or between different data centers

   - Allowing a given VM to move between different physical points of
     attachment within a given L2 segment

   The underlay network for NVO solutions is assumed to provide IP
   connectivity between NVO endpoints (NVEs).

   This document describes how Ethernet VPN (EVPN) can be used as an
   NVO solution and explores the applicability of EVPN functions and
   procedures.
   In particular, it describes the various tunnel encapsulation options
   for EVPN over IP, and their impact on the EVPN control-plane and
   procedures for two main scenarios:

   a) when the NVE resides in the hypervisor, and

   b) when the NVE resides in a ToR device

   Note that the use of EVPN as an NVO solution does not necessarily
   mandate that the BGP control-plane be running on the NVE.  For such
   scenarios, it is still possible to leverage the EVPN solution by
   using XMPP, or alternative mechanisms, to extend the control-plane
   to the NVE as discussed in [L3VPN-ENDSYSTEMS].

   The possible encapsulation options for EVPN overlays that are
   analyzed in this document are:

   - VXLAN and NVGRE
   - MPLS over GRE

   Before getting into the description of the different encapsulation
   options for EVPN over IP, it is important to highlight the EVPN
   solution's main features, how those features are currently
   supported, and any impact that the encapsulation has on those
   features.

2  Specification of Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

3  Terminology

   NVO: Network Virtualization Overlay

   NVE: Network Virtualization Endpoint

   VNI: Virtual Network Identifier (for VXLAN)

   VSID: Virtual Subnet Identifier (for NVGRE)

   EVPN: Ethernet VPN

   EVI: An EVPN instance spanning across the PEs participating in that
   EVPN

   MAC-VRF: A Virtual Routing and Forwarding table for MAC addresses on
   a PE for an EVI

   Ethernet Segment Identifier (ESI): If a CE is multi-homed to two or
   more PEs, the set of Ethernet links that attaches the CE to the PEs
   is an 'Ethernet segment'.  Ethernet segments MUST have a unique
   non-zero identifier, the 'Ethernet Segment Identifier'.
   Ethernet Tag: An Ethernet Tag identifies a particular broadcast
   domain, e.g., a VLAN.  An EVPN instance consists of one or more
   broadcast domains.  Ethernet tag(s) are assigned to the broadcast
   domains of a given EVPN instance by the provider of that EVPN, and
   each PE in that EVPN instance performs a mapping between broadcast
   domain identifier(s) understood by each of its attached CEs and the
   corresponding Ethernet tag.

   Single-Active Multihoming: When a device or a network is multihomed
   to a group of two or more PEs, and only a single PE in such a
   redundancy group can forward traffic to/from the multihomed device
   or network for a given VLAN, such multihoming is referred to as
   "Single-Active".

   All-Active Multihoming: When a device is multihomed to a group of
   two or more PEs, and all PEs in such a redundancy group can forward
   traffic to/from the multihomed device or network for a given VLAN,
   such multihoming is referred to as "All-Active".

4  EVPN Features

   EVPN was originally designed to support the requirements detailed in
   [EVPN-REQ] and therefore has the following attributes, which
   directly address control plane scaling and ease of deployment
   issues.

   1) Control plane traffic is distributed with BGP, and broadcast and
   multicast traffic is sent using a shared multicast tree or with
   ingress replication.

   2) Control plane learning is used for MAC (and IP) addresses instead
   of data plane learning.  The latter requires the flooding of unknown
   unicast and ARP frames, whereas the former does not require any
   flooding.

   3) A Route Reflector (RR) is used to reduce a full mesh of BGP
   sessions among PE devices to a single BGP session between a PE and
   the RR.  Furthermore, RR hierarchy can be leveraged to scale the
   number of BGP routes on the RR.
   4) Auto-discovery via BGP is used to discover PE devices
   participating in a given VPN, PE devices participating in a given
   redundancy group, tunnel encapsulation types, multicast tunnel
   types, multicast members, etc.

   5) All-Active multihoming is used.  This allows a given customer
   device (CE) to have multiple links to multiple PEs, and traffic
   to/from that CE fully utilizes all of these links.  This set of
   links is termed an Ethernet Segment (ES).

   6) When a link between a CE and a PE fails, the PEs for that EVI are
   notified of the failure via the withdrawal of a single EVPN route.
   This allows those PEs to remove the withdrawing PE as a next hop for
   every MAC address associated with the failed link.  This is termed
   'mass withdrawal'.

   7) BGP route filtering and constrained route distribution are
   leveraged to ensure that the control plane traffic for a given EVI
   is only distributed to the PEs in that EVI.

   8) When an 802.1Q interface is used between a CE and a PE, each VLAN
   ID (VID) on that interface can be mapped onto a bridge domain (for
   up to 4094 such bridge domains).  All these bridge domains can also
   be mapped onto a single EVI (in the case of VLAN-aware bundle
   service).

   9) VM Mobility mechanisms ensure that all PEs in a given EVI know
   the ES with which a given VM, as identified by its MAC and IP
   addresses, is currently associated.

   10) Route Targets are used to allow the operator (or customer) to
   define a spectrum of logical network topologies including mesh, hub
   & spoke, and extranets (e.g., a VPN whose sites are owned by
   different enterprises), without the need for proprietary software or
   the aid of other virtual or physical devices.

   11) Because the design goal for NVO is millions of instances per
   common physical infrastructure, the scaling properties of the
   control plane for NVO are extremely important.  EVPN, and the
   extensions described herein, are designed with this level of
   scalability in mind.

5  Encapsulation Options for EVPN Overlays

5.1  VXLAN/NVGRE Encapsulation

   Both VXLAN and NVGRE are examples of technologies that provide a
   data plane encapsulation which is used to transport a packet over
   the common physical IP infrastructure between NVEs: VXLAN Tunnel End
   Points (VTEPs) in VXLAN and Network Virtualization Endpoints (NVEs)
   in NVGRE.  Both of these technologies include the identifier of the
   specific NVO instance, the Virtual Network Identifier (VNI) in VXLAN
   and the Virtual Subnet Identifier (VSID) in NVGRE, in each packet.

   Note that a Provider Edge (PE) is equivalent to a VTEP/NVE.

   [VXLAN] encapsulation is based on UDP, with an 8-byte header
   following the UDP header.  VXLAN provides a 24-bit VNI, which
   typically provides a one-to-one mapping to the tenant VLAN ID, as
   described in [VXLAN].  In this scenario, the VTEP does not include
   an inner VLAN tag on frame encapsulation, and discards decapsulated
   frames with an inner VLAN tag.  This mode of operation in [VXLAN]
   maps to VLAN Based Service in [EVPN], where a tenant VLAN ID gets
   mapped to an EVPN instance (EVI).

   [VXLAN] also provides an option of including an inner VLAN tag in
   the encapsulated frame, if explicitly configured at the VTEP.  This
   mode of operation can map to either VLAN Based Service or VLAN
   Bundle Service in [EVPN], because the inner VLAN tag is not used for
   lookup by the disposition PE when performing VXLAN decapsulation, as
   described in section 6 of [VXLAN].

   [NVGRE] encapsulation is based on [GRE] and mandates the inclusion
   of the otherwise optional GRE Key field, which carries the VSID.
   There is a one-to-one mapping between the VSID and the tenant VLAN
   ID, as described in [NVGRE], and the inclusion of an inner VLAN tag
   is prohibited.  This mode of operation in [NVGRE] maps to VLAN Based
   Service in [EVPN].

   As described in the next section, there is no change to the encoding
   of EVPN routes to support VXLAN or NVGRE encapsulation, except for
   the use of the BGP Encapsulation extended community.  However, there
   is potential impact to the EVPN procedures depending on where the
   NVE is located (i.e., in the hypervisor or the ToR) and whether
   multi-homing capabilities are required.

5.1.1  Virtual Identifiers Scope

   Although VNIs and VSIDs are defined as 24-bit globally unique
   values, there are scenarios in which it is desirable to use a
   locally significant value for the VNI or VSID, especially in the
   context of data center interconnect:

5.1.1.1  Data Center Interconnect with Gateway

   In the case where NVEs in different data centers need to be
   interconnected, and the NVEs need to use VNIs or VSIDs as globally
   unique identifiers within a data center, a Gateway needs to be
   employed at the edge of the data center network.  This is because
   the Gateway will provide the functionality of translating the VNI or
   VSID when crossing network boundaries, which may align with operator
   span of control boundaries.  As an example, consider the network of
   Figure 1 below.  Assume there are three network operators: one for
   each of the DC1, DC2, and WAN networks.  The Gateways at the edge of
   the data centers are responsible for translating the VNIs / VSIDs
   between the values used in each of the data center networks and the
   values used in the WAN.
                              +--------------+
                              |              |
           +---------+        |     WAN      |        +---------+
   +----+  |      +---+     +----+        +----+     +---+      |  +----+
   |NVE1|--|      |   |     |WAN |        |WAN |     |   |      |--|NVE3|
   +----+  |IP    |GW |-----|Edge|        |Edge|-----|GW |    IP|  +----+
   +----+  |Fabric+---+     +----+        +----+     +---+Fabric|  +----+
   |NVE2|--|      |           |              |       |          |--|NVE4|
   +----+  +---------+        +--------------+       +---------+   +----+

       |<------ DC 1 ------>                  <------ DC2 ------>|

            Figure 1: Data Center Interconnect with Gateway

5.1.1.2  Data Center Interconnect without Gateway

   In the case where NVEs in different data centers need to be
   interconnected, and the NVEs need to use locally assigned VNIs or
   VSIDs (e.g., as MPLS labels), then there may be no need to employ
   Gateways at the edge of the data center network.  More specifically,
   the VNI or VSID value that is used by the transmitting NVE is
   allocated by the NVE that is receiving the traffic (in other words,
   this is a "downstream assigned" MPLS label).  This allows the VNI or
   VSID space to be decoupled between different data center networks
   without the need for a dedicated Gateway at the edge of the data
   centers.

                              +--------------+
                              |              |
           +---------+        |     WAN      |        +---------+
   +----+  |         |      +----+        +----+      |         |  +----+
   |NVE1|--|         |      |WAN |        |WAN |      |         |--|NVE3|
   +----+  |IP Fabric|------|Edge|        |Edge|------|IP Fabric|  +----+
   +----+  |         |      +----+        +----+      |         |  +----+
   |NVE2|--|         |        |              |        |         |--|NVE4|
   +----+  +---------+        +--------------+        +---------+  +----+

       |<------ DC 1 ------>                  <------ DC2 ------>|

           Figure 2: Data Center Interconnect without Gateway

5.1.2  Virtual Identifiers to EVI Mapping

   When the EVPN control plane is used in conjunction with VXLAN or
   NVGRE, two options for mapping the VXLAN VNI or NVGRE VSID to an EVI
   are possible:

   1. Option 1: Single Subnet per EVI

   In this option, a single subnet represented by a VNI or VSID is
   mapped to a unique EVI.  As such, a BGP RD and RT are needed per VNI
   / VSID on every VTEP.  The advantage of this model is that it allows
   the BGP RT constraint mechanisms to be used in order to limit the
   propagation and import of routes to only the VTEPs that are
   interested in a given VNI or VSID.  The disadvantage of this model
   may be the provisioning overhead, if the RD and RT are not derived
   automatically from the VNI or VSID.

   In this option, the MAC-VRF table is identified by the RT in the
   control plane and by the VNI or VSID in the data plane.  In this
   option, the specific MAC-VRF table corresponds to only a single
   bridge domain (e.g., a single subnet).

   2. Option 2: Multiple Subnets per EVI

   In this option, multiple subnets, each represented by a unique VNI
   or VSID, are mapped to a single EVI.  For example, if a tenant has
   multiple segments/subnets each represented by a VNI or VSID, then
   all the VNIs (or VSIDs) for that tenant are mapped to a single EVI;
   i.e., the EVI in this case represents the tenant and not a subnet.
   The advantage of this model is that it doesn't require the
   provisioning of an RD/RT per VNI or VSID.  However, this is a moot
   point if option 1 with auto-derivation is used.  The disadvantage of
   this model is that routes would be imported by VTEPs that may not be
   interested in a given VNI or VSID.

   In this option, the MAC-VRF table is identified by the RT in the
   control plane, and a specific bridge domain for that MAC-VRF is
   identified by the Ethernet Tag ID in the control plane.  In this
   option, the VNI/VSID in the data plane is sufficient to identify a
   specific bridge domain; i.e., there is no need to do a lookup based
   on both the VNI/VSID and Ethernet Tag ID fields to identify a
   bridge domain.
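   To make the contrast between the two options concrete, they can be
   sketched as simple data structures.  This is an illustrative Python
   sketch; the Evi record and the helper names are hypothetical and not
   part of this specification.

```python
# Illustrative sketch of the two VNI/VSID-to-EVI mapping options.
# The Evi record and helper names are hypothetical, not from [EVPN].

from dataclasses import dataclass, field
from typing import List


@dataclass
class Evi:
    rd: str                 # one Route Distinguisher per EVI
    rt: str                 # one Route Target per EVI
    vnis: List[int] = field(default_factory=list)  # bridge domains


def option1_single_subnet(vni: int, asn: int) -> Evi:
    """Option 1: one VNI (subnet) per EVI, so an RD/RT pair is needed
    per VNI on every VTEP (here naively derived from the VNI)."""
    return Evi(rd=f"{asn}:{vni}", rt=f"{asn}:{vni}", vnis=[vni])


def option2_multiple_subnets(vnis: List[int], asn: int,
                             tenant_id: int) -> Evi:
    """Option 2: all of a tenant's VNIs map to one EVI; the VNI in the
    data plane alone identifies the bridge domain."""
    return Evi(rd=f"{asn}:{tenant_id}", rt=f"{asn}:{tenant_id}",
               vnis=list(vnis))
```

   With option 1, RT-constrained distribution can limit route import to
   the VTEPs interested in that one VNI; with option 2, any VTEP that
   imports the tenant's RT receives routes for all of that tenant's
   VNIs.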
5.1.2.1  Auto Derivation of RT

   When the option of a single VNI or VSID per EVI is used, it is
   important to auto-derive the RT for EVPN BGP routes in order to
   simplify configuration for data center operations.  The RD can be
   derived easily as described in [EVPN], and the RT can be
   auto-derived as described next.

   Since a gateway PE as depicted in Figure 1 participates in both the
   DCN and WAN BGP sessions, it is important that when RT values are
   auto-derived for VNIs (or VSIDs), there is no conflict in RT spaces
   between the DCN and WAN networks, assuming that both are operating
   within the same AS.  Also, there can be scenarios where both VXLAN
   and NVGRE encapsulations may be needed within the same DCN, and
   their corresponding VNIs and VSIDs are administered independently,
   which means the VNI and VSID spaces can overlap.  In order to ensure
   that no such conflict in RT spaces arises, RT values for DCNs are
   auto-derived as follows:

    0                   1                   2                   3                   4
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |              AS #             |A| TYPE|  D-ID |           Service Instance ID                 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   - The 2 bytes of the global administrator field of the RT are set to
     the AS number.

   - The three least significant bytes of the local administrator field
     of the RT are set to the VNI, VSID, I-SID, or VID.  The most
     significant bit (A) of the local administrator field of the RT is
     set as follows:

       0: auto-derived
       1: manually derived

   - The next 3 bits of the most significant byte of the local
     administrator field of the RT (TYPE) identify the space in which
     the other 3 bytes are defined.  The following spaces are defined:

       0: VID
       1: VXLAN
       2: NVGRE
       3: I-SID
       4: EVI
       5: dual-VID

   - The remaining 4 bits of the most significant byte of the local
     administrator field of the RT (D-ID) identify the domain-id.
   The default value of the domain-id is zero, indicating that only a
   single numbering space exists for a given technology.  However, if
   more than one numbering space exists for a given technology (e.g.,
   overlapping VXLAN spaces), then each of the numbering spaces needs
   to be identified by its corresponding domain-id, starting from 1.

5.1.3  Constructing EVPN BGP Routes

   In EVPN, an MPLS label is distributed by the egress PE via the EVPN
   control plane and is placed in the MPLS header of a given packet by
   the ingress PE.  This label is used upon receipt of that packet by
   the egress PE for disposition of that packet.  This is very similar
   to the use of the VNI or VSID by the egress VTEP or NVE,
   respectively, with the difference being that an MPLS label has local
   significance while a VNI or VSID typically has global significance.
   Accordingly, and specifically to support the option of locally
   assigned VNIs, the MPLS label field in the MAC Advertisement,
   Ethernet AD per EVI, and Inclusive Multicast Ethernet Tag routes is
   used to carry the VNI or VSID.  For the balance of this memo, the
   MPLS label field will be referred to as the VNI/VSID field.  The
   VNI/VSID field is used for both locally and globally assigned VNIs
   or VSIDs.

   For the VNI based mode (a single VNI per EVI), the Ethernet Tag
   field in the MAC Advertisement, Ethernet AD per EVI, and Inclusive
   Multicast routes MUST be set to zero, just as in the VLAN Based
   service in [EVPN].  For the VNI bundle mode (multiple VNIs per EVI
   with a single bridge domain), the Ethernet Tag field in the MAC
   Advertisement, Ethernet AD per EVI, and Inclusive Multicast Ethernet
   Tag routes MUST be set to zero, just as in the VLAN Bundle service
   in [EVPN].
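   As a worked illustration of the RT auto-derivation rules in Section
   5.1.2.1 above, the bit packing of the 4-byte local administrator
   field can be sketched as follows.  This is an illustrative Python
   sketch; the helper names are hypothetical and not defined by this
   document.

```python
# Illustrative packing of the 4-byte local administrator field of an
# auto-derived RT, per the rules of Section 5.1.2.1 (hypothetical
# helper names, not defined by this document).

SPACE_TYPE = {"VID": 0, "VXLAN": 1, "NVGRE": 2, "I-SID": 3,
              "EVI": 4, "dual-VID": 5}


def derive_local_admin(space, service_id, domain_id=0, manual=False):
    """bit 31:     A flag (0 = auto-derived, 1 = manually derived)
       bits 30-28: TYPE, the identifier space
       bits 27-24: D-ID, the domain-id (0 = single numbering space)
       bits 23-0:  service instance ID (VNI, VSID, I-SID, or VID)"""
    if not (0 <= service_id < 1 << 24 and 0 <= domain_id < 16):
        raise ValueError("field out of range")
    return ((int(manual) << 31) | (SPACE_TYPE[space] << 28)
            | (domain_id << 24) | service_id)


def derive_rt(asn, space, service_id, domain_id=0):
    """Render the RT as ASN:local-admin, with the 2-byte global
    administrator field set to the AS number."""
    return "%d:%d" % (asn, derive_local_admin(space, service_id,
                                              domain_id))
```

   Because the TYPE bits separate the VXLAN, NVGRE, I-SID, and VID
   spaces, overlapping VNI and VSID values still yield distinct RTs.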
   For the VNI-aware bundle mode (multiple VNIs per EVI, each with its
   own bridge domain), the Ethernet Tag field in the MAC Advertisement,
   Ethernet AD per EVI, and Inclusive Multicast routes MUST identify a
   bridge domain within an EVI, and the set of Ethernet Tags for that
   EVI needs to be configured consistently on all PEs within that EVI.
   The value advertised in the Ethernet Tag field MAY be a VNI as long
   as it matches the existing semantics of the Ethernet Tag, i.e., it
   identifies a bridge domain within an EVI and the set of VNIs are
   configured consistently on each PE in that EVI.

   In order to indicate which type of data plane encapsulation (i.e.,
   VXLAN, NVGRE, MPLS, or MPLS in GRE) is to be used, the BGP
   Encapsulation extended community defined in [RFC5512] is included
   with all EVPN routes (i.e., MAC Advertisement, Ethernet AD per EVI,
   Ethernet AD per ESI, Inclusive Multicast Ethernet Tag, and Ethernet
   Segment) advertised by an egress PE.  Four new values will be
   defined to extend the list of encapsulation types defined in
   [RFC5512]:

   + TBD (IANA assigned) - VXLAN Encapsulation
   + TBD (IANA assigned) - NVGRE Encapsulation
   + TBD (IANA assigned) - MPLS Encapsulation
   + TBD (IANA assigned) - MPLS in GRE Encapsulation

   If the BGP Encapsulation extended community is not present, then the
   default MPLS encapsulation or a statically configured encapsulation
   is assumed.

   The Next Hop field of the MP_REACH_NLRI attribute of the route MUST
   be set to the IPv4 or IPv6 address of the NVE.  The remaining fields
   in each route are set as per [EVPN].

5.2  MPLS over GRE

   The EVPN data-plane is modeled as an EVPN MPLS client layer sitting
   over an MPLS PSN tunnel.  Some of the EVPN functions (split-horizon,
   aliasing, and backup-path) are tied to the MPLS client layer.  If
   MPLS over GRE encapsulation is used, then the EVPN MPLS client layer
   can be carried over an IP PSN tunnel transparently.  Therefore,
   there is no impact to the EVPN procedures and associated data-plane
   operation.

   The existing standards for MPLS over GRE encapsulation as defined by
   [RFC4023] can be used for this purpose; however, when used in
   conjunction with EVPN, the GRE Key field SHOULD be present and
   SHOULD be used to provide a 32-bit entropy field.  The Checksum and
   Sequence Number fields are not needed, and their corresponding C and
   S bits MUST be set to zero.

6  EVPN with Multiple Data Plane Encapsulations

   The use of the BGP Encapsulation extended community allows each PE
   in a given EVI to know each of the encapsulations supported by each
   of the other PEs in that EVI.  That is, each of the PEs in a given
   EVI may support multiple data plane encapsulations.  An ingress PE
   can send a frame to an egress PE only if the set of encapsulations
   advertised by the egress PE in the subject MAC Advertisement or
   Ethernet AD per EVI route forms a non-empty intersection with the
   set of encapsulations supported by the ingress PE, and it is at the
   discretion of the ingress PE which encapsulation to choose from this
   intersection.  (As noted in Section 5.1.3, if the BGP Encapsulation
   extended community is not present, then the default MPLS
   encapsulation, or a statically configured encapsulation, is
   assumed.)

   An ingress node that uses shared multicast trees for sending
   broadcast or multicast frames MUST maintain distinct trees for each
   different encapsulation type.

   It is the responsibility of the operator of a given EVI to ensure
   that all of the PEs in that EVI support at least one common
   encapsulation.  If this condition is violated, it could result in
   service disruption or failure.  The use of the BGP Encapsulation
   extended community provides a method to detect when this condition
   is violated, but the actions to be taken are at the discretion of
   the operator and are outside the scope of this document.

7  NVE Residing in Hypervisor

   When a PE and its CEs are co-located in the same physical device,
   e.g., when the PE resides in a server and the CEs are its VMs, the
   links between them are virtual and they typically share fate; i.e.,
   the subject CEs are typically not multi-homed, or if they are
   multi-homed, the multi-homing is a purely local matter to the server
   hosting the VM, need not be "visible" to any other PEs, and thus
   does not require any specific protocol mechanisms.  The most common
   case of this is when the NVE resides in the hypervisor.

   In the sub-sections that follow, we discuss the impact on EVPN
   procedures for the case when the NVE resides in the hypervisor and
   the VXLAN or NVGRE encapsulation is used.
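   Returning briefly to Section 6: the encapsulation selection rule
   described there reduces to a set intersection.  A minimal Python
   sketch follows; the names are illustrative, and a real
   implementation would key off the IANA-assigned BGP tunnel-type code
   points rather than strings.

```python
# Sketch of the encapsulation selection rule of Section 6
# (illustrative names; real code would use IANA tunnel-type codes).

def select_encapsulation(advertised, supported, default="MPLS"):
    """Pick an encapsulation toward an egress PE.

    advertised: encapsulations in the egress PE's BGP Encapsulation
                extended community (empty set if the community is
                absent)
    supported:  encapsulations the ingress PE supports
    Returns None when no usable encapsulation exists, in which case
    the ingress PE cannot send frames to that egress PE."""
    if not advertised:
        # No extended community present: assume the default MPLS
        # encapsulation (or a statically configured one).
        return default if default in supported else None
    common = advertised & supported
    # Which member of a non-empty intersection to use is at the
    # ingress PE's discretion; pick deterministically here.
    return min(common) if common else None
```

   An empty result models the misconfiguration case above, where the
   PEs of an EVI share no common encapsulation and the operator must
   intervene.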
600 7.1 Impact on EVPN BGP Routes & Attributes for VXLAN/NVGRE Encapsulation 602 When the VXLAN VNI or NVGRE VSID is assumed to be a global value, one 603 might question the need for the Route Distinguisher (RD) in the EVPN 604 routes. In the scenario where all data centers are under a single 605 administrative domain, and there is a single global VNI/VSID space, 606 the RD MAY be set to zero in the EVPN routes. However, in the 607 scenario where different groups of data centers are under different 608 administrative domains, and these data centers are connected via one 609 or more backbone core providers as described in [NOV3-Framework], the 610 RD must be a unique value per EVI or per NVE as described in [EVPN]. 611 In other words, whenever there is more than one administrative domain 612 for global VNI or VSID, then a non-zero RD MUST be used, or whenever 613 the VNI or VSID value have local significance, then a non-zero RD 614 MUST be used. It is recommend to use a non-zero RD at all time. 616 When the NVEs reside on the hypervisor, the EVPN BGP routes and 617 attributes associated with multi-homing are no longer required. This 618 reduces the required routes and attributes to the following subset of 619 four out of the set of eight : 621 - MAC Advertisement Route 622 - Inclusive Multicast Ethernet Tag Route 623 - MAC Mobility Extended Community 624 - Default Gateway Extended Community 626 However, as noted in section 8.6 of [EVPN] in order to enable a 627 single-homed ingress PE to take advantage of fast convergence, 628 aliasing, and backup-path when interacting with multi-homed egress 629 PEs attached to a given Ethernet segment, a single-homed ingress PE 630 SHOULD be able to receive and process Ethernet AD per ES and Ethernet 631 AD per EVI routes." 633 7.2 Impact on EVPN Procedures for VXLAN/NVGRE Encapsulation 634 When the NVEs reside on the hypervisors, the EVPN procedures 635 associated with multi-homing are no longer required. 
This limits the procedures on the NVE to the following subset of the EVPN procedures:

1. Local learning of MAC addresses received from the VMs per section 10.1 of [EVPN].

2. Advertising locally learned MAC addresses in BGP using the MAC Advertisement routes.

3. Performing remote learning using BGP per Section 10.2 of [EVPN].

4. Discovering other NVEs and constructing the multicast tunnels using the Inclusive Multicast Ethernet Tag routes.

5. Handling MAC address mobility events per the procedures of Section 16 in [EVPN].

However, as noted in section 8.6 of [EVPN], in order to enable a single-homed ingress PE to take advantage of fast convergence, aliasing, and backup-path when interacting with multi-homed egress PEs attached to a given Ethernet segment, a single-homed ingress PE SHOULD implement the ingress node processing of Ethernet AD per ES and Ethernet AD per EVI routes as defined in Sections 8.2 (Fast Convergence) and 8.4 (Aliasing and Backup-Path) of [EVPN].

8 NVE Residing in ToR Switch

In this section, we discuss the scenario where the NVEs reside in the Top of Rack (ToR) switches AND the servers (where the VMs reside) are multi-homed to these ToR switches. The multi-homing may operate in All-Active or Single-Active redundancy mode. If the servers are single-homed to the ToR switches, then the scenario becomes similar to that where the NVE resides in the hypervisor, as discussed in Section 7, as far as the required EVPN functionality is concerned.

[EVPN] defines a set of BGP routes, attributes and procedures to support multi-homing. We first describe these functions and procedures, then discuss which of these are impacted by the encapsulation (such as VXLAN or NVGRE) and what modifications are required.

8.1 EVPN Multi-Homing Features

In this section, we recap the multi-homing features of EVPN to highlight the encapsulation dependencies.
The section only describes the features and functions at a high level. For more details, the reader is referred to [EVPN].

8.1.1 Multi-homed Ethernet Segment Auto-Discovery

EVPN NVEs (or PEs) connected to the same Ethernet Segment (e.g., the same server via LAG) can automatically discover each other with minimal to no configuration through the exchange of BGP routes.

8.1.2 Fast Convergence and Mass Withdraw

EVPN defines a mechanism to efficiently and quickly signal, to remote NVEs, the need to update their forwarding tables upon the occurrence of a failure in connectivity to an Ethernet segment (e.g., a link or a port failure). This is done by having each NVE advertise an Ethernet A-D Route per Ethernet segment for each locally attached segment. Upon a failure in connectivity to the attached segment, the NVE withdraws the corresponding Ethernet A-D route. This triggers all NVEs that receive the withdrawal to update their next-hop adjacencies for all MAC addresses associated with the Ethernet segment in question. If no other NVE had advertised an Ethernet A-D route for the same segment, then the NVE that received the withdrawal simply invalidates the MAC entries for that segment. Otherwise, the NVE updates the next-hop adjacencies to point to the backup NVE(s).

8.1.3 Split-Horizon

If a CE that is multi-homed to two or more NVEs on an Ethernet segment ES1 operating in all-active redundancy mode sends a multicast, broadcast or unknown unicast packet to one of these NVEs, then that NVE will forward that packet to all of the other PEs in that EVI, including the other NVEs attached to ES1, and those NVEs MUST drop the packet and not forward it back to the originating CE. This is termed 'split horizon filtering'.
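The mass-withdraw behaviour of Section 8.1.2 can be sketched as follows: a single Ethernet A-D per ES route withdrawal repoints or invalidates every MAC learnt against that segment. This is a hedged, illustrative Python sketch; the class and data structures are hypothetical, not an EVPN implementation.

```python
# Hypothetical model of a remote NVE reacting to an Ethernet A-D per ES
# withdrawal, per the mass-withdraw description above.
from collections import defaultdict

class RemoteNve:
    def __init__(self):
        self.ad_routes = defaultdict(set)  # ESI -> NVEs advertising A-D per ES
        self.mac_table = {}                # MAC -> ESI it was learnt against

    def on_ad_per_es(self, esi, nve):
        self.ad_routes[esi].add(nve)

    def on_ad_per_es_withdraw(self, esi, nve):
        """Handle a withdrawal; return the MACs repointed to backup NVEs."""
        self.ad_routes[esi].discard(nve)
        remaining = self.ad_routes[esi]
        affected = [m for m, e in self.mac_table.items() if e == esi]
        if remaining:
            # Other NVEs still advertise the segment: update next-hop
            # adjacencies for all affected MACs in one step.
            return {m: set(remaining) for m in affected}
        # No other advertiser: simply invalidate the MAC entries.
        for m in affected:
            del self.mac_table[m]
        return {}
```
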
8.1.4 Aliasing and Backup-Path

In the case where a station is multi-homed to multiple NVEs, it is possible that only a single NVE learns a set of the MAC addresses associated with traffic transmitted by the station. This leads to a situation where remote NVEs receive MAC advertisement routes, for these addresses, from a single NVE even though multiple NVEs are connected to the multi-homed station. As a result, the remote NVEs are not able to effectively load-balance traffic among the NVEs connected to the multi-homed Ethernet segment. This could be the case, e.g., when the NVEs perform data-path learning on the access, and the load-balancing function on the station hashes traffic from a given source MAC address to a single NVE. Another scenario where this occurs is when the NVEs rely on control plane learning on the access (e.g., using ARP), since ARP traffic will be hashed to a single link in the LAG.

To alleviate this issue, EVPN introduces the concept of Aliasing. This refers to the ability of an NVE to signal that it has reachability to a given locally attached Ethernet segment, even when it has learnt no MAC addresses from that segment. The Ethernet A-D route per EVI is used to that end. Remote NVEs which receive MAC advertisement routes with a non-zero ESI SHOULD consider the MAC address as reachable via all NVEs that advertise reachability to the relevant Segment using Ethernet A-D routes with the same ESI and with the Single-Active flag reset.

Backup-Path is a closely related function, except that it applies to the case where the redundancy mode is Single-Active. In this case, the NVE signals that it has reachability to a given locally attached Ethernet Segment using the Ethernet A-D route as well. Remote NVEs which receive the MAC advertisement routes, with a non-zero ESI, SHOULD consider the MAC address as reachable via the advertising NVE.
Furthermore, the remote NVEs SHOULD install a Backup-Path, for said MAC, to the NVE which had advertised reachability to the relevant Segment using an Ethernet A-D route with the same ESI and with the Single-Active flag set.

8.1.5 DF Election

If a CE is multi-homed to two or more NVEs on an Ethernet segment operating in all-active redundancy mode, then for a given EVI only one of these NVEs, termed the Designated Forwarder (DF), is responsible for sending it broadcast, multicast, and, if configured for that EVI, unknown unicast frames.

This is required in order to prevent duplicate delivery of multi-destination frames to a multi-homed host or VM, in the case of all-active redundancy.

8.2 Impact on EVPN BGP Routes & Attributes

Since multi-homing is supported in this scenario, the entire set of BGP routes and attributes defined in [EVPN] is used. As discussed in Section 3.1.3, the VSID or VNI is carried in the VNI/VSID field in the MAC Advertisement, Ethernet AD per EVI, and Inclusive Multicast Ethernet Tag routes.

8.3 Impact on EVPN Procedures

Two cases need to be examined here, depending on whether the NVEs are operating in Active/Standby or in All-Active redundancy.

First, let's consider the case of Active/Standby redundancy, where the hosts are multi-homed to a set of NVEs but only a single NVE is active at a given point of time for a given VNI or VSID. In this case, the Split-Horizon and Aliasing functions are not required, but other functions such as multi-homed Ethernet segment auto-discovery, fast convergence and mass withdraw, Backup-Path, and DF election are required.
In this case, the impact of the use of the VXLAN/NVGRE encapsulation on the EVPN procedures arises only when the Backup-Path function is supported, as discussed next:

In EVPN, the NVEs connected to a multi-homed site using Active/Standby redundancy optionally advertise a VPN label, in the Ethernet A-D Route per EVI, used to send traffic to the backup NVE in the case where the primary NVE fails. In the case where VXLAN or NVGRE encapsulation is used, some alternative means that does not rely on MPLS labels is required to support Backup-Path. This is discussed in Section 8.3.2 below. If the Backup-Path function is not used, then the VXLAN/NVGRE encapsulation would have no impact on the EVPN procedures.

Second, let's consider the case of All-Active redundancy. In this case, out of the EVPN multi-homing features listed in Section 8.1, the use of the VXLAN or NVGRE encapsulation impacts the Split-Horizon and Aliasing features, since those two rely on the MPLS client layer. Given that this MPLS client layer is absent with these types of encapsulations, alternative procedures and mechanisms are needed to provide the required functions. Those are discussed in detail next.

8.3.1 Split Horizon

In EVPN, an MPLS label is used for split-horizon filtering to support active/active multi-homing: an ingress ToR switch adds a label corresponding to the site of origin (aka the ESI MPLS Label) when encapsulating the packet. The egress ToR switch checks the ESI MPLS label when attempting to forward a multi-destination frame out an interface, and if the label corresponds to the same site identifier (ESI) associated with that interface, the packet gets dropped. This prevents the occurrence of forwarding loops.

Since the VXLAN or NVGRE encapsulation does not include this ESI MPLS label, other means of performing the split-horizon filtering function MUST be devised.
The following approach is recommended for split-horizon filtering when VXLAN or NVGRE encapsulation is used.

Every NVE tracks the IP address(es) associated with the other NVE(s) with which it has shared multi-homed Ethernet Segments. When the NVE receives a multi-destination frame from the overlay network, it examines the source IP address in the tunnel header (which corresponds to the ingress NVE) and filters out the frame on all local interfaces connected to Ethernet Segments that are shared with the ingress NVE. With this approach, it is required that the ingress NVE perform replication locally to all directly attached Ethernet Segments (regardless of the DF Election state) for all flooded traffic ingressing from the access interfaces (i.e., from the hosts). This approach is referred to as "Local Bias", and has the advantage that only a single IP address needs to be used per NVE for split-horizon filtering, as opposed to requiring an IP address per Ethernet Segment per NVE.

In order to prevent unhealthy interactions between the split-horizon procedures defined in [EVPN] and the local bias procedures described in this document, a mix of MPLS over GRE encapsulations on the one hand and VXLAN/NVGRE encapsulations on the other on a given Ethernet Segment is prohibited.

8.3.2 Aliasing and Backup-Path

The Aliasing and Backup-Path procedures for VXLAN/NVGRE encapsulation are very similar to those for MPLS. In the case of MPLS, two different Ethernet AD routes are used for this purpose. The one used for Aliasing has a VPN scope and carries a VPN label, but the one used for Backup-Path has Ethernet segment scope and doesn't carry any VPN-specific info (e.g., Ethernet Tag and MPLS label are set to zero).
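Returning to the "Local Bias" procedure of Section 8.3.1, the egress-side filtering keyed on the ingress NVE's tunnel source IP address might be sketched as follows. This is a hypothetical Python sketch; the class, interface names, and data structures are assumptions for illustration only.

```python
# Hypothetical sketch of "Local Bias" split-horizon filtering: the egress
# NVE suppresses flooding onto segments it shares with the ingress NVE,
# identified by the source IP address in the tunnel header.

class LocalBiasNve:
    def __init__(self, local_segments):
        # local_segments: local interface -> ESI of its attached segment
        self.local_segments = local_segments
        # peer tunnel IP -> set of ESIs shared with that peer NVE
        self.shared_with_peer = {}

    def track_peer(self, peer_ip, shared_esis):
        self.shared_with_peer[peer_ip] = set(shared_esis)

    def flood_from_overlay(self, src_tunnel_ip):
        """Return the local interfaces eligible to receive a
        multi-destination frame arriving with this tunnel source IP."""
        shared = self.shared_with_peer.get(src_tunnel_ip, set())
        return [intf for intf, esi in self.local_segments.items()
                if esi not in shared]
```

Note that only one IP address per peer NVE is tracked, matching the stated advantage over per-segment addresses.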
The same two routes are used when VXLAN or NVGRE encapsulation is used, with the difference that when the Ethernet AD route is used for Aliasing with VPN scope, the Ethernet Tag field is set to the VNI or VSID to indicate VPN scope (and the MPLS field may be set to a VPN label if needed).

9 Support for Multicast

The EVPN Inclusive Multicast BGP route is used to discover the multicast tunnels among the endpoints associated with a given VXLAN VNI or NVGRE VSID. The Ethernet Tag field of this route is used to encode the VNI for VXLAN or the VSID for NVGRE. The Originating Router's IP Address field is set to the NVE's IP address. This route is tagged with the PMSI Tunnel attribute, which is used to encode the type of multicast tunnel to be used as well as the multicast tunnel identifier. The tunnel encapsulation is encoded by adding the BGP Encapsulation extended community as per Section 3.1.1. The following tunnel types as defined in [RFC6514] can be used in the PMSI Tunnel attribute for VXLAN/NVGRE:

+ 3 - PIM-SSM Tree
+ 4 - PIM-SM Tree
+ 5 - BIDIR-PIM Tree
+ 6 - Ingress Replication

Except for Ingress Replication, this multicast tunnel is used by the PE originating the route for sending multicast traffic to other PEs, and is used by PEs that receive this route for receiving the traffic originated by CEs connected to the PE that originated the route.

In the scenario where the multicast tunnel is a tree, both the Inclusive as well as the Aggregate Inclusive variants may be used. In the former case, a multicast tree is dedicated to a VNI or VSID, whereas in the latter, a multicast tree is shared among multiple VNIs or VSIDs. This is done by having the NVEs advertise multiple Inclusive Multicast routes with different VNI or VSID values encoded in the Ethernet Tag field, but with the same tunnel identifier encoded in the PMSI Tunnel attribute.
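The construction of the Inclusive Multicast route described above, including the Aggregate Inclusive case where several VNIs share one tree, can be sketched as follows. This is an illustrative Python sketch; the dictionary fields are a mnemonic for the route contents, not a real BGP encoding, and all concrete values are hypothetical.

```python
# PMSI tunnel types usable with VXLAN/NVGRE per the list above (RFC 6514)
PMSI_TUNNEL_TYPES = {3: "PIM-SSM Tree", 4: "PIM-SM Tree",
                     5: "BIDIR-PIM Tree", 6: "Ingress Replication"}

def build_inclusive_multicast_route(vni, nve_ip, tunnel_type, tunnel_id):
    """Mnemonic model of an EVPN Inclusive Multicast route for a VNI/VSID."""
    if tunnel_type not in PMSI_TUNNEL_TYPES:
        raise ValueError("tunnel type not usable for VXLAN/NVGRE")
    return {
        "ethernet_tag": vni,              # VNI (VXLAN) or VSID (NVGRE)
        "originating_router_ip": nve_ip,  # the NVE's IP address
        "pmsi_tunnel": {"type": tunnel_type, "id": tunnel_id},
        "encapsulation_ext_community": "VXLAN",  # per Section 3.1.1
    }

# Aggregate Inclusive variant: routes with different Ethernet Tags but the
# same PMSI tunnel identifier let multiple VNIs share one multicast tree.
aggregate = [build_inclusive_multicast_route(v, "192.0.2.1", 4, "tree-7")
             for v in (5000, 5001)]
```
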
10 Inter-AS

For inter-AS operation, two scenarios must be considered:

- Scenario 1: The tunnel endpoint IP addresses are public
- Scenario 2: The tunnel endpoint IP addresses are private

In the first scenario, inter-AS operation is straightforward and follows existing BGP inter-AS procedures. However, since the tunnel endpoint IP addresses are public, there may be a security concern regarding the distribution of these addresses among different ASes. This security concern is one of the main reasons for having the so-called inter-AS "option B" in MPLS VPN solutions such as EVPN.

The second scenario is more challenging, because the absence of the MPLS client layer from the VXLAN encapsulation creates a situation where the ASBR has no fully qualified indication within the tunnel header as to where the tunnel endpoint resides. To elaborate on this, recall that with MPLS, the client layer labels (i.e., the VPN labels) are downstream assigned. As such, this label implicitly has a connotation of the tunnel endpoint, and it is sufficient for the ASBR to look up the client layer label in order to identify both the label translation required and the tunnel endpoint to which a given packet is destined. With the VXLAN encapsulation, the VNI is globally assigned and hence is shared among all endpoints. The destination IP address is the only field in the tunnel header which identifies the tunnel endpoint, and this address is privately managed by every data center network. Since the tunnel address is allocated out of a private address pool, we either need to do a lookup based on the VTEP IP address in the context of a VRF (e.g., use IP-VPN) or terminate the VXLAN tunnel and do a lookup based on the tenant's MAC address to identify the egress tunnel on the ASBR.
This effectively mandates that the ASBR either run another overlay solution such as IP-VPN over an MPLS/IP core network or, at the very least, be aware of the MAC addresses of all VMs in its local AS.

If VNIs/VSIDs have local significance, then the inter-AS operation can be simplified to that of MPLS, and thus MPLS inter-AS options B and C can be leveraged here. That is why the use of locally significant VNIs/VSIDs (as with MPLS labels) is recommended for inter-AS operation of DC networks without gateways.

11 Acknowledgement

The authors would like to thank David Smith, John Mullooly, and Thomas Nadeau for their valuable comments and feedback.

12 Security Considerations

This document uses IP-based tunnel technologies to support data plane transport. Consequently, the security considerations of those tunnel technologies apply. This document defines support for [VXLAN] and [NVGRE]. The security considerations from those documents as well as [RFC4301] apply to the data plane aspects of this document.

As with [RFC5512], any modification of the information that is used to form encapsulation headers, to choose a tunnel type, or to choose a particular tunnel for a particular payload type may lead to user data packets getting misrouted, misdelivered, and/or dropped.

More broadly, the security considerations for the transport of IP reachability information using BGP are discussed in [RFC4271] and [RFC4272], and are equally applicable to the extensions described in this document.

If the integrity of the BGP session is not itself protected, then an imposter could mount a denial-of-service attack by establishing numerous BGP sessions and forcing an IPsec SA to be created for each one. However, as such an imposter could wreak havoc on the entire routing system, this particular sort of attack is probably not of any special importance.
It should be noted that a BGP session may itself be transported over an IPsec tunnel. Such IPsec tunnels can provide additional security to a BGP session. The management of such IPsec tunnels is outside the scope of this document.

13 IANA Considerations

IANA has allocated the following BGP Tunnel Encapsulation Attribute Tunnel Types:

8  VXLAN Encapsulation
9  NVGRE Encapsulation
10 MPLS Encapsulation
11 MPLS in GRE Encapsulation
12 VXLAN GPE Encapsulation

14 References

14.1 Normative References

[KEYWORDS] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A Border Gateway Protocol 4 (BGP-4)", RFC 4271, January 2006.

[RFC4272] Murphy, S., "BGP Security Vulnerabilities Analysis", RFC 4272, January 2006.

[RFC4301] Kent, S. and K. Seo, "Security Architecture for the Internet Protocol", RFC 4301, December 2005.

[RFC5512] Mohapatra, P. and E. Rosen, "The BGP Encapsulation Subsequent Address Family Identifier (SAFI) and the BGP Tunnel Encapsulation Attribute", RFC 5512, April 2009.

14.2 Informative References

[EVPN-REQ] Sajassi et al., "Requirements for Ethernet VPN (EVPN)", draft-ietf-l2vpn-evpn-req-01.txt, work in progress, October 21, 2012.

[NVGRE] Sridharan, M., et al., "NVGRE: Network Virtualization using Generic Routing Encapsulation", draft-sridharan-virtualization-nvgre-01.txt, July 8, 2012.

[VXLAN] Dutt, D., et al., "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", draft-mahalingam-dutt-dcops-vxlan-02.txt, August 22, 2012.

[EVPN] Sajassi et al., "BGP MPLS Based Ethernet VPN", draft-ietf-l2vpn-evpn-02.txt, work in progress, February, 2012.
[Problem-Statement] Narten et al., "Problem Statement: Overlays for Network Virtualization", draft-ietf-nvo3-overlay-problem-statement-01, September 2012.

[L3VPN-ENDSYSTEMS] Marques et al., "BGP-signaled End-system IP/VPNs", draft-ietf-l3vpn-end-system, work in progress, October 2012.

[NOV3-FRWK] Lasserre et al., "Framework for DC Network Virtualization", draft-ietf-nvo3-framework-01.txt, work in progress, October 2012.

Authors' Addresses

Ali Sajassi
Cisco
Email: sajassi@cisco.com

John Drake
Juniper Networks
Email: jdrake@juniper.net

Nabil Bitar
Verizon Communications
Email: nabil.n.bitar@verizon.com

Aldrin Isaac
Bloomberg
Email: aisaac71@bloomberg.net

James Uttaro
AT&T
Email: uttaro@att.com

Wim Henderickx
Alcatel-Lucent
Email: wim.henderickx@alcatel-lucent.com

Ravi Shekhar
Juniper Networks
Email: rshekhar@juniper.net

Samer Salam
Cisco
Email: ssalam@cisco.com

Keyur Patel
Cisco
Email: keyupate@cisco.com

Dhananjaya Rao
Cisco
Email: dhrao@cisco.com

Samir Thoria
Cisco
Email: sthoria@cisco.com