Network Working Group                                       R. Aggarwal
Internet Draft                                                Arktan Inc
Category: Standards Track
Expiration Date: December 2014
                                                              Y. Rekhter
                                                        Juniper Networks

                                                           W. Henderickx
                                                          Alcatel-Lucent

                                                              R. Shekhar
                                                        Juniper Networks

                                                             Luyuan Fang
                                                           Cisco Systems

                                                             Ali Sajassi
                                                           Cisco Systems

                                                            June 2, 2014

 Data Center Mobility based on E-VPN, BGP/MPLS IP VPN, IP Routing and NHRP

                draft-raggarwa-data-center-mobility-07.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that other
   groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright and License Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Abstract

   This document describes a set of network-based solutions for seamless
   Virtual Machine mobility in the data center.  These solutions provide
   a toolkit which is based on IP routing, E-VPNs, BGP/MPLS IP VPNs, and
   NHRP.

Table of Contents

   1          Specification of requirements ........................ 4
   2          Introduction ......................................... 4
   2.1        Terminology .......................................... 4
   3          Problem Statement .................................... 6
   3.1        Maintaining Connectivity in the Presence of VM Mobility . 6
   3.2        Layer 2 Extension .................................... 6
   3.3        Optimal IP Routing ................................... 7
   4          Layer 2 Extension Solution ........................... 7
   5          VM Default Gateway Solutions ......................... 9
   5.1        VM Default Gateway Solution - Solution 1 ............. 10
   5.2        VM Default Gateway Solution - Solution 2 ............. 11
   6          Triangular Routing Solution .......................... 12
   6.1        Intra Data Center Triangular Routing Solution ........ 12
   6.2        Inter Data Center Triangular Routing Solution ........ 13
   6.2.1      Propagating IP host routes ........................... 14
   6.2.1.1    Constraining propagation scope with OSPF/ISIS ........ 15
   6.2.1.2    Constraining propagation scope with BGP .............. 16
   6.2.1.3    Policy based origination of VM Host IP Address Routes  16
   6.2.1.4    Policy based instantiation of VM Host IP Address
              Forwarding State ..................................... 17
   6.2.2      Propagating VPN-IP host routes ....................... 17
   6.2.3      Triangular Routing Solution Based on NHRP ............ 18
   6.2.3.1    Detailed Procedures .................................. 19
   6.2.3.2    Failure scenarios .................................... 22
   6.2.3.2.1  DCBR Failure - Option 1 .............................. 22
   6.2.3.2.2  DCBR Failure - Option 2 .............................. 23
   7          IANA Considerations .................................. 23
   8          Security Considerations .............................. 23
   9          Acknowledgements ..................................... 23
   10         References ........................................... 23
   11         Authors' Addresses ................................... 24

1. Specification of requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

2. Introduction

   This document describes network-based solutions for seamless Virtual
   Machine (VM) mobility, where seamless mobility is defined as the
   ability to move a VM from one server in the data center to another
   server in the same or a different data center, while retaining the IP
   and MAC address of the VM.  In the context of this document the term
   mobility, or a reference to moving a VM, should be considered to
   imply seamless mobility, unless otherwise stated.

   The solutions described in this document provide a network-based
   toolkit which is based on IP routing, E-VPN [E-VPN], BGP/MPLS IP VPNs
   [RFC4364], and NHRP [RFC2332].

   Note that in the scenario where a VM is moved between servers located
   in different data centers, there are certain constraints on how far
   apart such data centers may be located geographically.  This distance
   is limited by the current state of the art of Virtual Machine
   technology, by the bandwidth that may be available between the data
   centers, by the ability to manage and operate such VM mobility, etc.
   This document describes a set of solutions for VM mobility.  These
   solutions form a toolkit that enables VMs to move across both small
   and large geographical distances.  However, the practical
   applicability of these solutions will depend on these constraints.
   If these constraints are relaxed over time, allowing VMs to move
   across larger geographical boundaries, the solutions described here
   will continue to be applicable.

2.1. Terminology

   In this document the term "Top of Rack Switch (ToR)" is used to refer
   to a switch in a data center that is connected to the servers that
   host VMs.  A data center may have multiple ToRs.

   Several data centers could be connected by a network.  In addition to
   providing interconnect among the data centers, such a network could
   provide connectivity between the VMs hosted in these data centers and
   the sites that contain hosts communicating with such VMs.  Each data
   center has one or more Data Center Border Routers (DCBRs) that
   connect the data center to the network, and provide (a) connectivity
   between VMs hosted in the data center and VMs in other data centers,
   and (b) connectivity between VMs hosted in the data center and hosts
   communicating with these VMs.

   The data centers and the network that interconnects them may be
   either (a) under the same administrative control, or (b) controlled
   by different administrations.

   Consider a set of VMs that (as a matter of policy) are allowed to
   communicate with each other, and a collection of devices that
   interconnect these VMs.
   If communication among any VMs in that set
   could be accomplished in such a way as to preserve MAC source and
   destination addresses in the Ethernet header of the packets exchanged
   among these VMs (as these packets traverse from their sources to
   their destinations), we will refer to such a set of VMs as a Layer 2
   based Closed User Group (L2-based CUG).

   A given VM may be a member of more than one L2-based CUG.

   In terms of IP address assignment, this document assumes that all VMs
   of a given L2-based CUG have their IP addresses assigned out of a
   single IP prefix.  Thus, in the context of this document a single IP
   subnet corresponds to a single L2-based CUG.

   A VM that is a member of a given L2-based CUG may (as a matter of
   policy) be allowed to communicate with VMs that belong to other
   L2-based CUGs, or with other hosts.  Such communication involves IP
   forwarding, and thus would result in changing MAC source and
   destination addresses in the Ethernet header of the packets being
   exchanged.

   In this document the term "L2 site" refers to a collection of
   interconnected devices that perform forwarding based on the
   information carried in the Ethernet header.  Forwarding within an L2
   site could be provided by such layer 2 technologies as the Spanning
   Tree Protocol (STP), etc.  Note that any multi-chassis LAG is treated
   as a single L2 site.

   Servers connected to a given L2 site may host VMs that belong to
   different L2-based CUGs.  Enforcing L2-based CUG boundaries among
   these VMs within a single L2 site is accomplished by relying on Layer
   2 mechanisms (e.g., VLANs).

   We say that an L2 site contains a given VM (or that a given VM is in
   a given L2 site) if the server presently hosting this VM is connected
   to a ToR that is part of that site.

   We say that a given L2-based CUG is present within a given data
   center if one or more VMs that are part of that CUG are presently
   hosted by the servers located in that data center.

   This document assumes that VMs that belong to the same L2-based CUG
   and are in the same L2 site MUST use the same VLAN-ID.  This document
   assumes that VMs that belong to the same L2-based CUG and are in
   different L2 sites MAY use either the same or different VLAN-IDs.

   This document assumes that VMs that belong to different L2-based
   CUGs and are in the same L2 site MUST use different VLAN-IDs.  This
   document assumes that VMs that belong to different L2-based CUGs and
   are in different L2 sites MAY use either the same or different
   VLAN-IDs.

3. Problem Statement

   This section describes the specific problems that need to be
   addressed to enable seamless VM mobility.

3.1. Maintaining Connectivity in the Presence of VM Mobility

   In the context of this document the ability to maintain connectivity
   in the presence of VM mobility means the ability to exchange traffic
   between a VM and its peer(s), as the VM moves from one server to
   another, where the peer(s) may be either other VM(s) or hosts.

3.2. Layer 2 Extension

   Consider a scenario where a VM that is a member of a given L2-based
   CUG moves from one server to another, and these two servers are in
   different L2 sites, where these sites may be located in the same or
   different data centers.
   In order to enable communication between this
   VM and other VMs of that L2-based CUG, the new L2 site must become
   interconnected with the other L2 site(s) that presently contain the
   rest of the VMs of that CUG, and the interconnect must not violate
   the L2-based CUG requirement to preserve source and destination MAC
   addresses in the Ethernet header of the packets exchanged between
   this VM and other members of that CUG.

   Moreover, if the previous site no longer contains any VMs of that
   CUG, the previous site no longer needs to be interconnected with the
   other L2 site(s) that contain the rest of the VMs of that CUG.

   We will refer to this as the "layer 2 extension problem".

   Note that the layer 2 extension problem is a special case of
   maintaining connectivity in the presence of VM mobility, as the
   former restricts communicating VMs to a single/common L2-based CUG,
   while the latter does not.

3.3. Optimal IP Routing

   In the context of this document optimal IP routing, or just optimal
   routing, in the presence of VM mobility could be partitioned into two
   problems:

   + Optimal routing of a VM's outbound traffic.  This means that as a
     given VM moves from one server to another, the VM's default
     gateway should be in close topological proximity to the ToR that
     connects the server presently hosting that VM.  Note that when we
     talk about optimal routing of the VM's outbound traffic, we mean
     traffic from that VM to the destinations that are outside of the
     VM's L2-based CUG.  This document refers to this problem as the VM
     default gateway problem.

   + Optimal routing of a VM's inbound traffic.  This means that as a
     given VM moves from one server to another, the (inbound) traffic
     originated outside of the VM's L2-based CUG and destined to that
     VM should be routed via the router of the VM's L2-based CUG that
     is in close topological proximity to the ToR that connects the
     server presently hosting that VM, without first traversing some
     other router of that L2-based CUG.  This is also known as avoiding
     "triangular routing".  This document refers to this problem as the
     triangular routing problem.

   Note that optimal routing is a special case of maintaining
   connectivity in the presence of VM mobility, as the former assumes
   not only the ability to maintain connectivity, but also that this
   connectivity is maintained using optimal routing.  On the other hand,
   maintaining connectivity does not make optimal routing a
   prerequisite.

4. Layer 2 Extension Solution

   This document assumes that the solution for the layer 2 extension
   problem relies on [E-VPN].  That is, the L2 sites that contain VMs of
   a given L2-based CUG are interconnected using E-VPN.  Thus a given
   E-VPN corresponds to (is associated with) one or more L2-based CUGs
   (e.g., VLANs).  An L2-based CUG is associated with a single E-VPN
   Ethernet Tag Identifier.

   This section provides a brief overview of how E-VPN is used as the
   solution for the "layer 2 extension problem".  Details of E-VPN
   operations can be found in [E-VPN].

   A single L2 site could be as large as the whole network within a
   single data center, in which case the DCBRs of that data center, in
   addition to acting as IP routers for the L2-based CUGs present in the
   data center, also act as PEs.  In this scenario E-VPN is used to
   handle VM migration between servers in different data centers.
   A single L2 site could be as small as a single ToR with the servers
   connected to it, in which case the ToR acts as a PE.  In this
   scenario E-VPN is used to handle VM migration between servers that
   are either in the same or in different data centers.  Note that even
   in this scenario this document assumes that DCBRs, in addition to
   acting as IP routers for the L2-based CUGs present in their data
   center, also participate in the E-VPN procedures, acting as BGP Route
   Reflectors for the E-VPN routes originated by the ToRs acting as PEs.

   In the case where E-VPN is used to interconnect L2 sites in different
   data centers, the network that interconnects the DCBRs of these data
   centers could either (a) provide only Ethernet or IP/MPLS
   connectivity service among these DCBRs, or (b) offer the E-VPN
   service.  In the former case DCBRs exchange E-VPN routes among
   themselves relying only on the Ethernet or IP/MPLS connectivity
   service provided by the network that interconnects these DCBRs.  The
   network does not directly participate in the exchange of these E-VPN
   routes.  In the latter case the routers at the edge of the network
   may be either co-located with DCBRs, or may establish E-VPN peering
   with DCBRs.  Either way, in this case the network facilitates the
   exchange of E-VPN routes among DCBRs (as in this case DCBRs would not
   need to exchange E-VPN routes directly with each other).

   Please note that for the purpose of solving the layer 2 extension
   problem the propagation scope of E-VPN routes for a given L2-based
   CUG is constrained to the PEs connected to the L2 sites that
   presently contain VMs of that CUG.  This scope is controlled by the
   Route Target of the E-VPN routes.  Controlling propagation scope
   could be further facilitated by using Route Target Constrain
   [RFC4684].

   Use of E-VPN ensures that traffic among members of the same L2-based
   CUG is optimally forwarded, irrespective of whether members of that
   CUG are within the same or in different data centers.  This follows
   from the observation that E-VPN inherently enables (disaggregated)
   forwarding at the granularity of the MAC address of the VM.

   Optimal forwarding among VMs of a given L2-based CUG that are within
   the same data center requires propagating VM MAC addresses, and comes
   at the cost of disaggregated forwarding within a given data center.
   However, such disaggregated forwarding is not necessary between data
   centers if a given L2-based CUG spans multiple data centers.  For
   example, when a given ToR acts as a PE, this ToR has to maintain MAC
   advertisement routes only for the VMs within its own data center (and
   furthermore, only for the VMs that belong to the L2-based CUGs whose
   site(s) are connected to that ToR), and then point a "default" MAC
   route to one of the DCBRs of that data center.  In this scenario a
   DCBR of a given data center, when it receives MAC advertisement
   routes from DCBR(s) in other data centers, does not re-advertise
   these routes to the PEs within its own data center, but just
   advertises a single "default" MAC advertisement route to these PEs.

   When a given VM moves to a new L2 site, if in the new site this VM is
   the only VM from its L2-based CUG, then the PE(s) connected to the
   new site need to be provisioned with the E-VPN Instance (EVI) of the
   E-VPN associated with this L2-based CUG.  Likewise, if after the move
   the old site no longer has any VMs that are in the same L2-based CUG
   as the VM that moved, the PE(s) connected to the old site need to be
   de-provisioned with the EVI of the E-VPN.  Procedures to accomplish
   this are outside the scope of this document.
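   As a purely illustrative, non-normative sketch of the aggregation
   described above (all names below are hypothetical, and this is not an
   implementation of [E-VPN]), a ToR acting as a PE might maintain MAC
   routes only for VMs in its own data center and fall back to a
   "default" MAC route pointing at one of its DCBRs:

      class MacTable:
          """MAC forwarding entries of a ToR acting as a PE (illustrative)."""

          def __init__(self, default_next_hop):
              self.routes = {}                          # VM MAC address -> next hop PE
              self.default_next_hop = default_next_hop  # "default" MAC route to a DCBR

          def learn(self, mac, next_hop):
              # Populated from E-VPN MAC advertisement routes within the data center.
              self.routes[mac] = next_hop

          def withdraw(self, mac):
              self.routes.pop(mac, None)

          def resolve(self, dst_mac):
              # Known intra-DC MAC: forward directly; anything else: hand off to the DCBR.
              return self.routes.get(dst_mac, self.default_next_hop)

      tor = MacTable(default_next_hop="DCBR-1")
      tor.learn("00:aa:bb:cc:dd:01", "ToR-2")                 # VM in the same data center
      assert tor.resolve("00:aa:bb:cc:dd:01") == "ToR-2"
      assert tor.resolve("00:aa:bb:cc:dd:99") == "DCBR-1"     # VM in another data center

   The point of the sketch is only that disaggregated (per-MAC)
   forwarding state is confined to the local data center, while remote
   destinations are reached via the DCBRs.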
5. VM Default Gateway Solutions

   Once a VM moves to a new L2 site, solving the VM Default Gateway
   problem requires the PE(s) connected to that L2 site to apply IP
   forwarding to the inter-CUG/inter-subnet traffic originated by that
   VM.  This implies that (a) the PE(s) should be capable of performing
   both MAC-based and IP-based forwarding (although the IP-based
   forwarding functionality could be limited to forwarding based either
   on IP host routes or on the IP default route), and (b) the PE(s)
   should be able to distinguish between intra-CUG/intra-subnet and
   inter-CUG/inter-subnet traffic originated by that VM (in order to
   apply MAC-based forwarding to the former and IP-based forwarding to
   the latter).

   As a VM moves to a new L2 site, the default gateway IP address of the
   VM may not change.  Further, the ARP cache of the VM may not time
   out.  Thus the destination MAC address in the inter-CUG/inter-subnet
   traffic originated by that VM would not change as the VM moves to the
   new site.  Given that, how would the PE(s) connected to the new L2
   site be able to recognize inter-CUG/inter-subnet traffic originated
   by that VM?  The following describes two possible solutions.

   Both of the solutions assume that for inter-CUG/inter-subnet traffic
   between a VM and its peers outside of the VM's own data center, one
   or more DCBRs of that data center act as fully functional default
   gateways for that traffic.

   Both of these solutions also assume that the VLAN-aware bundling mode
   of E-VPN is used as the default mode, such that different L2-CUGs
   (different subnets) of the same tenant can be accommodated in a
   single EVI.  This facilitates provisioning, since E-VPN related
   provisioning (such as RT configuration) could be done on a per-tenant
   basis as opposed to on a per-subnet (per L2-CUG) basis.  In this
   default mode, VMs' MAC addresses are maintained on a per-bridge-
   domain basis (per subnet) within the EVI; however, VMs' IP addresses
   are maintained across all the subnets of that tenant in that EVI.  In
   scenarios where communication among VMs of different subnets
   belonging to the same tenant is to be restricted based on some
   policies, the VLAN mode of E-VPN should be used, with each
   VLAN/subnet mapping to its own EVI; E-VPN RT filtering can then be
   leveraged to enforce flexible policy-based communication among VMs of
   different subnets of that tenant.
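   As a purely illustrative, non-normative sketch of the distinction
   discussed above (the MAC address below is hypothetical), a PE's
   classification of a frame received from a VM might look as follows;
   how the PE learns the default gateway MAC address(es) is the subject
   of the two solutions described next:

      # Default gateway MAC address(es) the PE acts on (illustrative value).
      GATEWAY_MACS = {"00:00:5e:00:01:01"}

      def classify(dst_mac):
          # Inter-CUG/inter-subnet traffic is addressed to a default gateway MAC
          # and is handed to IP forwarding; everything else stays in MAC forwarding.
          return "ip-forwarding" if dst_mac.lower() in GATEWAY_MACS else "mac-forwarding"

      assert classify("00:00:5E:00:01:01") == "ip-forwarding"   # inter-subnet traffic
      assert classify("00:aa:bb:cc:dd:01") == "mac-forwarding"  # intra-subnet traffic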
5.1. VM Default Gateway Solution - Solution 1

   The first solution relies on the use of an anycast default gateway IP
   address and an anycast default gateway MAC address.

   If DCBRs act as PEs for an E-VPN corresponding to a given L2-based
   CUG, then these anycast addresses are configured on these DCBRs.
   Likewise, if ToRs act as PEs, then these anycast addresses are
   configured on these ToRs.  All VMs of that L2-based CUG are
   (auto)configured with the (anycast) IP address of the default
   gateway.

   DCBRs (or ToRs) acting as PEs use these anycast addresses as follows:

   + When a particular DCBR (or ToR) acting as a PE receives a packet
     with the (anycast) default gateway MAC address, the DCBR (or ToR)
     applies IP forwarding to the packet.

   + When a particular DCBR (or ToR) acting as a PE receives an ARP
     Request for the default gateway (anycast) IP address, the DCBR
     (or ToR) generates an ARP Reply.

   This ensures that a particular DCBR (or ToR), acting as a PE, can
   always apply IP forwarding to the packets sent by a VM to the
   (anycast) default gateway MAC address.  It also ensures that such a
   DCBR (or ToR) can respond to the ARP Request generated by a VM for
   the default gateway (anycast) IP address.

   DCBRs (or ToRs) acting as PEs must never use the anycast default
   gateway MAC address as the source MAC address in the packets
   originated by these DCBRs (or ToRs).

   Note that multiple L2-based CUGs may share the same MAC address for
   use as the (anycast) MAC address of the default gateway for these
   CUGs.

   If the default gateway functionality is not in the ToRs, then the
   default gateway MAC/IP addresses need to be distributed using E-VPN
   procedures.  Note that with this approach, when originating E-VPN MAC
   advertisement routes for the MAC address of the default gateways of a
   given L2-based CUG, all these routes MUST indicate that this MAC
   address belongs to the same Ethernet Segment Identifier (ESI).

5.2. VM Default Gateway Solution - Solution 2

   The second solution does not require configuring the anycast default
   gateway IP and MAC addresses on the PEs.

   Each DCBR (or each ToR) that acts as a default gateway for a given
   L2-based CUG advertises in the E-VPN control plane its default
   gateway IP and MAC address using the MAC advertisement route, and
   indicates that such a route is associated with the default gateway.
   The MAC advertisement route MUST be advertised as per the procedures
   in [E-VPN].  The MAC address in such an advertisement MUST be set to
   the default gateway MAC address of the DCBR (or ToR).  The IP address
   in such an advertisement MUST be set to the default gateway IP
   address of the DCBR (or ToR).  To indicate that such a route is
   associated with a default gateway, the route MUST carry the Default
   Gateway extended community [Default-Gateway].

   Each PE that receives this route and imports it as per the procedures
   of [E-VPN] MUST create MAC forwarding state that enables it to apply
   IP forwarding to the packets destined to the MAC address carried in
   the route.  The PE that receives this E-VPN route follows the
   procedures in Section 12 of [E-VPN] when replying to ARP Requests
   that it receives, if such Requests are for the IP address in the
   received E-VPN route.

6. Triangular Routing Solution

   The triangular routing solution could be partitioned into two
   components: an intra data center triangular routing solution and an
   inter data center triangular routing solution.  The former handles
   the situation where communicating VMs are in the same data center.
   The latter handles all other cases.

   Both of these solutions assume that as a PE originates MAC
   advertisement routes, such routes, in addition to the MAC addresses
   of the VMs, also carry the IP addresses of these VMs.  Procedures by
   which a PE can learn the IP address associated with a given MAC
   address are specified in [E-VPN].
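   As a purely illustrative, non-normative sketch of this assumption
   (the actual NLRI encoding is specified in [E-VPN]; the field and type
   names below are hypothetical), a MAC advertisement route that also
   carries the VM's IP address can be thought of as carrying the
   following pieces of information:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class MacIpAdvertisement:              # illustrative only; see [E-VPN] for the NLRI
          ethernet_tag: int                  # identifies the L2-based CUG within the EVI
          mac: str                           # MAC address of the VM
          ip: str                            # IP address associated with that MAC
          next_hop: str                      # advertising PE (a ToR or a DCBR)
          route_targets: List[str] = field(default_factory=list)  # controls import scope

      route = MacIpAdvertisement(ethernet_tag=100,
                                 mac="00:aa:bb:cc:dd:01",
                                 ip="192.0.2.11",
                                 next_hop="ToR-1",
                                 route_targets=["64500:100"])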
6.1. Intra Data Center Triangular Routing Solution

   Consider a set of L2-based CUGs, such that VMs of these CUGs, as a
   matter of policy, are allowed to communicate with each other.  To
   avoid triangular routing among such VMs that are in the same data
   center, this document relies on the E-VPN procedures, as follows.

   Procedures in this section assume that ToRs act as PEs and are also
   able to support IP forwarding functionality.

   For a given set of L2-based CUGs whose VMs are allowed to communicate
   with each other, consider a set of E-VPN instances (EVIs) of the
   E-VPNs associated with these CUGs.  We further restrict this set of
   EVIs to only the EVIs that are within the same data center.  To avoid
   triangular routing among VMs within the same data center, E-VPN
   routes originated by one of the EVIs within such a set should be
   imported by all other EVIs in that set, irrespective of whether these
   other EVIs belong to the same E-VPN as the EVI that originates the
   routes.

   One possible way to accomplish this is (a) for each set of L2-based
   CUGs whose VMs are allowed to communicate with each other, and for
   each data center that contains such CUGs, to have a distinct RT (a
   distinct RT per set, per data center), (b) to provision each EVI of
   the E-VPNs associated with these CUGs to import routes that carry
   this RT, and (c) to make the E-VPN routes originated by such EVIs
   carry this RT.  Note that these RTs are in addition to the RTs used
   to form individual E-VPNs.  Note also that what is described here is
   conceptually similar to the notion of "extranets" in BGP/MPLS VPNs
   [RFC4364].

   When a PE imports an E-VPN route into a particular EVI, and this
   route is associated with a VM that is not part of the L2-based CUG
   associated with the E-VPN of that EVI, the PE creates IP forwarding
   state to forward traffic to the IP address present in the NLRI of the
   route towards the Next Hop specified in the route.

   To illustrate how the above procedures avoid triangular routing,
   consider the following example.  Assume that a particular VM, VM-A,
   is currently hosted by a server connected to a particular ToR, ToR-1,
   and another VM, VM-B, is currently hosted by a server connected to
   ToR-2.  Assume that VM-A and VM-B belong to different L2-based CUGs,
   and (as a matter of policy) VMs in these CUGs are allowed to
   communicate with each other.  Now assume that VM-B moves to another
   server, and this server is connected to ToR-3.  Assume that ToR-1,
   ToR-2, and ToR-3 are in the same data center.  While initially ToR-1
   would forward data originated by VM-A and destined to VM-B to ToR-2,
   after VM-B moves to the server connected to ToR-3, using the
   procedures described above, ToR-1 would forward the data to ToR-3
   (and not to ToR-2), thus avoiding triangular routing.

   Note that for the purpose of redistributing E-VPN routes among
   multiple L2-based CUGs, the above procedures limit the propagation
   scope of routes to individual VMs to a single data center, and
   furthermore, to only a subset of the PEs within that data center -
   the PEs that have EVIs of the E-VPNs associated with the L2-based
   CUGs whose VMs are allowed to communicate with each other.  As a
   result, the control plane overhead needed to avoid triangular routing
   within a data center is localized to these PEs.
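   The import behavior described above, applied to the VM-B/ToR-3
   example, can be sketched as follows.  This is purely illustrative and
   non-normative; all names, tags, and RT values are hypothetical, and
   this is not an implementation of [E-VPN] or [RFC4364]:

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Route:                           # simplified MAC advertisement route
          ethernet_tag: int
          mac: str
          ip: str
          next_hop: str
          route_targets: List[str]

      def import_route(evi, route):
          if not evi["import_rts"].intersection(route.route_targets):
              return                                        # RT filter: route not imported
          if route.ethernet_tag == evi["ethernet_tag"]:
              evi["mac_fib"][route.mac] = route.next_hop    # same CUG: MAC forwarding state
          else:
              evi["ip_fib"][route.ip] = route.next_hop      # other CUG: IP forwarding state

      # EVI of VM-A's CUG on ToR-1, provisioned with the per-set, per-data-center
      # RT 64500:999 in addition to its own RT.
      evi = {"import_rts": {"64500:200", "64500:999"},
             "ethernet_tag": 200, "mac_fib": {}, "ip_fib": {}}

      # Route for VM-B (a different CUG), advertised by ToR-3 after the move.
      vm_b = Route(100, "00:aa:bb:cc:dd:02", "192.0.2.22", "ToR-3",
                   ["64500:100", "64500:999"])
      import_route(evi, vm_b)
      assert evi["ip_fib"]["192.0.2.22"] == "ToR-3"          # VM-B's traffic now goes to ToR-3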
6.2. Inter Data Center Triangular Routing Solution

   This section describes procedures, based on propagating host routes
   or on using NHRP, to avoid triangular routing between VMs in
   different data centers, or between a VM located in a given data
   center and a host located in some other site.

   There are two inter data center triangular routing solutions proposed
   in this document.

   The first solution is based on propagating host routes to VMs' IP
   addresses, with careful consideration given to constraining the
   propagation scope of these routes in order to be able to limit the
   set of devices that need to carry additional control plane load.  In
   this solution a DCBR of a given data center originates host routes
   for the VMs that are hosted by the servers in that data center.  Such
   routes could be either IP host routes or VPN-IP host routes.

   The second solution relies on using the Next Hop Resolution Protocol
   (NHRP).  In this solution NHRP is used to provide an (on demand)
   mapping from a given VM's IP address into an IP address of the DCBR
   of the data center that contains the server presently hosting this
   VM.

6.2.1. Propagating IP host routes

   The approach described in this section assumes that all the
   communicating VMs belong to the same routing/addressing realm.

   Note that while the material in this section is presented in terms of
   avoiding triangular routing between VMs that are in different data
   centers, the procedures described in this section are equally
   applicable to communication between a VM and a host.

   Procedures in this section assume that DCBRs, in addition to acting
   as routers for the L2-based CUGs present in their data center, also
   participate in the E-VPN procedures, either (a) acting as PEs, or (b)
   acting as BGP Route Reflectors for the E-VPN routes originated by the
   ToRs within their data center if these ToRs are acting as PEs.  As a
   result, a DCBR that acts as a router for a given L2-based CUG can
   determine whether a particular VM that is a member of this CUG is in
   the same data center as the DCBR itself.

   Procedures in this section rely on DCBRs performing what amounts to a
   redistribution of routes between E-VPN and OSPF/ISIS/BGP.  In other
   words, DCBRs in one data center use the E-VPN functionality to obtain
   the information about the IP addresses of the VMs currently present
   in their data center, and then advertise into OSPF/ISIS/BGP host
   routes to these IP addresses.

   DCBRs in other data centers receive these routes, and use the
   information carried in these routes to avoid triangular routing.
   Note that even if ToRs within a given data center act as PEs and also
   perform IP-based forwarding, DCBRs of that data center SHOULD NOT
   redistribute to these ToRs the host routes they receive from DCBRs in
   other data centers - DCBRs SHOULD advertise only the IP default route
   to these ToRs.

   To illustrate how the above procedures avoid triangular routing,
   consider the following example.  Assume that a particular VM, VM-A,
   is currently being hosted by a server located in data center DC-1
   with DCBR-1 as its DCBR, and another VM, VM-B, is currently being
   hosted by a server located in data center DC-2 with DCBR-2 as its
   DCBR.  Assume that VM-A and VM-B belong to different L2-based CUGs.
   Using the E-VPN procedures, DCBR-2 determines that VM-B is presently
   in its own data center, and thus originates an IP host route to
   VM-B's IP address.  Using OSPF/ISIS/BGP, this route ultimately gets
   propagated to DCBR-1.  Using this information, DCBR-1 would forward
   data originated by VM-A and destined to VM-B to DCBR-2.

   Now assume that VM-B moves to another server, and this server is
   located in data center DC-3 with DCBR-3 as its DCBR.  Using the E-VPN
   procedures, DCBR-2 determines that VM-B is no longer present in
   DCBR-2's data center, and thus withdraws the previously originated IP
   host route to VM-B's IP address.  Using the E-VPN procedures, DCBR-3
   determines that VM-B is now present in DCBR-3's data center, and thus
   originates an IP host route to VM-B's IP address.  Using the
   OSPF/ISIS/BGP procedures, this route ultimately gets propagated to
   DCBR-1.  Using this information, DCBR-1 would now forward data
   originated by VM-A and destined to VM-B to DCBR-3, thus avoiding
   triangular routing.

   As we mentioned above, essential to the scheme that relies on
   propagating (host) routes to individual VMs' IP addresses is the
   ability to constrain the propagation scope of these routes.  The
   following describes possible approaches to accomplish this.

6.2.1.1. Constraining propagation scope with OSPF/ISIS

   When DCBRs use OSPF or ISIS to exchange routing information among
   themselves, OSPF/ISIS areas may be used as a boundary to constrain
   the propagation scope of host routes.  That is, a host route
   originated by a given DCBR is propagated only within the OSPF/ISIS
   area of that DCBR, and thus received only by the DCBRs that are in
   the same OSPF/ISIS area.  ABRs connected to a particular OSPF/ISIS
   area advertise outside of this area only routes to the IP subnets
   associated with the L2-based CUGs present in the data centers whose
   DCBRs are in that area, but do not advertise any host routes.

   Note that this approach avoids triangular routing when a VM is moved
   between servers that are located in the data centers whose DCBRs
   belong to the same OSPF/ISIS area, but does not avoid triangular
   routing if these DCBRs belong to different OSPF/ISIS areas.  However,
   when these DCBRs belong to the same OSPF/ISIS area, this approach
   avoids triangular routing irrespective of whether the peer is in the
   same or a different OSPF/ISIS area as the VM itself.

   Since this approach provides triangular routing avoidance only within
   a limited scope, to provide connectivity to the peers that are
   outside of that scope, DCBRs connected to a given L2-based CUG, in
   addition to advertising host routes, also advertise into OSPF/ISIS a
   route associated with the IP subnet of that CUG.  Propagation of such
   a route need not be limited to the OSPF/ISIS area(s) of these DCBRs.

6.2.1.2. Constraining propagation scope with BGP

   When DCBRs use BGP to exchange routing information among themselves,
   one could use Route Targets (RTs) to constrain the propagation scope
   of host routes to a particular set of data centers, or, to be more
   precise, to a particular set of DCBRs of these data centers.

   To accomplish that, DCBRs in a particular set of data centers may be
   configured with a particular import RT.  DCBRs that originate host
   routes and wish to constrain the propagation scope of these routes to
   a particular set of data centers would advertise these routes with
   the import RT provisioned for the DCBRs of the data centers in that
   set.  Route Target Constrain [RFC4684] MAY be used to facilitate
   constrained distribution of these host routes.
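   Purely as a non-normative illustration of this RT-based scoping (the
   names, addresses, and RT values below are hypothetical), the
   origination and import of such a host route might be sketched as
   follows:

      def originate_host_route(vm_ip, originating_dcbr, dc_set_rt):
          # Host route for a VM detected (via E-VPN) as local to the originating DCBR,
          # tagged with the import RT provisioned for the target set of data centers.
          return {"prefix": vm_ip + "/32", "next_hop": originating_dcbr, "rts": {dc_set_rt}}

      def maybe_install(import_rts, rib, route):
          # A receiving DCBR installs the host route only if the import RT matches.
          if import_rts & route["rts"]:
              rib[route["prefix"]] = route["next_hop"]

      rib_dcbr1 = {}
      r = originate_host_route("192.0.2.22", "DCBR-3", "64500:1000")
      maybe_install({"64500:1000"}, rib_dcbr1, r)       # DCBR-1 belongs to the same set
      assert rib_dcbr1["192.0.2.22/32"] == "DCBR-3"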
   Note that this approach avoids triangular routing only if both
   communicating VMs are in the data centers whose DCBRs are provisioned
   with the same import RT, and moreover, only when a VM moves between
   servers that are located in data centers whose DCBRs are configured
   with the same import RT.

   Note that, at least in principle, RIPv2 can achieve similar results
   by carefully using routing policies and tags in the routes.

   Since this approach provides triangular routing avoidance only within
   a limited scope, to provide connectivity to the peers that are
   outside of that scope, DCBRs connected to a given L2-based CUG, in
   addition to advertising host routes, also advertise into BGP a route
   to the IP subnet associated with that CUG.

6.2.1.3. Policy based origination of VM Host IP Address Routes

   When a DCBR (using E-VPN procedures) learns that a particular VM has
   now moved to the DCBR's data center, the DCBR may not originate a
   corresponding VM host route by default.  Instead, it may optionally
   do so based on a dynamic policy.  For example, the policy may be to
   originate such a route only when the traffic to the VM flowing
   through that DCBR exceeds a certain threshold.  Note that delaying
   origination of the host route, while impacting routing optimality,
   does not impact the ability to maintain connectivity between this VM
   and its peers.

6.2.1.4. Policy based instantiation of VM Host IP Address Forwarding
   State

   When a ToR/DCBR learns (from another ToR or DCBR) a host route of a
   VM, it may not immediately install this route in its forwarding
   table.  Instead, it may optionally do so based on a dynamic policy.
   For example, the policy may be to install such forwarding state only
   when the ToR/DCBR needs to forward the first packet to that
   particular VM.  Note that delaying installation of the host route,
   while impacting routing optimality, does not impact the ability to
   maintain connectivity between this VM and its peers.

6.2.2. Propagating VPN-IP host routes

   In the scenario where one wants to restrict communication between VMs
   in different L2-based CUGs to a particular set of L2-based CUGs,
   and/or when one needs to support multiple routing/addressing realms
   (e.g., IP VPNs), this document proposes to use the mechanisms of
   BGP/MPLS IP VPNs [RFC4364] as follows.

   The set of L2-based CUGs whose VMs are allowed to communicate with
   each other is considered as a single Layer 3 VPN.

   A DCBR, in addition to implementing the E-VPN functionality, also
   implements the functionality of a Provider Edge (PE) router, as
   specified in [RFC4364].  Specifically, this PE router would maintain
   multiple VRFs, one per Layer 3 VPN whose L2-based CUGs are present in
   the DCBR's data center.  Such a VRF would be populated from two
   sources: (1) VPN-IP routes received from other VRFs that belong to
   the same Layer 3 VPN, and (2) MAC advertisement routes received from
   the EVIs that are in the same data center as the DCBR hosting the
   VRF, and that belong to the E-VPNs associated with the L2-based CUGs
   that form the Layer 3 VPN associated with the VRF.
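   As a non-normative illustration (hypothetical names and addresses),
   the two sources that populate such a VRF can be sketched as follows:

      vrf = {}   # VM IP address -> next hop; one such VRF per Layer 3 VPN

      def add_vpn_ip_host_route(vrf, vm_ip, remote_pe):
          # Source (1): VPN-IP host route from another VRF of the same Layer 3 VPN.
          vrf[vm_ip] = remote_pe

      def add_local_evpn_route(vrf, vm_ip, local_pe):
          # Source (2): MAC advertisement route (carrying the VM's IP) from a local EVI.
          vrf[vm_ip] = local_pe

      add_vpn_ip_host_route(vrf, "192.0.2.22", "DCBR-3")    # VM in another data center
      add_local_evpn_route(vrf, "192.0.2.11", "ToR-1")      # VM in this data center
      assert vrf == {"192.0.2.22": "DCBR-3", "192.0.2.11": "ToR-1"}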
   Procedures of [RFC4364] constrain the propagation scope of the VPN-IP
   host routes originated from a given VRF on a given DCBR to only the
   other VRFs that belong to the same VPN as the VRF that originated the
   routes (or even to a subset of such VRFs).

   Using the extranet procedures, such VPN-IP host routes could be
   propagated to other VPNs.  Alternatively, one or more VRFs of a given
   Layer 3 VPN, in addition to originating the VPN-IP host routes, MAY
   also originate a VPN-IP route to the IP subnet associated with the
   L2-based CUG that belongs to the Layer 3 VPN associated with that
   VRF.  Such a route could be distributed to other Layer 3 VPNs using
   the extranet procedures.

   Note that the applicability of the approach described in this section
   is not limited to environments where one needs to support multiple
   routing/addressing realms (e.g., an IP VPN environment) - this
   approach is also well suited for an environment that consists of a
   single routing/addressing realm.

6.2.3. Triangular Routing Solution Based on NHRP

   The triangular routing solution based on NHRP utilizes a subset of
   the functionality provided by the Next Hop Resolution Protocol
   [RFC2332], as follows.

   Note that while most of the material in this section is presented in
   terms of avoiding triangular routing between a VM located in a given
   data center and a host located in some other site, the procedures
   described in this section are equally applicable to communication
   between VMs in different data centers.

   Consider a scenario where a host within a given site communicates
   with a VM, and the VM could move among servers located in different
   data centers.  The following example illustrates how NHRP can be used
   to avoid triangular routing.

   Assume that a given L2-based CUG spans two data centers, one in San
   Francisco (SF) and another in Los Angeles (LA).  DCBR-SF is the DCBR
   for the SF data center.  DCBR-LA is the DCBR for the LA data center.
   Since this CUG spans both the SF data center and the LA data center,
   at least one of DCBR-SF or DCBR-LA advertises a route to the IP
   prefix of the IP subnet associated with the CUG (this is a route to a
   prefix, and not a host route).  Let's denote this IP prefix as X.
   Advertising a route to this prefix is essential to avoid transient
   disruptions in maintaining connectivity in the presence of VM
   mobility.

   DCBR-LA and DCBR-SF can determine whether a particular VM of that
   L2-based CUG is in LA or SF by using the E-VPN procedures.

   There is a site in Denver, and that site contains a host B that wants
   to communicate with a particular VM, VM-A, that belongs to that
   L2-based CUG.

   Assume that there is an IP infrastructure that connects the border
   router of the site in Denver, DCBR-SF, and DCBR-LA.  This
   infrastructure could be provided either by 2547 VPNs, by IPSec
   tunnels over the Internet, or by L2 circuits.  (Note that this
   infrastructure does not assume that the border router in Denver is 1
   IP hop away from either DCBR-SF or DCBR-LA.)
   To avoid triangular routing, if VM-A is in LA, then the border router
   in Denver should send traffic for VM-A via DCBR-LA without going
   first through DCBR-SF.  If VM-A is in SF, then the border router in
   Denver should send traffic for VM-A via DCBR-SF without going first
   through DCBR-LA.  This should be true except for some transients
   during the move of VM-A between SF and LA.

   To accomplish this, we would require the border router in Denver,
   DCBR-SF, and DCBR-LA to support a subset of the NHRP functionality,
   as follows.  In NHRP terminology DCBR-SF and DCBR-LA are NHRP Servers
   (NHSs), while the border router in Denver is an NHRP Client (NHC).

   This document does not rely on the use of NHRP Registration
   Request/Reply messages, as DCBRs/NHSs rely on the information
   provided by E-VPN.

   DCBR-SF will be an authoritative NHS for all the IP addresses of the
   VMs that are presently in the SF data center.  Likewise, DCBR-LA will
   be an authoritative NHS for all the IP addresses of the VMs that are
   presently in the LA data center.  Note that as a VM moves from SF to
   LA, the authoritative NHS for the IP address of that VM moves from
   DCBR-SF to DCBR-LA.

   We assume that the border router in Denver can determine the subset
   of the destinations for which it has to apply NHRP.  If DCBR-SF,
   DCBR-LA, and the border router in Denver use OSPF to exchange routing
   information, then a way to do this would be for DCBR-SF and DCBR-LA
   to use a particular OSPF tag to mark routes advertised by these
   DCBRs, and then make the border router in Denver apply NHRP to any
   destination that matches any route that carries that particular tag.
   If DCBR-SF, DCBR-LA, and the border router in Denver use BGP to
   exchange routing information, then a way to do this would be for
   DCBR-SF and DCBR-LA to use a particular BGP community to mark routes
   advertised by these DCBRs, and then make the border router in Denver
   apply NHRP to any destination that matches any route that carries
   that particular BGP community.

6.2.3.1. Detailed Procedures

   The following describes the details of the NHRP operations.

   When the border router in Denver first receives a packet from B
   destined to VM-A, the border router determines that VM-A falls into
   the subset of the destinations for which the border router has to
   apply NHRP.  Therefore, the border router originates an NHRP Request.

   The mandatory part of the NHRP Request is constructed as follows.

   The Source NBMA Address and the Source Protocol Address fields
   contain the IP address of the border router in Denver; the
   Destination Protocol Address field contains the IP address of VM-A.
   This Request is encapsulated into an IP packet, whose source IP
   address is the address of the border router, and whose destination IP
   address is the address of VM-A.  The packet carries the Router Alert
   option.  NHRP is carried directly over IP using IP Protocol Number 54
   [RFC1700].

   Note that the trigger for originating an NHRP Request may be either
   the first packet destined to a particular host, or a particular rate
   threshold for the traffic to that host.

   Following the route to the prefix X, the packet that carries the NHRP
   Request will eventually get to either DCBR-SF or DCBR-LA.  Let's
   assume that it is DCBR-SF that receives the packet.  (Note that none
   of the routers, if any, between the site border router in Denver and
   DCBR-SF or DCBR-LA would be required to support NHRP.)  Since both
   DCBR-SF and DCBR-LA are assumed to support NHRP, they would be
   required to process the NHRP Request carried in the packet.
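   The Request construction described above can be sketched, purely for
   illustration (the addresses below are hypothetical and the structure
   is simplified; see [RFC2332] for the actual packet formats), as
   follows:

      def build_nhrp_request(border_router_ip, vm_ip):
          mandatory_part = {
              "source_nbma_address": border_router_ip,
              "source_protocol_address": border_router_ip,
              "destination_protocol_address": vm_ip,
          }
          ip_header = {
              "source": border_router_ip,
              "destination": vm_ip,     # routed toward prefix X, not to a specific NHS
              "protocol": 54,           # NHRP is carried directly over IP
              "router_alert": True,     # lets DCBR-SF/DCBR-LA intercept the Request
          }
          return {"ip": ip_header, "nhrp": mandatory_part}

      request = build_nhrp_request("198.51.100.1", "192.0.2.11")   # Denver -> VM-A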
   If DCBR-SF determines that VM-A is in the LA data center (DCBR-SF
   determines this from the information provided by E-VPN), then DCBR-SF
   will forward the packet that contains the NHRP Request to DCBR-LA, as
   DCBR-SF is not an authoritative NHS for VM-A, while DCBR-LA is.
   DCBR-SF can accomplish this by setting the destination MAC address in
   the packet to the MAC address of DCBR-LA, in which case the packet
   will be forwarded to DCBR-LA using the E-VPN procedures.
   Alternatively, DCBR-SF could change the destination address in the IP
   header of the packet that carries the NHRP Request to DCBR-LA, in
   which case the packet will be forwarded to DCBR-LA using IP
   forwarding procedures.

   When the NHRP Request reaches DCBR-LA, and DCBR-LA determines that
   VM-A is in the LA data center (DCBR-LA determines this from the
   information provided by E-VPN), and thus that DCBR-LA is an
   authoritative NHS for VM-A, DCBR-LA sends back to the border router
   in Denver an NHRP Reply indicating that DCBR-LA should be used for
   forwarding traffic to VM-A.

   The mandatory part of the NHRP Reply is constructed as follows.  The
   Source NBMA Address, the Source Protocol Address, and the Destination
   Protocol Address fields in the mandatory part are copied from the
   corresponding fields in the NHRP Request.  The Reply carries a Client
   Information Entry (CIE) with the Client NBMA Address field set to the
   IP address of DCBR-LA, and the Client Protocol Address field set to
   the IP address of VM-A.  The Reply is encapsulated into an IP packet,
   whose source IP address is the address of DCBR-LA, and whose
   destination IP address is the IP address of the border router in
   Denver (DCBR-LA determines this address from the information carried
   in the NHRP Request).  The packet does not carry the Router Alert
   option.

   Once the border router in Denver receives the Reply, the border
   router will encapsulate all the subsequent packets destined to VM-A
   into GRE with the outer header carrying DCBR-LA as the IP destination
   address.  (In effect that means that the border router in Denver will
   install in its FIB a host route for VM-A indicating GRE encapsulation
   with DCBR-LA as the destination IP address in the outer header.)

   Now assume that VM-A moves from the data center in LA to the data
   center in SF.  Once DCBR-LA finds this out (from the information
   provided by E-VPN), DCBR-LA sends an NHRP Purge to the border router
   in Denver.  Note that DCBR-LA can defer sending the Purge message
   until it receives GRE-encapsulated data destined to VM-A.  Note also
   that in this case DCBR-LA does not have to keep track of all the
   requestors for VM-A to whom DCBR-LA previously sent NHRP Replies, as
   DCBR-LA determines the addresses of these requestors from the outer
   IP header of the GRE tunnel.

   When the border router in Denver receives the Purge message, it will
   purge the previously received information that VM-A is reachable via
   DCBR-LA.  In effect that means that the border router in Denver will
   remove the host route for VM-A from its FIB (but will still retain a
   route for the prefix X).
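   The behavior of the border router (the NHC) described above can be
   sketched, purely for illustration (hypothetical names and addresses;
   this is not an NHRP implementation), as follows:

      fib = {"192.0.2.0/24": "plain-ip-routing"}     # route to prefix X, always present

      def on_nhrp_reply(fib, vm_ip, dcbr_ip):
          # Client Protocol Address -> Client NBMA Address taken from the Reply's CIE:
          # install a host route that GRE-encapsulates traffic toward that DCBR.
          fib[vm_ip + "/32"] = ("gre", dcbr_ip)

      def on_nhrp_purge(fib, vm_ip):
          # Remove the host route; traffic falls back to the route for prefix X.
          fib.pop(vm_ip + "/32", None)

      on_nhrp_reply(fib, "192.0.2.11", "203.0.113.2")   # VM-A currently behind DCBR-LA
      on_nhrp_purge(fib, "192.0.2.11")                  # VM-A moved; re-resolve on demand
      assert "192.0.2.11/32" not in fib and "192.0.2.0/24" in fib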
   From that moment the border router in Denver will start forwarding
   packets destined to VM-A using the route to the prefix X (relying on
   plain IP routing).  That means that these packets will get to DCBR-SF
   (which is the desirable outcome anyway).

   However, once the border router in Denver receives the NHRP Purge,
   the border router will issue another NHRP Request.  This time, once
   this NHRP Request reaches DCBR-SF, DCBR-SF will send back to the
   border router in Denver an NHRP Reply, as at this point DCBR-SF
   determines that VM-A is in SF, and therefore DCBR-SF is an
   authoritative NHS for VM-A.  Once the border router in Denver
   receives the Reply, the router will encapsulate all the subsequent
   packets destined to VM-A into GRE with the outer header carrying
   DCBR-SF as the IP destination address.  In effect that means that the
   border router in Denver will install in its FIB a host route for VM-A
   indicating GRE encapsulation with DCBR-SF as the destination IP
   address in the outer header.

6.2.3.2. Failure scenarios

   To illustrate operations during failures, let's modify the original
   example by assuming that each data center has more than one DCBR.
   Specifically, the data center in SF has DCBR-SF1 and DCBR-SF2.  Both
   of these are authoritative NHSs for all the VMs whose addresses are
   taken from prefix X, and that are presently in the SF data center.
   Note also that both DCBR-SF1 and DCBR-SF2 advertise a route to the
   prefix X.

   Assume that VM-A is presently in SF, so the border router in Denver
   tunnels the traffic to VM-A through DCBR-SF1.

   Now assume that DCBR-SF1 crashes.  At that point the border router in
   Denver should stop tunnelling the traffic through DCBR-SF1, and
   should switch to DCBR-SF2.  The following sections describe two
   possible options to accomplish this.

6.2.3.2.1. DCBR Failure - Option 1

   One option to handle DCBR failures is to make each DCBR originate a
   host route for its own IP address, i.e., the address that it
   advertises in the NHRP Replies.  This way, when DCBR-SF1 crashes, the
   route to DCBR-SF1's IP address goes away, providing an indication to
   the border router in Denver that it can no longer use DCBR-SF1.  At
   that point the border router in Denver removes the route for VM-A
   from its FIB (but will still retain a route for the prefix X).  From
   that moment the border router in Denver will start forwarding packets
   destined to VM-A using the route to the prefix X.  Since DCBR-SF1 has
   crashed, these packets will be routed to DCBR-SF2, as DCBR-SF2
   advertises a route to prefix X (and the route to prefix X that had
   been previously advertised by DCBR-SF1 will be withdrawn due to the
   crash of DCBR-SF1).

   However, once the border router in Denver detects that DCBR-SF1 is
   down, the border router will issue another NHRP Request.  This time
   the NHRP Request reaches DCBR-SF2, and DCBR-SF2 will send back to the
   border router in Denver an NHRP Reply.  Once the border router in
   Denver receives the Reply, the router will encapsulate all the
   subsequent packets destined to VM-A into GRE with the outer header
   carrying DCBR-SF2 as the IP destination address.  In effect that
   means that the border router in Denver will install in its FIB a host
   route for VM-A indicating GRE encapsulation with DCBR-SF2 as the
   destination IP address in the outer header.
6.2.3.2.2. DCBR Failure - Option 2

   Another option to handle DCBR failures is to make both DCBRs
   advertise the same (anycast) IP address in the NHRP Replies.  This
   way, when DCBR-SF1 crashes, the route to this address does not go
   away (as DCBR-SF2 will continue to advertise a route to this address
   into IP routing), and thus the traffic destined to that address will
   now go to DCBR-SF2.  Since DCBR-SF2 is an authoritative NHS for all
   the VMs whose addresses are taken from prefix X and that are
   presently in the SF data center, DCBR-SF2 will forward this traffic
   to VM-A.

7. IANA Considerations

   This document introduces no new IANA Considerations.

8. Security Considerations

   TBD.

9. Acknowledgements

   We would like to thank Dave Katz for reviewing the NHRP procedures.
   We would also like to thank John Drake for his review and comments.

10. References

   [RFC2119]   Bradner, S., "Key words for use in RFCs to Indicate
               Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC1700]   Reynolds, J. and J. Postel, "ASSIGNED NUMBERS", RFC 1700,
               October 1994.

   [RFC2332]   Luciani, J., Katz, D., Piscitello, D., Cole, B., and N.
               Doraswamy, "NBMA Next Hop Resolution Protocol (NHRP)",
               RFC 2332, April 1998.

   [RFC4364]   Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
               Networks (VPNs)", RFC 4364, February 2006.

   [RFC4684]   Marques, P., et al., "Constrained Route Distribution for
               Border Gateway Protocol/MultiProtocol Label Switching
               (BGP/MPLS) Internet Protocol (IP) Virtual Private
               Networks (VPNs)", RFC 4684, November 2006.

   [E-VPN]     Aggarwal, R., et al., "BGP MPLS Based Ethernet VPN",
               draft-ietf-l2vpn-evpn, work in progress.

   [Default-Gateway]
               http://www.iana.org/assignments/bgp-extended-communities

11. Authors' Addresses

   Rahul Aggarwal
   Arktan, Inc
   Email: raggarwa_1@yahoo.com

   Yakov Rekhter
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: yakov@juniper.net

   Wim Henderickx
   Alcatel-Lucent
   Email: wim.henderickx@alcatel-lucent.com

   Ravi Shekhar
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: rshekhar@juniper.net

   Luyuan Fang
   Cisco Systems
   111 Wood Avenue South
   Iselin, NJ 08830
   Email: lufang@cisco.com

   Ali Sajassi
   Cisco Systems
   Email: sajassi@cisco.com