INTERNET-DRAFT                                              Luyuan Fang
Intended Status: Standards Track                              Microsoft
Expires: January 4, 2015                                     John Evans
                                                             David Ward
                                                            Rex Fernando
                                                                   Cisco
                                                                 Ning So
                                                           Vinci Systems
                                                             Nabil Bitar
                                                                 Verizon
                                                         Maria Napierala
                                                                    AT&T

                                                            July 4, 2014

                       BGP IP MPLS VPN Virtual CE
                     draft-fang-l3vpn-virtual-ce-03

Abstract

This document describes the architecture and solutions for using a virtual Customer Edge (vCE) in BGP IP MPLS VPNs. The solution aims to provide efficient service delivery through CE virtualization, and is especially beneficial in virtual Private Cloud (vPC) environments when extending IP MPLS VPNs into tenant virtual Data Center containers. This document covers: the BGP IP MPLS VPN virtual CE architecture; control plane and forwarding options; Data Center orchestration processes; integration with existing WAN enterprise VPNs; management capability requirements; and security considerations. The solution is generally applicable to any BGP IP VPN deployment. The virtual CE solution is complementary to the virtual PE solutions.

Today's data centers require multi-tenancy and mechanisms to establish overlay network connectivity. This document describes one approach to enabling data center network connectivity.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

Copyright and License Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1 Terminology
      1.2 Problem statement
      1.3 Scope of the document
   2. Virtual CE Architecture and Reference Model
      2.1 Virtual CE
      2.2 Architecture
   3. Control Plane
      3.1 vCE Control Plane
   4. Forwarding Plane
      4.1 Forwarding between vCE and PE/vPE
      4.2 Forwarding between vCE and VM
   5. Addressing and QoS
      5.1 Addressing
      5.2 QoS
   6. Management plane
      6.1 Network abstraction and management
      6.2 Service VM Management
   7. Orchestration and IP VPN inter-provisioning
      7.1 DC Instance to WAN IP VPN instance "binding" Requirements
      7.2 Provisioning/Orchestration
         7.2.1 vCE Push model
            7.2.1.1 Inter-domain provisioning vCE Push Model
            7.2.1.2 Cross-domain provisioning vCE Push Model
         7.2.2 vCE Pull model
   8. vCE and vPE interaction
      8.1 Traditional vCE-PE connectivity
      8.2 vCE-vPE connectivity
         8.2.1 Co-located vCE-vPE connectivity with vPE Model 1
         8.2.2 Co-located vCE-vPE connectivity with vPE Model 2
   9. Security Considerations
   10. IANA Considerations
   11. References
      11.1 Normative References
      11.2 Informative References
   12. Acknowledgement
   Authors' Addresses
1. Introduction

In the typical enterprise BGP/MPLS IP VPN [RFC4364] deployment, the Provider Edge (PE) and Customer Edge (CE) are physical routers which support the PE and CE functions. With the recent development of cloud services, using virtual instances of the PE or CE functions, residing in a compute device such as a server, can be beneficial: the same logical functions as in the physical deployment model are emulated, but they are now achieved via cloud-based network virtualization principles.

This document describes the IP VPN virtual CE (vCE) solution; the virtual PE (vPE) concept and implementation options are discussed in [I-D.fang-l3vpn-virtual-pe] and [I-D.ietf-l3vpn-end-system]. The vPE and vCE solutions provide two avenues to realize network virtualization.

1.1 Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

   Term        Definition
   ----------- --------------------------------------------------
   AAA         Authentication, Authorization, and Accounting
   ACL         Access Control List
   3GPP        3rd Generation Partnership Project
   AS          Autonomous System
   ASBR        Autonomous System Border Router
   BFD         Bidirectional Forwarding Detection
   BGP         Border Gateway Protocol
   CE          Customer Edge
   DB          Database
   DMZ         Demilitarized Zone, a.k.a. perimeter network
   ED          End Device: where the Guest OS, Host OS/Hypervisor,
               applications, VMs, and virtual router may reside
   FE          Front End
   FIB         Forwarding Information Base
   Forwarder   L3VPN forwarding function
   FRR         Fast Re-Route
   FTP         File Transfer Protocol
   GRE         Generic Routing Encapsulation
   HTTP        Hypertext Transfer Protocol
   Hypervisor  Virtual Machine Manager
   I2RS        Interface to the Routing System
   LDAP        Lightweight Directory Access Protocol
   MP-BGP      Multi-Protocol Border Gateway Protocol
   NVGRE       Network Virtualization using GRE
   OSPF        Open Shortest Path First
   PE          Provider Edge
   QinQ        Provider Bridging, stacked VLANs
   RR          Route Reflector
   SDN         Software Defined Networking
   SLA         Service Level Agreement
   SMTP        Simple Mail Transfer Protocol
   ToR         Top of Rack switch
   VI          Virtual Interface
   vCE         virtual Customer Edge Router
   vLB         virtual Load Balancer
   VM          Virtual Machine
   VLAN        Virtual Local Area Network
   vPC         virtual Private Cloud
   vPE         virtual Provider Edge Router
   VPN         Virtual Private Network
   vRR         virtual Route Reflector
   vSG         virtual Security Gateway
   VXLAN       Virtual eXtensible Local Area Network
   WAN         Wide Area Network

Definitions:

Virtual CE (vCE): A virtual instance of the Customer Edge (CE) routing function which resides in one or more network or compute devices. For example, the vCE data plane may reside in an end device, such as a server, co-resident with application Virtual Machines (VMs) on the server; the vCE control plane may reside in the same device or in a separate entity such as a controller.

Network Container/Tenant Container: An abstraction of a set of network and compute resources, which can be physical and virtual, providing the cloud services for a tenant. One tenant can have more than one Tenant Container.
Zone: A logical grouping of VMs and service assets within a tenant container. Different security policies may be applied within and between zones.

DMZ: Demilitarized zone, a.k.a. perimeter network. It is often a machine or a small subnet that sits between a trusted internal network, such as a corporate private LAN, and an un-trusted external network, such as the public Internet. Typically, the DMZ contains devices accessible to Internet traffic, such as Web (HTTP) servers, FTP servers, SMTP (e-mail) servers, and DNS servers.

1.2 Problem statement

With the growth of cloud services and the increase in the number of CE devices, routers/switches, and appliances, such as Firewalls (FWs) and Load Balancers (LBs), that need to be supported, there are benefits to virtualizing the Data Center tenant container. The virtualized container can increase resource sharing, optimize routing and forwarding of inter-segment and inter-service traffic, and simplify design, provisioning, and management.

The following two aspects of the virtualized Data Center tenant container for the IP VPN CE solution are discussed in this document.

1. Architecture re-design for the virtualized DC.

The optimal architecture of the virtualized container includes the virtual CE, virtual appliances, and application VMs, all co-resident on virtualized servers. In this arrangement, CEs and appliances can be created and removed easily on demand, and the virtual CE can interconnect the virtual appliances (e.g., FW, LB, NAT) and applications (e.g., Web, App, and DB) in a co-located fashion for simplicity, routing/forwarding optimization, and easier service chaining. Virtualizing these functions on a per-tenant basis simplifies, for the network operator, per-tenant service orchestration, tenant container moves, capacity planning across tenants, and per-tenant policies.

2. Provisioning/orchestration. Two issues need to be addressed:

a) The provisioning/orchestration system of the virtualized data center needs to support the VM life cycle and VM migration.

b) The provisioning/orchestration systems of the DC and the WAN networks need to be coordinated to support end-to-end IP VPN from DC to DC, or from a DC to an enterprise remote office in the same VPN. The DC and the WAN network are often operated by separate departments, even if they belong to the same provider. Today, the inter-connection process is slow and painful, and automation is highly desirable.

1.3 Scope of the document

It is assumed that the readers are familiar with BGP/MPLS IP VPN [RFC4364] terms and technologies; the base technology and its operation are not reviewed in detail in this document.

As the majority (in some networks, all) of applications are IP, this vCE solution focuses on IP VPN solutions to cover the most common cases and keep matters as simple as possible.

2. Virtual CE Architecture and Reference Model

2.1 Virtual CE

As described in [RFC4364], IP VPN uses a "peer model": the customers' edge routers (CE routers) exchange routes with the Service Provider's edge routers (PE routers); the CEs do not peer with each other. MP-BGP [RFC4271, RFC4760] is used between the PEs (often with RRs) which have a particular VPN attached to them to exchange the VPN routes. A CE sends IP packets to the PE; no VPN labels are used for packets forwarded between CE and PE.
A virtual CE (vCE) as defined in this document is a software instance of the IP VPN CE function which can reside in any network or compute device. For example, a vCE MAY reside in an end device, such as a server in a Data Center, where the application VMs reside. The CE functionality and management models remain the same as defined in [RFC4364] regardless of whether the CE is physical or virtual.

Using the virtual CE model, the CE functions can easily be co-located with the VMs/applications, e.g., in the same server. This allows tenant inter-segment and inter-service routing to be optimized. Likewise, the vCE can be in a separate server (in the same DC rack or across racks) from the application VMs, in which case the VMs would typically use standard L2 technologies to access the vCE via the DC network.

Similar to the virtual PE solution, the control and forwarding planes of a virtual CE can be on the same device, or decoupled and residing on different physical devices. The provisioning of a virtual CE, the associated applications, and the tenant network container can be supported through DC orchestration systems.

Unlike a physical or virtual PE, which can support multiple tenants, a physical or virtual CE supports a single tenant only. A single tenant can use multiple physical or virtual CEs, and an end device, such as a server, can support one or more vCEs. While the vCE is defined as a single-tenant device, each tenant can have multiple logical departments which are under the tenant's administrative control and require logical separation; this is the same model as in today's physical CE deployments.

Virtual CE and virtual PE are complementary approaches for extending IP VPN into tenant containers. In the vCE solution, there is no IP VPN within the data center or other type of service network; the vCE can connect to a PE which is a centralized IP VPN PE/Gateway/ASBR, or connect to a distributed vPE on a server or on the Top of Rack switch (ToR). The virtual CE can be used to extend the SP-managed CE solution to create new cloud-enabled services and provide the same topological model and features that are consistent with physical CE systems.

2.2 Architecture

Figure 1 illustrates the topology where the vCE is resident in the servers where the applications are hosted.

                         .''---'''---''.
                        (               )
                       (    IP/MPLS      )
                        (      WAN      )
        WAN              '--,,,_,,,--'
       ---------------------|----------|---------------------
        Service/DC          |          |
        Network         +-------+  +-------+
                        |Gateway|--|Gateway|
                        |  PE   |  |  PE   |
                        +-------+  +-------+
                            |   ,---.  |
                          .---. (   '.---.
                         (    ' '        ')
                        ('   Data Center   )
                         (.    Fabric     .)
                          (  (          ).--'
                         / ''--' '-''--'   \
                        /    /        \     \
        +-------+ +---+---+          +-------+ +---+---+
        |  vCE  | |vCE|vCE|          |  vCE  | |vCE|vCE|
        +---+---+ +---+---+          +---+---+ +---+---+
        |VM |VM | |VM |VM |          |VM |VM | |VM |VM |
        +---+---+ +---+---+          +---+---+ +---+---+
        |VM |VM | |VM |VM |          |VM |VM | |VM |VM |
        +---+---+ +---+---+          +---+---+ +---+---+

       End Device End Device        End Device End Device

              Figure 1. Virtualized Data Center with vCE

Figure 1 shows the vCE solution in a virtualized Data Center with application VMs on the servers. One or more vCEs MAY be used on each server.

The vCEs logically connect to the PEs/Gateway PEs to join the particular IP VPN which the tenant belongs to. The Gateway PEs connect to the IP MPLS WAN network for inter-DC and DC-to-enterprise VPN site connections. The server physically connects to the DC Fabric for packet forwarding.
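The following is a minimal, purely illustrative Python sketch of how the Figure 1 arrangement could be represented as a data model (tenant container, gateway PEs, single-tenant vCEs on end devices). The class and attribute names are hypothetical and are not defined by this document.

   # Illustrative only: a possible data model for the Figure 1 topology.
   # Names (TenantContainer, VCE, etc.) are assumptions, not defined here.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class VCE:
       """Single-tenant virtual CE instance hosted on an end device."""
       tenant: str
       end_device: str                        # server hosting the vCE and VMs
       attached_vms: List[str] = field(default_factory=list)

   @dataclass
   class TenantContainer:
       """Tenant container: vCEs bound to one IP VPN via the Gateway PEs."""
       tenant: str
       vpn_id: str                            # IP VPN the tenant belongs to
       gateway_pes: List[str]                 # Gateway PEs toward the IP/MPLS WAN
       vces: List[VCE] = field(default_factory=list)

       def add_vce(self, vce: VCE) -> None:
           # A vCE serves exactly one tenant; reject mismatches.
           if vce.tenant != self.tenant:
               raise ValueError("vCE belongs to a different tenant")
           self.vces.append(vce)

   container = TenantContainer("tenant-a", "vpn-100", ["gw-pe-1", "gw-pe-2"])
   container.add_vce(VCE("tenant-a", "server-1", ["vm-web-1", "vm-app-1"]))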
              ,---.                         ,---.
          .--.(    )                    .--.(    )
         (    '    '.---.              (    '    '.---.
        ('     L3VPN     )            ('    Internet    )
         '..(         ).'              '..(          ).'
             '--'---''                     '--'---''
          +---+    +---+                +---+    +---+
          |PE |    |PE |                | R |    | R |
          +---+    +---+                +---+    +---+
            |        |                    |        |
 """""""""""|""""""""|""""""""""""""""""""|""""""""|""""""""""""""""""
 " End Device        |                    |        |                 "
 " (e.g. a server)   |                  +----+     |                 "
 "          +--------+-------+     +----|vSG |-----+                 "
 "          |                |     |    +----+                       "
 "          |             +----+   |                                 "
 "          +-------------|vCE |---+---------------------+           "
 "          |             +----+                         |           "
 "  +----+  |     +----+     |                        +----+         "
 "  |vLB |--+     |vLB |-----+-----------+         +--|vLB |         "
 "  +----+  |     +----+     |           |         |  +----+         "
 "     |    |                |  +----+   |         |     |           "
 "     |    |                +--|vSG |---+---------+     |           "
 "     |    |                   +----+   |               |           "
 " ''''|''''|''''''''''''''   '''''''''''|'''''''''''''''|''''''''   "
 " ' +--------+ +--------+ '  ' +-------+ +-------+ +-----------+ '  "
 " ' | Apps/  | | Apps/  | '  ' | Apps/ | | Apps/ | |Apps |Apps | '  "
 " ' | VMs    | | VMs    | '  ' | VMs   | | VMs   | |VMs  |VMs  | '  "
 " ' |        | |        | '  ' |       | |       | |ZONE3|ZONE4| '  "
 " ' | Public | |Protect-| '  ' |       | |       | +-----+-----+ '  "
 " ' | Zone   | | ed FE  | '  ' | ZONE1 | | ZONE2 | |Apps |Apps | '  "
 " ' | (DMZ)  | |        | '  ' |       | |       | |VMs  |VMs  | '  "
 " ' |        | |        | '  ' |       | |       | |ZONE5|ZONE6| '  "
 " ' +--------+ +--------+ '  ' +-------+ +-------+ +-----------+ '  "
 " '    Front-end Zone     '  '          Back-end Zone            '  "
 " '                       '  '                                   '  "
 " ''''''''''''''''''''''''    ''''''''''''''''''''''''''''''''''''  "
 """""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

        Figure 2. A Virtualized Container with vCE in an End Device

The end device shown in Figure 2 is a physical server supporting multiple virtualized appliances and applications, and hosting multiple client VMs. In the traditional deployment, the topology often involves multiple physical CEs, physical Security Gateways, and Load Balancers residing in the same Data Center.

The virtualized approach provides the benefits of a reduced number of physical devices, simplified management, and optimal routing due to the co-location of the vCE, services, and client VMs.

While the above diagram represents a simplified view with all of the tenant service and application VMs residing in the same physical server, the same model also applies when the VMs are spread across many physical servers; the DC network then provides the physical inter-connectivity, while the vCE and the VMs connected to the vCE form the logical connections.

3. Control Plane

3.1 vCE Control Plane

The vCE control plane can be distributed or centralized.

1) Distributed control plane.

The vCE can exchange BGP routes with the PE or vPE for the particular IP VPN as described in [RFC4364]. The vCE needs to support BGP if this approach is used.

The advantage of distributed protocols is to avoid a single point of failure and bottleneck. Service chaining can be easily and efficiently supported in this approach.

BGP as the PE-CE protocol is used in about 70% of typical Enterprise IP VPN PE-CE connections. BGP supports rich policy compared to other alternatives.

2) Static routing.

Static routing is used in about 30% of Enterprise IP VPN PE-CE connections. It MAY be used if the operator prefers.
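The following is a minimal sketch of the route exchange implied by the distributed option: the vCE announces its local tenant prefixes over the PE-CE session and installs a default route received from the PE/vPE. It models the exchange abstractly in Python; it is not a BGP implementation, and all names are illustrative.

   # Abstract sketch of the distributed PE-CE control plane option.
   # Illustrative only; not a BGP stack and not defined by this document.
   class VCEControlPlane:
       def __init__(self, local_prefixes):
           self.local_prefixes = list(local_prefixes)  # tenant subnets behind the vCE
           self.rib = {}                               # prefix -> next hop

       def advertise(self):
           """Routes the vCE would announce over the PE-CE session."""
           return {p: "vce" for p in self.local_prefixes}

       def learn_from_pe(self, routes):
           """Install routes (typically a default) received from the PE/vPE."""
           self.rib.update(routes)

   class PEVrf:
       """Per-tenant VRF on the PE/vPE side of the PE-CE session."""
       def __init__(self):
           self.vrf_routes = {}

       def receive_from_ce(self, routes):
           # Imported CE routes would then be exported as VPN routes via MP-BGP.
           self.vrf_routes.update(routes)
           return {"0.0.0.0/0": "pe"}                  # default route back to the vCE

   vce = VCEControlPlane(["10.1.1.0/24", "10.1.2.0/24"])
   pe_vrf = PEVrf()
   vce.learn_from_pe(pe_vrf.receive_from_ce(vce.advertise()))

With static routing, the same reachability would instead be configured on both sides rather than exchanged dynamically.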
3) Controller-based approach.

Using a controller is the Software Defined Networking (SDN) approach. A controller can be distributed or centralized. The central controller performs the control plane functions and sends instructions to the vCE on the end devices to configure the data plane.

This requires a standard interface to the routing system. The Interface to the Routing System (I2RS) is work in progress in the IETF [I-D.ward-irs-framework], [I-D.rfernando-irs-framework-requirement].

4. Forwarding Plane

4.1 Forwarding between vCE and PE/vPE

No MPLS forwarding is required between the PE and CE in typical PE-CE connection scenarios, though MPLS label forwarding is required for implementing the Carriers' Carrier (CSC) model.

IPv4 and IPv6 packet forwarding MUST be supported.

Native fabric segmentation can be used to support isolation between vCE-to-PE connections. Examples of native fabric segmentation include:

   - VLANs (Virtual Local Area Networks) [IEEE 802.1Q]

   - IEEE 802.1ad [IEEE 802.1ad]/QinQ, Provider Bridging

Alternatively, overlay segmentation with better scalability can be used:

   - VXLAN (Virtual eXtensible LAN) [I-D.mahalingam-dutt-dcops-vxlan]

   - NVGRE (Network Virtualization using GRE) [I-D.sridharan-virtualization-nvgre]

Note that the above references for overlay networks are currently work in progress in the IETF.

4.2 Forwarding between vCE and VM

If the vCE and the VM the vCE is connecting are co-located in the same server, the connection is internal to the server and no external protocol is involved.

If the vCE and the VM the vCE is connecting are located in different devices, standard external protocols are needed. The forwarding can use native or overlay techniques as listed in the above sub-section.

5. Addressing and QoS

5.1 Addressing

IPv4 and IPv6 addressing MUST be supported.

IP address allocation for vCEs and applications/clients:

1) IP addresses MAY be assigned by central management/provisioning with predetermined blocks through a planning process.

2) IP addresses MAY be obtained through a DHCP server.

Address space separation: The IP addresses used for clients in the IP VPNs in the Data Center SHOULD be in separate address blocks outside the blocks used for the underlay infrastructure of the Data Center. The purpose is to protect the Data Center infrastructure from being attacked if an attacker gains access to the tenant VPNs.

5.2 QoS

Differentiated Services [RFC2475] Quality of Service (QoS) is standard functionality for physical CEs and MUST be supported on the vCE. This is important to ensure seamless end-to-end SLAs from the IP VPN in the WAN into the service network/Data Center. The use of the MPLS Diffserv tunnel model Pipe Mode (RFC 3270) with an explicit null LSP MUST be supported.
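Relating to the address-space separation requirement in Section 5.1 above, the following sketch shows one way an orchestration system could verify that a tenant block does not overlap the Data Center underlay blocks, using the Python standard library. The specific prefixes are placeholders, not recommendations of this document.

   # Sketch of the Section 5.1 address-space separation check.
   # INFRA_BLOCKS and the example prefixes are illustrative only.
   import ipaddress

   INFRA_BLOCKS = [ipaddress.ip_network("192.168.0.0/16")]   # underlay (example)

   def validate_tenant_block(prefix: str) -> None:
       tenant = ipaddress.ip_network(prefix)
       for infra in INFRA_BLOCKS:
           if tenant.version == infra.version and tenant.overlaps(infra):
               raise ValueError(f"{tenant} overlaps infrastructure block {infra}")

   validate_tenant_block("10.20.0.0/24")       # accepted
   # validate_tenant_block("192.168.10.0/24")  # would raise: overlaps the underlay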
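As an illustration of the Diffserv requirement in Section 5.2 above, the sketch below marks traffic toward the PE with per-class DSCP values. The four classes and their code points are an assumed example policy; the actual classes and markings are an operator and SLA decision, not specified by this document.

   # Sketch of Diffserv marking on the vCE toward the PE (Section 5.2).
   # The class-to-DSCP policy below is an assumed example.
   DSCP_BY_CLASS = {
       "network-control": 48,   # CS6
       "realtime":        46,   # EF
       "business-data":   26,   # AF31
       "best-effort":      0,   # default PHB
   }

   def mark(packet: dict, traffic_class: str) -> dict:
       """Return a copy of the packet with the DSCP field set for its class."""
       marked = dict(packet)
       marked["dscp"] = DSCP_BY_CLASS.get(traffic_class, 0)
       return marked

   print(mark({"dst": "10.1.1.10"}, "realtime"))   # {'dst': '10.1.1.10', 'dscp': 46}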
6. Management plane

6.1 Network abstraction and management

The use of the vCE with single-tenant virtual service instances can simplify management requirements, as there is no need to discover device capabilities, track tenant dependencies, and manage shared service resources.

The vCE north-bound interface SHOULD be standards based. The Interface to the Routing System (I2RS) is work in progress in the IETF [I-D.ward-irs-framework], [I-D.rfernando-irs-framework-requirement].

vCE element management MUST be supported; it can be done in a similar fashion as for a physical CE, without the hardware aspects.

6.2 Service VM Management

Service VM management SHOULD be hypervisor agnostic; e.g., on-demand turn-up of service VMs SHOULD be supported.

The management tools SHOULD be based on open standards.

7. Orchestration and IP VPN inter-provisioning

7.1 DC Instance to WAN IP VPN instance "binding" Requirements

   - MUST support service activation in the physical and virtual
     environment. For example, assign a VLAN to the correct VRF.

   - MUST support per-VLAN Authentication, Authorization, and
     Accounting (AAA). The PE function is an OA&M boundary.

   - MUST be able to apply other policies to a VLAN. For example,
     per-VLAN QoS and ACLs.

   - MUST ensure that the WAN IP VPN state and the Data Center state
     are dynamically synchronized, so that there is no possibility of
     a customer being connected to the wrong VRF. For example, remove
     all tenant state when the service instance is terminated.

   - MUST integrate with existing WAN IP VPN provisioning processes.

   - MUST scale to at least 10,000 tenant service instances.

   - MUST cope with rapid (sub-minute) tenant mobility.

   - MAY support automated cross-provisioning accounting correlation
     between the WAN IP VPN and the cloud/DC for the same tenant.

   - MAY support automated cross-provisioning state correlation
     between the WAN IP VPN and the cloud/DC/extended Data Center for
     the same tenant.

7.2 Provisioning/Orchestration

There are two primary approaches for IP VPN provisioning - push and pull; both can be used for provisioning/orchestration.

7.2.1 vCE Push model

Push model: a top-down approach - IP VPN provisioning is pushed from a network management system or other central provisioning system to the IP VPN network elements.

This approach supports service activation and is commonly used in existing IP VPN enterprise deployments. When extending the IP VPN solution into the cloud/data center or a separate Data Center, it MUST support off-line accounting correlation between the WAN IP VPN and the cloud/DC IP VPN for the tenant; the systems SHOULD be able to bind interface accounting to a particular tenant. It MAY require offline state correlation as well, for example, binding interface state to the tenant.

7.2.1.1 Inter-domain provisioning vCE Push Model

Provisioning process (a sketch of this flow follows the list):

1) Cloud/DC orchestration configures the vCE.

2) Orchestration initiates WAN IP VPN provisioning; it passes connection IDs (e.g., of VLAN/VXLAN) and the tenant context to the WAN IP VPN provisioning systems.

3) The WAN IP VPN provisioning system provisions the PE VRF and other policies per normal enterprise IP VPN provisioning processes.
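The following is a minimal sketch of the three-step push-model flow above, expressed as Python functions. The orchestration and provisioning interfaces shown are hypothetical; this document does not define such APIs.

   # Sketch of the Section 7.2.1.1 push-model flow; interfaces are assumed.
   def dc_orchestrate_vce(tenant: str, vlan_id: int) -> dict:
       """Step 1: cloud/DC orchestration instantiates and configures the vCE."""
       return {"tenant": tenant, "connection_id": vlan_id, "vce": f"vce-{tenant}"}

   def initiate_wan_provisioning(context: dict) -> dict:
       """Step 2: pass the connection ID (VLAN/VXLAN) and tenant context to the
       WAN IP VPN provisioning system."""
       return {"tenant": context["tenant"], "connection_id": context["connection_id"]}

   def provision_pe_vrf(request: dict) -> dict:
       """Step 3: the WAN system binds the connection to the tenant VRF on the PE
       and applies per-VLAN policies (QoS, ACLs, AAA) per Section 7.1."""
       return {"vrf": f"vrf-{request['tenant']}", "bound_to": request["connection_id"]}

   binding = provision_pe_vrf(
       initiate_wan_provisioning(dc_orchestrate_vce("tenant-a", 101)))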
This model requires the following:

   - The DC orchestration system or the WAN IP VPN provisioning system
     knows the topology inter-connecting the DC and the WAN VPN; for
     example, which interface on the WAN core device connects to which
     interface on the DC PE.

   - Offline state correlation.

   - Offline accounting correlation.

   - Per-SP integration.

A dynamic BGP session between the PE/vPE and the vCE MAY be used to automate the PE provisioning in the PE-vCE model; this removes the need for PE configuration. Other protocols can be used for this purpose as well, for example, the Enhanced Interior Gateway Routing Protocol (EIGRP) for dynamic neighbor relationship establishment. The dynamic routing avoids the need to configure the PEs in the PE-vCE model.

Caution: this is only under the assumption that the DC provisioning system is trusted and can support dynamic establishment of PE-vCE BGP neighbor relationships, for example, when the WAN network and the cloud/DC belong to the same Service Provider.

7.2.1.2 Cross-domain provisioning vCE Push Model

Provisioning process:

1) The cross-domain orchestration system initiates DC orchestration.

2) The DC orchestration system configures the vCE.

3) The DC orchestration system passes back the VLAN/VXLAN and tenant context to the cross-domain orchestration system.

4) The cross-domain orchestration system initiates WAN IP VPN provisioning.

5) The WAN IP VPN provisioning system provisions the PE VRF and other policies as per normal enterprise IP VPN provisioning processes.

This model requires the following:

   - The cross-domain orchestration system knows the topology
     connecting the DC and the WAN IP VPN; for example, which
     interface on the core device connects to which interface on the
     DC PE.

   - Offline state correlation.

   - Offline accounting correlation.

   - Per-SP integration.

7.2.2 vCE Pull model

Pull model: a bottom-up approach - provisioning is pulled from the network elements to the network management/AAA systems based upon data plane or control plane activity. It supports service activation; this approach is often used in broadband deployments. Dynamic accounting correlation and dynamic state correlation are supported: for example, session-based accounting implicitly includes the tenant context for state correlation, and session-based state implicitly includes the tenant context.

Inter-domain provisioning process (a sketch of this flow follows):

1) The cloud/DC orchestration system configures the vCE.

2) The cloud/DC orchestration system primes the WAN IP VPN provisioning/AAA systems for the new service; it passes connection IDs (e.g., VLAN/VXLAN) and the tenant context to the WAN IP VPN provisioning systems.

3) The cloud/DC PE detects the new VLAN and sends a Radius Access-Request.

4) Radius responds with an Access-Accept carrying the VRF and other policies.

This model requires the VLAN/VXLAN information and tenant context to be passed on a per-transaction basis. In practice, it may be simpler to have the DC orchestration system update an LDAP directory.

Automatic accounting correlation and automatic state correlation are supported.
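The following sketch reduces the pull-model activation above to plain Python functions; a real deployment would use RADIUS (Access-Request/Access-Accept) or an LDAP-backed directory rather than the placeholder structures shown here, and all names are illustrative.

   # Sketch of the Section 7.2.2 pull model; AAA exchange is simplified.
   # Tenant context primed by the cloud/DC orchestration system (step 2).
   AAA_DIRECTORY = {101: {"tenant": "tenant-a", "vrf": "vrf-tenant-a", "qos": "gold"}}

   def aaa_lookup(vlan_id: int) -> dict:
       """AAA/policy server answers the PE's request for a newly seen VLAN."""
       entry = AAA_DIRECTORY.get(vlan_id)
       if entry is None:
           return {"action": "reject"}           # unknown service instance
       return {"action": "accept", **entry}      # VRF and policies to apply

   def pe_detects_new_vlan(vlan_id: int) -> dict:
       """Steps 3/4: the DC PE detects a new VLAN and pulls its tenant binding."""
       return aaa_lookup(vlan_id)

   print(pe_detects_new_vlan(101))   # binds VLAN 101 to vrf-tenant-a dynamically

Because the binding is pulled per session, removing the directory entry when the service instance terminates also removes the tenant state, which supports the dynamic state correlation noted above.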
8. vCE and vPE interaction

A vPE ([I-D.fang-l3vpn-virtual-pe], [I-D.ietf-l3vpn-end-system]) treats the VMs in the server as a virtual CE. In this section, the relationship between the vPE and such a vCE is discussed. A vPE can support one of the following two models:

Model 1: a limited control-plane functionality that advertises local VPN routes to a controller and receives VPN routes from the controller.

Model 2: a control plane component, physically separated from the forwarding component, that fully performs the control plane routing functionality and communicates FIB entries to the vPE forwarding entity implemented on servers.

A vCE provides subnet routing, firewalling, or SLB services to host VMs. The underlying connectivity between the vCE and these VMs can be at Layer 2 or Layer 3. In addition, the vCE can be connected to other vCEs over Layer 2 or using an IP VPN infrastructure. In this section, the focus is on IP VPN connectivity and, more importantly, on the interaction between a vCE and a traditional PE (simply referred to as PE), and between a vCE and a vPE.

8.1 Traditional vCE-PE connectivity

This connectivity is described in BGP/MPLS IP VPN [RFC4364]; the only distinction is that the CE is a virtual CE. The vCE attaches to the Layer 3 PE using a Layer 2 logical connection, e.g., an Ethernet VLAN, or a tunnel (e.g., IP/GRE, VXLAN), that is presented as an IP interface to a corresponding VRF at the PE. Routing between the vCE and PE can be static or based on a dynamic routing protocol (e.g., OSPF, BGP). A routing protocol, in addition to enabling the exchange of routing information between the PE and vCE, provides a liveliness check between the vCE and the PE. In the absence of a dynamic routing protocol, the vCE must support a mechanism that provides for a liveliness check, or an out-of-band mechanism must be implemented to monitor the liveliness of a vCE and a connected PE, and to effect routing changes upon a failure. Options for in-band liveliness checks include IP BFD [RFC5880], Ethernet Continuity Check (CC) [IEEE 802.1ag], and IP ping [RFC4560]. IP BFD must be supported while the other mechanisms are optional.

8.2 vCE-vPE connectivity

In this model, the vCE and vPE forwarding planes can be: (1) co-located on the same end device, e.g., a server, or (2) located on different servers. In addition, the control plane interaction differs between vPE Model 1 and Model 2.

8.2.1 Co-located vCE-vPE connectivity with vPE Model 1

In vPE Model 1, there is a control plane component of the vPE implemented on the end-server (e.g., [I-D.ietf-l3vpn-end-system], [I-D.fang-l3vpn-virtual-pe]). In addition, there is a control plane component implemented on a separate control plane entity (out-of-band) that enables the exchange of routing information among vPEs. In [I-D.ietf-l3vpn-end-system], the out-of-band control plane component is referred to as the route server; in [I-D.fang-l3vpn-virtual-pe], it is referred to as the vPE-C. There are two cases that must be considered:

Case 1-A: vCE to vPE local route exchange on a server.

Case 1-B: vCE to route server/vPE-C route exchange.

In these two cases, the vPE control plane or route server must send the vCE a default route with the next hop being the co-located vPE forwarding plane entity.

In case 1-A, the vCE must send local routes to the vPE control plane with itself being the next hop. The vPE control plane entity in turn updates the out-of-band control entity (e.g., the route server) with the routes reachable via the local vCE, as VPN routes, with itself being the next hop for these routes. The vPE also receives from the route server the VPN routes reachable via other vPEs [I-D.ietf-l3vpn-end-system]. It should be noted that in this case the vCE must be able to support one or more routing contexts, each with a separate attachment circuit to the vPE. Each such routing context must be associated with a VPN, and one or more VPNs must be supported.
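The following is a minimal sketch of the Case 1-A exchange: the vCE announces local routes with itself as next hop, and the co-located vPE control plane re-advertises them to the out-of-band route server/vPE-C as VPN routes with itself as next hop. The data structures and identifiers are illustrative only.

   # Sketch of Case 1-A (Section 8.2.1); structures and names are assumed.
   def vce_local_routes(prefixes, vce_id):
       """Routes the vCE announces over its attachment circuit, next hop = vCE."""
       return [{"prefix": p, "next_hop": vce_id} for p in prefixes]

   def vpe_export_vpn_routes(ce_routes, vpn, vpe_id):
       """vPE control plane turns CE routes into VPN routes, next hop = vPE."""
       return [{"vpn": vpn, "prefix": r["prefix"], "next_hop": vpe_id}
               for r in ce_routes]

   route_server = []   # out-of-band control entity (route server / vPE-C)
   route_server += vpe_export_vpn_routes(
       vce_local_routes(["10.1.1.0/24"], "vce-1"), vpn="vpn-100", vpe_id="vpe-1")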
In case 1-B, the vCE must have a control channel with the route server. There must be a control channel per vCE routing context, or alternatively the channel must allow the unambiguous multiplexing of routes that belong to different routing contexts on the same channel. The vCE sends the routes reachable via the vCE to the route server with itself being the next hop. The route server must learn from the co-located vPE control plane component the reachability of the local vCE IP address used as the next hop. This IP address must be exchanged between the vCE and vPE in-band over a corresponding attachment circuit that identifies the routing context. Alternatively, the route server/vPE-C must be programmed with the association of the vCE control channel, a VPN, and an end-device IP address. As a result, the route server/vPE-C must populate the vPE distributed control plane with the corresponding routes as non-VPN routes, and the vPE must respond with the VPN routes that correspond to each of these routes. Alternatively, the routes reachable via a vCE must be defined via a portal per routing context (and therefore per VPN), and then correlated, upon instantiation of the vCE on an end-system, with the end-system IP address and the appropriate VRF on that end-system. In addition, the vCE must be configured with a default route per routing context with the next hop being the vPE.

8.2.2 Co-located vCE-vPE connectivity with vPE Model 2

In this model, there is no control plane routing component implemented on the end-system. That is, the end-system does not generate VPN routes and only receives VPN FIB entries from the out-of-band control plane component, for routes reachable locally and for remote routes. The vCE-control plane interaction is similar to the interaction in Model 1, case 1-B, described in the previous section, whereby route population is management-driven.
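The following sketch illustrates the Model 2 behaviour described above: the end-system hosts only a forwarding component that installs VPN FIB entries pushed by the out-of-band controller. The message layout is illustrative and is not a protocol defined by this document.

   # Sketch of vPE Model 2 (Section 8.2.2); the push format is assumed.
   class VpeForwarder:
       """Forwarding-only vPE component on the end device."""
       def __init__(self):
           self.fib = {}                        # (vpn, prefix) -> forwarding entry

       def install(self, entries):
           for e in entries:
               self.fib[(e["vpn"], e["prefix"])] = {"next_hop": e["next_hop"],
                                                    "label": e["label"]}

   controller_push = [
       {"vpn": "vpn-100", "prefix": "10.1.1.0/24", "next_hop": "local-vce", "label": 0},
       {"vpn": "vpn-100", "prefix": "10.9.0.0/16", "next_hop": "gw-pe-1", "label": 3001},
   ]
   forwarder = VpeForwarder()
   forwarder.install(controller_push)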
9. Security Considerations

Security aspects that need to be addressed include the following:

   - vCE creation on a server: is the server owned by the operator?
     Is this a managed CE model? How is the vCE authenticated?

   - A vCE in the DC connecting to a VPN in the WAN: do the DC and the
     WAN IP VPN belong to the same SP or to different ones? How much
     information is permitted to pass through auto-provisioning? How
     are connections authenticated, especially in pull models?

   - How does the vCE protect itself from attack by client VMs?

   - Additional security procedures in a fully virtualized cloud/DC
     environment, e.g., FW placement. All virtualized appliances need
     to be protected against attack.

   - Three-tier (Web, App, DB) interaction access control.

Details are to be added in a future revision.

10. IANA Considerations

None.

11. References

11.1 Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
             Border Gateway Protocol 4 (BGP-4)", RFC 4271, January
             2006.

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC4560] Quittek, J., Ed., and K. White, Ed., "Definitions of
             Managed Objects for Remote Ping, Traceroute, and Lookup
             Operations", RFC 4560, June 2006.

   [RFC4760] Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
             "Multiprotocol Extensions for BGP-4", RFC 4760, January
             2007.

   [RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection
             (BFD)", RFC 5880, June 2010.

   [I-D.ietf-l3vpn-end-system] Marques, P., Fang, L., Pan, P., Shukla,
             A., Napierala, M., "BGP-signaled end-system IP/VPNs",
             draft-ietf-l3vpn-end-system, work in progress.

   [IEEE 802.1ad] IEEE, "Provider Bridges", 2005.

   [IEEE 802.1Q] IEEE, "802.1Q - Virtual LANs", 2006.

   [IEEE 802.1ag] IEEE, "802.1ag - Connectivity Fault Management",
             2007.

11.2 Informative References

   [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z.,
             and W. Weiss, "An Architecture for Differentiated
             Services", RFC 2475, December 1998.

   [I-D.fang-l3vpn-virtual-pe] Fang, L., Ward, D., Fernando, R.,
             Napierala, M., Bitar, N., Rao, D., Rijsman, B., So, N.,
             "BGP IP VPN Virtual PE", draft-fang-l3vpn-virtual-pe,
             work in progress.

   [I-D.ward-irs-framework] Atlas, A., Nadeau, T., Ward, D.,
             "Interface to the Routing System Framework",
             draft-ward-irs-framework, work in progress.

   [I-D.rfernando-irs-framework-requirement] Fernando, R., Medved, J.,
             Ward, D., Atlas, A., Rijsman, B., "IRS Framework
             Requirements", draft-rfernando-irs-framework-requirement,
             work in progress.

   [I-D.mahalingam-dutt-dcops-vxlan] Mahalingam, M., Dutt, D., et al.,
             "A Framework for Overlaying Virtualized Layer 2 Networks
             over Layer 3 Networks", draft-mahalingam-dutt-dcops-vxlan,
             work in progress.

   [I-D.sridharan-virtualization-nvgre] Sridharan, M., et al.,
             "Network Virtualization using Generic Routing
             Encapsulation", draft-sridharan-virtualization-nvgre,
             work in progress.

12. Acknowledgement

The authors would like to thank Vaughn Suazo for his review and comments.

Authors' Addresses

   Luyuan Fang
   Microsoft
   5600 148th Ave NE
   Redmond, WA 98052
   US
   Email: lufang@microsoft.com

   John Evans
   Cisco
   16-18 Finsbury Circus
   London, EC2M 7EB
   UK
   Email: joevans@cisco.com

   David Ward
   Cisco
   170 W Tasman Dr
   San Jose, CA 95134
   US
   Email: wardd@cisco.com

   Rex Fernando
   Cisco
   170 W Tasman Dr
   San Jose, CA
   US
   Email: rex@cisco.com

   Ning So
   Vinci Systems
   Email: ning.so@vinci-systems.com

   Nabil Bitar
   Verizon
   40 Sylvan Road
   Waltham, MA 02145
   Email: nabil.bitar@verizon.com

   Maria Napierala
   AT&T
   200 Laurel Avenue
   Middletown, NJ 07748
   Email: mnapierala@att.com