Network Working Group                                          L. Dunbar
Internet Draft                                                  A. Malis
Intended status: Informational                                    Huawei
Expires: January 2, 2019                                    C. Jacquenet
                                                                  Orange
                                                                  M. Toy
                                                                 Verizon
                                                            July 2, 2018

    Seamless Interconnect Underlay to Cloud Overlay Problem Statement
                 draft-dm-net2cloud-problem-statement-02

Abstract

   This document describes the problems that enterprises face today in
   connecting their branch offices to dynamic workloads in commercial
   cloud data centers (DCs).

   It examines some of the approaches for interconnecting workloads and
   applications hosted in cloud DCs with enterprises' on-premises DCs
   and branch offices. This document also describes some of the
   (network) problems that many enterprises face when they have
   workloads, applications, and data split among hybrid data centers,
   especially for those enterprises with multiple sites that are
   already interconnected by VPNs (e.g., MPLS L2VPN/L3VPN) and leased
   lines.

   Current operational problems in the field are examined to determine
   whether there is a need for enhancements to existing protocols or
   whether a new protocol is necessary to solve them.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on January 2, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Definition of terms
   3. Current Practices in Interconnecting Enterprise Sites with Cloud
      DCs
      3.1. Interconnect to Cloud DCs
      3.2. Interconnect to Hybrid Cloud DCs
      3.3. Connecting workloads among hybrid Cloud DCs
   4. Desired Properties for Networking that interconnects Hybrid Cloud
      DCs
   5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs
   6. Problems with using IPsec tunnels to Cloud DCs
      6.1. Complexity of multi-point any-to-any interconnection
      6.2. Poor performance over long distance
      6.3. Scaling Issues with IPsec Tunnels
   7. Problems of Using SD-WAN to connect to Cloud DCs
      7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs
   8. End-to-End Security Concerns for Data Flows
   9. Requirements for Dynamic Cloud Data Center VPNs
   10. Security Considerations
   11. IANA Considerations
   12. References
      12.1. Normative References
      12.2. Informative References
   13. Acknowledgments

1. Introduction

   Cloud applications and services continue to change how businesses of
   all sizes work and share information. "Cloud applications and
   workloads" are those that are instantiated in third-party DCs that
   also host services for other customers.

   With the advent of widely available third-party cloud DCs in diverse
   geographic locations and the advancement of tools for monitoring and
   predicting application behaviors, it is technically feasible for
   enterprises to instantiate applications and workloads in locations
   that are geographically closest to their end users. This aids in
   improving end-to-end latency and overall user experience.

   Conversely, an enterprise can easily shut down applications and
   workloads when their end users' geographic base changes (which then
   requires changing the network connections to those relocated
   applications and workloads). In addition, an enterprise may wish to
   take advantage of the growing number of business applications
   offered by third-party private cloud DCs, such as SAP HANA, Oracle
   Cloud, Salesforce Cloud, etc.

   However, enterprise branch offices and on-premises data centers are
   typically connected via VPNs, such as MPLS-based L2VPN/L3VPN, and
   therefore connecting to cloud-based resources may not be
   straightforward if the provider of the VPN service does not have
   direct connections to the cloud DCs. Under those circumstances, the
   enterprise can upgrade its existing CPEs to utilize SD-WAN to reach
   cloud resources (without any assistance from the VPN service
   provider), or wait for its VPN service provider to make new
   agreements with data center providers to connect to the cloud
   resources. Either way, this is non-trivial, incurs additional
   infrastructure costs, and is slow to operationalize.

   In addition, there is a growing trend of enterprises restructuring
   their applications and workloads so that they can be split among
   hybrid DCs, to maximize the benefits of the geographic proximity and
   elasticity of cloud DCs as well as the special properties of
   on-premises DCs.

2. Definition of terms

   Cloud DC:   Off-premises data centers that usually host applications
               and workloads owned by different organizations or
               tenants.

   Controller: Used interchangeably with SD-WAN controller to manage
               SD-WAN overlay path creation/deletion and to monitor
               path conditions between two or more sites.

   DMVPN:      Dynamic Multipoint Virtual Private Network. DMVPN is a
               secure network that exchanges data between sites without
               needing to pass traffic through an organization's
               headquarters virtual private network (VPN) server or
               router.

   Heterogeneous Cloud: Applications and workloads split among Cloud
               DCs owned and managed by different operators.

   Hybrid Cloud: Applications and workloads split between on-premises
               data centers and cloud DCs. In this document, Hybrid
               Cloud also includes Heterogeneous Cloud.

   SD-WAN:     Software Defined Wide Area Network, which can mean many
               different things. In this document, "SD-WAN" refers to
               the solutions specified by ONUG (Open Network User
               Group), https://www.onug.net/software-defined-wide-area-
               network-sd-wan/, which are about pooling WAN bandwidth
               from multiple service providers to get better WAN
               bandwidth management, visibility, and control.

   VPC:        Virtual Private Cloud. A service offered by many Cloud
               DC operators to allocate logically isolated cloud
               resources, including compute, networking, and storage.

3. Current Practices in Interconnecting Enterprise Sites with Cloud DCs

3.1. Interconnect to Cloud DCs

   Most Cloud operators offer some type of network gateway through
   which an enterprise can reach its workloads hosted in the Cloud DC.
   For example, AWS (Amazon Web Services) offers the following options
   to reach workloads in AWS Cloud DCs:

   - Internet gateway, for any external entity to reach the workloads
     hosted in an AWS Cloud DC via the Internet.

   - Virtual gateway (vGW), to which IPsec tunnels [RFC6071] are
     established between an enterprise's own gateways and the AWS vGW,
     so that the communication between those gateways can be secured
     from the underlay (which might be the public Internet).

   - Direct Connect, which allows enterprises to purchase a direct
     connection from network service providers, i.e., a private leased
     line interconnecting the enterprise's gateway(s) and the AWS
     Direct Connect routers co-located with the network operators.

   +------------------------+
   |    ,---.        ,---.  |
   |   (TN-1 )      ( TN-2) |
   |    `-+-'  +--+  `-+-'  |
   |      +----|vR|----+    |
   |           ++-+         |
   |            |       +-+----+
   |            |      /Internet\     For External
   |            +-----+  Gateway +----------------------
   |                   \        /     to reach via Internet
   |                    +-+----+
   |                       |
   +------------------------+

   +------------------------+
   |    ,---.        ,---.  |
   |   (TN-1 )      ( TN-2) |
   |    `-+-'  +--+  `-+-'  |
   |      +----|vR|----+    |
   |           ++-+         |
   |            |       +-+----+
   |            |      / virtual\     For IPsec Tunnel
   |            +-----+  Gateway +----------------------
   |                   \        /     termination
   |                    +-+----+
   |                       |
   +------------------------+

   +------------------------+
   |    ,---.        ,---.  |
   |   (TN-1 )      ( TN-2) |
   |    `-+-'  +--+  `-+-'  |
   |      +----|vR|----+    |
   |           ++-+         |
   |            |       +-+----+              +------+
   |            |      /        \  For Direct /customer\
   |            +-----+  Gateway +-----------+ gateway |
   |                   \        /  Connect    \        /
   |                    +-+----+               +------+
   |                       |
   +------------------------+

             Figure 1: Examples of connecting to a Cloud DC

3.2. Interconnect to Hybrid Cloud DCs

   According to Gartner, by 2020 "hybrid will be the most common usage
   of the cloud", as more enterprises see the benefits of integrating
   public and private cloud infrastructures. However, enabling the
   growth of hybrid cloud deployments in the enterprise requires fast
   and safe interconnection between public and private cloud services.
   The Hybrid Cloud scenario also includes heterogeneous Cloud DCs.

   For enterprises to connect to applications and workloads hosted in
   multiple Cloud DCs, enterprises can use IPsec tunnels or leased
   private lines to connect their own gateways to each of the Cloud
   DCs' gateways, or any other suitable design (including a combination
   thereof).

   Some users prefer to instantiate their own virtual CPEs inside the
   public Cloud DC to connect the workloads within the Cloud DC. An
   overlay path is then established between the customer gateways and
   the virtual CPEs to reach the workloads inside the cloud DC.

3.3. Connecting workloads among hybrid Cloud DCs

   When workloads in different Cloud DCs need to communicate, one way
   is to hairpin all the traffic through the customer gateway, which
   adds transmission delay and incurs egress costs when leaving Cloud
   DCs. Another way is to establish direct tunnels among the different
   VPCs (Virtual Private Clouds), for example by using DMVPN (Dynamic
   Multipoint Virtual Private Network) or DSVPN (Dynamic Smart VPN) to
   establish direct multi-edge tunnels.

   DMVPN (and DSVPN) uses NHRP (Next Hop Resolution Protocol) [RFC2735]
   so that spoke nodes can register their IP addresses with the hub
   node (a simplified sketch of this registration model is given
   below). The IETF ION (Internetworking over NBMA (non-broadcast
   multiple access)) WG standardized NHRP for address resolution over
   connection-oriented NBMA networks (such as ATM) more than two
   decades ago.
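
   For illustration only, the following minimal sketch shows the kind
   of registration and resolution state such a hub maintains. The
   names, structures, and addresses are hypothetical and do not
   reflect the actual NHRP message formats defined in [RFC2735]:

      # Hypothetical sketch of an NHRP-style registration/resolution
      # model; illustrative only, not the RFC 2735 message formats.

      class Hub:
          def __init__(self):
              # overlay (VPN) prefix -> underlay address of the spoke
              self.registrations = {}

          def register(self, overlay_prefix, underlay_addr):
              # A spoke (e.g., a virtual router in a Cloud DC VPC)
              # registers the underlay address where it is reachable.
              self.registrations[overlay_prefix] = underlay_addr

          def resolve(self, overlay_prefix):
              # Another spoke asks how to reach a destination directly,
              # so a spoke-to-spoke tunnel can be built without
              # hairpinning traffic through the hub.
              return self.registrations.get(overlay_prefix)

      hub = Hub()
      hub.register("10.1.1.0/24", "203.0.113.10")  # spoke in VPC-1
      hub.register("10.2.2.0/24", "198.51.100.7")  # spoke at a branch
      print(hub.resolve("10.1.1.0/24"))            # -> 203.0.113.10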

   There are many differences between virtual routers in public Cloud
   DCs and the nodes in an NBMA network. It would be useful for the
   IETF community to examine the effectiveness of NHRP as the
   registration protocol for registering virtual routers in Cloud DCs
   to gateways or entities that connect to enterprise private networks.
   This evaluation may result in enhancements to NHRP or in new
   registration protocols.

4. Desired Properties for Networking that interconnects Hybrid Cloud
   DCs

   The networks that interconnect hybrid Cloud DCs have to enable users
   to take advantage of the following Cloud DC properties:

   - High availability: usage at any time, for any length of time.
     Many enterprises incorporate the Cloud into their disaster
     recovery strategy, e.g., by periodically backing up data into the
     cloud or running backup applications in the Cloud. Therefore, the
     connection to the cloud DCs may not be permanent, but rather needs
     to be on-demand.

   - Global accessibility in different geographical zones, thereby
     facilitating the proximity of applications as a function of the
     end users' location, for improved latency.

   - Elasticity and mobility, to instantiate additional applications at
     Cloud DCs when end users' usage increases and to shut down
     applications at locations with fewer end users.

     Some enterprises have front-end web portals running in Cloud DCs
     and database servers in their on-premises DCs. Those front-end web
     portals need to be reachable from the public Internet. The backend
     connections to the sensitive data in database servers hosted in
     the on-premises DCs might need to be secured.

5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs

   Traditional MPLS-based VPNs have been widely deployed as an
   effective way to support businesses and organizations that require
   network performance and reliability. MPLS shifted the burden of
   managing a VPN service from enterprises to service providers. The
   CPEs for MPLS VPNs are also simpler and less expensive, since they
   do not need to manage how to send packets to remote sites; they
   simply pass all outbound traffic to the MPLS VPN PEs to which the
   CPE is attached (although multi-homing scenarios require more
   processing logic on CPEs). MPLS has addressed the problems of scale,
   availability, and fast recovery from network faults, and has
   incorporated traffic-engineering capabilities.

   However, traditional MPLS-based VPN solutions are not optimized for
   connecting end users to dynamic workloads/applications in cloud DCs
   because:

   - The Provider Edge (PE) nodes of the enterprise's VPNs might not
     have direct connections to the third-party cloud DCs that are
     optimal for hosting workloads with the goal of easy access by the
     enterprise's end users.

   - It takes a relatively long time to deploy provider edge (PE)
     routers at new locations. When an enterprise's workloads are moved
     from one cloud DC to another (i.e., removed from one DC and
     re-instantiated at another location when demand changes), the
     enterprise branch offices need to be connected to the new cloud
     DC, but the network service provider might not have PEs located at
     the new location.

     One of the main drivers for moving workloads into the cloud is the
     wide availability of cloud DCs at geographically diverse
     locations, where applications can be instantiated as close to
     their end users as possible. When the user base changes, the
     applications may be moved to a new cloud DC location closest to
     the new user base.

   - Most cloud DCs do not expose their internal networks, so the
     provider's MPLS-based VPNs cannot reach the workloads natively.

   - Many cloud DCs use an overlay to connect their gateways to the
     workloads inside the DC. There is not yet a standard that
     addresses the interworking between the Cloud overlay and the
     enterprises' existing underlay networks.

   Another roadblock is the lack of a standard way to express and
   enforce consistent security policies for workloads that not only use
   virtual addresses and may lack a fixed port number, but also have a
   high chance of being placed in different locations within the Cloud
   DC [RFC8192]. The traditional VPN path computation and bandwidth
   allocation schemes may not be flexible enough to address the need
   for enterprises to rapidly connect to dynamically instantiated (or
   removed) workloads and applications regardless of their
   location/nature (i.e., third-party cloud DCs).

6. Problems with using IPsec tunnels to Cloud DCs

   As described in the previous section, many Cloud operators expose
   their gateways for external entities (which can be enterprises
   themselves) to directly establish IPsec tunnels. If there is only
   one enterprise location that needs to reach the Cloud DC, an IPsec
   tunnel is a very convenient solution.

   However, medium-to-large enterprises usually have multiple sites and
   multiple data centers. For workloads and applications hosted in
   Cloud DCs, multiple sites need to communicate securely with those
   Cloud workloads and applications. This section documents some of the
   issues associated with using IPsec tunnels to connect enterprise
   sites with Cloud operators' gateways.

6.1. Complexity of multi-point any-to-any interconnection

   A dynamic workload instantiated in a cloud DC needs to communicate
   with multiple branch offices and on-premises data centers. Most
   enterprises need multi-point interconnection among multiple
   locations, as provided by MPLS L2/L3 VPNs.

   Using IPsec overlay paths to connect all branches and on-premises
   data centers to cloud DCs requires the CPEs to manage routing among
   the Cloud DC gateways and the CPEs located at other branch
   locations, which can dramatically increase the complexity of the
   design, possibly at the cost of jeopardizing CPE performance.

   The complexity of requiring CPEs to maintain routing among other
   CPEs is one of the reasons why enterprises migrated from Frame
   Relay-based services to MPLS-based VPN services.

   MPLS-based VPNs have their PEs directly connected to the CPEs.
   Therefore, CPEs only need to forward all traffic to the directly
   attached PEs, which are in turn responsible for enforcing the
   routing policy within the corresponding VPNs. Even multi-homed CPEs
   only need to forward traffic to the directly connected PEs (note:
   the complexity may vary for IPv6 networks).

   However, when using IPsec tunnels between CPEs and Cloud DCs, the
   CPEs need to manage the routing of traffic to Cloud DCs, to remote
   CPEs via the VPN, or directly.
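
   As a rough illustration of the any-to-any complexity discussed in
   this section (simple arithmetic only, not tied to any particular
   product or deployment), a full mesh of IPsec overlay paths among n
   sites requires on the order of n*(n-1)/2 tunnels, and each CPE has
   to maintain reachability toward the other n-1 endpoints:

      # Back-of-the-envelope tunnel counts for a full mesh of n sites.

      def full_mesh_tunnels(n_sites):
          # each unordered pair of sites needs one tunnel
          return n_sites * (n_sites - 1) // 2

      def per_cpe_peers(n_sites):
          # each CPE maintains routing toward every other site
          return n_sites - 1

      for n in (10, 100, 1000):
          print(f"{n} sites: {full_mesh_tunnels(n)} tunnels, "
                f"{per_cpe_peers(n)} peers per CPE")

      # 10 sites:   45 tunnels,      9 peers per CPE
      # 100 sites:  4950 tunnels,    99 peers per CPE
      # 1000 sites: 499500 tunnels,  999 peers per CPE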

6.2. Poor performance over long distance

   When enterprise CPEs or gateways are far away from Cloud DC gateways
   or across country/continent boundaries, the performance of IPsec
   tunnels over the public Internet can be problematic and
   unpredictable. Even though there are many monitoring tools available
   to measure delay and various other performance characteristics of
   the network, the measurement of paths over the Internet is passive,
   and past measurements may not represent future performance.

   Many cloud providers can replicate workloads in different
   availability zones. An application instantiated in the Cloud DC
   closest to its clients may have to cooperate with another
   application (or its mirror image) in another region or with the
   database server in the on-premises DC. This kind of coordination
   requires predictable networking behavior/performance among those
   locations.

6.3. Scaling Issues with IPsec Tunnels

   IPsec can achieve secure overlay connections between two locations
   over any underlay network, e.g., between CPEs and Cloud DC gateways.

   If there is only one enterprise location connected to the Cloud
   gateway, a small number of IPsec tunnels can be configured on demand
   between the on-premises DC and the Cloud DC, which is an easy and
   flexible solution.

   However, for multiple enterprise locations to reach workloads hosted
   in cloud DCs, the Cloud DC gateway needs to maintain IPsec tunnels
   to all those locations (e.g., in a hub-and-spoke topology). For a
   company with hundreds or thousands of locations, there could be
   hundreds (or even thousands) of IPsec tunnels terminating at the
   Cloud DC gateway, which is not only very expensive (because Cloud
   operators charge based on connections) but can also be very
   processing-intensive for the gateway. Many cloud operators only
   allow a limited number of IPsec tunnels per customer. Alternatively,
   a solution such as group encryption could be used, where a single
   IPsec SA is needed at the gateway, but the drawback is the key
   distribution and the maintenance of a key server.

7. Problems of Using SD-WAN to connect to Cloud DCs

   SD-WAN enables multiple parallel paths between two locations, for
   example, two CPEs interconnected by a traditional MPLS VPN
   ([RFC4364] or [RFC4664]) as well as by overlay tunnels. The overlay,
   possibly secured by IPsec tunnels [RFC6071], can traverse the public
   Internet using fiber, cable, or DSL-based Internet access, Wi-Fi, or
   4G/Long Term Evolution (LTE).

   SD-WAN lets enterprises augment their current VPN network with
   cost-effective, readily available broadband Internet connectivity,
   enabling some traffic to be offloaded to overlay paths based on a
   traffic forwarding policy (application-based or otherwise), or when
   the MPLS VPN connection between the two locations is congested, or
   otherwise undesirable or unavailable.

7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs

   SD-WAN interconnection of branch offices is not as simple as it
   appears. For an enterprise with multiple sites, using SD-WAN overlay
   paths among sites requires each CPE to manage all the addresses that
   local hosts can potentially reach, i.e., to map internal VPN
   addresses to the appropriate SD-WAN paths (sketched below). This is
   similar to the complexity of Frame Relay-based VPNs, where each CPE
   needed to maintain mesh routing for all destinations if it was to
   avoid an extra hop through a hub router.
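
   As a minimal, hypothetical sketch of that per-CPE state (path names
   and prefixes are illustrative only, and this is not modeled on any
   particular SD-WAN implementation), each CPE effectively maintains a
   table mapping remote VPN prefixes to candidate SD-WAN paths and must
   update it whenever remote workloads are added, moved, or removed:

      # Hypothetical sketch of the mapping state an SD-WAN CPE keeps.
      import ipaddress

      class SdwanCpe:
          def __init__(self):
              # remote VPN prefix -> list of candidate SD-WAN paths
              self.path_table = {}

          def update_destination(self, prefix, paths):
              # called when a remote workload is added, moved, or
              # decommissioned (e.g., driven by a central controller)
              net = ipaddress.ip_network(prefix)
              if paths:
                  self.path_table[net] = paths
              else:
                  self.path_table.pop(net, None)

          def select_path(self, dst_ip):
              # longest-prefix match, then pick the first candidate
              dst = ipaddress.ip_address(dst_ip)
              matches = [p for p in self.path_table if dst in p]
              if not matches:
                  return None   # e.g., fall back to the MPLS VPN PE
              best = max(matches, key=lambda p: p.prefixlen)
              return self.path_table[best][0]

      cpe = SdwanCpe()
      cpe.update_destination("10.10.0.0/16",
                             ["ipsec-to-cloud-gw1", "mpls-vpn"])
      cpe.update_destination("10.20.5.0/24", ["ipsec-to-branch-7"])
      print(cpe.select_path("10.10.3.4"))   # -> ipsec-to-cloud-gw1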

   Even though SD-WAN CPEs can get assistance from a central controller
   (instead of running a routing protocol) to resolve the mapping
   between destinations and SD-WAN paths, SD-WAN CPEs are still
   responsible for routing table maintenance as remote destinations
   change their attachments, e.g., as dynamic workloads in other DCs
   are decommissioned or added.

   Even though it was originally envisioned for interconnecting branch
   offices, SD-WAN offers a very attractive way for enterprises to
   connect to Cloud DCs.

   The SD-WAN for interconnecting branch offices and the SD-WAN for
   interconnecting to Cloud DCs have some differences:

   - SD-WAN for interconnecting branch offices usually has both
     end-points (e.g., CPEs) controlled by one entity (e.g., a
     controller or management system operated by the enterprise).

   - SD-WAN for interconnecting to Cloud DCs may have CPEs owned or
     managed by the enterprise and remote end-points managed or
     controlled by the Cloud DCs (for ease of description, these are
     called asymmetrically managed CPEs).

   - Cloud DCs may have different entry points (or devices), with one
     terminating the private direct connection (such as MPLS or a
     direct line) and other points terminating the IPsec tunnels, as
     shown in the following diagram.

   Therefore, the SD-WAN becomes asymmetric.

   +------------------------+
   |    ,---.        ,---.  |
   |   (TN-1 )      ( TN-2) |
   |    `-+-' +---+  `-+-'  |
   |      +---|vR1|----+    |
   |          ++--+         |
   |            |       +-+----+
   |            |      /Internet\   One path via
   |            +-----+  Gateway +---- Internet --------+
   |                   \        /                        \
   |                    +-+----+                          \
   +------------------------+                              \
                                                             \
   +------------------------+                                 |
   |    ,---.        ,---.  |                                  |
   |   (TN-3 )      ( TN-4) |                                  |
   |    `-+-'  +--+  `-+-'  |                              +---+---+
   |      +----|vR|----+    |                              |  CPE  |
   |           ++-+         |                              +---+---+
   |            |       +-+----+                               |
   |            |      / virtual\   One path via               |
   |            +-----+  Gateway +--- IPsec Tunnel ------------+
   |                   \        /                              |
   |                    +-+----+                               |
   +------------------------+                                  |
                                                                |
   +------------------------+                                   |
   |    ,---.        ,---.  |                                   |
   |   (TN-5 )      ( TN-6) |                                   |
   |    `-+-'  +--+  `-+-'  |                                   |
   |      +----|vR|----+    |                                   |
   |           ++-+         |                                   |
   |            |       +-+----+              +------+          |
   |            |      /        \  Via Direct /customer\        |
   |            +-----+  Gateway +-----------+ gateway |--------+
   |                   \        /  Connect    \        /
   |                    +-+----+               +------+
   +------------------------+

                  Figure 2: Asymmetric Paths SD-WAN

8. End-to-End Security Concerns for Data Flows

   When IPsec tunnels from enterprise on-premises CPEs are terminated
   at the Cloud DC gateway where the workloads or applications are
   hosted, some enterprises have concerns about traffic to/from their
   workloads being exposed to others behind the data center gateway
   (e.g., exposed to other organizations that have workloads in the
   same data center).

   To ensure that traffic to/from workloads is not exposed to unwanted
   entities, it is necessary to have the IPsec tunnels go all the way
   to the workloads (servers or VMs) within the DC.

9. Requirements for Dynamic Cloud Data Center VPNs

   [Editor's note: this section is only a placeholder.
   The requirements listed here are only meant to stimulate further
   discussion.]

   In order to address the aforementioned issues, any solution for
   enterprise VPNs that includes connectivity to dynamic workloads or
   applications in cloud data centers should satisfy a set of
   requirements:

   - The solution should allow enterprises to take advantage of the
     current state of the art in VPN technology, both traditional
     MPLS-based VPNs and IPsec-based VPNs (or any combination thereof)
     that run over the top of the public Internet.

   - The solution should not require an enterprise to upgrade all of
     its existing CPEs.

   - The solution should not require either CPEs or routers to support
     a large number of IPsec tunnels simultaneously.

   - The solution needs to support easy and fast VPN connections to
     dynamic workloads and applications in third-party data centers,
     and easily allow these workloads to migrate both within a data
     center and between data centers.

   - The solution should allow VPNs to provide bandwidth and other
     performance guarantees.

   - The solution should be cost-effective for enterprises to
     incorporate dynamic cloud-based applications and workloads into
     their existing VPN environment.

10. Security Considerations

   For the most part, this document introduces no new security concerns
   beyond those of existing MPLS-based VPNs, which are widely deployed.
   The one addition to MPLS VPNs is the selective use of SD-WAN, which
   uses IPsec tunnels for the privacy and separation of VPN traffic.

   Also see Section 8 for a discussion of end-to-end security for data
   flows.

11. IANA Considerations

   This document requires no IANA actions. RFC Editor: Please remove
   this section before publication.

12. References

12.1. Normative References

12.2. Informative References

   [RFC2735]  Fox, B., et al., "NHRP Support for Virtual Private
              Networks", RFC 2735, December 1999.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [RFC4664]  Andersson, L. and E. Rosen, "Framework for Layer 2
              Virtual Private Networks (L2VPNs)", RFC 4664, September
              2006.

   [RFC6071]  Frankel, S. and S. Krishnan, "IP Security (IPsec) and
              Internet Key Exchange (IKE) Document Roadmap", RFC 6071,
              February 2011.

   [RFC8192]  Hares, S., et al., "Interface to Network Security
              Functions (I2NSF) Problem Statement and Use Cases",
              RFC 8192, July 2017.

   [ITU-T-X1036] ITU-T Recommendation X.1036, "Framework for creation,
              storage, distribution and enforcement of policies for
              network security", November 2007.

13. Acknowledgments

   Many thanks to Ignas Bagdonas, Michael Huang, Liu Yuan Jiao,
   Katherine Zhao, and Jim Guichard for the discussion and
   contributions.

Authors' Addresses

   Linda Dunbar
   Huawei
   Email: Linda.Dunbar@huawei.com

   Andrew G. Malis
   Huawei
   Email: agmalis@gmail.com

   Christian Jacquenet
   France Telecom
   Rennes, 35000
   France
   Email: Christian.jacquenet@orange.com

   Mehmet Toy
   Verizon
   One Verizon Way
   Basking Ridge, NJ 07920
   Email: mehmet.toy@verizon.com