Network Working Group                                          L. Dunbar
Internet Draft                                                   A. Malis
Intended status: Informational                                     Huawei
Expires: April 21, 2019                                      C. Jacquenet
                                                                   Orange
                                                                   M. Toy
                                                                  Verizon
                                                         October 21, 2018

    Seamless Interconnect Underlay to Cloud Overlay Problem Statement
                 draft-dm-net2cloud-problem-statement-03

Abstract

   This document describes the problems that enterprises face today in
   connecting their branch offices to dynamic workloads in third party
   data centers (a.k.a. Cloud DCs).

   It examines some of the approaches for interconnecting workloads and
   applications hosted in cloud DCs with enterprises' on-premises DCs
   and branch offices.  This document also describes some of the
   (network) problems that many enterprises face when their workloads,
   applications, and data are split among hybrid data centers,
   especially for those enterprises with multiple sites that are
   already interconnected by VPNs (e.g., MPLS L2VPN/L3VPN) and leased
   lines.

   Current operational problems in the field are examined to determine
   whether there is a need for enhancements to existing protocols or
   whether a new protocol is necessary to solve them.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on April 21, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Definition of terms
   3. Current Practices in Interconnecting Enterprise Sites with Cloud
      DCs
      3.1. Interconnect to Cloud DCs
      3.2. Interconnect to Hybrid Cloud DCs
      3.3. Connecting workloads among hybrid Cloud DCs
   4. Desired Properties for Networking that interconnects Hybrid Cloud
      DCs
   5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs
   6. Problem with using IPsec tunnels to Cloud DCs
      6.1. Complexity of multi-point any-to-any interconnection
      6.2. Poor performance over long distance
      6.3. Scaling Issues with IPsec Tunnels
   7. Problems of Using SD-WAN to connect to Cloud DCs
      7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs
   8. End-to-End Security Concerns for Data Flows
   9. Requirements for Dynamic Cloud Data Center VPNs
   10. Security Considerations
   11. IANA Considerations
   12. References
      12.1. Normative References
      12.2. Informative References
   13. Acknowledgments

1. Introduction

   Cloud applications and services continue to change how businesses of
   all sizes work and share information.  "Cloud applications &
   workloads" are those that are instantiated in third party DCs that
   also host services for other customers.

   With the advent of widely available third party cloud DCs in diverse
   geographic locations and the advancement of tools for monitoring and
   predicting application behaviors, it is technically feasible for
   enterprises to instantiate applications and workloads in locations
   that are geographically closest to their end users.  This property
   helps improve end-to-end latency and overall user experience.
   Conversely, an enterprise can easily shut down applications and
   workloads when its end users' geographic base changes (therefore
   needing to change the network connection to those relocated
   applications and workloads).  In addition, an enterprise may wish to
   take advantage of the growing number of business applications
   offered by third party private cloud DCs, such as Dropbox,
   Microsoft 365, SAP HANA, Oracle Cloud, Salesforce Cloud, etc.

   Most enterprise branch offices and on-premises data centers are
   already interconnected via VPNs, such as MPLS-based L2VPN/L3VPN.
   However, connecting to cloud-based resources may not be
   straightforward if the provider of the VPN service does not have
   direct connections to the cloud DCs.  Under those circumstances, the
   enterprise can either upgrade its existing CPEs to use SD-WAN to
   reach cloud resources (without any assistance from the VPN service
   provider), or wait for its VPN service provider to make new
   agreements with data center providers to connect to the Cloud
   resources.  Either way incurs additional infrastructure costs and is
   slow to operationalize.

   In addition, more and more enterprises are restructuring their
   applications and workloads so that they can be split among hybrid
   DCs, to combine the geographic proximity and elasticity of cloud DCs
   with the specific properties of on-premises DCs.

2. Definition of terms

   Cloud DC:   Third party data centers that usually host applications
               and workloads owned by different organizations or
               tenants.

   Controller: Used interchangeably with SD-WAN controller to manage
               SD-WAN overlay path creation/deletion and to monitor
               path conditions between two or more sites.

   DSVPN:      Dynamic Smart Virtual Private Network.  DSVPN is a
               secure network that exchanges data between sites without
               needing to pass traffic through an organization's
               headquarters virtual private network (VPN) server or
               router.

   Heterogeneous Cloud: Applications and workloads split among Cloud
               DCs owned and managed by different operators.

   Hybrid Cloud: Applications and workloads split between on-premises
               data centers and cloud DCs.  In this document, Hybrid
               Cloud also includes the Heterogeneous Cloud.

   SD-WAN:     Software Defined Wide Area Network, which can mean many
               different things.  In this document, "SD-WAN" refers to
               the solutions specified by ONUG (Open Network User
               Group),
               https://www.onug.net/software-defined-wide-area-network-sd-wan/,
               which pool WAN bandwidth from multiple service providers
               to get better WAN bandwidth management, visibility, and
               control.

   VPC:        Virtual Private Cloud.  A service offered by many Cloud
               DC operators to allocate logically isolated cloud
               resources, including compute, networking, and storage.

3. Current Practices in Interconnecting Enterprise Sites with Cloud DCs

3.1. Interconnect to Cloud DCs

   Most Cloud operators offer some type of network gateway through
   which an enterprise can reach its workloads hosted in the Cloud DC.
   For example, AWS (Amazon Web Services) offers the following options
   to reach workloads in AWS Cloud DCs:

   - Internet gateway, for any external entity to reach the workloads
     hosted in the AWS Cloud DC via the Internet.

   - Virtual gateway (vGW), to which IPsec tunnels [RFC6071] are
     established from an enterprise's own gateways, so that the
     communication between those gateways and the vGW is secured from
     the underlay (which might be the public Internet).

   - Direct Connect, which allows enterprises to purchase a private
     leased line from a network service provider interconnecting the
     enterprise's gateway(s) and the AWS Direct Connect routers
     co-located with the network operators.
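   As a purely illustrative sketch of the second option above (not a
   recommendation, and not part of any requirement), the following
   Python/boto3 fragment shows how an enterprise might programmatically
   request a virtual gateway and an IPsec VPN connection from a cloud
   provider's API.  The region, VPC identifier, ASN, addresses, and
   prefix are placeholders invented for the example.

      # Illustrative sketch only: provision an AWS virtual gateway (vGW)
      # and an IPsec VPN connection toward an enterprise-owned customer
      # gateway.  All identifiers below are placeholders.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder

      # Virtual gateway on the cloud side (terminates the IPsec tunnels).
      vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
      ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",  # placeholder
                             VpnGatewayId=vgw["VpnGatewayId"])

      # Customer gateway object describing the enterprise's own CPE.
      cgw = ec2.create_customer_gateway(
          BgpAsn=65001,                  # placeholder enterprise ASN
          PublicIp="203.0.113.10",       # placeholder CPE public address
          Type="ipsec.1")["CustomerGateway"]

      # The VPN connection itself (realized as IPsec tunnels by AWS).
      vpn = ec2.create_vpn_connection(
          Type="ipsec.1",
          CustomerGatewayId=cgw["CustomerGatewayId"],
          VpnGatewayId=vgw["VpnGatewayId"],
          Options={"StaticRoutesOnly": True})["VpnConnection"]

      # Static route toward the enterprise site behind the CPE.
      ec2.create_vpn_connection_route(
          VpnConnectionId=vpn["VpnConnectionId"],
          DestinationCidrBlock="10.10.0.0/16")  # placeholder prefix

   The only point of the sketch is that each cloud operator exposes its
   own provisioning model for these gateways; nothing in it is assumed
   by the rest of this document.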
      +------------------------+
      |    ,---.       ,---.   |
      |   (TN-1 )     ( TN-2)  |
      |    `-+-'  +--+  `-+-'  |
      |      +----|vR|-----+   |
      |           ++-+         |
      |            |       +-+----+
      |            |      /Internet\     For External
      |            +-----+  Gateway +----------------------
      |                   \        /     to reach via Internet
      |                    +-+----+
      |                        |
      +------------------------+

      +------------------------+
      |    ,---.       ,---.   |
      |   (TN-1 )     ( TN-2)  |
      |    `-+-'  +--+  `-+-'  |
      |      +----|vR|-----+   |
      |           ++-+         |
      |            |       +-+----+
      |            |      / virtual\     For IPsec Tunnel
      |            +-----+  Gateway +----------------------
      |                   \        /     termination
      |                    +-+----+
      |                        |
      +------------------------+

      +------------------------+
      |    ,---.       ,---.   |
      |   (TN-1 )     ( TN-2)  |
      |    `-+-'  +--+  `-+-'  |
      |      +----|vR|-----+   |
      |           ++-+         |
      |            |       +-+----+               +------+
      |            |      /        \  For Direct /customer\
      |            +-----+  Gateway +------------+ gateway |
      |                   \        /  Connect     \        /
      |                    +-+----+                +------+
      |                        |
      +------------------------+

            Figure 1: Examples of connecting to a Cloud DC

3.2. Interconnect to Hybrid Cloud DCs

   According to Gartner, by 2020 "hybrid will be the most common usage
   of the cloud" as more enterprises see the benefits of integrating
   public and private cloud infrastructures.  However, enabling the
   growth of hybrid cloud deployments in the enterprise requires fast
   and safe interconnection between public and private cloud services.
   The Hybrid Cloud scenario also includes heterogeneous Cloud DCs.

   For an enterprise to connect to applications and workloads hosted in
   multiple Cloud DCs, the enterprise can use IPsec tunnels or leased
   private lines to connect its own gateways to each of the Cloud DC
   gateways, or any other suitable design (including a combination
   thereof).

   Some users prefer to instantiate their own virtual CPEs inside the
   public Cloud DC to connect the workloads within the Cloud DC.  An
   overlay path is then established from the customer gateways to those
   virtual CPEs to reach the workloads inside the cloud DC.

3.3. Connecting workloads among hybrid Cloud DCs

   When workloads in different Cloud DCs need to communicate, one way
   is to hairpin all the traffic through the customer gateway, which
   adds transmission delay and incurs egress costs from the Cloud DCs.
   Another way is to establish direct tunnels among the different VPCs
   (Virtual Private Clouds), for example by using DMVPN (Dynamic
   Multipoint Virtual Private Network) or DSVPN (Dynamic Smart VPN) to
   establish direct multi-edge tunnels.

   DMVPN and DSVPN use NHRP (Next Hop Resolution Protocol) [RFC2735] so
   that spoke nodes can register their IP addresses with the hub node.
   The IETF ION (Internetworking over NBMA (non-broadcast multiple
   access)) WG standardized NHRP for address resolution in
   connection-oriented NBMA networks (such as ATM) more than two
   decades ago.

   There are many differences between virtual routers in public Cloud
   DCs and the nodes in an NBMA network.  It would be useful for the
   IETF community to examine the effectiveness of NHRP as the protocol
   for registering virtual routers in Cloud DCs with the gateways or
   entities that connect to enterprise private networks.  As a result
   of this evaluation, enhancements or new protocols for distributing
   edge node/port properties may emerge.
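   To make the registration model concrete, the toy sketch below
   (Python; not NHRP's actual message format, state machine, or wire
   encoding) shows the hub-and-spoke idea behind NHRP-style
   registration: each spoke (e.g., a virtual router in a VPC) registers
   the mapping between its overlay prefix and its underlay tunnel
   endpoint with a hub, which other spokes can then query before
   building direct spoke-to-spoke tunnels.  All names and addresses are
   invented for the example.

      # Toy sketch of NHRP-style registration/resolution (not NHRP
      # itself).  A hub keeps overlay-prefix -> underlay-address
      # mappings that spokes register and later resolve.

      class Hub:
          def __init__(self):
              self.registrations = {}   # overlay prefix -> underlay addr

          def register(self, overlay_prefix, underlay_addr):
              # A spoke announces where its overlay prefix is reachable.
              self.registrations[overlay_prefix] = underlay_addr

          def resolve(self, overlay_prefix):
              # Another spoke asks where to build a direct tunnel.
              return self.registrations.get(overlay_prefix)

      hub = Hub()
      # vR in VPC-1 registers its tenant prefix and tunnel endpoint.
      hub.register("10.1.0.0/16", "198.51.100.7")    # placeholders
      # vR in VPC-2 does the same.
      hub.register("10.2.0.0/16", "198.51.100.23")

      # VPC-2's router resolves VPC-1's prefix and can then set up a
      # direct spoke-to-spoke tunnel instead of hairpinning traffic
      # through the customer gateway.
      assert hub.resolve("10.1.0.0/16") == "198.51.100.7"

   Whether such a registration and resolution service should be NHRP
   itself, an extension of it, or a new protocol is exactly the
   question raised above.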
4. Desired Properties for Networking that interconnects Hybrid Cloud
   DCs

   The networks that interconnect hybrid Cloud DCs have to enable users
   to take advantage of Cloud DCs:

   - High availability: usage at any time, for any length of time.
     Many enterprises use the Cloud as part of their disaster recovery
     strategy, e.g., periodically backing up data into the cloud or
     running backup applications in the Cloud.  Therefore, the
     connection to the cloud DCs may not be permanent, but rather needs
     to be available on demand.

   - Global accessibility in different geographical zones, thereby
     facilitating the proximity of applications as a function of the
     end users' location, for improved latency.

   - Elasticity and mobility, to instantiate additional applications at
     Cloud DCs when end users' usage increases and to shut down
     applications at locations with fewer end users.

     Some enterprises have front-end web portals running in Cloud DCs
     and database servers in their on-premises DCs.  Those front-end
     web portals need to be reachable from the public Internet, while
     the backend connection to the sensitive data in the database
     servers hosted in the on-premises DCs might require secure
     connections.

5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs

   Traditional MPLS-based VPNs have been widely deployed as an
   effective way to support businesses and organizations that require
   network performance and reliability.  MPLS shifted the burden of
   managing a VPN service from enterprises to service providers.  The
   CPEs attached to MPLS VPNs are also simpler and less expensive,
   since they do not need to manage how to send packets to remote
   sites; they simply pass all outbound traffic to the MPLS VPN PEs to
   which they are attached (although multi-homing scenarios require
   more processing logic on CPEs).  MPLS has addressed the problems of
   scale, availability, and fast recovery from network faults, and has
   incorporated traffic-engineering capabilities.

   However, traditional MPLS-based VPN solutions are not optimized for
   connecting end users to dynamic workloads/applications in cloud DCs
   because:

   - The Provider Edge (PE) nodes of the enterprise's VPNs might not
     have direct connections to the third party cloud DCs that are
     optimal for hosting workloads close to the enterprise's end users.

   - It takes a relatively long time to deploy provider edge (PE)
     routers at new locations.  When an enterprise's workloads are
     moved from one cloud DC to another (i.e., removed from one DC and
     re-instantiated at another location when demand changes), the
     enterprise branch offices need to be connected to the new cloud
     DC, but the network service provider might not have PEs at the new
     location.

     One of the main drivers for moving workloads into the cloud is the
     wide availability of cloud DCs at geographically diverse
     locations, where applications can be instantiated as close to
     their end users as possible.
     When the user base changes, the applications may be moved to a new
     cloud DC location closest to the new user base.

   - Most cloud DCs do not expose their internal networks, so the
     provider's MPLS-based VPNs cannot reach the workloads natively.

   - Many cloud DCs use an overlay to connect their gateways to the
     workloads inside the DC.  There is no standard that addresses the
     interworking between the Cloud Overlay and the enterprise's
     existing underlay networks.

   Another roadblock is the lack of a standard way to express and
   enforce consistent security policies for workloads that not only use
   virtual addresses, but are also likely to be placed in different
   locations within the Cloud DC [RFC8192].  The traditional VPN path
   computation and bandwidth allocation schemes may not be flexible
   enough to address the need for enterprises to rapidly connect to
   dynamically instantiated (or removed) workloads and applications
   regardless of their location/nature (i.e., third party cloud DCs).

6. Problem with using IPsec tunnels to Cloud DCs

   As described in Section 3.1, many Cloud operators expose their
   gateways for external entities (which can be enterprises themselves)
   to directly establish IPsec tunnels.  If there is only one
   enterprise location that needs to reach the Cloud DC, an IPsec
   tunnel is a very convenient solution.

   However, medium-to-large enterprises usually have multiple sites and
   multiple data centers.  For workloads and applications hosted in
   Cloud DCs, multiple sites need to communicate securely with those
   Cloud workloads and applications.  This section documents some of
   the issues associated with using IPsec tunnels to connect enterprise
   sites with Cloud operators' gateways.

6.1. Complexity of multi-point any-to-any interconnection

   Dynamic workloads instantiated in a cloud DC need to communicate
   with multiple branch offices and on-premises data centers.  Most
   enterprises need multipoint interconnection among multiple
   locations, as provided by MPLS L2/L3 VPNs.

   Using IPsec overlay paths to connect all branches and on-premises
   data centers to cloud DCs requires the CPEs to manage routing among
   the Cloud DC gateways and the CPEs located at other branch
   locations, which can dramatically increase the complexity of the
   design, possibly at the cost of jeopardizing CPE performance.

   The complexity of requiring CPEs to maintain routing among other
   CPEs is one of the reasons why enterprises migrated from Frame Relay
   based services to MPLS-based VPN services.

   MPLS-based VPNs have their PEs directly connected to the CPEs.
   Therefore, CPEs only need to forward all traffic to the directly
   attached PEs, which are then responsible for enforcing the routing
   policy within the corresponding VPNs.  Even multi-homed CPEs only
   need to forward traffic among the directly connected PEs (note: the
   complexity may vary for IPv6 networks).  However, when using IPsec
   tunnels between CPEs and Cloud DCs, the CPEs need to manage the
   routing of traffic toward Cloud DCs, toward remote CPEs via the VPN,
   or directly.
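   The scale of the problem is easy to quantify.  The following minimal
   sketch (Python, plain arithmetic only) compares the number of IPsec
   tunnels needed by a full mesh among all sites with a hub-and-spoke
   layout; the site count is an arbitrary example.

      # Rough arithmetic: tunnel counts for full-mesh vs. hub-and-spoke
      # IPsec overlays among enterprise sites and cloud DC gateways.

      def full_mesh_tunnels(sites: int) -> int:
          # Every pair of sites maintains its own IPsec tunnel.
          return sites * (sites - 1) // 2

      def hub_and_spoke_tunnels(sites: int) -> int:
          # Every site maintains one tunnel to the hub (e.g., a cloud
          # DC gateway), which terminates all of them.
          return sites - 1

      sites = 500                           # example enterprise size
      print(full_mesh_tunnels(sites))       # 124750 tunnels in total
      print(hub_and_spoke_tunnels(sites))   # 499 tunnels, all at the hub

   Either shape is unattractive at this size: a full mesh explodes the
   number of tunnels (and the routing state each CPE must carry), while
   hub-and-spoke concentrates hundreds of tunnels on a single gateway,
   which is the scaling issue revisited in Section 6.3.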
6.2. Poor performance over long distance

   When enterprise CPEs or gateways are far away from Cloud DC gateways
   or across country/continent boundaries, the performance of IPsec
   tunnels over the public Internet can be problematic and
   unpredictable.  Even though many monitoring tools are available to
   measure delay and various other performance characteristics of the
   network, the measurements for paths over the Internet are passive,
   and past measurements may not represent future performance.

   Many cloud providers can replicate workloads across different
   availability zones.  An application instantiated in the Cloud DC
   closest to its clients may have to cooperate with another
   application (or its mirror image) in another region, or with the
   database server in the on-premises DC.  This kind of coordination
   requires predictable networking behavior and performance among those
   locations.

6.3. Scaling Issues with IPsec Tunnels

   IPsec can achieve secure overlay connections between two locations
   over any underlay network, e.g., between CPEs and Cloud DC gateways.

   If there is only one enterprise location connected to the Cloud
   gateway, a small number of IPsec tunnels can be configured on demand
   between the on-premises DC and the Cloud DC, which is an easy and
   flexible solution.

   However, for multiple enterprise locations to reach workloads hosted
   in cloud DCs, the Cloud DC gateway needs to maintain IPsec tunnels
   to all those locations (e.g., in a hub-and-spoke topology).  For a
   company with hundreds or thousands of locations, there could be
   hundreds (or even thousands) of IPsec tunnels terminating at the
   Cloud DC gateway, which is not only very expensive (because Cloud
   operators charge per connection), but also very processing-intensive
   for the gateway.  In addition, many cloud operators only allow a
   limited number of IPsec tunnels per customer.  Alternatively, a
   group encryption scheme could be used so that a single IPsec SA
   suffices at the gateway, but its drawbacks are key distribution and
   the maintenance of a key server.

7. Problems of Using SD-WAN to connect to Cloud DCs

   SD-WAN enables multiple parallel paths between two locations, for
   example, two CPEs interconnected by a traditional MPLS VPN
   ([RFC4364] or [RFC4664]) as well as by overlay tunnels.  The overlay
   paths, possibly secured by IPsec [RFC6071], can traverse the public
   Internet over fiber, cable, DSL-based Internet access, Wi-Fi, or
   4G/Long Term Evolution (LTE).

   SD-WAN lets enterprises augment their current VPN network with
   cost-effective, readily available broadband Internet connectivity,
   enabling some traffic to be offloaded to overlay paths based on a
   traffic forwarding policy (application-based or otherwise), or when
   the MPLS VPN connection between the two locations is congested, or
   otherwise undesirable or unavailable.

7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs

   SD-WAN interconnection of branch offices is not as simple as it
   appears.  For an enterprise with multiple sites, using SD-WAN
   overlay paths among sites requires each CPE to manage all the
   addresses that local hosts can potentially reach, i.e., to map
   internal VPN addresses to the appropriate SD-WAN paths.  This is
   similar to the complexity of Frame Relay based VPNs, where each CPE
   had to maintain mesh routing for all destinations to avoid an extra
   hop through a hub router.  Even though SD-WAN CPEs can get
   assistance from a central controller (instead of running a routing
   protocol) to resolve the mapping between destinations and SD-WAN
   paths, SD-WAN CPEs are still responsible for routing table
   maintenance as remote destinations change their attachments, e.g.,
   as dynamic workloads in other DCs are decommissioned or added.
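   As a rough illustration of that division of labor (a sketch only,
   not any particular vendor's SD-WAN API), the snippet below models a
   CPE that receives destination-to-path mappings from a central
   controller and must withdraw them when a remote workload is
   decommissioned.  The path labels and prefixes are invented for the
   example, and the lookup is an exact-prefix match rather than the
   longest-prefix match a real CPE would perform.

      # Sketch of controller-assisted mapping maintenance on an SD-WAN
      # CPE.  The controller pushes (destination prefix -> path) entries;
      # the CPE keeps the table current as remote workloads come and go.

      class SdwanCpe:
          def __init__(self):
              self.path_table = {}     # destination prefix -> path label

          def controller_update(self, prefix, path):
              # Mapping pushed by the central controller.
              self.path_table[prefix] = path

          def controller_withdraw(self, prefix):
              # Remote workload decommissioned; drop the stale mapping.
              self.path_table.pop(prefix, None)

          def select_path(self, prefix):
              # Fall back to the default VPN path if nothing more
              # specific is known (exact-prefix lookup, for brevity).
              return self.path_table.get(prefix, "mpls-vpn")

      cpe = SdwanCpe()
      cpe.controller_update("10.20.0.0/16", "ipsec-internet")  # cloud DC
      cpe.controller_update("10.30.0.0/16", "mpls-vpn")        # branch
      cpe.controller_withdraw("10.20.0.0/16")   # workload moved away
      print(cpe.select_path("10.20.0.0/16"))    # falls back to "mpls-vpn"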
   Even though SD-WAN was originally envisioned for interconnecting
   branch offices, it offers a very attractive way for enterprises to
   connect to Cloud DCs.

   The SD-WAN for interconnecting branch offices and the SD-WAN for
   interconnecting to Cloud DCs have some differences:

   - SD-WAN for interconnecting branch offices usually has two
     end-points (e.g., CPEs) controlled by one entity (e.g., a
     controller or management system operated by the enterprise).

   - SD-WAN for interconnecting to Cloud DCs may have CPEs owned or
     managed by the enterprise and remote end-points managed or
     controlled by the Cloud DCs (for ease of description, these are
     called asymmetrically managed CPEs).

   - Cloud DCs may have different entry points (or devices), with one
     terminating the private direct connection (such as MPLS or a
     leased line) and another terminating the IPsec tunnels, as shown
     in the following diagram.

   Therefore, the SD-WAN becomes asymmetric.

      +------------------------+
      |    ,---.       ,---.   |
      |   (TN-1 )     ( TN-2)  |
      |    `-+-'  +---+ `-+-'  |
      |      +----|vR1|----+   |
      |           ++--+        |
      |            |       +-+----+
      |            |      /Internet\  One path via
      |            +-----+  Gateway +--------------------+
      |                   \        /  Internet            \
      |                    +-+----+                        \
      +------------------------+                            \
                                                              \
      +------------------------+                               \
      |    ,---.       ,---.   |                                |
      |   (TN-3 )     ( TN-4)  |                                |
      |    `-+-'  +--+  `-+-'  |                                | +------+
      |      +----|vR|-----+   |                                +-+ CPE  |
      |           ++-+         |                                | +------+
      |            |       +-+----+                             |
      |            |      / virtual\  One path via IPsec Tunnel |
      |            +-----+  Gateway +---------------------------+
      |                   \        /                            |
      |                    +-+----+                             |
      +------------------------+                                |
                                                                 |
      +------------------------+                                |
      |    ,---.       ,---.   |                                |
      |   (TN-5 )     ( TN-6)  |                                |
      |    `-+-'  +--+  `-+-'  |                                |
      |      +----|vR|-----+   |                                |
      |           ++-+         |                                |
      |            |       +-+----+             +------+        |
      |            |      /        \ Via Direct /customer\      |
      |            +-----+  Gateway +-----------+ gateway |-----+
      |                   \        /  Connect    \        /
      |                    +-+----+               +------+
      +------------------------+

                  Figure 2: Asymmetric Paths SD-WAN

8. End-to-End Security Concerns for Data Flows

   When IPsec tunnels from enterprise on-premises CPEs are terminated
   at the Cloud DC gateway where the workloads or applications are
   hosted, some enterprises have concerns about traffic to/from their
   workloads being exposed to others behind the data center gateway
   (e.g., exposed to other organizations that have workloads in the
   same data center).

   To ensure that traffic to/from workloads is not exposed to unwanted
   entities, it is necessary to have the IPsec tunnels go all the way
   to the workloads (servers or VMs) within the DC.
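   One way to reason about this concern is that, whatever protection
   terminates at the DC gateway, the last hop inside the DC is
   unprotected unless a second layer of protection extends to the
   workload itself.  The sketch below (Python; the workload address,
   port, and certificate file names are hypothetical) uses mutual TLS
   from an enterprise host to the workload VM purely to illustrate
   "protection all the way to the workload"; it is an application-layer
   stand-in for, not a replacement of, the IPsec-to-the-workload
   approach described above.

      # Illustration only: application-layer protection terminating on
      # the workload itself, so traffic is not exposed behind the cloud
      # DC gateway.  Address, port, and certificate paths are
      # placeholders.
      import socket
      import ssl

      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      ctx.load_verify_locations("enterprise-ca.pem")    # trust anchor
      ctx.load_cert_chain("client.pem", "client.key")   # mutual auth

      with socket.create_connection(("workload.example.internal",
                                     8443)) as raw:
          with ctx.wrap_socket(
                  raw, server_hostname="workload.example.internal") as tls:
              tls.sendall(b"ping")   # protected past the DC gateway
              reply = tls.recv(1024)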
9. Requirements for Dynamic Cloud Data Center VPNs

   [Editor's note: this section is only a placeholder.  The
   requirements listed here are only meant to stimulate further
   discussion.]

   In order to address the aforementioned issues, any solution for
   enterprise VPNs that includes connectivity to dynamic workloads or
   applications in cloud data centers should satisfy a set of
   requirements:

   - The solution should allow enterprises to take advantage of the
     current state-of-the-art in VPN technology, in both traditional
     MPLS-based VPNs and IPsec-based VPNs (or any combination thereof)
     that run over the top of the public Internet.

   - The solution should not require an enterprise to upgrade all of
     its existing CPEs.

   - The solution should not require either CPEs or routers to support
     a large number of IPsec tunnels simultaneously.

   - The solution needs to support easy and fast VPN connections to
     dynamic workloads and applications in third party data centers,
     and easily allow these workloads to migrate both within a data
     center and between data centers.

   - The solution should allow VPNs to provide bandwidth and other
     performance guarantees.

   - The solution should be cost-effective for enterprises to
     incorporate dynamic cloud-based applications and workloads into
     their existing VPN environment.

10. Security Considerations

   For the most part, this document introduces no new security concerns
   beyond those of existing MPLS-based VPNs, which are widely deployed.
   The one addition to MPLS VPNs is the selective use of SD-WAN, which
   uses IPsec tunnels for the privacy and separation of VPN traffic.

   Also see Section 8 for a discussion of end-to-end security for data
   flows.

11. IANA Considerations

   This document requires no IANA actions.  RFC Editor: Please remove
   this section before publication.

12. References

12.1. Normative References

12.2. Informative References

   [RFC2735]  Fox, B., et al., "NHRP Support for Virtual Private
              Networks", RFC 2735, December 1999.

   [RFC8192]  Hares, S., et al., "Interface to Network Security
              Functions (I2NSF): Problem Statement and Use Cases",
              RFC 8192, July 2017.

   [ITU-T-X1036] ITU-T Recommendation X.1036, "Framework for creation,
              storage, distribution and enforcement of policies for
              network security", November 2007.

   [RFC6071]  Frankel, S. and S. Krishnan, "IP Security (IPsec) and
              Internet Key Exchange (IKE) Document Roadmap", RFC 6071,
              February 2011.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [RFC4664]  Andersson, L. and E. Rosen, "Framework for Layer 2
              Virtual Private Networks (L2VPNs)", RFC 4664,
              September 2006.

13. Acknowledgments

   Many thanks to Ignas Bagdonas, Michael Huang, Liu Yuan Jiao,
   Katherine Zhao, and Jim Guichard for their discussion and
   contributions.

Authors' Addresses

   Linda Dunbar
   Huawei
   Email: Linda.Dunbar@huawei.com

   Andrew G. Malis
   Huawei
   Email: agmalis@gmail.com

   Christian Jacquenet
   France Telecom
   Rennes, 35000
   France
   Email: Christian.jacquenet@orange.com

   Mehmet Toy
   Verizon
   One Verizon Way
   Basking Ridge, NJ 07920
   Email: mehmet.toy@verizon.com