Network Working Group                                          L. Dunbar
Internet Draft                                                  A. Malis
Intended status: Informational                                    Huawei
Expires: September 2018                                     C. Jacquenet
                                                                  Orange
                                                                  M. Toy
                                                                 Verizon
                                                           March 5, 2018

    Seamless Interconnect Underlay to Cloud Overlay Problem Statement
                 draft-dm-net2cloud-problem-statement-01

Abstract

   This document describes the common approaches deployed by
   enterprises to interconnect workloads and applications hosted in
   Cloud DCs with on-premises DCs and branch offices. It also describes
   some of the network problems that many enterprises face when their
   workloads, applications, and data are split among hybrid data
   centers, especially for enterprises with multiple sites that are
   already interconnected by VPNs (e.g., MPLS L2VPN/L3VPN) and leased
   lines.

   Current operational problems in the field are examined to determine
   whether there is a need for enhancements to existing protocols or
   whether a new protocol is necessary to solve them.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on September 5, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Definition of terms
   3. Current Practices in Interconnecting Enterprise Sites with
      Cloud DCs
      3.1. Interconnect to Cloud DCs
      3.2. Interconnect to Hybrid Cloud DCs
      3.3. Connecting workloads among hybrid Cloud DCs
   4. Desired Properties for Networking that interconnects Hybrid
      Cloud DCs
   5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs
   6. Problem with using IPsec tunnels to Cloud DCs
      6.1. Complexity of multi-point any-to-any interconnection
      6.2. Poor performance over long distance
      6.3. Scaling Issues with IPsec Tunnels
   7. Problems of Using SD-WAN to connect to Cloud DCs
      7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs
   8. End-to-End Security Concerns for Data Flows
   9. Requirements for Dynamic Cloud Data Center VPNs
   10. Security Considerations
   11. IANA Considerations
   12. References
      12.1. Normative References
      12.2. Informative References
   13. Acknowledgments

1. Introduction

   Cloud applications and services continue to change how businesses of
   all sizes work and share information. "Cloud applications &
   workloads" are those that are instantiated in third party DCs that
   also host services for other customers.

   With the advent of widely available third party cloud DCs in diverse
   geographic locations and the advancement of tools for monitoring and
   predicting application behaviors, it is technically feasible for
   enterprises to instantiate applications and workloads in locations
   that are geographically closest to their end users. This property
   aids in improving end-to-end latency and overall user experience.
   Conversely, an enterprise can easily shut down applications and
   workloads when their end users' geographic base changes (which in
   turn requires changing the network connections to those relocated
   applications and workloads). In addition, an enterprise may wish to
   take advantage of the growing number of business applications
   offered by third party private cloud DCs, such as SAP HANA, Oracle
   Cloud, Salesforce Cloud, etc.

   Typically, however, enterprise branch offices and on-premises data
   centers are connected via VPNs, such as MPLS-based L2VPN/L3VPN, and
   therefore connecting to cloud-based resources may not be
   straightforward if the provider of the VPN service does not have
   direct connections to the Cloud DCs. Under those circumstances, an
   enterprise can upgrade its existing CPEs to utilize SD-WAN to reach
   cloud resources (without any assistance from the VPN service
   provider), or wait for its VPN service provider to make new
   agreements with data center providers to connect to the Cloud
   resources. Either way, this is non-trivial, incurs additional
   infrastructure costs, and is slow to operationalize.

   In addition, there is a growing trend of enterprises re-architecting
   their applications and workloads so that they can be split among
   hybrid DCs, to combine the benefits of geographic proximity and
   elasticity offered by Cloud DCs with the special properties of
   on-premises DCs.

2. Definition of terms

   Cloud DC:   Off-premises data centers that usually host applications
               and workloads owned by different organizations or
               tenants.

   Controller: Used interchangeably with SD-WAN controller to manage
               SD-WAN overlay path creation/deletion and to monitor the
               path conditions between two or more sites.

   DMVPN:      Dynamic Multipoint Virtual Private Network. DMVPN is a
               secure network that exchanges data between sites without
               needing to pass traffic through an organization's
               headquarters' virtual private network (VPN) server or
               router.

   Heterogeneous Cloud: applications and workloads split among Cloud
               DCs owned and managed by different operators.

   Hybrid Cloud: applications and workloads split between on-premises
               data centers and Cloud DCs. In this document, Hybrid
               Cloud also includes Heterogeneous Cloud.

   SD-WAN:     Software Defined Wide Area Network, which can mean many
               different things. In this document, "SD-WAN" refers to
               the solutions specified by ONUG (Open Network User
               Group), https://www.onug.net/software-defined-wide-area-
               network-sd-wan/, which are about pooling WAN bandwidth
               from multiple service providers to get better WAN
               bandwidth management, visibility, and control.

   VPC:        Virtual Private Cloud. A service offered by many Cloud
               DC operators to allocate logically isolated cloud
               resources, including compute, networking, and storage.

3. Current Practices in Interconnecting Enterprise Sites with Cloud DCs

3.1. Interconnect to Cloud DCs

   Most Cloud operators offer some type of network gateway through
   which an enterprise can reach its workloads hosted in the Cloud DC.
   For example, AWS (Amazon Web Services) offers the following options
   to reach workloads in AWS Cloud DCs:

   - Internet gateway, which allows any external entity to reach the
     workloads hosted in the AWS Cloud DC via the Internet.
   - Virtual gateway (vGW), to which IPsec tunnels [RFC6071] are
     established from an enterprise's own gateways, so that the
     communication between those gateways and the vGW can be secured
     from the underlay (which might be the public Internet).

   - Direct Connect, which allows enterprises to purchase direct
     connectivity from network service providers, i.e., a private
     leased line interconnecting the enterprise's gateway(s) and the
     AWS Direct Connect routers co-located with those network
     operators.

      +------------------------+
      |   ,---.        ,---.   |
      |  (TN-1 )      ( TN-2)  |
      |   `-+-'  +--+  `-+-'   |
      |     +----|vR|----+     |
      |          ++-+          |
      |           |        +-+----+
      |           |       /Internet\   For External
      |           +------+  Gateway +----------------------
      |                   \        /   to reach via Internet
      |                    +-+----+
      |                       |
      +------------------------+

      +------------------------+
      |   ,---.        ,---.   |
      |  (TN-1 )      ( TN-2)  |
      |   `-+-'  +--+  `-+-'   |
      |     +----|vR|----+     |
      |          ++-+          |
      |           |        +-+----+
      |           |       / virtual\   For IPsec Tunnel
      |           +------+  Gateway +----------------------
      |                   \        /   termination
      |                    +-+----+
      |                       |
      +------------------------+

      +------------------------+
      |   ,---.        ,---.   |
      |  (TN-1 )      ( TN-2)  |
      |   `-+-'  +--+  `-+-'   |
      |     +----|vR|----+     |
      |          ++-+          |
      |           |        +-+----+               +------+
      |           |       /        \  For Direct /customer\
      |           +------+  Gateway +-----------+  gateway |
      |                   \        /  Connect    \        /
      |                    +-+----+               +------+
      |                       |
      +------------------------+

          Figure 1: Examples of connecting to a Cloud DC

3.2. Interconnect to Hybrid Cloud DCs

   According to Gartner, by 2020 "hybrid will be the most common usage
   of the cloud", as more enterprises see the benefits of integrating
   public and private cloud infrastructures. However, enabling the
   growth of hybrid cloud deployments in the enterprise requires fast
   and safe interconnection between public and private cloud services.
   The Hybrid Cloud scenario also includes heterogeneous Cloud DCs.

   To connect to applications and workloads hosted in multiple Cloud
   DCs, enterprises can use IPsec tunnels, leased private lines, or any
   other suitable design (including a combination thereof) to connect
   their own gateways to each of the Cloud DCs' gateways.

   Some users prefer to instantiate their own virtual CPEs inside the
   public Cloud DC to connect the workloads within the Cloud DC. An
   overlay path is then established from the customer gateways to those
   virtual CPEs to reach the workloads inside the Cloud DC.

3.3. Connecting workloads among hybrid Cloud DCs

   When workloads in different Cloud DCs need to communicate, one way
   is to hairpin all the traffic through the customer gateway, which
   adds transmission delay and incurs egress costs when leaving the
   Cloud DCs. Another way is to establish direct tunnels among the
   different VPCs (Virtual Private Clouds), for example by using DMVPN
   (Dynamic Multipoint Virtual Private Network) or DSVPN (Dynamic Smart
   VPN) to establish direct multi-edge tunnels.

   DMVPN (and DSVPN) uses NHRP (Next Hop Resolution Protocol) [RFC2735]
   so that spoke nodes can register their IP addresses with the hub
   node. The IETF ION WG (Internetworking over NBMA, i.e.,
   non-broadcast multiple access) standardized NHRP for address
   resolution over connection-oriented NBMA networks (such as ATM) more
   than two decades ago.

   There are many differences between virtual routers in public Cloud
   DCs and the nodes in an NBMA network. It would be useful for the
   IETF community to examine the effectiveness of NHRP as the protocol
   for registering virtual routers in Cloud DCs with the gateways or
   entities that connect to enterprise private networks. Such an
   evaluation may result in enhancements to NHRP or in new registration
   protocols.
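   The kind of state that such a registration mechanism would need to
   maintain can be illustrated with a short sketch. The following
   Python fragment is purely illustrative and is not an implementation
   of NHRP; the router names, addresses, and prefixes are invented. It
   shows a hub keeping a table that maps the overlay prefixes announced
   by Cloud DC virtual routers to the underlay tunnel endpoints of
   those routers, which is conceptually what spokes register with a hub
   today.

      # Minimal sketch (not NHRP itself): a hub keeps a registration
      # table mapping overlay prefixes to the underlay tunnel endpoint
      # of the virtual router that announced them.  All names and
      # addresses are hypothetical.
      from dataclasses import dataclass
      from ipaddress import ip_network

      @dataclass
      class Registration:
          router_id: str        # virtual router inside a Cloud DC VPC
          tunnel_endpoint: str  # underlay address terminating its tunnel
          prefixes: list        # overlay prefixes reachable behind it

      class HubRegistry:
          """Hub-side table, analogous to what a DMVPN hub learns."""
          def __init__(self):
              self.table = {}

          def register(self, reg: Registration):
              # A real protocol would also authenticate registrations
              # and age out stale entries when workloads are torn down.
              self.table[reg.router_id] = reg

          def resolve(self, dest_ip: str):
              # Underlay endpoint to use for a given overlay destination.
              for reg in self.table.values():
                  if any(ip_network(dest_ip + "/32").subnet_of(ip_network(p))
                         for p in reg.prefixes):
                      return reg.tunnel_endpoint
              return None

      hub = HubRegistry()
      hub.register(Registration("vR-vpc-1", "203.0.113.10", ["10.1.0.0/16"]))
      print(hub.resolve("10.1.2.3"))   # -> 203.0.113.10

   Whether NHRP's registration and resolution messages are a good fit
   for carrying this information in Cloud DC environments is precisely
   the question raised above.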
4. Desired Properties for Networking that interconnects Hybrid Cloud
   DCs

   The networks that interconnect hybrid Cloud DCs have to let users
   take full advantage of Cloud DCs:

   - High availability: usage at any time, for any length of time.
     Many enterprises incorporate the Cloud into their disaster
     recovery strategy, e.g., by periodically backing up data into the
     cloud or running backup applications in the Cloud. Therefore, the
     connection to the cloud DCs may not be permanent, but rather needs
     to be available on-demand.

   - Global accessibility in different geographical zones, thereby
     facilitating the proximity of applications as a function of the
     end users' location, for improved latency.

   - Elasticity and mobility, to instantiate additional applications at
     Cloud DCs when end users' demand increases and to shut down
     applications at locations with fewer end users. Some enterprises
     have front-end web portals running in Cloud DCs and database
     servers in their on-premises DCs. Those front-end web portals need
     to be reachable from the public Internet. The backend connection
     to the sensitive data in database servers hosted in the
     on-premises DCs might need secure connections.

5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs

   Traditional MPLS-based VPNs have been widely deployed as an
   effective way to support businesses and organizations that require
   network performance and reliability. MPLS shifted the burden of
   managing a VPN service from enterprises to service providers. The
   CPEs for MPLS VPNs are also simpler and less expensive, since they
   do not need to manage how to send packets to remote sites; they
   simply pass all outbound traffic to the MPLS VPN PEs to which they
   are attached (albeit multi-homing scenarios require more processing
   logic on CPEs). MPLS has addressed the problems of scale,
   availability, and fast recovery from network faults, and has
   incorporated traffic-engineering capabilities.

   However, traditional MPLS-based VPN solutions are not optimized for
   connecting end-users to dynamic workloads/applications in cloud DCs
   because:

   - The Provider Edge (PE) nodes of the enterprise's VPNs might not
     have direct connections to the third party cloud DCs that are
     optimal for hosting workloads close to the enterprise's end users.

   - It takes a relatively long time to deploy provider edge (PE)
     routers at new locations. When an enterprise's workloads are moved
     from one cloud DC to another (i.e., removed from one DC and
     re-instantiated at another location when demand changes), the
     enterprise branch offices need to be connected to the new cloud
     DC, but the network service provider might not have PEs at that
     location.

     One of the main drivers for moving workloads into the cloud is the
     wide availability of cloud DCs at geographically diverse
     locations, where applications can be instantiated as close to
     their end users as possible. When the user base changes, the
     applications may be moved to the cloud DC location closest to the
     new user base.
   - Most of the cloud DCs do not expose their internal networks, so
     the providers' MPLS-based VPNs cannot reach the workloads
     natively.

   - Many cloud DCs use an overlay to connect their gateways to the
     workloads inside the DC. There is no standard yet that addresses
     the interworking between the Cloud Overlay and the enterprises'
     existing underlay networks.

   Another roadblock is the lack of a standard way to express and
   enforce consistent security policies for workloads that not only use
   virtual addresses and lack fixed port numbers, but are also likely
   to be placed in different locations within the Cloud DC [RFC8192].
   The traditional VPN path computation and bandwidth allocation
   schemes may not be flexible enough to address the need for
   enterprises to rapidly connect to dynamically instantiated (or
   removed) workloads and applications regardless of their
   location/nature (i.e., third party cloud DCs).

6. Problem with using IPsec tunnels to Cloud DCs

   As described in the previous section, many Cloud operators expose
   their gateways so that external entities (which can be the
   enterprises themselves) can directly establish IPsec tunnels. If
   there is only one enterprise location that needs to reach the Cloud
   DC, an IPsec tunnel is a very convenient solution.

   However, many medium-to-large enterprises have multiple sites and
   multiple data centers. For workloads and applications hosted in
   Cloud DCs, multiple sites need to communicate securely with those
   Cloud workloads and applications. This section documents some of the
   issues associated with using IPsec tunnels to connect enterprise
   sites with Cloud operators' gateways.

6.1. Complexity of multi-point any-to-any interconnection

   Dynamic workloads instantiated in cloud DCs need to communicate with
   multiple branch offices and on-premises data centers. Most
   enterprises need multi-point interconnection among multiple
   locations, as provided by MPLS L2/L3 VPNs.

   Using IPsec overlay paths to connect all branches and on-premises
   data centers to cloud DCs requires the CPEs to manage routing among
   Cloud DC gateways and the CPEs located at other branch locations,
   which can dramatically increase the complexity of the design,
   possibly at the cost of jeopardizing CPE performance.

   The complexity of requiring CPEs to maintain routing among other
   CPEs is one of the reasons why enterprises migrated from Frame Relay
   based services to MPLS-based VPN services.

   MPLS-based VPNs have their PEs directly connected to the CPEs.
   Therefore, CPEs only need to forward all traffic to the directly
   attached PEs, which are then responsible for enforcing the routing
   policy within the corresponding VPNs. Even multi-homed CPEs only
   need to forward traffic to the directly connected PEs (note: the
   complexity may vary for IPv6 networks).

   However, when using IPsec tunnels between CPEs and Cloud DCs, the
   CPEs themselves need to manage the routing of traffic destined for
   Cloud DCs, for remote CPEs reachable via the VPN, or for
   destinations reached directly.
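   The scale of this routing and tunnel-management burden can be made
   concrete with some simple arithmetic. The site counts below are
   hypothetical and only illustrate how quickly full-mesh IPsec state
   grows compared to a hub & spoke design.

      # Illustrative arithmetic only: number of IPsec tunnels needed
      # when every branch CPE peers directly with every other CPE and
      # every Cloud DC gateway (full mesh), versus hub & spoke.
      def full_mesh_tunnels(sites: int, cloud_gws: int) -> int:
          nodes = sites + cloud_gws
          return nodes * (nodes - 1) // 2   # one tunnel per pair of nodes

      def hub_and_spoke_tunnels(sites: int, cloud_gws: int) -> int:
          return sites + cloud_gws          # each node peers only with the hub

      for sites in (10, 100, 1000):
          print(sites,
                full_mesh_tunnels(sites, cloud_gws=3),
                hub_and_spoke_tunnels(sites, cloud_gws=3))
      # 10 sites   ->      78 tunnels vs   13
      # 100 sites  ->   5,253 tunnels vs  103
      # 1000 sites -> 502,503 tunnels vs 1,003

   In the full mesh, each CPE also has to hold reachability information
   for every remote site and Cloud DC gateway; this per-device
   complexity is exactly what MPLS VPNs removed by concentrating
   routing policy on the PEs.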
6.2. Poor performance over long distance

   When enterprise CPEs or gateways are far away from Cloud DC gateways
   or across country/continent boundaries, the performance of IPsec
   tunnels over the public Internet can be problematic and
   unpredictable. Even though there are many monitoring tools available
   to measure delay and various other performance characteristics of
   the network, the measurement of paths over the Internet is passive,
   and past measurements may not represent future performance.

   Many cloud providers can replicate workloads in different
   availability zones. An application instantiated in the Cloud DC
   closest to its clients may have to cooperate with another
   application (or its mirror image) in another region, or with the
   database servers in the on-premises DC. This kind of coordination
   requires predictable networking behavior and performance among those
   locations.

6.3. Scaling Issues with IPsec Tunnels

   IPsec can achieve secure overlay connections between two locations
   over any underlay network, e.g., between CPEs and Cloud DC gateways.

   If there is only one enterprise location connected to the Cloud
   gateway, a small number of IPsec tunnels can be configured on-demand
   between the on-premises DC and the Cloud DC, which is an easy and
   flexible solution.

   However, for multiple enterprise locations to reach workloads hosted
   in cloud DCs, the Cloud DC gateway needs to maintain IPsec tunnels
   to all of those locations (e.g., in a hub & spoke topology). For a
   company with hundreds or thousands of locations, there could be
   hundreds (or even thousands) of IPsec tunnels terminating at the
   Cloud DC gateway, which is not only very expensive (because Cloud
   operators charge based on connections) but can also be very
   processing-intensive for the gateway. In addition, many cloud
   operators only allow a limited number of IPsec tunnels per customer.
   Alternatively, a solution such as group encryption could be used,
   where a single IPsec SA is sufficient at the gateway; the drawbacks
   are key distribution and the maintenance of a key server.

7. Problems of Using SD-WAN to connect to Cloud DCs

   SD-WAN enables multiple parallel paths between two locations, for
   example, two CPEs interconnected by a traditional MPLS VPN
   ([RFC4364] or [RFC4664]) as well as by overlay tunnels. The overlay,
   possibly secured by IPsec tunnels [RFC6071], can traverse the public
   Internet using fiber, cable, or DSL-based Internet access, Wi-Fi, or
   4G/Long Term Evolution (LTE).

   SD-WAN lets enterprises augment their current VPN network with
   cost-effective, readily available broadband Internet connectivity,
   enabling some traffic to be offloaded to overlay paths based on a
   traffic forwarding policy (application-based or otherwise), or when
   the MPLS VPN connection between the two locations is congested, or
   otherwise undesirable or unavailable.
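   As a rough illustration of such a traffic forwarding policy, the
   sketch below shows the kind of per-application path selection an
   SD-WAN CPE performs. The policy, application classes, and path names
   are invented for illustration and do not describe any particular
   product.

      # Hypothetical per-application path selection on an SD-WAN CPE:
      # traffic normally follows the policy-preferred path and is
      # offloaded to another usable path when the preferred one is
      # congested or down.
      PATHS = {"mpls-vpn":       {"up": True, "congested": False},
               "ipsec-internet": {"up": True, "congested": False}}

      POLICY = {"voice":   "mpls-vpn",        # latency-sensitive traffic
                "backup":  "ipsec-internet",  # bulk traffic offloaded
                "default": "mpls-vpn"}

      def select_path(app_class: str) -> str:
          preferred = POLICY.get(app_class, POLICY["default"])
          state = PATHS[preferred]
          if state["up"] and not state["congested"]:
              return preferred
          # Fall back to any other usable path, e.g., when the MPLS VPN
          # is congested or unavailable.
          for name, st in PATHS.items():
              if name != preferred and st["up"]:
                  return name
          raise RuntimeError("no usable path")

      PATHS["mpls-vpn"]["congested"] = True
      print(select_path("voice"))  # -> ipsec-internet while VPN congested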
7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs

   SD-WAN interconnection of branch offices is not as simple as it
   appears. For an enterprise with multiple sites, using SD-WAN overlay
   paths among sites requires each CPE to manage all the addresses that
   local hosts can potentially reach, i.e., to map internal VPN
   addresses to the appropriate SD-WAN paths. This is similar to the
   complexity of Frame Relay based VPNs, where each CPE needed to
   maintain mesh routing for all destinations if it was to avoid an
   extra hop through a hub router. Even though SD-WAN CPEs can get
   assistance from a central controller (instead of running a routing
   protocol) to resolve the mapping between destinations and SD-WAN
   paths, SD-WAN CPEs are still responsible for maintaining their
   routing tables as remote destinations change their attachment
   points, e.g., when the dynamic workloads in other DCs are
   decommissioned or added.

   Even though originally envisioned for interconnecting branch
   offices, SD-WAN offers a very attractive way for enterprises to
   connect to Cloud DCs.

   The SD-WAN for interconnecting branch offices and the SD-WAN for
   interconnecting to Cloud DCs have some differences:

   - SD-WAN for interconnecting branch offices usually has two
     end-points (e.g., CPEs) controlled by one entity (e.g., a
     controller or management system operated by the enterprise).

   - SD-WAN for interconnecting to Cloud DCs may have the CPEs owned or
     managed by the enterprise, while the remote end-points are managed
     or controlled by the Cloud DCs (for ease of description, these are
     called asymmetrically managed CPEs).

   - Cloud DCs may have different entry points (or devices), with one
     terminating the private direct connection (such as MPLS or a
     direct line) and another terminating the IPsec tunnels, as shown
     in the following diagram.

   Therefore, the SD-WAN becomes asymmetric.

      +------------------------+
      |   ,---.        ,---.   |
      |  (TN-1 )      ( TN-2)  |
      |   `-+-' +---+  `-+-'   |
      |     +---|vR1|----+     |
      |         ++--+          |
      |          |         +-+----+
      |          |        /Internet\  One path via
      |          +-------+  Gateway +--------------------+
      |                   \        /  Internet            \
      |                    +-+----+                        \
      +------------------------+                            \
                                                              \
      +------------------------+                               \
      |   ,---.        ,---.   |                                |
      |  (TN-3 )      ( TN-4)  |                                |
      |   `-+-'  +--+  `-+-'   |                                |  +------+
      |     +----|vR|----+     |                                +--+ CPE  |
      |          ++-+          |                                |  +------+
      |           |        +-+----+                             |
      |           |       / virtual\  One path via IPsec Tunnel |
      |           +------+  Gateway +---------------------------+
      |                   \        /                            |
      |                    +-+----+                             |
      +------------------------+                                |
                                                                 |
      +------------------------+                                 |
      |   ,---.        ,---.   |                                 |
      |  (TN-5 )      ( TN-6)  |                                 |
      |   `-+-'  +--+  `-+-'   |                                 |
      |     +----|vR|----+     |                                 |
      |          ++-+          |                                 |
      |           |        +-+----+             +------+         |
      |           |       /        \ Via Direct /customer\       |
      |           +------+  Gateway +-----------+ gateway |------+
      |                   \        / Connect     \        /
      |                    +-+----+               +------+
      +------------------------+

                  Figure 2: Asymmetric Paths SD-WAN
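   One way to see why this asymmetry matters is to look at the
   information an SD-WAN controller would have to keep per overlay
   segment when only one endpoint is under its control. The following
   sketch is hypothetical; the endpoint names and fields are invented
   for illustration.

      # Hypothetical descriptor for an "asymmetric" SD-WAN segment: the
      # local endpoint is an enterprise-managed CPE, while the remote
      # endpoint is a Cloud DC entry point that the enterprise
      # controller cannot configure.
      from dataclasses import dataclass

      @dataclass
      class Endpoint:
          name: str
          managed_by: str   # "enterprise" or "cloud-operator"

      @dataclass
      class SdwanSegment:
          local: Endpoint   # CPE the enterprise controller can configure
          remote: Endpoint  # Cloud DC entry point outside its reach
          transport: str    # "ipsec-over-internet" or "direct-connect"

      segments = [
          SdwanSegment(Endpoint("cpe-branch-1", "enterprise"),
                       Endpoint("cloud-vgw-east", "cloud-operator"),
                       "ipsec-over-internet"),
          SdwanSegment(Endpoint("customer-gw-dc", "enterprise"),
                       Endpoint("cloud-dx-router", "cloud-operator"),
                       "direct-connect"),
      ]
      # Policy can only be pushed to the "enterprise" side of a segment.
      for s in segments:
          print(s.local.name, "->", s.remote.name, "via", s.transport)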
8. End-to-End Security Concerns for Data Flows

   When IPsec tunnels from enterprise on-premises CPEs are terminated
   at the Cloud DC gateway where the workloads or applications are
   hosted, some enterprises have concerns about traffic to/from their
   workloads being exposed to others behind the data center gateway
   (e.g., exposed to other organizations that have workloads in the
   same data center). To ensure that traffic to/from workloads is not
   exposed to unwanted entities, it is necessary to have the IPsec
   tunnels go all the way to the workloads (servers or VMs) within the
   DC.

9. Requirements for Dynamic Cloud Data Center VPNs

   [Editor's note: this section is only a placeholder. The requirements
   listed here are only meant to stimulate more discussion.]

   In order to address the aforementioned issues, any solution for
   enterprise VPNs that includes connectivity to dynamic workloads or
   applications in cloud data centers should satisfy a set of
   requirements:

   - The solution should allow enterprises to take advantage of the
     current state-of-the-art in VPN technology, both traditional
     MPLS-based VPNs and IPsec-based VPNs (or any combination thereof)
     that run over the top of the public Internet.

   - The solution should not require an enterprise to upgrade all of
     its existing CPEs.

   - The solution should not require either CPEs or routers to support
     a large number of IPsec tunnels simultaneously.

   - The solution needs to support easy and fast VPN connections to
     dynamic workloads and applications in third party data centers,
     and should easily allow these workloads to migrate both within a
     data center and between data centers.

   - The solution should allow VPNs to provide bandwidth and other
     performance guarantees.

   - The solution should be cost-effective for enterprises to
     incorporate dynamic cloud-based applications and workloads into
     their existing VPN environments.

10. Security Considerations

   For the most part, this document introduces no new security concerns
   beyond those of existing MPLS-based VPNs, which are widely deployed.
   The one addition to MPLS VPNs is the selective use of SD-WAN, which
   uses IPsec tunnels for the privacy and separation of VPN traffic.

   Also see Section 8 for a discussion of end-to-end security for data
   flows.

11. IANA Considerations

   This document requires no IANA actions. RFC Editor: Please remove
   this section before publication.

12. References

12.1. Normative References

12.2. Informative References

   [RFC2735]  Fox, B., et al., "NHRP Support for Virtual Private
              Networks", RFC 2735, December 1999.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [RFC4664]  Andersson, L. and E. Rosen, "Framework for Layer 2
              Virtual Private Networks (L2VPNs)", RFC 4664, September
              2006.

   [RFC6071]  Frankel, S. and S. Krishnan, "IP Security (IPsec) and
              Internet Key Exchange (IKE) Document Roadmap", RFC 6071,
              February 2011.

   [RFC8192]  Hares, S., et al., "Interface to Network Security
              Functions (I2NSF): Problem Statement and Use Cases",
              RFC 8192, July 2017.

   [ITU-T-X1036]
              ITU-T Recommendation X.1036, "Framework for creation,
              storage, distribution and enforcement of policies for
              network security", November 2007.

13. Acknowledgments

   Many thanks to Ignas Bagdonas, Mehmet Toy, Michael Huang, Liu Yuan
   Jiao, Katherine Zhao, and Jim Guichard for the discussion and
   contributions.

Authors' Addresses

   Linda Dunbar
   Huawei
   Email: Linda.Dunbar@huawei.com

   Andrew G. Malis
   Huawei
   Email: agmalis@gmail.com

   Christian Jacquenet
   France Telecom
   Rennes, 35000
   France
   Email: Christian.jacquenet@orange.com

   Mehmet Toy
   Verizon
   One Verizon Way
   Basking Ridge, NJ 07920
   Email: mehmet.toy@verizon.com