Network Working Group                                          L. Dunbar
Internet Draft                                                  A. Malis
Intended status: Informational                                    Huawei
Expires: June 12, 2019                                      C. Jacquenet
                                                                  Orange
                                                                  M. Toy
                                                                 Verizon
                                                       December 11, 2018

   Seamless Interconnect Underlay to Cloud Overlay Problem Statement
                draft-dm-net2cloud-problem-statement-04

Abstract

   This document describes the problems that enterprises face today
   when connecting their branch offices to dynamic workloads in third
   party data centers (a.k.a. Cloud DCs).

   It examines some of the approaches for interconnecting workloads and
   applications hosted in cloud DCs with enterprises' on-premises DCs
   and branch offices. This document also describes some of the
   (network) problems that many enterprises face when they have
   workloads, applications, and data split among hybrid data centers,
   especially those enterprises with multiple sites that are already
   interconnected by VPNs (e.g., MPLS L2VPN/L3VPN) and leased lines.

   Current operational problems in the field are examined to determine
   whether there is a need for enhancements to existing protocols or
   whether a new protocol is necessary to solve them.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on June 12, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Definition of terms............................................4
   3. Current Practices in Interconnecting Enterprise Sites with Cloud
      DCs............................................................5
      3.1. Interconnect to Cloud DCs.................................5
      3.2. Interconnect to Hybrid Cloud DCs..........................7
      3.3. Connecting workloads among hybrid Cloud DCs...............7
   4. Desired Properties for Networks that Interconnect Hybrid Cloud
      DCs............................................................8
   5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs....8
   6. Problems with using IPsec tunnels to Cloud DCs................10
      6.1. Complexity of multi-point any-to-any interconnection.....10
      6.2. Poor performance over long distance......................11
      6.3. Scaling Issues with IPsec Tunnels........................11
   7. Problems of Using SD-WAN to connect to Cloud DCs..............12
      7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs.12
   8. End-to-End Security Concerns for Data Flows...................15
   9. Requirements for Dynamic Cloud Data Center VPNs...............15
   10. Security Considerations......................................16
   11. IANA Considerations..........................................16
   12. References...................................................16
      12.1. Normative References....................................16
      12.2. Informative References..................................16
   13. Acknowledgments..............................................17

1. Introduction

   Cloud applications and services continue to change how businesses of
   all sizes work and share information. "Cloud applications and
   workloads" are those that are instantiated in third party DCs that
   also host services for other customers.

   With the advent of widely available third party cloud DCs in diverse
   geographic locations and the advancement of tools for monitoring and
   predicting application behaviors, it is technically feasible for
   enterprises to instantiate applications and workloads in locations
   that are geographically closest to their end users.
   This helps improve end-to-end latency and the overall user
   experience. Conversely, an enterprise can easily shut down
   applications and workloads when its end users' geographic base
   changes (and therefore needs to change the network connections to
   those relocated applications and workloads). In addition, an
   enterprise may wish to take advantage of the growing number of
   business applications offered by third party private cloud DCs, such
   as Dropbox, Microsoft 365, SAP HANA, Oracle Cloud, Salesforce Cloud,
   etc.

   Most of those enterprise branch offices and on-premises data centers
   are already connected via VPNs, such as MPLS-based L2VPN/L3VPN.
   Connecting to cloud-based resources may then not be straightforward
   if the provider of the VPN service does not have direct connections
   to the cloud DCs. Under those circumstances, the enterprise can
   upgrade its existing CPEs to utilize SD-WAN to reach cloud resources
   (without any assistance from the VPN service provider), or wait for
   its VPN service provider to make new agreements with data center
   providers to connect to the Cloud resources. Either way incurs
   additional infrastructure cost and is slow to operationalize.

   In addition, there is a growing trend of enterprises restructuring
   their applications and workloads so that they can be split among
   hybrid DCs, to maximize the benefits of the geographic convenience
   and elasticity of Cloud DCs together with the special properties of
   on-premises DCs.

2. Definition of terms

   Cloud DC:   Third party data centers that usually host applications
               and workloads owned by different organizations or
               tenants.

   Controller: Used interchangeably with SD-WAN controller to manage
               SD-WAN overlay path creation/deletion and to monitor the
               path conditions between two or more sites.

   DSVPN:      Dynamic Smart Virtual Private Network. DSVPN is a secure
               network that exchanges data between sites without
               needing to pass traffic through an organization's
               headquarters virtual private network (VPN) server or
               router.

   Heterogeneous Cloud: applications and workloads split among Cloud
               DCs owned and managed by different operators.

   Hybrid Cloud: applications and workloads split between on-premises
               data centers and cloud DCs. In this document, Hybrid
               Cloud also includes heterogeneous Cloud.

   SD-WAN:     Software Defined Wide Area Network, which can mean many
               different things. In this document, "SD-WAN" refers to
               the solutions specified by ONUG (Open Network User
               Group), https://www.onug.net/software-defined-wide-area-
               network-sd-wan/, which is about pooling WAN bandwidth
               from multiple service providers to get better WAN
               bandwidth management, visibility, and control.

   VPC:        Virtual Private Cloud. A service offered by many Cloud
               DC operators to allocate logically isolated cloud
               resources, including compute, networking, and storage.

3. Current Practices in Interconnecting Enterprise Sites with Cloud DCs

3.1. Interconnect to Cloud DCs

   Most Cloud operators offer some type of network gateway through
   which an enterprise can reach its workloads hosted in the Cloud DC.
   For example, AWS (Amazon Web Services) offers the following options
   to reach workloads in AWS Cloud DCs:

   - Internet gateway, for any external entity to reach the workloads
     hosted in the AWS Cloud DC via the Internet.

   - Virtual gateway (vGW), to which IPsec tunnels [RFC6071] are
     established between an enterprise's own gateways and the AWS vGW,
     so that the communication between those gateways can be secured
     from the underlay (which might be the public Internet).

   - Direct Connect, which allows enterprises to purchase a direct
     connection from network service providers to get a private leased
     line interconnecting the enterprise's gateway(s) and the AWS
     Direct Connect routers co-located with the network operators. Via
     Direct Connect, AWS Transit Gateway can be utilized to
     interconnect multiple VPCs in different Availability Zones.

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +--+   `-+-'  |
   |     +----|vR|-----+    |
   |          ++-+          |
   |           |         +-+----+
   |           |        /Internet\    For External
   |           +-------+ Gateway  +---------------------
   |                    \        /  to reach via Internet
   |                     +-+----+
   |                        |
   +------------------------+

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +--+   `-+-'  |
   |     +----|vR|-----+    |
   |          ++-+          |
   |           |         +-+----+
   |           |        / virtual\    For IPsec Tunnel
   |           +-------+ Gateway  +---------------------
   |                    \        /    termination
   |                     +-+----+
   |                        |
   +------------------------+

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +--+   `-+-'  |
   |     +----|vR|-----+    |
   |          ++-+          |
   |           |         +-+----+             +------+
   |           |        /        \ For Direct/customer\
   |           +-------+ Gateway  +----------+ gateway |
   |                    \        /  Connect   \        /
   |                     +-+----+              +------+
   |                        |
   +------------------------+

            Figure 1: Examples of connecting to a Cloud DC
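   As an illustration of the virtual gateway option above, the
   following Python sketch (using the AWS boto3 SDK) creates a customer
   gateway object representing the enterprise's on-premises gateway, a
   virtual gateway attached to a VPC, and an IPsec VPN connection
   between the two. It is only a minimal sketch: the region, ASN,
   addresses, and VPC identifier are illustrative placeholders, and a
   real deployment involves additional steps (route propagation, tunnel
   options, on-premises device configuration).

      # Minimal sketch: reach workloads in a VPC through an AWS virtual
      # gateway (vGW) over IPsec. All identifiers are placeholders.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # The enterprise's on-premises gateway (public IP and BGP ASN).
      cgw = ec2.create_customer_gateway(
          BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")

      # The virtual gateway, attached to the VPC hosting the workloads.
      vgw = ec2.create_vpn_gateway(Type="ipsec.1")
      ec2.attach_vpn_gateway(
          VpcId="vpc-0123456789abcdef0",
          VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

      # The IPsec VPN connection between the two gateways, with a
      # static route pointing the on-premises prefix at the tunnel.
      vpn = ec2.create_vpn_connection(
          CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
          VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
          Type="ipsec.1",
          Options={"StaticRoutesOnly": True})
      ec2.create_vpn_connection_route(
          VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
          DestinationCidrBlock="10.10.0.0/16")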
3.2. Interconnect to Hybrid Cloud DCs

   According to Gartner, by 2020 "hybrid will be the most common usage
   of the cloud" as more enterprises see the benefits of integrating
   public and private cloud infrastructures. However, enabling the
   growth of hybrid cloud deployments in the enterprise requires fast
   and safe interconnection between public and private cloud services.
   The Hybrid Cloud scenario also includes heterogeneous Cloud DCs.

   For an enterprise to connect to applications and workloads hosted in
   multiple Cloud DCs, the enterprise can use IPsec tunnels or lease
   private lines to connect its on-premises gateways to each of the
   Cloud DCs' gateways, to virtual routers instantiated in the Cloud
   DCs, or any other suitable design (including a combination thereof).

   Some enterprises prefer to instantiate their own virtual
   CPEs/routers inside the Cloud DC to connect the workloads within the
   Cloud DC. An overlay path is then established between the customer
   gateways and the virtual CPEs/routers for reaching the workloads
   inside the cloud DC.

3.3. Connecting workloads among hybrid Cloud DCs

   When workloads in different Cloud DCs need to communicate, one way
   is to hairpin all the traffic through the customer gateway, which
   adds transmission delay and incurs cost for traffic exiting the
   Cloud DCs. Another way is to establish direct tunnels among the
   different VPCs (Virtual Private Clouds), for example using DMVPN
   (Dynamic Multipoint Virtual Private Network) or DSVPN (Dynamic Smart
   VPN) to establish direct multi-edge tunnels.

   DMVPN and DSVPN use NHRP (Next Hop Resolution Protocol) [RFC2735] so
   that spoke nodes can register their IP addresses and WAN ports with
   the hub node, as sketched below.
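   As a rough illustration of this registration and resolution
   function, the sketch below models a hub that records the mapping
   from each spoke's private prefix to its current WAN address, so that
   spokes can resolve each other and build direct tunnels instead of
   hairpinning through the hub. The class and method names are invented
   for illustration; this is not the NHRP wire protocol.

      # Schematic model of hub-and-spoke registration/resolution as
      # used by DMVPN/DSVPN-style overlays (illustrative names only).
      import ipaddress
      from dataclasses import dataclass

      @dataclass
      class SpokeRecord:
          private_prefix: str   # prefix reachable behind the spoke
          wan_address: str      # spoke's current public/WAN address

      class Hub:
          def __init__(self):
              self.registry = {}                # spoke id -> SpokeRecord

          def register(self, spoke_id, private_prefix, wan_address):
              # Spokes (re-)register whenever their WAN address changes.
              self.registry[spoke_id] = SpokeRecord(private_prefix,
                                                    wan_address)

          def resolve(self, private_ip):
              # Return the WAN address of the spoke owning this private
              # IP, so the caller can build a direct spoke-to-spoke
              # tunnel; None means "fall back to forwarding via the
              # hub".
              addr = ipaddress.ip_address(private_ip)
              for rec in self.registry.values():
                  if addr in ipaddress.ip_network(rec.private_prefix):
                      return rec.wan_address
              return None

      hub = Hub()
      hub.register("vpc-a-vrouter", "10.1.0.0/16", "198.51.100.7")
      hub.register("branch-3-cpe", "10.3.0.0/16", "192.0.2.66")
      print(hub.resolve("10.1.4.20"))   # -> 198.51.100.7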
   The IETF ION (Internetworking over NBMA (non-broadcast multiple
   access)) WG standardized NHRP for address resolution in
   connection-oriented NBMA networks (such as ATM) more than two
   decades ago.

   There are many differences between virtual routers in Public Cloud
   DCs and the nodes in an NBMA network. It would be useful for the
   IETF community to examine the effectiveness of NHRP as the
   registration protocol for registering virtual routers in Cloud DCs
   with gateways or entities that connect to enterprise private
   networks, or to evaluate other protocols such as the BGP approach
   described in [BGP-SDWAN]. As a result of this evaluation,
   enhancements or new protocols for distributing edge node/port
   properties may emerge.

4. Desired Properties for Networks that Interconnect Hybrid Cloud DCs

   The networks that interconnect hybrid Cloud DCs have to enable users
   to take advantage of Cloud DCs:

   - High availability: usage at any time, for any length of time.
     Many enterprises incorporate the Cloud into their disaster
     recovery strategy, e.g., by periodically backing up data into the
     cloud or running backup applications in the Cloud. Therefore, the
     connection to the cloud DCs may not be permanent, but rather needs
     to be available on demand.

   - Global accessibility in different geographical zones, thereby
     facilitating the proximity of applications as a function of the
     end users' location, for improved latency.

   - Elasticity and mobility, to instantiate additional applications at
     Cloud DCs when end users' usage increases and to shut down
     applications at locations with fewer end users.
     Some enterprises have front-end web portals running in Cloud DCs
     and database servers in their on-premises DCs. Those front-end web
     portals need to be reachable from the public Internet. The backend
     connection to the sensitive data in database servers hosted in the
     on-premises DCs might need secure connections.

   - Scalable security management. IPsec is commonly used to
     interconnect Cloud gateways with enterprises' on-premises
     gateways. For enterprises with a large number of branch offices,
     managing IPsec's pair-wise security associations among many nodes
     can be very difficult, as the simple calculation below
     illustrates.
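   The calculation below (a simple sketch, not tied to any particular
   IPsec implementation) compares the number of pair-wise tunnels to
   manage in a full mesh with a hub-and-spoke arrangement.

      # Number of pair-wise IPsec tunnels to manage: a full mesh of N
      # sites needs N*(N-1)/2, while hub-and-spoke needs only N-1 (at
      # the cost of hairpinning spoke-to-spoke traffic through the
      # hub).
      def full_mesh_tunnels(n_sites: int) -> int:
          return n_sites * (n_sites - 1) // 2

      def hub_and_spoke_tunnels(n_sites: int) -> int:
          return n_sites - 1

      for n in (10, 100, 1000):
          print(n, full_mesh_tunnels(n), hub_and_spoke_tunnels(n))
      # 10   ->     45 vs   9
      # 100  ->   4950 vs  99
      # 1000 -> 499500 vs 999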
5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs

   Traditional MPLS-based VPNs have been widely deployed as an
   effective way to support businesses and organizations that require
   network performance and reliability. MPLS shifted the burden of
   managing a VPN service from enterprises to service providers. The
   CPEs attached to MPLS VPNs are also simpler and less expensive,
   since they do not need to manage routes to remote sites; they simply
   pass all outbound traffic to the MPLS VPN PEs to which the CPEs are
   attached (albeit multi-homing scenarios require more processing
   logic on CPEs). MPLS has addressed the problems of scale,
   availability, and fast recovery from network faults, and has
   incorporated traffic-engineering capabilities.

   However, traditional MPLS-based VPN solutions are not optimized for
   connecting end users to dynamic workloads/applications in cloud DCs
   because:

   - The Provider Edge (PE) nodes of the enterprise's VPNs might not
     have direct connections to the third party cloud DCs that are
     optimal for hosting workloads close to the enterprise's end users.

   - It takes a relatively long time to deploy provider edge (PE)
     routers at new locations. When an enterprise's workloads are moved
     from one cloud DC to another (i.e., removed from one DC and
     re-instantiated at another location when demand changes), the
     enterprise branch offices need to be connected to the new cloud
     DC, but the network service provider might not have PEs located at
     the new location.

     One of the main drivers for moving workloads into the cloud is the
     wide availability of cloud DCs at geographically diverse
     locations, where applications can be instantiated so that they can
     be as close to their end users as possible. When the user base
     changes, the applications may be moved to a new cloud DC location
     closest to the new user base.

   - Most of the cloud DCs do not expose their internal networks, so
     the provider MPLS-based VPNs cannot reach the workloads natively.

   - Many cloud DCs use an overlay to connect their gateways to the
     workloads inside the DC. There is not yet any standard that
     addresses the interworking between the Cloud Overlay and the
     enterprise's existing underlay networks.

   Another roadblock is the lack of a standard way to express and
   enforce consistent security policies for workloads that not only use
   virtual addresses, but also have a high chance of being placed in
   different locations within the Cloud DC [RFC8192]. The traditional
   VPN path computation and bandwidth allocation schemes may not be
   flexible enough to address the need for enterprises to rapidly
   connect to dynamically instantiated (or removed) workloads and
   applications regardless of their location/nature (i.e., third party
   cloud DCs).

6. Problems with using IPsec tunnels to Cloud DCs

   As described in the previous section, many Cloud operators expose
   their gateways for external entities (which can be enterprises
   themselves) to directly establish IPsec tunnels. Enterprises can
   also instantiate virtual routers within Cloud DCs to connect to
   their on-premises devices via IPsec tunnels. If there is only one
   enterprise location that needs to reach the Cloud DC, an IPsec
   tunnel is a very convenient solution.

   However, many medium-to-large enterprises usually have multiple
   sites and multiple data centers. For workloads and apps hosted in
   Cloud DCs, multiple sites need to communicate securely with those
   Cloud workloads and apps. This section documents some of the issues
   associated with using IPsec tunnels to connect enterprise sites with
   Cloud operators' gateways.

6.1. Complexity of multi-point any-to-any interconnection

   Dynamic workloads instantiated in cloud DCs need to communicate with
   multiple branch offices and on-premises data centers. Most
   enterprises need multi-point interconnection among multiple
   locations, as provided by MPLS L2/L3 VPNs.

   Using IPsec overlay paths to connect all branches and on-premises
   data centers to cloud DCs requires the CPEs to manage routing among
   the Cloud DC gateways and the CPEs located at other branch
   locations, which can dramatically increase the complexity of the
   design, possibly at the cost of jeopardizing the CPE performance.

   The complexity of requiring CPEs to maintain routing among other
   CPEs is one of the reasons why enterprises migrated from Frame
   Relay-based services to MPLS-based VPN services.

   MPLS-based VPNs have their PEs directly connected to the CPEs.
   Therefore, CPEs only need to forward all traffic to the directly
   attached PEs, which are then responsible for enforcing the routing
   policy within the corresponding VPNs. Even for multi-homed CPEs, the
   CPEs only need to forward traffic among the directly connected PEs
   (note: the complexity may vary for IPv6 networks). However, when
   using IPsec tunnels between CPEs and Cloud DCs, the CPEs themselves
   need to manage the routing, i.e., decide whether traffic goes to the
   Cloud DCs, to remote CPEs via the VPN, or directly.
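   The following sketch shows, in a deliberately simplified and
   hypothetical form, the extra decision logic such a CPE has to carry:
   every destination must be mapped either to the attached MPLS PE or
   to one of the IPsec tunnels toward a Cloud DC, and that mapping has
   to be kept current as workloads move. Prefixes, next hops, and
   tunnel names are placeholders, and for brevity no longest-prefix
   matching is performed.

      # Simplified model of the per-CPE forwarding choice described
      # above; all prefixes and next-hop names are illustrative.
      import ipaddress

      # With an MPLS VPN only, the CPE has a single default behavior:
      DEFAULT_NEXT_HOP = "mpls-pe-1"    # hand everything to the PE

      # With IPsec tunnels to Cloud DCs added, the CPE must also track
      # which prefixes live behind which cloud gateway, and update this
      # table as workloads are added, moved, or decommissioned.
      CLOUD_ROUTES = {
          "10.100.0.0/16": "ipsec-to-aws-vgw",        # Cloud DC 1
          "10.200.8.0/21": "ipsec-to-cloud2-vrouter"  # Cloud DC 2
      }

      def next_hop(dst_ip: str) -> str:
          dst = ipaddress.ip_address(dst_ip)
          for prefix, tunnel in CLOUD_ROUTES.items():
              if dst in ipaddress.ip_network(prefix):
                  return tunnel
          return DEFAULT_NEXT_HOP

      print(next_hop("10.100.3.7"))   # -> ipsec-to-aws-vgw
      print(next_hop("172.16.5.9"))   # -> mpls-pe-1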
6.2. Poor performance over long distance

   When enterprise CPEs or gateways are far away from Cloud DC gateways
   or across country/continent boundaries, the performance of IPsec
   tunnels over the public Internet can be problematic and
   unpredictable. Even though there are many monitoring tools available
   to measure delay and various performance characteristics of the
   network, the measurement of paths over the Internet is passive, and
   past measurements may not represent future performance.

   Many cloud providers can replicate workloads in different
   availability zones. An application instantiated in a Cloud DC
   closest to its clients may have to cooperate with another
   application (or its mirror image) in another region or with database
   server(s) in the on-premises DC. This kind of coordination requires
   predictable networking behavior/performance among those locations.

6.3. Scaling Issues with IPsec Tunnels

   IPsec can achieve secure overlay connections between two locations
   over any underlay network, e.g., between CPEs and Cloud DC gateways.

   If there is only one enterprise location connected to the Cloud
   gateway, a small number of IPsec tunnels can be configured on demand
   between the on-premises DC and the Cloud DC, which is an easy and
   flexible solution.

   However, for multiple enterprise locations to reach workloads hosted
   in cloud DCs, the Cloud DC gateway needs to maintain IPsec tunnels
   to all of those locations (e.g., a hub-and-spoke topology). For a
   company with hundreds or thousands of locations, there could be
   hundreds (or even thousands) of IPsec tunnels terminating at the
   Cloud DC gateway, which is not only very expensive (because Cloud
   Operators charge based on connections), but can also be very
   processing-intensive for the gateway. Many cloud operators only
   allow a limited number of IPsec tunnels and limited bandwidth to
   each customer. Alternatively, a group encryption scheme could be
   used, where only a single IPsec SA is needed at the gateway, but the
   drawbacks are key distribution and the maintenance of a key server.

7. Problems of Using SD-WAN to connect to Cloud DCs

   SD-WAN can establish multiple parallel (overlay) paths between two
   locations on demand, for example between two CPEs interconnected by
   a traditional MPLS VPN ([RFC4364] or [RFC4664]) as well as by
   overlay tunnels. The overlay, possibly secured by IPsec tunnels
   [RFC6071], can traverse the public Internet using fiber, cable,
   DSL-based Internet access, Wi-Fi, or 4G/Long Term Evolution (LTE).

   SD-WAN lets enterprises augment their current VPN network with
   cost-effective, readily available broadband Internet connectivity,
   enabling some traffic to be offloaded to overlay paths based on a
   traffic forwarding policy (application-based or otherwise), or when
   the MPLS VPN connection between the two locations is congested, or
   otherwise undesirable or unavailable.
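   A minimal sketch of such a forwarding policy is shown below: traffic
   normally stays on the MPLS VPN path, but designated applications, or
   any traffic when the MPLS path is congested or down, are offloaded
   to the Internet overlay. The application names and the congestion
   threshold are invented for illustration and do not correspond to any
   particular product's policy language.

      # Illustrative SD-WAN path selection policy (hypothetical, not
      # any vendor's actual API).
      from dataclasses import dataclass

      @dataclass
      class PathState:
          name: str
          up: bool
          utilization: float        # fraction of capacity in use

      OFFLOAD_APPS = {"o365", "backup", "video"}  # allowed on overlay
      CONGESTION_THRESHOLD = 0.8

      def select_path(app: str, mpls: PathState,
                      overlay: PathState) -> str:
          # Prefer the MPLS VPN unless it is down or congested, or the
          # application is explicitly allowed on the Internet overlay.
          if not mpls.up:
              return overlay.name
          if (app in OFFLOAD_APPS
                  or mpls.utilization > CONGESTION_THRESHOLD):
              return overlay.name if overlay.up else mpls.name
          return mpls.name

      mpls = PathState("mpls-vpn", up=True, utilization=0.92)
      overlay = PathState("ipsec-over-internet", up=True,
                          utilization=0.40)
      print(select_path("erp", mpls, overlay))  # -> ipsec-over-internet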
7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs

   SD-WAN interconnection of branch offices is not as simple as it
   appears. For an enterprise with multiple sites, using SD-WAN overlay
   paths among sites requires each CPE to manage all the addresses that
   local hosts have the potential to reach, i.e., to map internal VPN
   addresses to the appropriate SD-WAN paths. This is similar to the
   complexity of Frame Relay-based VPNs, where each CPE needed to
   maintain mesh routing for all destinations if it were to avoid an
   extra hop through a hub router. Even though SD-WAN CPEs can get
   assistance from a central controller (instead of running a routing
   protocol) to resolve the mapping between destinations and SD-WAN
   paths, SD-WAN CPEs are still responsible for routing table
   maintenance as remote destinations change their attachments, e.g.,
   when dynamic workloads in other DCs are decommissioned or added.

   Even though originally envisioned for interconnecting branch
   offices, SD-WAN offers a very attractive way for enterprises to
   connect to Cloud DCs.

   The SD-WAN for interconnecting branch offices and the SD-WAN for
   interconnecting to Cloud DCs have some differences:

   - SD-WAN for interconnecting branch offices usually has two
     end-points (e.g., CPEs) controlled by one entity (e.g., a
     controller or management system operated by the enterprise).

   - SD-WAN for interconnecting to Cloud DCs may have CPEs owned or
     managed by the enterprise and remote end-points managed or
     controlled by the Cloud DCs (for ease of description, let's call
     these asymmetrically managed CPEs).

   - Cloud DCs may have different entry points (or devices), with one
     terminating the private direct connection (such as MPLS or a
     leased line) and other points being the devices terminating the
     IPsec tunnels, as shown in the following diagram.

   Therefore, the SD-WAN becomes asymmetric.

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +---+  `-+-'  |
   |     +----|vR1|----+    |
   |          ++--+         |
   |           |         +-+----+
   |           |        /Internet\   One path via
   |           +-------+ Gateway  +---------------------+
   |                    \        /   Internet           |
   |                     +-+----+                       |
   +------------------------+                           |
                                                        |
   +------------------------+                           |
   |   ,---.         ,---.  |                           |
   |  (TN-3 )       ( TN-4) |                           |
   |   `-+-'  +--+   `-+-'  |                           |  +------+
   |     +----|vR|-----+    |                           +--+ CPE  |
   |          ++-+          |                           |  +------+
   |           |         +-+----+                       |
   |           |        / virtual\   One path via       |
   |           +-------+ Gateway  +---------------------+
   |                    \        /   IPsec Tunnel       |
   |                     +-+----+                       |
   +------------------------+                           |
                                                        |
   +------------------------+                           |
   |   ,---.         ,---.  |                           |
   |  (TN-5 )       ( TN-6) |                           |
   |   `-+-'  +--+   `-+-'  |                           |
   |     +----|vR|-----+    |                           |
   |          ++-+          |                           |
   |           |         +-+----+         +------+      |
   |           |        /        \ Direct/customer\     |
   |           +-------+ Gateway  +------+ gateway +----+
   |                    \        / Connect\        /
   |                     +-+----+          +------+
   +------------------------+

                 Figure 2: Asymmetric Paths SD-WAN
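   The sketch below gives a simplified picture of this asymmetric
   arrangement: a controller pushes destination-to-path mappings to the
   enterprise-managed CPE, while the cloud-side endpoints (vGW or
   virtual router) are managed by the Cloud DC. The update format and
   all names are hypothetical, intended only to illustrate why the
   CPE's table must track workload moves.

      # Hypothetical controller-to-CPE update flow for SD-WAN
      # destination-to-path mappings (illustrative only).
      class CPE:
          def __init__(self, name):
              self.name = name
              self.path_table = {}  # destination prefix -> SD-WAN path

          def apply_update(self, update):
              # Updates arrive when workloads are instantiated, moved,
              # or decommissioned in a Cloud DC.
              for prefix in update.get("withdraw", []):
                  self.path_table.pop(prefix, None)
              for prefix, path in update.get("advertise", {}).items():
                  self.path_table[prefix] = path

      cpe = CPE("branch-12")
      cpe.apply_update({"advertise": {"10.100.0.0/16": "ipsec-to-vgw"}})
      # The workload moves to another Cloud DC: the controller
      # withdraws the old mapping and advertises the new one.
      cpe.apply_update({"withdraw": ["10.100.0.0/16"],
                        "advertise": {"10.150.0.0/16":
                                      "ipsec-to-cloud2-vrouter"}})
      print(cpe.path_table)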
8. End-to-End Security Concerns for Data Flows

   When IPsec tunnels from enterprise on-premises CPEs are terminated
   at the Cloud DC gateway where the workloads or applications are
   hosted, some enterprises have concerns regarding traffic to/from
   their workloads being exposed to others behind the data center
   gateway (e.g., exposed to other organizations that have workloads in
   the same data center).

   To ensure that traffic to/from workloads is not exposed to unwanted
   entities, it is worthwhile to consider having the IPsec tunnels go
   all the way to the workloads (servers, or VMs) within the DC.

9. Requirements for Dynamic Cloud Data Center VPNs

   [Editor's note: this section is only a placeholder. The requirements
   listed here are only intended to stimulate more discussion.]

   In order to address the aforementioned issues, any solution for
   enterprise VPNs that includes connectivity to dynamic workloads or
   applications in cloud data centers should satisfy a set of
   requirements:

   - The solution should allow enterprises to take advantage of the
     current state-of-the-art in VPN technology, in both traditional
     MPLS-based VPNs and IPsec-based VPNs (or any combination thereof)
     that run over-the-top of the public Internet.

   - The solution should not require an enterprise to upgrade all of
     its existing CPEs.

   - The solution should support scalable IPsec key management among
     all the nodes.

   - The solution needs to support easy and fast VPN connections to
     dynamic workloads and applications in third party data centers,
     and easily allow these workloads to migrate both within a data
     center and between data centers.

   - The solution should allow VPNs to provide bandwidth and other
     performance guarantees.

   - The solution should be cost-effective for enterprises to
     incorporate dynamic cloud-based applications and workloads into
     their existing VPN environment.

10. Security Considerations

   This draft describes the problem space of using SD-WAN to
   interconnect branch offices with Cloud DCs. As it is a problem
   statement, the draft itself does not introduce any security
   concerns. The draft does discuss security requirements as a part of
   the problem space, particularly in Sections 4, 5, and 8.

   Solution drafts resulting from this work will address particular
   security concerns inherent in the solution(s), including both
   protocol aspects and the importance (for example) of securing
   workloads in cloud DCs and the use of secure interconnection
   mechanisms.

11. IANA Considerations

   This document requires no IANA actions. RFC Editor: Please remove
   this section before publication.

12. References

12.1. Normative References

12.2. Informative References

   [RFC2735] B. Fox, et al., "NHRP Support for Virtual Private
             Networks", December 1999.

   [RFC8192] S. Hares, et al., "Interface to Network Security Functions
             (I2NSF) Problem Statement and Use Cases", July 2017.

   [ITU-T-X1036] ITU-T Recommendation X.1036, "Framework for creation,
             storage, distribution and enforcement of policies for
             network security", November 2007.

   [RFC6071] S. Frankel and S. Krishnan, "IP Security (IPsec) and
             Internet Key Exchange (IKE) Document Roadmap", February
             2011.

   [RFC4364] E. Rosen and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", February 2006.

   [RFC4664] L. Andersson and E. Rosen, "Framework for Layer 2 Virtual
             Private Networks (L2VPNs)", September 2006.

   [BGP-SDWAN] L. Dunbar, et al., "BGP Extension for SDWAN Overlay
             Networks", draft-dunbar-idr-bgp-sdwan-overlay-ext-03, work
             in progress, November 2018.

13. Acknowledgments

   Many thanks to Ignas Bagdonas, Michael Huang, Liu Yuan Jiao,
   Katherine Zhao, and Jim Guichard for the discussion and
   contributions.

Authors' Addresses

   Linda Dunbar
   Huawei
   Email: Linda.Dunbar@huawei.com
   Andrew G. Malis
   Huawei
   Email: agmalis@gmail.com

   Christian Jacquenet
   France Telecom
   Rennes, 35000
   France
   Email: Christian.jacquenet@orange.com

   Mehmet Toy
   Verizon
   One Verizon Way
   Basking Ridge, NJ 07920
   Email: mehmet.toy@verizon.com