Network Working Group                                         L. Dunbar
Internet Draft                                                 A. Malis
Intended status: Informational                                   Huawei
Expires: July 2019                                         C. Jacquenet
                                                                 Orange
                                                                 M. Toy
                                                                Verizon
                                                       January 24, 2019

   Seamless Interconnect Underlay to Cloud Overlay Problem Statement
               draft-dm-net2cloud-problem-statement-05

Abstract

   This document describes the problems enterprises face today in
   connecting their branch offices to dynamic workloads in third party
   data centers (a.k.a. Cloud DCs).

   It examines the approach of using SD-WAN to utilize multiple
   underlay networks from different providers in order to maximize or
   optimize the interconnection among branch offices, on-premises DCs,
   and hybrid Cloud DCs.

   This document also describes some of the (network) problems that
   many enterprises face when their workloads, applications, and data
   are split among hybrid data centers, especially for those
   enterprises with multiple sites that are already interconnected by
   VPNs (e.g., MPLS L2VPN/L3VPN) and leased lines.

   Current operational problems in the field are examined to determine
   whether there is a need for enhancements to existing protocols or
   whether a new protocol is necessary to solve them.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on July 24, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Definition of terms
   3. Current Practices in Interconnecting Enterprise Sites with
      Cloud DCs
      3.1. Interconnect to Cloud DCs
      3.2. Interconnect to Hybrid Cloud DCs
      3.3. Connecting workloads among hybrid Cloud DCs
   4. Desired Properties for Networks that interconnect Hybrid Clouds
   5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs
   6. Problems with using IPsec tunnels to Cloud DCs
      6.1. Complexity of multi-point any-to-any interconnection
      6.2. Poor performance over long distance
      6.3. Scaling Issues with IPsec Tunnels
   7. Problems of Using SD-WAN to connect to Cloud DCs
      7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs
   8. End-to-End Security Concerns for Data Flows
   9. Requirements for Dynamic Cloud Data Center VPNs
   10. Security Considerations
   11. IANA Considerations
   12. References
      12.1. Normative References
      12.2. Informative References
   13. Acknowledgments

1. Introduction

   Cloud applications and services continue to change how businesses
   of all sizes work and share information. "Cloud applications &
   workloads" are those that are instantiated in third party DCs that
   also host services for other customers.

   With the advent of widely available third party cloud DCs in
   diverse geographic locations and the advancement of tools for
   monitoring and predicting application behaviors, it is technically
   feasible for enterprises to instantiate applications and workloads
   in locations that are geographically closest to their end users.
   This proximity improves end-to-end latency and the overall user
   experience. Conversely, an enterprise can easily shut down
   applications and workloads when their end users' geographic base
   changes (and therefore needs to change the network connections to
   those relocated applications and workloads). In addition, an
   enterprise may wish to take advantage of the growing number of
   business applications offered by third party private cloud DCs,
   such as Dropbox, Microsoft 365, SAP HANA, Oracle Cloud, Salesforce
   Cloud, etc.

   Most of those enterprise branch offices and on-premises data
   centers are already interconnected via VPNs, such as MPLS-based
   L2VPNs/L3VPNs. Connecting to cloud-based resources may not be
   straightforward if the provider of the VPN service does not have
   direct connections to the relevant cloud DCs. Under those
   circumstances, the enterprise can either upgrade its existing CPEs
   to utilize SD-WAN to reach cloud resources (without any assistance
   from the VPN service provider), or wait for its VPN service
   provider to make new agreements with data center providers to
   connect to the Cloud resources. Either approach incurs additional
   infrastructure and operational costs.

   In addition, there is a growing trend of enterprises instantiating
   their applications and workloads in different Cloud DCs to maximize
   the benefits of geographic proximity, elasticity, and the special
   features offered by different Cloud DCs.

2. Definition of terms

   Cloud DC:   Third party data centers that usually host applications
               and workloads owned by different organizations or
               tenants.

   Controller: Used interchangeably with SD-WAN controller to manage
               SD-WAN overlay path creation/deletion and to monitor
               path conditions between two or more sites.

   DSVPN:      Dynamic Smart Virtual Private Network. DSVPN is a
               secure network that exchanges data between sites
               without needing to pass traffic through an
               organization's headquarters virtual private network
               (VPN) server or router.

   Heterogeneous Cloud: Applications and workloads split among Cloud
               DCs owned and managed by different operators.

   Hybrid Cloud: An on-premises DC plus Cloud DCs owned and managed by
               different organizations. In this document, Hybrid Cloud
               also includes Heterogeneous Cloud.

   SD-WAN:     Software Defined Wide Area Network, which can mean many
               different things. In this document, "SD-WAN" refers to
               the solutions specified by ONUG (Open Network User
               Group), https://www.onug.net/software-defined-wide-area-
               network-sd-wan/, which are about pooling WAN bandwidth
               from multiple underlay networks to get better WAN
               bandwidth management, visibility, and control. Some of
               the underlay networks are private networks over which
               traffic can traverse without encryption; others require
               IPsec tunnels between SD-WAN nodes to carry the
               traffic.

   VPC:        Virtual Private Cloud. A service offered by many Cloud
               DC operators to allocate logically isolated cloud
               resources, including compute, networking, and storage.

3. Current Practices in Interconnecting Enterprise Sites with Cloud
   DCs

3.1. Interconnect to Cloud DCs

   Most Cloud operators offer some type of network gateway through
   which an enterprise can reach its workloads hosted in the Cloud
   DCs. For example, AWS (Amazon Web Services) offers the following
   options to reach workloads in AWS Cloud DCs:

   - Internet gateway, for any external entities to reach the
     workloads hosted in an AWS Cloud DC via the Internet.
   - Virtual gateway (vGW), to which IPsec tunnels [RFC6071] are
     established between an enterprise's own gateways and the AWS vGW,
     so that the communication between those gateways can be secured
     from the underlay (which might be the public Internet).
   - Direct Connect, which allows enterprises to purchase a direct
     connection from network service providers to get a private leased
     line interconnecting the enterprise's gateway(s) and the AWS
     Direct Connect routers co-located with the network operators.

     Via Direct Connect, an AWS Transit Gateway can be utilized to
     interconnect multiple VPCs in different Availability Zones.

   CPEs at an enterprise branch office can have some ports facing the
   Internet to connect to AWS's vGW via IPsec, and other ports
   connected to AWS Direct Connect via a private network (without
   encryption).

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +--+   `-+-'  |
   |     +----|vR|-----+    |
   |          ++-+          |
   |           |      +-+----+
   |           |     /Internet\   For External
   |           +----+  Gateway +----------------------
   |                 \         /  to reach via Internet
   |                  +-+----+
   |                     |
   +------------------------+

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +--+   `-+-'  |
   |     +----|vR|-----+    |
   |          ++-+          |
   |           |      +-+----+
   |           |     / virtual\   For IPsec Tunnel
   |           +----+  Gateway +----------------------
   |                 \         /  termination
   |                  +-+----+
   |                     |
   +------------------------+

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +--+   `-+-'  |
   |     +----|vR|-----+    |
   |          ++-+          |
   |           |      +-+----+             +------+
   |           |     /         \ For Direct/customer\
   |           +----+  Gateway  +----------+ gateway |
   |                 \         /  Connect   \        /
   |                  +-+----+               +------+
   |                     |
   +------------------------+

            Figure 1: Examples of connecting to a Cloud DC

3.2. Interconnect to Hybrid Cloud DCs

   According to Gartner, by 2020 "hybrid will be the most common usage
   of the cloud", as more enterprises see the benefits of integrating
   public and private cloud infrastructures. However, enabling the
   growth of hybrid cloud deployments in the enterprise requires fast
   and safe interconnection between public and private cloud services.
   The Hybrid Cloud scenario also includes heterogeneous Cloud DCs,
   meaning Cloud DCs owned and managed by different organizations.

   For an enterprise to connect to applications and workloads hosted
   in multiple Cloud DCs, the enterprise can use IPsec tunnels over
   the public Internet and/or leased private networks to connect its
   on-premises gateways to each of the Cloud DCs' gateways, to virtual
   routers instantiated in the Cloud DCs, or to any other suitable
   design (including a combination thereof).

   Some enterprises prefer to instantiate their own virtual
   CPEs/routers inside the Cloud DC to connect the workloads within
   the Cloud DC. Overlay paths are then established between the
   customer gateways and those virtual CPEs/routers to reach the
   workloads inside the cloud DC.

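   As an informal illustration of the vGW/IPsec option described
   above, the sketch below shows how such an attachment might be
   created programmatically. This is a minimal sketch only, assuming
   the AWS boto3 SDK for Python; the region, VPC ID, ASN, public IP,
   and prefix are hypothetical placeholders, and other cloud operators
   offer equivalent (but different) APIs.

      # Minimal sketch (Python/boto3): describe the on-premises
      # gateway, create a vGW, and establish the IPsec VPN connection.
      # All identifiers below are illustrative placeholders.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # 1. Represent the enterprise's on-premises gateway (IP, ASN).
      cgw = ec2.create_customer_gateway(
          BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")

      # 2. Create a virtual gateway (vGW) and attach it to the VPC
      #    that hosts the enterprise's workloads.
      vgw = ec2.create_vpn_gateway(Type="ipsec.1")
      ec2.attach_vpn_gateway(
          VpcId="vpc-0123456789abcdef0",
          VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

      # 3. Create the IPsec VPN connection between the two gateways;
      #    the response carries the tunnel parameters that the
      #    on-premises CPE must be configured with.
      vpn = ec2.create_vpn_connection(
          Type="ipsec.1",
          CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
          VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
          Options={"StaticRoutesOnly": True})

      # 4. Add a static route toward the branch-office prefix over the
      #    new IPsec connection.
      ec2.create_vpn_connection_route(
          VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
          DestinationCidrBlock="10.1.0.0/16")

   Each such connection serves one enterprise location; Section 6
   discusses the issues that arise when many locations need this kind
   of attachment.
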
3.3. Connecting workloads among hybrid Cloud DCs

   There are multiple approaches to interconnect workloads among
   different Cloud DCs:

   - Utilize the transit gateways provided by Cloud DC operators,
     which usually does not work if the Cloud DCs are owned and
     managed by different Cloud providers;
   - Hairpin all the traffic through the customer gateway, which adds
     transmission delay and incurs egress costs from the Cloud DCs; or
   - Establish direct tunnels among the different VPCs (Virtual
     Private Clouds) via the client's own virtual routers instantiated
     within the Cloud DCs, for example using DMVPN (Dynamic Multipoint
     Virtual Private Network) or DSVPN (Dynamic Smart VPN) to
     establish direct multi-edge tunnels among those virtual routers.

   DMVPN and DSVPN use NHRP (Next Hop Resolution Protocol) [RFC2735]
   so that spoke nodes can register their IP addresses and WAN ports
   with the hub node. The IETF ION (Internetworking over NBMA
   (non-broadcast multiple access)) WG standardized NHRP for address
   resolution over connection-oriented NBMA networks (such as ATM)
   more than two decades ago.

   There are many differences between virtual routers in public Cloud
   DCs and the nodes in an NBMA network; without major extensions,
   NHRP and DSVPN are not effective for registering virtual routers in
   Cloud DCs. Another option is to use other protocols, such as the
   BGP approach described in [BGP-SDWAN]. As a result of this
   evaluation, enhancements or new protocols for distributing edge
   node/port properties may emerge.

4. Desired Properties for Networks that interconnect Hybrid Clouds

   The networks that interconnect hybrid Cloud DCs have to enable
   enterprises to take full advantage of Cloud DCs:

   - High availability: usage at any time, for any length of time.
     Many enterprises use the Cloud as part of their disaster recovery
     strategy, e.g., periodically backing up data into the cloud or
     running backup applications in the Cloud. Therefore, the
     connection to the cloud DCs may not be permanent, but rather
     needs to be available on demand.

   - Global accessibility in different geographical zones, thereby
     facilitating the proximity of applications as a function of the
     end users' location, for improved latency.

   - Elasticity and mobility, to instantiate additional applications
     at Cloud DCs when end users' usage increases and to shut down
     applications at locations with fewer end users.

     Some enterprises have front-end web portals running in Cloud DCs
     and database servers in their on-premises DCs. Those front-end
     web portals need to be reachable from the public Internet, while
     the backend connections to the sensitive data in database servers
     hosted in the on-premises DCs might need to be secured.

   - Scalable security management. IPsec is commonly used to
     interconnect Cloud GWs with enterprises' on-premises GWs. For
     enterprises with a large number of branch offices, managing the
     pair-wise IPsec security associations among many nodes can be
     very difficult.

5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs

   Traditional MPLS-based VPNs have been widely deployed as an
   effective way to support businesses and organizations that require
   network performance and reliability. MPLS shifted the burden of
   managing a VPN service from enterprises to service providers.
   The CPEs attached to MPLS VPNs are also simpler and less expensive,
   since they do not need to manage routes to remote sites; they
   simply pass all outbound traffic to the MPLS VPN PEs to which the
   CPEs are attached (albeit multi-homing scenarios require more
   processing logic on CPEs). MPLS has addressed the problems of
   scale, availability, and fast recovery from network faults, and has
   incorporated traffic-engineering capabilities.

   However, traditional MPLS-based VPN solutions are not optimized for
   connecting end users to dynamic workloads/applications in cloud DCs
   because:

   - The Provider Edge (PE) nodes of the enterprise's VPNs might not
     have direct connections to the third party cloud DCs that are
     optimal for hosting workloads close to the enterprise's end
     users.

   - It takes a relatively long time to deploy provider edge (PE)
     routers at new locations. When an enterprise's workloads are
     moved from one cloud DC to another (i.e., removed from one DC and
     re-instantiated at another location when demand changes), the
     enterprise branch offices need to be connected to the new cloud
     DC, but the network service provider might not have PEs at the
     new location.

     One of the main drivers for moving workloads into the cloud is
     the wide availability of cloud DCs at geographically diverse
     locations, where applications can be instantiated as close to
     their end users as possible. When the user base changes, the
     applications may be moved to the cloud DC location closest to the
     new user base.

   - Most of the cloud DCs do not expose their internal networks, so
     the providers' MPLS-based VPNs cannot reach the workloads
     natively.

   - Many cloud DCs use an overlay to connect their gateways to the
     workloads inside the DC. There is not yet any standard that
     addresses the interworking between the Cloud Overlay and the
     enterprises' existing underlay networks.

   Another roadblock is the lack of a standard way to express and
   enforce consistent security policies for workloads that not only
   use virtual addresses, but also have a high chance of being placed
   in different locations within a Cloud DC [RFC8192]. Traditional VPN
   path computation and bandwidth allocation schemes may not be
   flexible enough to address the need for enterprises to rapidly
   connect to dynamically instantiated (or removed) workloads and
   applications regardless of their location/nature (i.e., third party
   cloud DCs).

6. Problems with using IPsec tunnels to Cloud DCs

   As described in the previous section, many Cloud operators expose
   their gateways for external entities (which can be enterprises
   themselves) to directly establish IPsec tunnels. Enterprises can
   also instantiate virtual routers within Cloud DCs to connect to
   their on-premises devices via IPsec tunnels. If there is only one
   enterprise location that needs to reach the Cloud DC, an IPsec
   tunnel is a very convenient solution.

   However, many medium-to-large enterprises have multiple sites and
   multiple data centers. For workloads and applications hosted in
   Cloud DCs, multiple sites need to communicate securely with those
   Cloud workloads and applications. This section documents some of
   the issues associated with using IPsec tunnels to connect
   enterprises' sites with Cloud operators' gateways.

6.1. Complexity of multi-point any-to-any interconnection

   Dynamic workloads instantiated in a cloud DC need to communicate
   with multiple branch offices and on-premises data centers. Most
   enterprises need multi-point interconnection among multiple
   locations, as provided by MPLS L2/L3 VPNs.

   Using IPsec overlay paths to connect all branches and on-premises
   data centers to cloud DCs requires the CPEs to manage routing among
   the Cloud DC gateways and the CPEs located at other branch
   locations, which can dramatically increase the complexity of the
   design, possibly at the cost of jeopardizing CPE performance.

   The complexity of requiring CPEs to maintain routing among other
   CPEs is one of the reasons why enterprises migrated from Frame
   Relay-based services to MPLS-based VPN services.

   MPLS-based VPNs have their PEs directly connected to the CPEs.
   Therefore, CPEs only need to forward all traffic to the directly
   attached PEs, which are responsible for enforcing the routing
   policy within the corresponding VPNs. Even multi-homed CPEs only
   need to forward traffic among the directly connected PEs (note:
   the complexity may vary for IPv6 networks). However, when using
   IPsec tunnels between CPEs and Cloud DCs, the CPEs need to decide
   whether traffic is routed toward the Cloud DCs, toward remote CPEs
   via the VPN, or sent directly.

6.2. Poor performance over long distance

   When enterprise CPEs or gateways are far away from Cloud DC
   gateways or across country/continent boundaries, the performance of
   IPsec tunnels over the public Internet can be problematic and
   unpredictable. Even though there are many monitoring tools
   available to measure delay and various other performance
   characteristics of the network, the measurement of paths over the
   Internet is passive, and past measurements may not represent future
   performance.

   Many cloud providers can replicate workloads in different
   availability zones. An application instantiated in the Cloud DC
   closest to its clients may have to cooperate with another
   application (or its mirror image) in another region, or with
   database server(s) in the on-premises DC. This kind of coordination
   requires predictable networking behavior/performance among those
   locations.

6.3. Scaling Issues with IPsec Tunnels

   IPsec can achieve secure overlay connections between two locations
   over any underlay network, e.g., between CPEs and Cloud DC
   gateways.

   If there is only one enterprise location connected to the Cloud
   gateway, a small number of IPsec tunnels can be configured
   on-demand between the on-premises DC and the Cloud DC, which is an
   easy and flexible solution.

   However, for multiple enterprise locations to reach workloads
   hosted in cloud DCs, the Cloud DC gateway needs to maintain
   multiple IPsec tunnels to all those locations (e.g., in a
   hub-and-spoke topology). For a company with hundreds or thousands
   of locations, there could be hundreds (or even thousands) of IPsec
   tunnels terminating at the Cloud DC gateway, which is not only very
   expensive (because Cloud Operators charge based on connections),
   but can also be very processing-intensive for the gateway. Many
   cloud operators only allow a limited number of IPsec tunnels and
   limited bandwidth per customer. Alternatively, a solution such as
   group encryption, where a single IPsec SA suffices at the gateway,
   could be used, but the drawback is the need for key distribution
   and the maintenance of a key server.

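   To make the scale concrete, the short sketch below (Python, with
   arbitrary example site counts rather than data from any deployment)
   compares the number of tunnels a single cloud DC gateway terminates
   in a hub-and-spoke design with the number of pair-wise tunnels (and
   hence IPsec security associations) a full mesh among the same sites
   would require.

      # Rough illustration of IPsec tunnel counts; numbers are
      # examples only.

      def hub_and_spoke_tunnels(num_sites):
          # One tunnel per enterprise location terminates at the hub
          # (the cloud DC gateway).
          return num_sites

      def full_mesh_tunnels(num_sites):
          # Pair-wise tunnels if every site peers with every other.
          return num_sites * (num_sites - 1) // 2

      for sites in (10, 100, 1000):
          hub = hub_and_spoke_tunnels(sites)
          mesh = full_mesh_tunnels(sites)
          print(f"{sites:5d} sites: {hub:5d} at the hub, "
                f"{mesh:7d} for a full mesh")

   For 1000 sites this means 1000 tunnels terminating at the gateway,
   or nearly half a million pair-wise tunnels for a full mesh, which
   illustrates why per-connection charges and per-gateway tunnel
   limits quickly become the bottleneck.
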
7. Problems of Using SD-WAN to connect to Cloud DCs

   SD-WAN can establish parallel paths over multiple underlay networks
   between two locations on demand, for example between two CPEs
   interconnected by a traditional MPLS VPN ([RFC4364] or [RFC4664])
   as well as by IPsec [RFC6071] overlay tunnels over the Internet.

   SD-WAN lets enterprises augment their current VPN network with
   cost-effective, readily available broadband Internet connectivity,
   enabling some traffic to be offloaded to paths over the Internet
   according to traffic forwarding policies (application-based or
   otherwise), or when the MPLS VPN connection between the two
   locations is congested, or otherwise undesirable or unavailable.

7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs

   SD-WAN interconnection of branch offices is not as simple as it
   appears. For an enterprise with multiple sites, using SD-WAN
   overlay paths among sites requires each CPE to manage all the
   addresses that local hosts have the potential to reach, i.e., to
   map internal VPN addresses to the appropriate SD-WAN paths. This is
   similar to the complexity of Frame Relay-based VPNs, where each CPE
   needed to maintain mesh routing for all destinations if it was to
   avoid an extra hop through a hub router. Even though SD-WAN CPEs
   can get assistance from a central controller (instead of running a
   routing protocol) to resolve the mapping between destinations and
   SD-WAN paths, SD-WAN CPEs are still responsible for routing table
   maintenance as remote destinations change their attachments, e.g.,
   when dynamic workloads in other DCs are decommissioned or added.

   Even though originally envisioned for interconnecting branch
   offices, SD-WAN offers a very attractive way for enterprises to
   connect to Cloud DCs.

   The SD-WAN for interconnecting branch offices and the SD-WAN for
   interconnecting to Cloud DCs have some differences:

   - SD-WAN for interconnecting branch offices usually has two end-
     points (e.g., CPEs) controlled by one entity (e.g., a controller
     or management system operated by the enterprise).

   - SD-WAN for interconnecting to Cloud DCs may have CPEs owned or
     managed by the enterprise, while the remote end-points are
     managed or controlled by the Cloud DCs (for ease of description,
     call these asymmetrically managed CPEs).

   - Cloud DCs may have different entry points (or devices), with one
     terminating the private direct connection (such as MPLS or a
     direct line) and another terminating the IPsec tunnels, as shown
     in the following diagram.

   Therefore, the SD-WAN becomes asymmetric.

   +------------------------+
   |   ,---.         ,---.  |
   |  (TN-1 )       ( TN-2) |
   |   `-+-'  +---+  `-+-'  |
   |     +----|vR1|----+    |
   |          ++--+         |
   |           |      +-+----+
   |           |     /Internet\   One path via
   |           +----+  Gateway +----------------------+
   |                 \         /  Internet             \
   |                  +-+----+                          \
   +------------------------+                            \
                                                           \
   +------------------------+    native traffic            \
   |   ,---.         ,---.  |    without encryption         |
   |  (TN-3 )       ( TN-4) |    over insecure network      |
   |   `-+-'  +--+   `-+-'  |                                |  +------+
   |     +----|vR|-----+    |                                +--+ CPE  |
   |          ++-+          |                                |  +------+
   |           |      +-+----+                               |
   |           |     / virtual\  One path via IPsec Tunnel   |
   |           +----+  Gateway +---------------------------- +
   |                 \         /  Encrypted traffic over     |
   |                  +-+----+    public network             |
   +------------------------+                                |
                                                             |
   +------------------------+    Native traffic              |
   |   ,---.         ,---.  |    without encryption          |
   |  (TN-5 )       ( TN-6) |    over secure network         |
   |   `-+-'  +--+   `-+-'  |                                |
   |     +----|vR|-----+    |                                |
   |          ++-+          |                                |
   |           |      +-+----+            +------+           |
   |           |     /         \ Via Direct/customer\        |
   |           +----+  Gateway  +----------+ gateway |-------+
   |                 \         /  Connect   \        /
   |                  +-+----+               +------+
   +------------------------+

              Figure 2: Asymmetric Paths SD-WAN

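   To make the CPE-side burden described above concrete, the following
   sketch models (in Python, with hypothetical names and prefixes, not
   any particular SD-WAN product or API) the destination-to-path table
   a branch CPE has to keep current as workloads are added, moved, or
   decommissioned in Cloud DCs.

      # Hypothetical model of the per-CPE mapping discussed in Section
      # 7.1: internal VPN destinations are mapped to SD-WAN paths, and
      # the table must change whenever a remote workload moves.
      import ipaddress

      class SdwanCpe:
          def __init__(self):
              # destination prefix -> (path name, remote endpoint)
              self.path_table = {}

          def controller_update(self, prefix, path, endpoint):
              # Mapping pushed by an SD-WAN controller (or otherwise
              # learned).
              net = ipaddress.ip_network(prefix)
              self.path_table[net] = (path, endpoint)

          def withdraw(self, prefix):
              # Remove a mapping when the remote workload is
              # decommissioned.
              self.path_table.pop(ipaddress.ip_network(prefix), None)

          def lookup(self, dest_ip):
              # Longest-prefix match of a destination against SD-WAN
              # paths; no match could mean "fall back to the MPLS VPN".
              dest = ipaddress.ip_address(dest_ip)
              matches = [p for p in self.path_table if dest in p]
              if not matches:
                  return None
              best = max(matches, key=lambda p: p.prefixlen)
              return self.path_table[best]

      cpe = SdwanCpe()
      cpe.controller_update("10.10.0.0/16", "ipsec-internet", "cloud-gw-1")
      cpe.controller_update("10.20.5.0/24", "direct-connect", "cloud-gw-2")
      print(cpe.lookup("10.20.5.7"))   # ('direct-connect', 'cloud-gw-2')

      # A workload is re-instantiated in another Cloud DC: every CPE's
      # table has to be updated accordingly.
      cpe.withdraw("10.20.5.0/24")
      cpe.controller_update("10.20.5.0/24", "ipsec-internet", "cloud-gw-3")
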
8. End-to-End Security Concerns for Data Flows

   When IPsec tunnels from enterprise on-premises CPEs are terminated
   at the Cloud DC gateway where the workloads or applications are
   hosted, some enterprises have concerns about the traffic to/from
   their workloads being exposed to others behind the data center
   gateway (e.g., exposed to other organizations that have workloads
   in the same data center).

   To ensure that traffic to/from workloads is not exposed to unwanted
   entities, it is worthwhile to consider having the IPsec tunnels go
   all the way to the workloads (servers or VMs) within the DC.

9. Requirements for Dynamic Cloud Data Center VPNs

   [Editor's note: this section is only a placeholder. The
   requirements listed here are intended to stimulate further
   discussion.]

   In order to address the aforementioned issues, any solution for
   enterprise VPNs that includes connectivity to dynamic workloads or
   applications in cloud data centers should satisfy a set of
   requirements:

   - The solution should allow enterprises to take advantage of the
     current state-of-the-art in VPN technology, in both traditional
     MPLS-based VPNs and IPsec-based VPNs (or any combination thereof)
     that run over the top of the public Internet.
   - The solution should not require an enterprise to upgrade all its
     existing CPEs.
   - The solution should support scalable IPsec key management among
     all the nodes.
   - The solution needs to support easy and fast VPN connections to
     dynamic workloads and applications in third party data centers,
     and easily allow these workloads to migrate both within a data
     center and between data centers.
   - The solution should allow VPNs to provide bandwidth and other
     performance guarantees.
   - The solution should be cost-effective for enterprises to
     incorporate dynamic cloud-based applications and workloads into
     their existing VPN environment.

10. Security Considerations

   This draft describes the problem space of using SD-WAN to
   interconnect branch offices with Cloud DCs. As it is a problem
   statement, the draft itself does not introduce any security
   concerns. The draft does discuss security requirements as a part of
   the problem space, particularly in Sections 4, 5, and 8.

   Solution drafts resulting from this work will address the
   particular security concerns inherent in the solution(s), including
   both protocol aspects and the importance (for example) of securing
   workloads in cloud DCs and the use of secure interconnection
   mechanisms.

11. IANA Considerations

   This document requires no IANA actions. RFC Editor: Please remove
   this section before publication.

12. References

12.1. Normative References

12.2. Informative References

   [RFC2735]   B. Fox, et al., "NHRP Support for Virtual Private
               Networks", Dec. 1999.

   [RFC8192]   S. Hares, et al., "Interface to Network Security
               Functions (I2NSF): Problem Statement and Use Cases",
               July 2017.

   [ITU-T-X1036] ITU-T Recommendation X.1036, "Framework for creation,
               storage, distribution and enforcement of policies for
               network security", Nov. 2007.

   [RFC6071]   S. Frankel and S. Krishnan, "IP Security (IPsec) and
               Internet Key Exchange (IKE) Document Roadmap", Feb.
               2011.

   [RFC4364]   E. Rosen and Y. Rekhter, "BGP/MPLS IP Virtual Private
               Networks (VPNs)", Feb. 2006.

   [RFC4664]   L. Andersson and E. Rosen, "Framework for Layer 2
               Virtual Private Networks (L2VPNs)", Sept. 2006.

   [BGP-SDWAN] L. Dunbar, et al., "BGP Extension for SDWAN Overlay
               Networks", draft-dunbar-idr-bgp-sdwan-overlay-ext-03,
               work in progress, Nov. 2018.

13. Acknowledgments

   Many thanks to Ignas Bagdonas, Michael Huang, Liu Yuan Jiao,
   Katherine Zhao, and Jim Guichard for the discussion and
   contributions.

Authors' Addresses

   Linda Dunbar
   Huawei
   Email: Linda.Dunbar@huawei.com

   Andrew G. Malis
   Huawei
   Email: agmalis@gmail.com

   Christian Jacquenet
   France Telecom
   Rennes, 35000
   France
   Email: Christian.jacquenet@orange.com

   Mehmet Toy
   Verizon
   One Verizon Way
   Basking Ridge, NJ 07920
   Email: mehmet.toy@verizon.com