Network Working Group                                       Luyuan Fang
Internet Draft                                                Microsoft
Intended status: Informational

                                                           June 5, 2014


        ACTN Use-case for Multi-domain Data Center Interconnect

                 draft-fang-actn-multidomain-dci-00.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on December 5, 2014.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
Abstract

   This document discusses a use-case for data center operators that
   need to interface with multi-domain transport networks to offer
   their global data center applications and services.  As data
   center operators face multiple domains and diverse transport
   technologies, interoperability based on standards-based
   abstraction is required for dynamic and flexible applications and
   services.

Table of Contents

   1. Introduction
   2. Multi-domain Data Center Interconnection Applications
      2.1. VM Migration
      2.2. Global Load Balancing
      2.3. Disaster Recovery
      2.4. On-demand Virtual Connection/Circuit Services
   3. Issues and Challenges for Multi-domain Data Center
      Interconnection Operations
   4. Requirements
   5. References
   6. Contributors
   Authors' Addresses
   Intellectual Property Statement
   Disclaimer of Validity

1. Introduction

   This document discusses a use-case for data center operators that
   need to interface with multi-domain transport networks to offer
   their global data center applications and services.  As data
   center operators face multiple domains and diverse transport
   technologies, interoperability based on standards-based
   abstraction is required for dynamic and flexible applications and
   services.

   This use-case is part of the overarching work called Abstraction
   and Control of Transport Networks (ACTN).  The goal of ACTN is to
   facilitate virtual network operation by:

   .  The creation of a virtualized environment allowing operators
      to view the abstraction of the underlying multi-admin, multi-
      vendor, multi-technology networks, and

   .  The operation and control/management of these multiple
      networks as a single virtualized network.

   This will accelerate the deployment of new services, including
   more dynamic and elastic services, and improve overall network
   operations and the scaling of existing services.

   Related documents are the ACTN framework [ACTN-Frame] and the
   problem statement [ACTN-PS].

   Multi-domain transport networks herein refer to the physical WAN
   infrastructure, whose operation may or may not belong to the same
   administrative domain as the data center operation.  Some data
   center operators may own the entire physical WAN infrastructure,
   while others may own only part of it, or none at all.  In all
   cases, the data center operation needs to establish multi-domain
   relationships with one or more physical network infrastructure
   operations.

   Data-center-based applications are used to provide a wide variety
   of services such as video gaming, cloud storage and computing,
   grid applications, database tools, and mobile applications.
   High-bandwidth video applications such as remote medical surgery
   and video streaming for live concerts and sporting events are
   also emerging.
   This document is mainly concerned with data center applications
   that, individually or in aggregate, place substantial bandwidth
   demands on multi-domain transport networks, some of which may
   belong to different administrative domains.  In addition, these
   applications may require specific bounds on QoS-related
   parameters such as guaranteed bandwidth, latency, and jitter.

   The organization of this document is as follows: Section 2
   discusses multi-domain data center interconnection and its
   various application scenarios.  Section 3 discusses the issues
   and challenges for multi-domain data center interconnection
   operations.  Section 4 provides high-level requirements.

2. Multi-domain Data Center Interconnection Applications

2.1. VM Migration

   A key enabler for data center cost savings, consolidation,
   flexibility, and application scalability has been the technology
   of compute virtualization, or Virtual Machines (VMs).  To the
   software application, a VM looks like a dedicated processor with
   dedicated memory and a dedicated operating system.  In modern
   data centers or "computing clouds", the smallest unit of
   computing resource is the VM.  In public data centers one can buy
   computing capacity in terms of VMs for a particular amount of
   time, though different VM configurations may be offered that are
   optimized for different types of processing (e.g., memory
   intensive, throughput intensive).

   VMs offer not only a unit of compute power but also an
   "application environment" that can be replicated, backed up, and
   moved.  Although VM migration started in the LAN, inter-DC VM
   migration over the WAN, for workload burst/overflow management,
   has become a real need for data center operators.

   Virtual machine migration has a variety of modes: (i) scheduled
   vs. dynamic; (ii) bulk vs. sequential; (iii) point-to-point vs.
   point-to-multipoint.  Transport network capability can impact the
   virtual machine migration strategy.  For certain mission-critical
   applications, the network must provide dynamic bandwidth
   guarantees as well as performance guarantees.  Make-before-break
   capability is also critical to support seamless migration.

2.2. Global Load Balancing

   As data center applications are distributed geographically across
   many data centers and over multi-domain networks, load balancing
   is no longer a local decision.  As such, the selection of a
   server for a user's application request, or of a data center for
   migrating or instantiating VMs, needs to be made globally.  This
   is referred to as global load balancing.

   There are many factors that can negatively affect the quality of
   experience (QoE) for the application.  Among them are the
   utilization of the servers, the underlying network loading
   conditions within a data center (LAN), the underlying network
   loading conditions between data centers (MAN/WAN), and the
   underlying network conditions between the end-user and the data
   center (access network).  To allow data center operators to
   facilitate global load balancing over heterogeneous multi-domain
   transports, from access networks to metro/core transport
   networks, on-line network resource information needs to be
   abstracted and represented by each involved network domain.
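   The sketch below illustrates, in Python, how such a global
   decision might combine server utilization with abstracted network
   information pulled from each involved domain.  It is purely
   illustrative: the data structure, field names, and scoring
   weights are hypothetical and are not defined by ACTN.

      # Illustrative only: a toy global load balancing decision
      # combining server-side load with abstracted network metrics
      # collected from each involved transport domain.

      from dataclasses import dataclass

      @dataclass
      class Candidate:
          dc_name: str               # candidate data center
          server_utilization: float  # 0.0-1.0, reported by the DC
          path_latency_ms: float     # from abstracted domain info
          path_avail_bw_gbps: float  # bottleneck across the domains

      def pick_data_center(candidates, demand_gbps):
          """Return the best candidate able to carry the demand."""
          feasible = [c for c in candidates
                      if c.path_avail_bw_gbps >= demand_gbps]
          # Lower score is better: weigh compute load and path
          # latency equally (weights are arbitrary here).
          def score(c):
              return (0.5 * c.server_utilization
                      + 0.5 * c.path_latency_ms / 100.0)
          return min(feasible, key=score, default=None)

      best = pick_data_center(
          [Candidate("DC 3", 0.70, 12.0, 40.0),
           Candidate("DC 5", 0.40, 35.0, 10.0)],
          demand_gbps=20.0)
      print(best.dc_name if best else "no feasible data center")

   In practice, the network-related inputs would come from the
   abstracted topology and resource information exposed by each
   domain rather than from static numbers.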
2.3. Disaster Recovery

   For certain applications, real-time disaster recovery is
   required.  This requires transporting extremely large amounts of
   data from various data center locations to other locations, as
   well as a quick feedback mechanism between the data center
   operator and the infrastructure network providers to manage the
   complexity associated with real-time disaster recovery.

   As this operation requires concurrent real-time connections
   between a set of data centers, with a large amount of bandwidth,
   strictly guaranteed bandwidth, and very low latency, the
   underlying physical network infrastructure is required to support
   these capabilities.  Moreover, as the data center operator
   interfaces with multiple network infrastructure providers,
   standards-based interfaces and a common way to abstract network
   resources and connections are necessary to facilitate its
   operations.

2.4. On-demand Virtual Connection/Circuit Services

   Related to the real-time operations discussed in the previous
   sections, many applications require on-demand virtual
   connection/circuit services with an assured quality of service
   across multi-domain transport networks.

   The on-demand aspect of this service applies not only in setting
   up the initial virtual connections/circuits but also in
   increasing bandwidth, changing the QoS/SLA, or adding a new
   protection scheme to an existing service.

   The on-demand network query to estimate the available SLA/QoS
   (e.g., BW availability, latency range, etc.) between a few data
   center locations is also part of this application.
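   As an illustration, the Python fragment below sketches what such
   an on-demand query and follow-up connection request might carry.
   The message layout and every field name are hypothetical; they
   only serve to make the query-then-request pattern concrete.

      # Illustrative only: hypothetical message shapes for an
      # on-demand SLA/QoS query (pull) followed by a virtual
      # connection/circuit setup request.

      query = {
          "type": "network-query",
          "endpoints": ["DC 1", "DC 6"],
          "ask": ["available-bandwidth", "latency-range"],
      }

      # A domain would answer with abstracted, not raw, data, e.g.:
      reply = {
          "endpoints": ["DC 1", "DC 6"],
          "available-bandwidth-gbps": 10,
          "latency-range-ms": [20, 28],
      }

      # If the estimate satisfies the application's needs, the
      # operator follows up with an on-demand setup request:
      setup = {
          "type": "vc-setup",
          "endpoints": ["DC 1", "DC 6"],
          "connection-type": "P2P",
          "min-bandwidth-gbps": 8,
          "max-latency-ms": 30,
          "duration-hours": 4,
          "protection": "reroute",
      }

      ok = (reply["available-bandwidth-gbps"]
            >= setup["min-bandwidth-gbps"]
            and reply["latency-range-ms"][1]
            <= setup["max-latency-ms"])
      # ok is True here, so the request could proceed.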
3. Issues and Challenges for Multi-domain Data Center
   Interconnection Operations

   This section discusses operational issues and challenges for
   multi-domain data center interconnection.  Figure 1 shows a
   typical multi-domain data center interconnection operations
   architecture.

      +-----+            +----------+            +-----+
      |DC 3 |------------| Domain 2 |------------|DC 4 |
      +-----+            +----------+            +-----+
                          /    |    \
                         /     |     \
      +-----+     +----------+ | +----------+    +-----+
      |DC 1 |-----|          | | |          |----|DC 5 |
      +-----+     | Domain 1 | | | Domain 3 |    +-----+
         |        |          | | |          |
      +-----+     |          | | |          |    +-----+
      |DC 2 |-----|          | | |          |----|DC 6 |
      +-----+     +----------+ | +----------+    +-----+
                         \     |     /
                          \    |    /
                          +----------+
                          | Domain 4 |
                          +----------+

          Figure 1. Multi-domain Data Center Interconnect
                      Operations Architecture

   Figure 1 shows several characteristics pertaining to current
   multi-domain data center operations.

   1. Data centers are geographically spread and possibly homed on a
      number of mutually independent physical network infrastructure
      provider domains.

   2. Trusted relationships must be established between the data
      center operator domain and each of the mutually independent
      physical network provider domains.  Even in cases where the
      data center operator owns all or part of the physical network
      infrastructure domains, a trusted relationship is still
      required between the data center operation and the network
      operations due to organizational boundaries, although it is
      less strict than in a pure multi-domain case.

   3. The data center operator may lease facilities from physical
      network infrastructure providers for intra-domain
      connectivity, or may own the facilities.  For instance, there
      may be an intra-domain leased facility for connectivity
      between DC 1 and DC 2.  It is also possible that the data
      center operator owns this intra-domain facility, such as dark
      fiber for connectivity between DC 1 and DC 2.

   4. There may be a need for connectivity that traverses
      multi-domain networks.  For instance, Data Center 1 may have
      VMs that need to be transported to Data Center 6.  Typically,
      multi-domain connectivity is arranged statically, such that
      the routes are pre-negotiated with the involved operators.
      For instance, if Data Center 1 were to send its VMs to Data
      Center 6, the route may take Domain 1 - Domain 4 - Domain 3,
      based on an agreement negotiated prior to the connectivity
      request.  In such a case, the inter-domain facilities between
      Domains 1 & 4 and Domains 4 & 3 are part of this
      pre-negotiated agreement.  There could be alternative route
      choices; whether alternate routing is available is subject to
      policy, and it may be static or dynamic depending on that
      policy.

   5. These transport network domains may be diverse in terms of
      local policy, transport technology and capability, and vendor
      equipment.  Due to this diversity, the introduction of a new
      service requiring connections that traverse multiple domains
      needs significant planning and several manual operations to
      interface different vendor equipment and technologies.  New
      applications requiring dynamic and elastic services and
      real-time mobility may be hampered by these manual operational
      factors.

4. Requirements

   This section provides high-level requirements for multi-domain
   data center interconnection to support the applications discussed
   in the previous sections.

   1. The interfaces between the Data Center Operation and each
      transport network domain SHOULD support standards-based
      abstraction with a common information/data model.

   2. The Data Center Operation should be able to create a single
      virtual network view.

   3. The following capabilities should be supported (a sketch of
      such an interface follows this list):

      a. Network Query (Pull Model) from the Data Center Operation
         to each transport network domain to collect potential
         resource availability (e.g., BW availability, latency
         range, etc.) between a few data center locations.

         i.   The level of abstracted topology (e.g., tunnel-level,
              graph-form, etc.)

      b. Network Path Computation Request from the Data Center
         Operation to each transport network domain to estimate the
         path availability.

      c. Network Virtual Connections/Circuits Request from the Data
         Center Operation to each transport domain to establish
         end-to-end virtual connections/circuits.

         i.   The type of the connection: P2P, P2MP, etc.

         ii.  Concurrency of the request (this indicates whether the
              connections must be simultaneously available in the
              case of multiple connection requests).

         iii. The duration of the connections.

         iv.  SLA/QoS parameters: minimum guaranteed bandwidth,
              latency range, etc.

         v.   Protection/Reroute Options (e.g., SRLG requirement,
              etc.)

         vi.  Policy Constraints (e.g., peering preferences, etc.)

      d. Network Virtual Connections/Circuits Modification Request
         from the Data Center Operation to each transport domain to
         change the QoS/SLA or protection schemes of existing
         connections/circuits.

      e. Network Abnormality Report (Push Model) from each transport
         domain to the Data Center Operation indicating
         service-impacting network conditions or potential
         degradation of existing virtual connections/circuits.
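   A minimal sketch of how the capabilities above could be rendered
   as a per-domain interface is given below, in Python.  The class
   and method names are hypothetical illustrations of items a
   through e, not a standardized ACTN API.

      # Illustrative only: a hypothetical rendering of capabilities
      # a-e as an abstract interface toward one transport domain.

      from abc import ABC, abstractmethod
      from typing import Callable

      class TransportDomainInterface(ABC):

          @abstractmethod
          def network_query(self, endpoints, metrics,
                            topology_level):
              """Pull model (item a): return abstracted resource
              availability, e.g., bandwidth and latency range, at
              the requested level of topology abstraction."""

          @abstractmethod
          def compute_path(self, endpoints, constraints):
              """Item b: estimate path availability under the given
              constraints."""

          @abstractmethod
          def setup_vc(self, endpoints, conn_type, concurrent,
                       duration, sla, protection, policy):
              """Item c: establish virtual connections/circuits and
              return a connection identifier."""

          @abstractmethod
          def modify_vc(self, conn_id, sla=None, protection=None):
              """Item d: change the QoS/SLA or protection scheme of
              an existing connection/circuit."""

          @abstractmethod
          def on_abnormality(self, callback: Callable[[dict], None]):
              """Push model (item e): register a handler for
              service-impacting conditions or degradation reports
              pushed by the domain."""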
5. References

   [ACTN-Frame]  D. Ceccarelli, L. Fang, Y. Lee, and D. Lopez,
                 "Framework for Abstraction and Control of Transport
                 Networks", draft-ceccarelli-actn-framework, work in
                 progress.

   [ACTN-PS]     Y. Lee, D. King, M. Boucadair, R. Jing, and L.
                 Murillo, "Problem Statement for the Abstraction and
                 Control of Transport Networks", draft-leeking-actn-
                 problem-statement, work in progress.

6. Contributors

Authors' Addresses

   Luyuan Fang
   Microsoft
   Email: lufang@microsoft.com

Intellectual Property Statement

   The IETF Trust takes no position regarding the validity or scope
   of any Intellectual Property Rights or other rights that might be
   claimed to pertain to the implementation or use of the technology
   described in any IETF Document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.

   Copies of Intellectual Property disclosures made to the IETF
   Secretariat and any assurances of licenses to be made available,
   or the result of an attempt made to obtain a general license or
   permission for the use of such proprietary rights by implementers
   or users of this specification can be obtained from the IETF
   on-line IPR repository at http://www.ietf.org/ipr

   The IETF invites any interested party to bring to its attention
   any copyrights, patents or patent applications, or other
   proprietary rights that may cover technology that may be required
   to implement any standard or specification contained in an IETF
   Document.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   All IETF Documents and the information contained therein are
   provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION
   HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET
   SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE
   DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT
   LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION THEREIN
   WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.