Network Working Group                                    Kwang-koog Lee
Internet Draft                                               Hosong Lee
Intended status: Informational                                       KT
Expires: January 2018                                    Ricard Vilalta
                                                                   CTTC
                                                           Victor Lopez
                                                             Telefonica

                                                          July 31, 2017

     ACTN use-case for E2E network services in multiple vendor domain
                           transport networks

           draft-klee-teas-actn-connectivity-multi-domain-03.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 31, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
Abstract

   This document provides a use-case that addresses the need to
   facilitate the application of virtual network abstractions and the
   control and management of end-to-end network services that traverse
   multiple vendor domain transport networks.

   These abstractions shall help create a virtualized environment that
   supports operators in viewing and controlling different vendor
   domains, especially for end-to-end network connectivity services
   within a single operator.

Table of Contents

   1. Introduction
   2. End-to-End Network Services in Multi-vendor Domain Transport
      Networks
      2.1. Example A - Leased Line Services
      2.2. Example B - Data Center Interconnect (DCI)
   3. Requirements
   4. References
   5. Acknowledgement
   6. Contributors

1. Introduction

   Network operators build and operate their networks using multiple
   domains (i.e., core and access networks) in different dimensions.
   Domains may be defined by a collection of links and nodes (each of
   a different technology), by administrative zones under the concern
   of a particular business entity, or by vendor-specific "islands"
   where specific control mechanisms have to be applied.  Because each
   vendor relies on its own technology, optical components from
   different vendors cannot be interconnected.  Even interconnecting
   components that support the same technology is not easy, since each
   vendor's control and management system is tightly coupled with its
   own devices.  Therefore, each optical domain becomes an isolated
   island in terms of the control and management of end-to-end
   services traversing multiple domains.
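   The isolated-island effect can be illustrated with a small sketch
   (the node names and the Python rendering are purely illustrative
   assumptions, not part of any vendor interface): no single domain's
   view contains a path between the customer endpoints CE1 and CE2,
   while a view merged across domains, together with the inter-domain
   links, does.

```python
from collections import deque

# Hypothetical per-domain views: each controller sees only its own
# links, so no single domain contains an end-to-end path CE1 -> CE2.
domain_views = {
    "access_B": [("CE1", "B1"), ("B1", "B2")],
    "core_A":   [("A1", "A2")],
    "access_C": [("C1", "C2"), ("C2", "CE2")],
}
# Inter-domain (border) links, known only to a coordinating entity.
inter_domain_links = [("B2", "A1"), ("A2", "C1")]

def reachable(links, src, dst):
    """Breadth-first search over an undirected link list."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Each isolated island fails to reach CE2 on its own...
assert not any(reachable(v, "CE1", "CE2") for v in domain_views.values())
# ...but the merged end-to-end view succeeds.
merged = [l for v in domain_views.values() for l in v] + inter_domain_links
assert reachable(merged, "CE1", "CE2")
```

   The merged view plays the role of the coordinating entity discussed
   later in this document; each per-domain list stands in for a
   vendor-specific controller's local topology.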
   Network operators use vendor-specific NMS implementations along
   with an operator-tailored umbrella provisioning system, which may
   include a technology-specific Operations Support System (OSS).
   With the evolution of vendor-specific SDN controllers, network
   operators require a network entity that abstracts the details of
   the optical layer while enabling control and management of the
   end-to-end services traversing multiple domains.  The establishment
   of end-to-end connections spanning several of these domains is a
   perpetual problem for operators, who need to address both
   interoperability and operational concerns at the control and data
   planes.

   The introduction of new services, often requiring connections that
   traverse multiple domains, needs significant planning and several
   manual operations to interface multiple vendor-specific domains in
   which specific control/management mechanisms of the vendor
   equipment have to be applied (e.g., EMS/NMS, OSS/BSS, control
   plane, SDN controller, etc.).  Establishing an on-demand end-to-end
   connection, which requires provisioning based on dynamic resource
   information, is even more difficult in the current network context.

   This document provides a use-case that addresses the need for
   creating a virtualized environment that supports operators in
   viewing and controlling different vendor domains, especially for
   end-to-end network services within a single operator.  The use-case
   could also apply to the interconnection of end-to-end services
   across multiple operators.  This will accelerate the deployment of
   new services, including more dynamic and elastic services, and
   improve overall network operations and the scaling of existing
   services.

   This use-case is part of the overarching work called Abstraction
   and Control of Transport Networks (ACTN).
   Related documents are the ACTN requirements [ACTN-Req], the ACTN
   framework [ACTN-Frame] and the problem statement [ACTN-PS].

2. End-to-End Network Services in Multi-vendor Domain Transport
   Networks

   This section provides an architecture example to illustrate the
   context of the current challenges and issues operators face in
   delivering end-to-end network services in operators' multi-vendor
   domain transport networks.

   |                                                               |
   | /                   End-to-End Connection                   \ |
   |/-------------------------------------------------------------\|
   |\-------------------------------------------------------------/|
   | \                                                           / |

                            +----------------+
                            |                |
                            |   Converged    |
                            | Packet-Optical |
           +-------------+  |  Core Domain   |  +-------------+
           |             |--|   (Vendor A)   |--|             |
  +----+   |   Access    |  +----------------+  |   Access    |   +----+
  | CE1|---|  Domain 1   |                      |  Domain 3   |---| CE2|
  +----+   | (Vendor B)  |-----            -----| (Vendor C)  |   +----+
           +-------------+                      +-------------+

                      Figure 1. Multi-vendor Domains

   As an illustrative example, consider a multi-domain transport
   network consisting of three domains: one core converged packet-
   optical domain (Vendor A) and two access domains (Vendors B and C).
   Each access domain is managed by its own domain control/management
   mechanism, which is often a proprietary vendor-specific scheme.
   The core domain is likewise managed by Vendor A's proprietary
   control/management mechanism (e.g., EMS/NMS, OSS/BSS, control
   plane, SDN controller, or any combination of these entities) that
   may not interoperate with the access domain control/management
   mechanisms, or at best partially interoperates if Vendor A is the
   same as Vendor B or Vendor C.

   Due to these domain boundaries, facilitating end-to-end connections
   (e.g., Ethernet Virtual Connections) that traverse multiple domains
   is not readily achieved.
   These domain controls are optimized for their local operation and
   are in most cases not suited to controlling end-to-end connectivity
   services.  For instance, the discovery of the edge nodes that
   belong to other domains is hard to achieve, partly because of the
   lack of a common API, its information model, and the control
   mechanisms thereof to disseminate the relevant information.

   Moreover, the path computation for any on-demand end-to-end
   connection would need an abstraction of dynamic network resources
   and ways to find an optimal path that meets the connection's
   service requirements.  This would require knowledge of both the
   domain-level dynamic network resource information and the
   inter-domain connectivity information, including domain
   gateway/peering points and the local domain policy.

   From an end-to-end connection provisioning perspective, in order to
   facilitate fast and reliable end-to-end signaling, each domain's
   operation and management elements should ideally speak the same
   control protocols as its neighboring domains.  However, this is not
   possible in the current network context short of a forklift
   greenfield technology deployment with a single-vendor solution.
   Even when each domain applies the same protocol in the data plane,
   an end-to-end connection traversing multiple vendor domains might
   not be provided, because each management and control mechanism
   focuses only on its own domain.

   In addition, end-to-end connection provisioning via multiple
   domains might not be handled by a single layer; control and
   management across multiple layers may be necessary.  In this case,
   the control plane of an upper layer first interacts with its lower
   layer in order to check whether the lower layer can provide network
   resources that meet service-level requirements such as the SLA
   (Service Level Agreement) and user-defined policies.
   Then, the upper layer can proceed to provision its own layer,
   referring to the provisioning results provided by its lower layer.
   However, vendor-specific management systems have no awareness of
   adjacent layers of network resources.  Even a single-vendor
   management system covering multiple layers avoids interaction with
   the control planes of those layers due to the complexity of
   implementation.

   From a network connectivity management perspective, a mechanism is
   required to disseminate any connectivity issues from the local
   domain to the other domains whenever the local domain cannot
   resolve a connectivity issue.  This is hard to achieve due to the
   lack of a common API, its agreed-upon information model, and the
   control mechanisms thereof to disseminate the relevant information.

   From an operations perspective, the current network environments
   are not conducive to offering end-to-end connectivity services in
   multi-vendor domain transport networks.  For instance, when a
   performance monitoring inquiry is requested, operators manually
   monitor each domain and aggregate the performance results.
   However, the aggregate may not be precise because of the different
   measurement timing employed by each domain.

2.1. Example A - Leased Line Services

   Service providers offer customers a leased line service connecting
   two or more geographically distant locations.  Traditionally,
   leased lines have been offered over SONET/SDH networks at speeds
   ranging from E1 to STM-64 to meet specific business needs.  But as
   demand for data networking grew, service providers started to offer
   Ethernet-based leased line services at speeds ranging from 10 Mbps
   to 10 Gbps.  In particular, MPLS-TP, defined by the IETF, is mainly
   used to achieve the carrier-grade characteristics of SONET/SDH
   (high availability, guaranteed bandwidth, and OAM capabilities).
   However, unlike SONET/SDH networks, there are limitations in
   providing an end-to-end connection on top of a telco infrastructure
   with multiple vendor domains, because the EMS/NMS systems of the
   vendors do not communicate with each other to operate the
   multi-vendor MPLS-TP networks, due to the lack of a common API.
   Each vendor's devices can support the same OAM and protection
   mechanisms, but end-to-end connections are only created through
   data planes manually configured by operators.  Operators must
   manually assign specific MPLS label values to the network elements
   acting as LSRs/LERs, taking care to avoid collisions among the
   label values in use.  In addition, specific OAM and protection
   switching information must be exchanged for appropriate end-to-end
   OAM and protection switching operations.  Furthermore, it is hard
   to operate such connections because some network elements are not
   visible from each vendor's EMS/NMS system.  As a result, operators
   might have difficulties executing on-demand OAM functions and
   manual protection switching commands.

   To achieve end-to-end connectivity, each vendor domain creates its
   own MPLS-TP tunnels through its EMS/NMS systems, and MEF-based
   ENNIs are then used at the interconnection links.  But this
   solution still requires manual configuration by operators for
   S-VLAN handling and various ENNI parameters (connectivity
   information and bandwidth profile).

2.2. Example B - Data Center Interconnect (DCI)

   Enterprise customer data centers are consolidating, growing and
   becoming more redundant and dynamic.  Service providers therefore
   offer a new data center interconnect (DCI) service connecting
   geographically separate data centers, allowing their customers to
   extend their computing resources or enhance service availability
   between data centers or main sites.
   DCI supports two possible transport scenarios over metro and
   regional area networks.

   - DWDM or CWDM: This is a Layer 1 type of DCI service using optical
     WDM equipment between data centers to meet the highest bandwidth
     and lowest latency requirements.  Depending on the distance and
     capacity requirements between data centers, providers choose a
     short-reach CWDM or long-reach DWDM technology.  Customers are
     then provided a private optical network using an optical
     wavelength.

   - L2VPN: This is a Layer 2 type of DCI service using Ethernet-based
     L2VPN technologies for customers requiring seamless LAN
     extension.  Two kinds of L2VPN technologies are applicable: one
     is carrier Ethernet transport technology, and the other is L2VPN
     solutions based on IP/MPLS or overlay network technologies such
     as VXLAN, NVGRE or GRE.  The carrier Ethernet technology provides
     predictable performance for the various Ethernet services (i.e.,
     E-Line, E-Tree, E-LAN) defined by the MEF (Metro Ethernet Forum).
     Meanwhile, L2VPNs based on IP technologies cover less stringent
     latency and bandwidth requirements.

   These scenarios are mapped onto networking technologies that can
   meet the underlying requirements.  From the provider's perspective,
   a number of different network technologies must be supported in the
   metro and regional area networks between data centers in order to
   offer appropriate virtual private networks depending on the
   customer requirements.  Unlike the case of Example A in this
   section, some IP-based overlay solutions only provide SDN-based
   interface technologies such as NETCONF, RESTCONF, OpenFlow and
   SNMP, without any EMS/NMS systems.  Therefore, it is necessary to
   implement an integration system for the control and management of
   the L1 and L2/L3 devices.
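   Such an integration system can be sketched as a thin driver layer
   that hides whether a domain is reached through a vendor NMS or
   directly through an SDN-style interface.  The class and method
   names below are illustrative assumptions, not interfaces defined by
   this document or by any vendor product.

```python
from abc import ABC, abstractmethod

class DomainDriver(ABC):
    """Uniform northbound face over heterogeneous southbound interfaces."""
    @abstractmethod
    def get_topology(self) -> dict:
        ...
    @abstractmethod
    def provision(self, a_end: str, z_end: str, mbps: int) -> str:
        ...

class NmsDriver(DomainDriver):
    """Domain controlled through a vendor EMS/NMS (e.g., the L1 WDM layer)."""
    def __init__(self, domain: str):
        self.domain = domain
    def get_topology(self) -> dict:
        return {"domain": self.domain, "source": "vendor NMS export"}
    def provision(self, a_end, z_end, mbps):
        return f"{self.domain}: NMS work order {a_end}->{z_end} @ {mbps} Mbps"

class RestconfDriver(DomainDriver):
    """Domain exposing an SDN-style interface (e.g., L2/L3 overlay devices)."""
    def __init__(self, domain: str, base_url: str):
        self.domain, self.base_url = domain, base_url
    def get_topology(self) -> dict:
        # A real driver would issue HTTP requests against base_url here.
        return {"domain": self.domain, "source": "RESTCONF datastore"}
    def provision(self, a_end, z_end, mbps):
        return f"{self.domain}: RESTCONF edit {a_end}->{z_end} @ {mbps} Mbps"

# The integration system iterates over all domains through one interface,
# regardless of how each domain is actually controlled underneath.
drivers = [NmsDriver("wdm-core"),
           RestconfDriver("overlay-dc", "https://ctrl.example.net")]
views = [d.get_topology() for d in drivers]
```

   The design choice sketched here is simply polymorphism over the
   southbound interface: upper layers see one API, while each driver
   encapsulates the vendor- or technology-specific mechanics.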
   Resource discovery and control throughout the multiple layers
   should be achieved in provider networks to offer more rapid service
   agility and efficiency for the DCI service.

3. Requirements

   The previous section discussed the current challenges and issues
   that prevent operators from offering end-to-end connectivity
   services in multi-vendor domain transport networks.

   This section provides high-level requirements for enabling
   end-to-end connectivity services in multi-vendor domain transport
   networks.  Figure 2 shows the information flow requirements of the
   aforementioned context.

   +-------------------------------------------------+
   |                                                 |
   |       Customer On-demand Network Service        |
   |                                                 |
   +-------------------------------------------------+
                          /|\
                           |
                          \|/
   +-------------------------------------------------+
   |                                                 |
   |             Abstracted Global View              |
   |                                                 |
   +-------------------------------------------------+
                          /|\
                           |
                          \|/
   +-------------------------------------------------+
   |                                                 |
   |       Single Integrated E2E Network View        |
   |                                                 |
   +-------------------------------------------------+
        /|\                /|\                /|\
         |                  |                  |
        \|/                \|/                \|/
   +-------------+    +-------------+    +-------------+
   |             |    |             |    |             |
   |  Domain A   |    |  Domain B   |    |  Domain C   |
   | Control(NMS)|    | Control(NMS)|    | Control(Dev)|
   +-------------+    +-------------+    +-------------+

    Figure 2. Information Flow Requirements for Enabling End-to-End
      Network Connectivity Service in Multi-vendor Domain Networks

   There are a number of key requirements arising from Figure 2.

   - A single integrated end-to-end network view is necessary to be
     able to provision the end-to-end paths that traverse multiple
     vendor domains.  In this approach the scalability and
     confidentiality problems are solved, but new considerations must
     be taken into account:

     o Limited awareness, by the VNC, of the intra-domain resource
       availability.
     o Sub-optimal path selection.

   - The path computation shall be performed in two stages: first on
     the abstracted end-to-end network view (at the VNC), and in the
     second stage the path shall be expanded by each PNC.

   - In order to create a single integrated end-to-end network view,
     discovery of the inter-connection data between domains, including
     the domain border nodes/links, is necessary.  (The entity that
     collects domain-level data is responsible for collecting the
     inter-connection links/nodes.)

   - The entity that collects domain-level data should recognize the
     interoperability method used between domains.  (There might be
     several interoperability mechanisms according to the technology
     being applied.)

   - The entity responsible for collecting domain-level data and
     creating an integrated end-to-end view should support a push/pull
     model with respect to all its interfaces.

   - The same entity should coordinate the signaling flow for
     end-to-end connections to each domain involved.  (This entity's
     relationship to domain control is analogous to an NMS-to-EMS
     relationship.)

   - The entity responsible for creating the abstracted global view
     should support a push/pull model with respect to all its
     interfaces.  (Note that the two entities (the entity creating an
     integrated end-to-end view and the entity creating an abstracted
     global view) can be assumed by the same entity; this is an
     implementation issue.)

   - Hierarchical composition of integrated network views should be
     enabled by a common API between the NorthBound Interface of the
     Single Integrated End-to-End view (handled by the VNC) and Domain
     Control (handled by the PNC).

   - There is a need for a common API between each domain control and
     the entity that is responsible for creating a single integrated
     end-to-end network view.  At a minimum, the following items are
     required of the API:

     o Programmability of the API.
     o Multiple levels/granularities of abstraction of network
       resources (subject to policy and service need).

     o The abstraction of network resources should include customer
       end points and inter-domain gateway nodes/links.

     o Any physical network constraints (such as SRLG, link distance,
       etc.) should be reflected in the abstraction.

     o Domain preference and local policy (such as preferred peering
       point(s), preferred route, etc.).

     o Domain network capability (e.g., support of the push/pull
       model).

   - The entity responsible for abstracting the global view into a
     customer view should provide a programmable API to allow
     flexibility.  Abstraction might be provided by representing each
     domain as a virtual node (node abstraction) or as a set of
     virtual nodes and links (link abstraction).  Node abstraction
     creates a network topology composed of nodes representing each
     network domain and the inter-domain links between the border
     nodes of each domain.

     o Abstraction of the global view into a customer view should be
       provided to allow customers to dynamically request on-demand
       network services, including connectivity services.

     o The level of detail at which a customer is allowed to view the
       network is subject to negotiation between the customer and the
       operator.

4. References

   [ACTN-Frame] D. Ceccarelli, L. Fang, Y. Lee and D. Lopez,
             "Framework for Abstraction and Control of Transport
             Networks," draft-ceccarelli-actn-framework, work in
             progress.

   [ACTN-PS] Y. Lee, D. King, M. Boucadair, R. Jing, and L. Murillo,
             "Problem Statement for the Abstraction and Control of
             Transport Networks,"
             draft-leeking-actn-problem-statement, work in progress.

   [ACTN-Req] Y. Lee, D. Dhody, S. Belotti, K. Pithewan, and D.
             Ceccarelli, "Requirements for Abstraction and Control of
             Transport Networks," draft-lee-teas-actn-requirements,
             work in progress.

5. Acknowledgement

   The authors wish to thank Young Lee for the discussions on this
   document.

6. Contributors

Authors' Addresses

   Kwang-koog Lee
   KT
   Email: kwangkoog.lee@kt.com

   Hosong Lee
   KT
   Email: hosong.lee@kt.com

   Ricard Vilalta
   CTTC
   Email: ricard.vilalta@cttc.es

   Victor Lopez
   Telefonica
   Email: victor.lopezalvarez@telefonica.com