TEAS Working Group                            Daniele Ceccarelli (Ed)
Internet Draft                                                Ericsson
Intended status: Informational                          Young Lee (Ed)
Expires: November 28, 2018                                      Huawei

                                                           May 28, 2018

 Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-15

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane.  They
   also have a range of management and provisioning protocols to
   configure and activate network resources.  These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.  The term "Traffic Engineered network" refers to a
   network that uses any connection-oriented technology under the
   control of a distributed or centralized control plane to support
   dynamic provisioning of end-to-end connectivity.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN) to support virtual network
   services and connectivity services.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on November 28, 2018.
Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Overview
      2.1. Terminology
      2.2. VNS Model of ACTN
         2.2.1. Customers
         2.2.2. Service Providers
         2.2.3. Network Operators
   3. ACTN Base Architecture
      3.1. Customer Network Controller
      3.2. Multi-Domain Service Coordinator
      3.3. Provisioning Network Controller
      3.4. ACTN Interfaces
   4. Advanced ACTN Architectures
      4.1. MDSC Hierarchy
      4.2. Functional Split of MDSC Functions in Orchestrators
   5. Topology Abstraction Methods
      5.1. Abstraction Factors
      5.2. Abstraction Types
         5.2.1. Native/White Topology
         5.2.2. Black Topology
         5.2.3. Grey Topology
      5.3. Methods of Building Grey Topologies
         5.3.1. Automatic Generation of Abstract Topology by
                Configuration
         5.3.2. On-demand Generation of Supplementary Topology via
                Path Compute Request/Reply
      5.4. Hierarchical Topology Abstraction Example
      5.5. VN Recursion with Network Layers
   6. Access Points and Virtual Network Access Points
      6.1. Dual-Homing Scenario
   7. Advanced ACTN Application: Multi-Destination Service
      7.1. Pre-Planned End Point Migration
      7.2. On the Fly End-Point Migration
   8. Manageability Considerations
      8.1. Policy
      8.2. Policy Applied to the Customer Network Controller
      8.3. Policy Applied to the Multi-Domain Service Coordinator
      8.4. Policy Applied to the Provisioning Network Controller
   9. Security Considerations
      9.1. CNC-MDSC Interface (CMI)
      9.2. MDSC-PNC Interface (MPI)
   10. IANA Considerations
   11. References
      11.1. Informative References
   12. Contributors
   Authors' Addresses
   APPENDIX A - Example of MDSC and PNC Functions Integrated in a
                Service/Network Orchestrator

1. Introduction

   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a
   distributed or centralized control plane to support dynamic
   provisioning of end-to-end connectivity.  Traffic Engineered (TE)
   networks have a variety of mechanisms to facilitate the separation
   of the data plane and control plane, including distributed
   signaling for path setup and protection, centralized path
   computation for planning and traffic engineering, and a range of
   management and provisioning protocols to configure and activate
   network resources.  These mechanisms represent key technologies for
   enabling flexible and dynamic networking.  Some examples of
   networks that are in scope of this definition are optical networks,
   Multiprotocol Label Switching (MPLS) Transport Profile (MPLS-TP)
   networks [RFC5654], and MPLS-TE networks [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane.  This separation has been achieved for TE networks with
   the development of MPLS/GMPLS [RFC3945] and the Path Computation
   Element (PCE) [RFC4655].  One of the advantages of SDN is its
   logically centralized control regime that allows a global view of
   the underlying networks.  Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control.  For TE-based networks, a PCE may serve as a logically
   centralized path computation function.

   This document describes a set of management and control functions
   used to operate one or more TE networks to construct virtual
   networks that can be presented to customers and that are built from
   abstractions of the underlying TE networks.  For example, a link in
   the customer's network is constructed from a path or collection of
   paths in the underlying networks.  We call this set of functions
   "Abstraction and Control of Traffic Engineered Networks" (ACTN).

2. Overview

   Three key aspects that need to be solved by SDN are:

   .  Separation of service requests from service delivery so that the
      configuration and operation of a network is transparent from the
      point of view of the customer, but remains responsive to the
      customer's services and business needs.

   .  Network abstraction: As described in [RFC7926], abstraction is
      the process of applying policy to a set of information about a
      TE network to produce selective information that represents the
      potential ability to connect across the network.  The process of
      abstraction presents the connectivity graph in a way that is
      independent of the underlying network technologies,
      capabilities, and topology so that the graph can be used to plan
      and deliver network services in a uniform way.
   .  Coordination of resources across multiple independent networks
      and multiple technology layers to provide end-to-end services
      regardless of whether the networks use SDN or not.

   As networks evolve, the need to provide support for distinct
   services, separated service orchestration, and resource abstraction
   has emerged as a set of key requirements for operators.  In order
   to support multiple customers, each with its own view of and
   control of the server network, a network operator needs to
   partition (or "slice") the network resources or manage their
   sharing.  Network slices can be assigned to each customer for
   guaranteed usage, which is a step further than shared use of common
   network resources.

   Furthermore, each network represented to a customer can be built
   from virtualization of the underlying networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying network.

   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.  This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) and presenting virtualized networks to
   their customers.

   The ACTN framework described in this document facilitates:

   .  Abstraction of the underlying network resources to higher-layer
      applications and customers [RFC7926].

   .  Virtualization of particular underlying resources, whose
      selection criterion is the allocation of those resources to a
      particular customer, application, or service [ONF-ARCH].

   .  TE network slicing of infrastructure to meet specific customers'
      service requirements.

   .  Creation of an abstract environment allowing operators to view
      and control multi-domain networks as a single abstract network.

   .  The presentation to customers of networks as a virtual network
      via open and programmable interfaces.

2.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined, while others reference existing definitions:

   .  Domain: A domain [RFC4655] is any collection of network elements
      within a common sphere of address management or path computation
      responsibility.  Specifically, within this document, we mean a
      part of an operator's network that is under common management
      (i.e., under shared operational management using the same
      instances of a tool and the same policies).  Network elements
      will often be grouped into domains based on technology types,
      vendor profiles, and geographic proximity.

   .  Abstraction: This process is defined in [RFC7926].

   .  TE Network Slicing: In the context of ACTN, a TE network slice
      is a collection of resources that is used to establish a
      logically dedicated virtual network over one or more TE
      networks.  TE network slicing allows a network operator to
      provide dedicated virtual networks for applications/customers
      over a common network infrastructure.  The logically dedicated
      resources are a part of the larger common network infrastructure
      that is shared among various TE network slice instances, which
      are the end-to-end realization of TE network slicing and consist
      of a combination of physically or logically dedicated resources.

   .  Node: A node is a vertex on the graph representation of a TE
      topology.  In a physical network topology, a node corresponds to
      a physical network element (NE) such as a router.  In an
      abstract network topology, a node (sometimes called an abstract
      node) is a representation as a single vertex of one or more
      physical NEs and their connecting physical connections.  The
      concept of a node represents the ability to connect from any
      access to the node (a link end) to any other access to that
      node, although "limited cross-connect capabilities" may also be
      defined to restrict this functionality.  Network abstraction may
      be applied recursively, so a node in one topology may be created
      by applying abstraction to the nodes in the underlying topology.

   .  Link: A link is an edge on the graph representation of a TE
      topology.  Two nodes connected by a link are said to be
      "adjacent" in the TE topology.  In a physical network topology,
      a link corresponds to a physical connection.  In an abstract
      network topology, a link (sometimes called an abstract link) is
      a representation of the potential to connect a pair of points
      with certain TE parameters (see [RFC7926] for details).  Network
      abstraction may be applied recursively, so a link in one
      topology may be created by applying abstraction to the links in
      the underlying topology.

   .  Abstract Topology: The topology of abstract nodes and abstract
      links presented through the process of abstraction by a lower-
      layer network for use by a higher-layer network.

   .  Virtual Network (VN): A VN is a network provided by a service
      provider to a customer for the customer to use in any way it
      wants as though it were a physical network.  There are two views
      of a VN as follows:

      a) The VN can be abstracted as a set of edge-to-edge links (a
         Type 1 VN).  Each link is referred to as a VN member and is
         formed as an end-to-end tunnel across the underlying
         networks.  Such tunnels may be constructed by recursive
         slicing or abstraction of paths in the underlying networks
         and can encompass edge points of the customer's network,
         access links, intra-domain paths, and inter-domain links.

      b) The VN can also be abstracted as a topology of virtual nodes
         and virtual links (a Type 2 VN).  The operator needs to map
         the VN to actual resource assignment, which is known as
         virtual network embedding.  The nodes in this case include
         physical end points, border nodes, and internal nodes as well
         as abstracted nodes.  Similarly, the links include physical
         access links, inter-domain links, and intra-domain links as
         well as abstract links.

      Clearly, a Type 1 VN is a special case of a Type 2 VN.

   .  Access link: A link between a customer node and an operator
      node.

   .  Inter-domain link: A link between domains under distinct
      management administration.

   .  Access Point (AP): An AP is a logical identifier shared between
      the customer and the operator used to identify an access link.
      The AP is used by the customer when requesting a VNS.  Note that
      the term "TE Link Termination Point" (LTP) defined in [TE-Topo]
      describes the end points of links, while an AP is a common
      identifier for the link itself.

   .  VN Access Point (VNAP): A VNAP is the binding between an AP and
      a given VN.

   .  Server Network: As defined in [RFC7926], a server network is a
      network that provides connectivity for another network (the
      Client Network) in a client-server relationship.

2.2. VNS Model of ACTN

   A Virtual Network Service (VNS) is the service agreement between a
   customer and an operator to provide a VN.  When a VN is a simple
   connectivity between two points, the difference between a VNS and a
   connectivity service becomes blurred.  There are three types of VNS
   defined in this document.

   o  A Type 1 VNS refers to a VNS in which the customer is allowed to
      create and operate a Type 1 VN.

   o  Type 2a and 2b VNSs refer to VNSs in which the customer is
      allowed to create and operate a Type 2 VN.  With a Type 2a VNS,
      the VN is statically created at service configuration time, and
      the customer is not allowed to change the topology (e.g., by
      adding or deleting abstract nodes and links).  A Type 2b VNS is
      the same as a Type 2a VNS except that the customer is allowed to
      make dynamic changes to the initial topology created at service
      configuration time.

   VN Operations are functions that a customer can exercise on a VN
   depending on the agreement between the customer and the operator.

   o  VN Creation allows a customer to request the instantiation of a
      VN.  This could be through off-line pre-configuration or through
      dynamic requests specifying attributes of a Service Level
      Agreement (SLA) to satisfy the customer's objectives.

   o  Dynamic Operations allow a customer to modify or delete the VN.
      The customer can further act upon the virtual network to
      create/modify/delete virtual links and nodes.  These changes
      will result in subsequent tunnel management in the operator's
      networks.

   There are three key entities in the ACTN VNS model:

   -  Customers
   -  Service Providers
   -  Network Operators

   These entities are related in a three-tier model as shown in
   Figure 1.

                   +----------------------+
                   |       Customer       |
                   +----------------------+
                              |
           VNS     ||         |         /\  VNS
           Request ||         |         ||  Reply
                   \/         |         ||
                   +----------------------+
                   |   Service Provider   |
                   +----------------------+
                     /        |        \
                    /         |         \
                   /          |          \
                  /           |           \
   +------------------+ +------------------+ +------------------+
   |Network Operator 1| |Network Operator 2| |Network Operator 3|
   +------------------+ +------------------+ +------------------+

                  Figure 1: The Three Tier Model

   The commercial roles of these entities are described in the
   following sections.

2.2.1. Customers

   Basic customers include fixed residential users, mobile users, and
   small enterprises.  Each requires a small amount of resources and
   is characterized by steady requests (relatively time invariant).
   Basic customers do not modify their services themselves: if a
   service change is needed, it is performed by the provider as a
   proxy.

   Advanced customers include enterprises and governments.  Such
   customers ask for both point-to-point and multipoint connectivity
   with high resource demands varying significantly in time.  This is
   one of the reasons why a bundled service offering is not enough,
   and it is desirable to provide each advanced customer with a
   customized virtual network service.  Advanced customers may also
   have the ability to modify their service parameters within the
   scope of their virtualized environments.  The primary focus of ACTN
   is advanced customers.
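   The distinction between the VNS types and the VN operations
   described above can be made concrete with a small data model.  The
   following Python sketch is illustrative only: the class names,
   fields, and values are assumptions made for this example and are
   not defined by ACTN.

      from dataclasses import dataclass
      from enum import Enum
      from typing import List

      class VnsType(Enum):
          TYPE_1 = "type-1"    # customer operates edge-to-edge links
          TYPE_2A = "type-2a"  # topology fixed at configuration time
          TYPE_2B = "type-2b"  # topology that the customer may change

      @dataclass
      class VnsRequest:
          """A customer's VNS request as conveyed over the CMI."""
          vns_type: VnsType
          access_points: List[str]  # APs scoping the connectivity
          bandwidth_gbps: float     # one of the SLA attributes

          def allows_topology_change(self) -> bool:
              # Only a Type 2b VNS permits the customer to modify
              # the topology after service configuration time.
              return self.vns_type is VnsType.TYPE_2B

      # Example: a Type 1 VNS between two APs with 1 Gbps links.
      request = VnsRequest(VnsType.TYPE_1, ["AP1", "AP2"], 1.0)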
   As customers are geographically spread over multiple network
   operator domains, they have to interface to multiple operators and
   may have to support multiple virtual network services with
   different underlying objectives set by the network operators.  To
   enable these customers to support flexible and dynamic
   applications, they need to control their allocated virtual network
   resources in a dynamic fashion, and that means that they need a
   view of the topology that spans all of the network operators.
   Customers of a given service provider can in turn offer a service
   to other customers in a recursive way.

2.2.2. Service Providers

   In the scope of ACTN, service providers deliver VNSs to their
   customers.  Service providers may or may not own physical network
   resources (i.e., they may or may not be network operators as
   described in Section 2.2.3).  When a service provider is the same
   as the network operator, this is similar to existing VPN models
   applied to a single operator, although it may be hard to use this
   approach when the customer spans multiple independent network
   operator domains.

   When network operators supply only infrastructure, while distinct
   service providers interface to the customers, the service providers
   are themselves customers of the network infrastructure operators.
   One service provider may need to work with multiple independent
   network operators because its end-users span geographically across
   multiple network operator domains.  In some cases, a service
   provider is also a network operator when it owns the network
   infrastructure on which the service is provided.

2.2.3. Network Operators

   Network operators are the infrastructure operators that provision
   and provide network resources to their customers.  The layered
   model described in this architecture separates the concerns of
   network operators and customers, with service providers acting as
   aggregators of customer requests.

3. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is based on a 3-tier reference model and
   allows for hierarchy and recursion.  The main functionalities
   within an ACTN system are:

   .  Multi-domain coordination: This function oversees the specific
      aspects of different domains and builds a single abstracted end-
      to-end network topology in order to coordinate end-to-end path
      computation and path/service provisioning.  Domain sequence path
      calculation/determination is also a part of this function.

   .  Abstraction: This function provides an abstracted view of the
      underlying network resources for use by the customer - a
      customer may be the client or a higher-level controller entity.
      This function includes network path computation based on
      customer service connectivity request constraints, path
      computation based on the global network-wide abstracted
      topology, and the creation of an abstracted view of network
      resources allocated to each customer.  These operations depend
      on customer-specific network objective functions and customer
      traffic profiles.

   .  Customer mapping/translation: This function maps customer
      requests/commands into network provisioning requests that can be
      sent from the Multi-Domain Service Coordinator (MDSC) to the
      Provisioning Network Controller (PNC) according to business
      policies provisioned statically or dynamically at the Operations
      Support System (OSS)/Network Management System (NMS).
      Specifically, it provides mapping and translation of a
      customer's service request into a set of parameters that are
      specific to a network type and technology such that the network
      configuration process is made possible.

   .  Virtual service coordination: This function translates customer
      service-related information into virtual network service
      operations in order to seamlessly operate virtual networks while
      meeting a customer's service requirements.  In the context of
      ACTN, service/virtual service coordination includes a number of
      service orchestration functions such as multi-destination load
      balancing and guarantees of service quality, bandwidth, and
      throughput.  It also includes notifications for service fault
      and performance degradation and so forth.

   The base ACTN architecture defines three controller types and the
   corresponding interfaces between these controllers.  The following
   types of controller are shown in Figure 2:

   .  CNC - Customer Network Controller
   .  MDSC - Multi-Domain Service Coordinator
   .  PNC - Provisioning Network Controller

   Figure 2 also shows the following interfaces:

   .  CMI - CNC-MDSC Interface
   .  MPI - MDSC-PNC Interface
   .  SBI - Southbound Interface

          +---------+          +---------+          +---------+
          |   CNC   |          |   CNC   |          |   CNC   |
          +---------+          +---------+          +---------+
               \                    |                    /
                \                   |                   /
   Boundary =====\==================|==================/=======
   Between        \                 |                  /
   Customer &      -----------      | CMI   -----------
   Network          \               |       /
   Operator          \              |      /
                      +---------------+
                      |     MDSC      |
                      +---------------+
                     /        |        \
           ----------         | MPI     ----------
          /                   |                   \
     +-------+            +-------+            +-------+
     |  PNC  |            |  PNC  |            |  PNC  |
     +-------+            +-------+            +-------+
      | SBI               /   |                /      \
      |                  /    | SBI       SBI /        \
    ---------        -----    |              /          \
   (         )      (     )   |             /            \
  -  Control  -     ( Phys.)  |            /            -----
 (    Plane    )    (  Net )  |           /            (     )
 (   Physical  )     -----    |          /             ( Phys.)
 (   Network   )            -----     -----            (  Net )
  -           -            (     )   (     )            -----
   (         )             ( Phys.)  ( Phys.)
    ---------              (  Net )  (  Net )
                            -----     -----

                  Figure 2: ACTN Base Architecture

   Note that this is a functional architecture: an implementation and
   deployment might collocate one or more of the functional
   components.  Figure 2 shows a case where the service provider is
   also a network operator.

3.1. Customer Network Controller

   A Customer Network Controller (CNC) is responsible for
   communicating a customer's VNS requirements to the network operator
   over the CNC-MDSC Interface (CMI).  It has knowledge of the end-
   points associated with the VNS (expressed as APs), the service
   policy, and other QoS information related to the service.

   As the Customer Network Controller directly interfaces to the
   applications, it understands multiple application requirements and
   their service needs.  The capability of a CNC beyond its CMI role
   is outside the scope of ACTN and may be implemented in different
   ways.  For example, the CNC may in fact be a controller or part of
   a controller in the customer's domain, or the CNC functionality
   could also be implemented as part of a service provider's portal.

3.2. Multi-Domain Service Coordinator

   A Multi-Domain Service Coordinator (MDSC) is a functional block
   that implements all of the ACTN functions listed in Section 3 and
   described further in Section 4.2.  Two functions of the MDSC,
   namely multi-domain coordination and virtualization/abstraction,
   are referred to as network-related functions, while the other two
   functions, namely customer mapping/translation and virtual service
   coordination, are referred to as service-related functions.  The
   MDSC sits at the center of the ACTN model between the CNC that
   issues connectivity requests and the Provisioning Network
   Controllers (PNCs) that manage the network resources.

   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs.  The MDSC envelops the instantiation of the right
   technology and network control to meet business criteria.  In
   essence, it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and PNCs.

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers) but
   all using the resources of a common network infrastructure
   operator.

3.3. Provisioning Network Controller

   The Provisioning Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and collecting information about the topology (either
   raw or abstracted).

   The PNC functions can be implemented as part of an SDN domain
   controller, a Network Management System (NMS), an Element
   Management System (EMS), an active PCE-based controller
   [Centralized], or any other means to dynamically control a set of
   nodes that implements a northbound interface from the standpoint of
   the nodes (such an interface is out of the scope of this document).
   A PNC domain includes all the resources under the control of a
   single PNC.  It can be composed of different routing domains and
   administrative domains, and the resources may come from different
   layers.  The interconnection between PNC domains is illustrated in
   Figure 3.

           _______                              _______
         _(       )_                          _(       )_
       _(           )_                      _(           )_
      (               )    Border          (               )
     (   PNC        ------   Link   ------        PNC      )
     (   Domain X  |Border|========|Border|  Domain Y      )
     (             | Node |        | Node |                )
     (              ------          ------                 )
      (_           _)                      (_             _)
        (_       _)                          (_         _)
          (_______)                            (_______)

                    Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   operators due to security and policy concerns.  Therefore, the
   network has to provide open, programmable interfaces, through which
   customer applications can create, replace, and modify virtual
   network resources and services in an interactive, flexible, and
   dynamic fashion.
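   The division of labor between the three controller types across
   these programmable interfaces can be illustrated in simplified,
   pseudo-operational form.  The following Python sketch is an
   assumption-laden illustration: ACTN does not define a programming
   API, and a real CMI or MPI would be realized with protocol messages
   and YANG models rather than method calls.

      class PNC:
          """Provisioning Network Controller for a single domain."""
          def __init__(self, domain):
              self.domain = domain

          def get_abstract_topology(self):
              # Export an abstracted view over the MPI, hiding
              # technology-specific detail according to policy.
              return {"domain": self.domain,
                      "border_nodes": ["B1", "B2"]}

          def provision(self, segment):
              # Configure network elements via the out-of-scope SBI.
              print(f"PNC {self.domain}: provisioning {segment}")

      class MDSC:
          """Multi-Domain Service Coordinator."""
          def __init__(self, pncs):
              self.pncs = pncs

          def create_vns(self, vns_request):
              # Multi-domain coordination: build a single abstracted
              # end-to-end topology from the per-domain views.
              views = [p.get_abstract_topology() for p in self.pncs]
              # Customer mapping/translation: turn the CMI service
              # request into per-domain provisioning over the MPI.
              for pnc, view in zip(self.pncs, views):
                  pnc.provision({"vns": vns_request["name"],
                                 "via": view["border_nodes"]})

      # A CNC would invoke the MDSC over the CMI, e.g.:
      mdsc = MDSC([PNC("X"), PNC("Y")])
      mdsc.create_vns({"name": "vn-9", "endpoints": ["AP1", "AP2"]})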
   Three interfaces exist in the ACTN architecture, as shown in
   Figure 2.

   .  CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC
      and an MDSC.  The CMI is a business boundary between the
      customer and the network operator.  It is used to request a VNS
      for an application.  All service-related information is conveyed
      over this interface (such as the VNS type, topology, bandwidth,
      and service constraints).  Most of the information over this
      interface is agnostic of the technology used by network
      operators, but there are some cases (e.g., access link
      configuration) where it is necessary to specify technology-
      specific details.

   .  MPI: The MDSC-PNC Interface (MPI) is an interface between an
      MDSC and a PNC.  It communicates requests for new connectivity
      or for bandwidth changes in the physical network.  In multi-
      domain environments, the MDSC needs to communicate with multiple
      PNCs, each responsible for control of a domain.  The MPI
      presents an abstracted topology to the MDSC, hiding technology-
      specific aspects of the network and hiding topology according to
      policy.

   .  SBI: The Southbound Interface (SBI) is out of scope of ACTN.
      Many different SBIs have been defined for different
      environments, technologies, standards organizations, and
      vendors.  It is shown in Figure 2 for reference reasons only.

4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, and the need to put
   together different layers and technologies in the network.  In the
   case where there is a hierarchy of MDSCs, we introduce the terms
   higher-level MDSC (MDSC-H) and lower-level MDSC (MDSC-L).  The
   interface between them is a recursion of the MPI.  An
   implementation of an MDSC-H makes provisioning requests as normal
   using the MPI, but an MDSC-L must be able to receive requests both
   at the CMI and at the MPI.  The hierarchy of MDSCs can be seen in
   Figure 4.

   Another implementation choice could foresee the usage of an MDSC-L
   for all the PNCs related to a given technology (e.g., Internet
   Protocol (IP)/Multiprotocol Label Switching (MPLS)), a different
   MDSC-L for the PNCs related to another technology (e.g., Optical
   Transport Network (OTN)/Wavelength Division Multiplexing (WDM)),
   and an MDSC-H to coordinate them.

                      +--------+
                      |  CNC   |
                      +--------+
                           |              +-----+
                           | CMI          | CNC |
                      +----------+        +-----+
               -------|  MDSC-H  |----       |
              |       +----------+    |      | CMI
          MPI |              MPI |    |      |
              |                  |    |      |
         +---------+          +---------+
         | MDSC-L  |          | MDSC-L  |
         +---------+          +---------+
      MPI  |     |              |     |
           |     |              |     |
         -----   -----        -----   -----
        | PNC | | PNC |      | PNC | | PNC |
         -----   -----        -----   -----

                  Figure 4: MDSC Hierarchy

   The hierarchy of MDSCs can be recursive, where an MDSC-H is in turn
   an MDSC-L to a higher-level MDSC-H.

4.2. Functional Split of MDSC Functions in Orchestrators

   An implementation choice could separate the MDSC functions into two
   groups: one group for service-related functions and the other for
   network-related functions.  This enables the implementation of a
   service orchestrator that provides the service-related functions of
   the MDSC and a network orchestrator that provides the network-
   related functions of the MDSC.  This split is consistent with the
   Yet Another Next Generation (YANG) service model architecture
   described in [Service-YANG].  Figure 5 depicts this and shows how
   the ACTN interfaces may map to YANG models.

          +--------------------+
          |      Customer      |
          |       +-----+      |
          |       | CNC |      |
          |       +-----+      |
          +--------------------+
        CMI  |  Customer Service Model
             |
          +---------------------------------------+
          |                     Service           |
   ********|***********************  Orchestrator |
   * MDSC |  +-----------------+  *                |
   *      |  | Service-related |  *                |
   *      |  |    Functions    |  *                |
   *      |  +-----------------+  *                |
   *      +-----------------------*----------------+
   *                              *  | Service Delivery
   *                              *  | Model
   *      +-----------------------*----------------+
   *      |                       *  Network       |
   *      |  +-----------------+  *  Orchestrator  |
   *      |  | Network-related |  *                |
   *      |  |    Functions    |  *                |
   *      |  +-----------------+  *                |
   ********|***********************                |
          +---------------------------------------+
        MPI  |  Network Configuration
             |  Model
          +------------------------+
          |  Domain                |
          |  +------+  Controller  |
          |  | PNC  |              |
          |  +------+              |
          +------------------------+
        SBI  |  Device Configuration
             |  Model
          +--------+
          | Device |
          +--------+

     Figure 5: ACTN Architecture in the Context of the YANG Service
                                Models

5. Topology Abstraction Methods

   Topology abstraction is described in [RFC7926].  This section
   discusses topology abstraction factors, types, and their context in
   the ACTN architecture.

   Abstraction in ACTN is performed by the PNC when presenting
   available topology to the MDSC, or by an MDSC-L when presenting
   topology to an MDSC-H.  This function is different from the
   creation of a VN (and particularly a Type 2 VN), which is not
   abstraction but construction of virtual resources.

5.1. Abstraction Factors

   As discussed in [RFC7926], abstraction is tied to the policy of the
   networks.  For instance, per an operational policy, the PNC would
   not provide any technology-specific details (e.g., optical
   parameters for a Wavelength Switched Optical Network (WSON)) in the
   abstract topology it provides to the MDSC.  Similarly, the policy
   of the networks may determine the abstraction type, as described in
   Section 5.2.

   There are many factors that may impact the choice of abstraction:

   -  Abstraction depends on the nature of the underlying domain
      networks.  For instance, packet networks may be abstracted with
      fine granularity, while abstraction of optical networks depends
      on the switching units (such as wavelengths) and the end-to-end
      continuity and cross-connect limitations within the network.

   -  Abstraction also depends on the capability of the PNCs.  As
      abstraction requires hiding details of the underlying network
      resources, the PNC's capability to run algorithms impacts the
      feasibility of abstraction.  Some PNCs may not have the ability
      to abstract the native topology, while other PNCs may have the
      ability to use sophisticated algorithms.

   -  Abstraction is a tool that can improve scalability.  Where the
      native network resource information is of large size, there is a
      specific scaling benefit to abstraction.

   -  The proper abstraction level may depend on the frequency of
      topology updates, and vice versa.

   -  The nature of the MDSC's support for technology-specific
      parameters impacts the degree/level of abstraction.  If the MDSC
      is not capable of handling such parameters, then a higher level
      of abstraction is needed.

   -  In some cases, the PNC is required to hide key internal
      topological data from the MDSC.  Such confidentiality can be
      achieved through abstraction.

5.2. Abstraction Types

   This section defines the following three types of topology
   abstraction:

   .  Native/White Topology (Section 5.2.1)
   .  Black Topology (Section 5.2.2)
   .  Grey Topology (Section 5.2.3)

5.2.1. Native/White Topology

   This is a case where the PNC provides the actual network topology
   to the MDSC without any hiding or filtering of information; i.e.,
   no abstraction is performed.  In this case, the MDSC has the full
   knowledge of the underlying network topology and can operate on it
   directly.

5.2.2. Black Topology

   A black topology replaces a full network with a minimal
   representation of the edge-to-edge topology without disclosing any
   node-internal connectivity information.  The entire domain network
   may be abstracted as a single abstract node with the network's
   access/egress links appearing as the ports of the abstract node and
   the implication that any port can be 'cross-connected' to any
   other.  Figure 6 depicts a native topology and the corresponding
   black topology with one virtual node and the inter-domain links.
   In this case, the MDSC has to make a provisioning request to the
   PNCs to establish the port-to-port connection.  If there is a large
   number of inter-connected domains, this abstraction method may
   impose a heavy coordination load at the MDSC level in order to find
   an optimal end-to-end path, since the abstraction hides so much
   information that it is not possible to determine whether an end-to-
   end path is feasible without asking each PNC to set up each path
   fragment.  For this reason, the MPI might need to be enhanced to
   allow the PNCs to be queried for the practicality and
   characteristics of paths across the abstract node.

      .....................................
      :            PNC Domain             :
      :  +--+     +--+     +--+     +--+  :
   ------+  +-----+  +-----+  +-----+  +------
      :  ++-+     ++-+     +-++     +-++  :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :  ++-+     ++-+     +-++     +-++  :
   ------+  +-----+  +-----+  +-----+  +------
      :  +--+     +--+     +--+     +--+  :
      :....................................

                  +----------+
               ---+          +---
                  | Abstract |
                  |   Node   |
               ---+          +---
                  +----------+

      Figure 6: Native Topology with Corresponding Black Topology
                   Expressed as an Abstract Node

5.2.3. Grey Topology

   A grey topology represents a compromise between black and white
   topologies from a granularity point of view.  In this case, the PNC
   exposes an abstract topology containing all of the PNC domain's
   border nodes and an abstraction of the connectivity between those
   border nodes.  This abstraction may contain either physical or
   abstract nodes/links.

   Two types of grey topology are identified:

   .  In a type A grey topology, border nodes are connected by a full
      mesh of TE links (see Figure 7).

   .  In a type B grey topology, border nodes are connected over a
      more detailed network comprising internal abstract nodes and
      abstracted links.  This mode of abstraction supplies the MDSC
      with more information about the internals of the PNC domain and
      allows it to make more informed choices about how to route
      connectivity over the underlying network.
      .....................................
      :            PNC Domain             :
      :  +--+     +--+     +--+     +--+  :
   ------+  +-----+  +-----+  +-----+  +------
      :  ++-+     ++-+     +-++     +-++  :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :  ++-+     ++-+     +-++     +-++  :
   ------+  +-----+  +-----+  +-----+  +------
      :  +--+     +--+     +--+     +--+  :
      :....................................

             ....................
             : Abstract Network :
             :                  :
             :   +--+    +--+   :
          -------+  +----+  +-------
             :   ++-+    +-++   :
             :    | \    / |    :
             :    |  \  /  |    :
             :    |   \/   |    :
             :    |   /\   |    :
             :    |  /  \  |    :
             :   ++-+    +-++   :
          -------+  +----+  +-------
             :   +--+    +--+   :
             :..................:

      Figure 7: Native Topology with Corresponding Grey Topology

5.3. Methods of Building Grey Topologies

   This section discusses two different methods of building a grey
   topology:

   .  Automatic generation of abstract topology by configuration
      (Section 5.3.1)

   .  On-demand generation of supplementary topology via path
      computation request/reply (Section 5.3.2)

5.3.1. Automatic Generation of Abstract Topology by Configuration

   Automatic generation is based on the abstraction/summarization of
   the whole domain by the PNC and its advertisement on the MPI.  The
   level of abstraction can be decided based on PNC configuration
   parameters (e.g., "provide the potential connectivity between any
   PE and any ASBR in an MPLS-TE network").

   Note that the configuration parameters for this abstract topology
   can include available bandwidth, latency, or any combination of
   defined parameters.  How to generate such information is beyond the
   scope of this document.

   This abstract topology may need to be periodically or incrementally
   updated when there is a change in the underlying network, or in the
   use of the network resources, that makes connectivity more or less
   available.

5.3.2. On-demand Generation of Supplementary Topology via Path Compute
       Request/Reply

   While an abstract topology is generated and updated automatically
   by configuration as explained in Section 5.3.1, additional
   supplementary topology may be obtained by the MDSC via a path
   compute request/reply mechanism.

   The abstract topology advertisements from PNCs give the MDSC the
   border node/link information for each domain.  Under this scenario,
   when the MDSC needs to create a new VN, the MDSC can issue path
   computation requests to PNCs with constraints matching the VN
   request, as described in [ACTN-YANG].  An example is provided in
   Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2.
   The MDSC could use two different inter-domain links to get from
   domain X to domain Y, but in order to choose the best end-to-end
   path, it needs to know what domains X and Y can offer in terms of
   connectivity and constraints between the PE nodes and the border
   nodes.

        -------                      --------
       (       )                    (        )
      -  BrdrX.1--------------------BrdrY.1  -
     (+---+     )                  (     +---+)
   -+-(|PE1| Dom.X )              ( Dom.Y |PE2|)-+-
    |  (+---+   )                  (     +---+)  |
   AP1 -  BrdrX.2--------------------BrdrY.2  - AP2
       (       )                    (        )
        -------                      --------

               Figure 8: A Multi-Domain Example

   The MDSC issues a path computation request to PNC.X asking for
   potential connectivity between PE1 and border node BrdrX.1 and
   between PE1 and BrdrX.2, with related objective functions and TE
   metric constraints.  A similar request for connectivity from the
   border nodes in domain Y to PE2 will be issued to PNC.Y.  The MDSC
   merges the results to compute the optimal end-to-end path,
   including the inter-domain links, as sketched below.
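   The merge step can be sketched as follows.  This illustrative
   Python fragment assumes that each PNC returns a cost for each
   requested intra-domain segment and that the MDSC knows the cost of
   the two inter-domain links; the names and numbers are invented for
   this example.

      # Costs returned by PNC.X for paths from PE1 to each border node.
      domain_x = {"BrdrX.1": 10, "BrdrX.2": 12}
      # Costs returned by PNC.Y for paths from each border node to PE2.
      domain_y = {"BrdrY.1": 7, "BrdrY.2": 4}
      # Inter-domain link costs known to the MDSC.
      inter_domain = {("BrdrX.1", "BrdrY.1"): 2,
                      ("BrdrX.2", "BrdrY.2"): 5}

      # Merge the per-domain results to pick the cheapest end-to-end
      # path from PE1 (AP1) to PE2 (AP2).
      best = min(
          ((x, y, domain_x[x] + cost + domain_y[y])
           for (x, y), cost in inter_domain.items()),
          key=lambda option: option[2],
      )
      print(best)  # -> ('BrdrX.1', 'BrdrY.1', 19)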
   The MDSC can use the result of this computation to request the PNCs
   to provision the underlying networks, and the MDSC can then use the
   end-to-end path as a virtual link in the VN it delivers to the
   customer.

5.4. Hierarchical Topology Abstraction Example

   This section illustrates how topology abstraction operates at
   different levels of a hierarchy of MDSCs, as shown in Figure 9.

                            +-----+
                            | CNC |   CNC wants to create a VN
                            +-----+   between CE A and CE B
                               |
                               |
                   +-----------------------+
                   |        MDSC-H         |
                   +-----------------------+
                        /           \
                       /             \
               +---------+       +---------+
               | MDSC-L1 |       | MDSC-L2 |
               +---------+       +---------+
                /       \         /       \
               /         \       /         \
           +----+      +----+  +----+     +----+
   CE A o--|PNC1|      |PNC2|  |PNC3|     |PNC4|--o CE B
           +----+      +----+  +----+     +----+

   Virtual Network Delivered to CNC

        CE A o==============o CE B

   Topology operated on by MDSC-H

        CE A o----o==o==o===o----o CE B

   Topology operated on by MDSC-L1   Topology operated on by MDSC-L2

             _       _                        _       _
            ( )     ( )                      ( )     ( )
           (   )   (   )                    (   )   (   )
   CE A o--(o---o)==(o---o)==Dom.3    Dom.2==(o---o)==(o---o)--o CE B
           (   )   (   )                    (   )   (   )
            (_)     (_)                      (_)     (_)

   Actual Topology
             ___        ___        ___        ___
            (   )      (   )      (   )      (   )
            ( o )      ( o )      (o--o)     ( o )
           ( / \ )    ( |\  )    ( |  | )   ( / \ )
   CE A o--(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)--o CE B
           ( \ / )    ( | |/ )   ( |  | )   ( \ / )
            ( o )      (o-o )     ( o--o)    ( o )
            (___)      (___)      (___)      (___)

           Domain 1    Domain 2   Domain 3   Domain 4

   Where
      o   is a node
      --- is a link
      === is a border link

     Figure 9: Illustration of Hierarchical Topology Abstraction

   In the example depicted in Figure 9, there are four domains under
   the control of PNCs PNC1, PNC2, PNC3, and PNC4.  MDSC-L1 controls
   PNC1 and PNC2, while MDSC-L2 controls PNC3 and PNC4.  Each of the
   PNCs provides a grey topology abstraction that presents only border
   nodes and links across and outside the domain.  The abstract
   topology that MDSC-L1 operates on is a combination of the two
   topologies from PNC1 and PNC2.  Likewise, the abstract topology
   that MDSC-L2 operates on is shown in Figure 9.  Both MDSC-L1 and
   MDSC-L2 provide a black topology abstraction to MDSC-H in which
   each PNC domain is presented as a single virtual node.  MDSC-H
   combines these two topologies to create the abstract topology on
   which it operates.  MDSC-H sees the whole four-domain network as
   four virtual nodes connected via virtual links.

5.5. VN Recursion with Network Layers

   In some cases, the VN supplied to a customer may be built using
   resources from different technology layers operated by different
   operators.  For example, one operator may run a packet TE network
   and use optical connectivity provided by another operator.

   As shown in Figure 10, a customer asks for end-to-end connectivity
   between CE A and CE B: a virtual network.  The customer's CNC makes
   a request to Operator 1's MDSC.  The MDSC works out which network
   resources need to be configured and sends instructions to the
   appropriate PNCs.  However, the link between Q and R is a virtual
   link supplied by Operator 2: Operator 1 is a customer of Operator
   2.

   To support this, Operator 1 has a CNC that communicates with
   Operator 2's MDSC.
   Note that Operator 1's CNC in Figure 10 is a functional component
   that does not dictate implementation: it may be embedded in a PNC.

   Virtual    CE A o===============================o CE B
   Network

                       -----        CNC wants to create a VN
   Customer           | CNC |       between CE A and CE B
                       -----
                         :
   ***********************************************
                         :
   Operator 1     ---------------------------
                 |           MDSC            |
                  ---------------------------
                    :         :          :
                    :         :          :
                  -----  -------------  -----
                 | PNC || PNC         || PNC |
                  -----  -------------  -----
                   :      :    :    :      :
   Higher          v      v    :    v      v
   Layer    CE A o---P-----Q===========R-----S---o CE B
   Network                 |   :     |
                           |   :     |
                           | -----   |
                           || CNC |  |
                           | -----   |
                           |   :     |
   ***********************************************
                           |   :     |
   Operator 2              | ------  |
                           || MDSC | |
                           | ------  |
                           |   :     |
                           | ------- |
                           || PNC  | |
                           | ------- |
                           \ :  :  : /
   Lower                    \v  v  v/
   Layer                     X--Y--Z
   Network

   Where
      --- is a link
      === is a virtual link

           Figure 10: VN Recursion with Network Layers

6. Access Points and Virtual Network Access Points

   In order to identify the connections between the customer's sites
   and the TE networks, and to scope the connectivity requested in the
   VNS, the CNC and the MDSC refer to the connections using the Access
   Point (AP) construct shown in Figure 11.

                     -------------
                    (             )
                   -               -
    +---+   X     (                 )     Z   +---+
    |CE1|---+----(                   )---+---|CE2|
    +---+   |     (                 )    |   +---+
           AP1     -               -    AP2
                    (             )
                     -------------

                Figure 11: Customer View of APs

   Let us take as an example the scenario shown in Figure 11.  CE1 is
   connected to the network via a 10 Gbps link and CE2 via a 40 Gbps
   link.  Before the creation of any VN between AP1 and AP2, the
   customer view can be summarized as shown in Table 1.

         +----------+------------------------+
         |End Point | Access Link Bandwidth  |
   +-----+----------+----------+-------------+
   |AP id| CE,port  | MaxResBw | AvailableBw |
   +-----+----------+----------+-------------+
   | AP1 |CE1,portX | 10 Gbps  | 10 Gbps     |
   +-----+----------+----------+-------------+
   | AP2 |CE2,portZ | 40 Gbps  | 40 Gbps     |
   +-----+----------+----------+-------------+

               Table 1: AP - Customer View

   On the other hand, what the operator sees is shown in Figure 12.

          -------                  -------
         (       )                (       )
        -         -              -         -
    W  (+---+      )            (      +---+)  Y
    -+--(|PE1| Dom.X )--------( Dom.Y |PE2|)--+-
     |  (+---+      )            (      +---+)  |
    AP1  -         -              -         -  AP2
         (       )                (       )
          -------                  -------

              Figure 12: Operator View of the AP

   This results in the summarization shown in Table 2.

         +----------+------------------------+
         |End Point | Access Link Bandwidth  |
   +-----+----------+----------+-------------+
   |AP id| PE,port  | MaxResBw | AvailableBw |
   +-----+----------+----------+-------------+
   | AP1 |PE1,portW | 10 Gbps  | 10 Gbps     |
   +-----+----------+----------+-------------+
   | AP2 |PE2,portY | 40 Gbps  | 40 Gbps     |
   +-----+----------+----------+-------------+

               Table 2: AP - Operator View

   A Virtual Network Access Point (VNAP) needs to be defined as the
   binding between an AP and a VN.  It is used to allow for different
   VNs to start from the same AP.  It also allows for traffic
   engineering on the access and/or inter-domain links (e.g., keeping
   track of bandwidth allocation).  A different VNAP is created on an
   AP for each VN.
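   The bookkeeping implied by this definition can be sketched as
   follows; the scenario in Table 3 below applies the same arithmetic.
   This Python fragment is illustrative only, and the class and field
   names are assumptions of this example.

      class AccessPoint:
          """An AP with its access-link bandwidth and its VNAPs."""
          def __init__(self, ap_id, pe_port, max_bw_gbps):
              self.ap_id = ap_id
              self.pe_port = pe_port
              self.max_bw = max_bw_gbps
              self.vnaps = {}  # VN identifier -> reserved bandwidth

          @property
          def available_bw(self):
              return self.max_bw - sum(self.vnaps.values())

          def create_vnap(self, vn_id, bw_gbps):
              # A different VNAP is created on the AP for each VN.
              if bw_gbps > self.available_bw:
                  raise ValueError("insufficient access-link bandwidth")
              self.vnaps[vn_id] = bw_gbps
              return f"VNAP{self.ap_id[-1]}.{vn_id}"

      # AP1 (10 Gbps) hosts VNAP1.9 (1 Gbps) and VNAP1.5 (2 Gbps).
      ap1 = AccessPoint("AP1", "PE1,portW", 10.0)
      ap1.create_vnap(9, 1.0)
      ap1.create_vnap(5, 2.0)
      print(ap1.available_bw)  # -> 7.0, as in Table 3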
   In this simple scenario, we suppose we want to create two virtual
   networks: the first with VN identifier 9 between AP1 and AP2 with a
   bandwidth of 1 Gbps, and the second with VN identifier 5, again
   between AP1 and AP2, with a bandwidth of 2 Gbps.

   The operator view would evolve as shown in Table 3.

             +----------+------------------------+
             |End Point | Access Link/VNAP Bw    |
   +---------+----------+----------+-------------+
   |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
   +---------+----------+----------+-------------+
   |AP1      |PE1,portW | 10 Gbps  | 7 Gbps      |
   | -VNAP1.9|          |  1 Gbps  | N.A.        |
   | -VNAP1.5|          |  2 Gbps  | N.A.        |
   +---------+----------+----------+-------------+
   |AP2      |PE2,portY | 40 Gbps  | 37 Gbps     |
   | -VNAP2.9|          |  1 Gbps  | N.A.        |
   | -VNAP2.5|          |  2 Gbps  | N.A.        |
   +---------+----------+----------+-------------+

      Table 3: AP and VNAP - Operator View after VNS Creation

6.1. Dual-Homing Scenario

   Often there is a dual-homing relationship between a CE and a pair
   of PEs.  This case needs to be supported by the definition of VNs,
   APs, and VNAPs.  Suppose CE1 is connected to two different PEs in
   the operator domain via AP1 and AP2 and that the customer needs
   5 Gbps of bandwidth between CE1 and CE2.  This is shown in
   Figure 13.

                    ____________
       AP1         (            )         AP3
        -------(PE1)            (PE3)-------
     W /           (            )           \ X
   +---+/          (            )            \+---+
   |CE1|           (            )             |CE2|
   +---+\          (            )            /+---+
     Y \           (            )           / Z
        -------(PE2)            (PE4)-------
       AP2         (____________)

               Figure 13: Dual-Homing Scenario

   In this case, the customer will request a VN between AP1, AP2, and
   AP3, specifying a dual-homing relationship between AP1 and AP2.  As
   a consequence, no traffic will flow between AP1 and AP2.  The dual-
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).

   The customer view is shown in Table 4.

             +----------+------------------------+
             |End Point | Access Link/VNAP Bw    |
   +---------+----------+----------+-------------+-----------+
   |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
   +---------+----------+----------+-------------+-----------+
   |AP1      |CE1,portW | 10 Gbps  | 5 Gbps      |           |
   | -VNAP1.9|          |  5 Gbps  | N.A.        | VNAP2.9   |
   +---------+----------+----------+-------------+-----------+
   |AP2      |CE1,portY | 40 Gbps  | 35 Gbps     |           |
   | -VNAP2.9|          |  5 Gbps  | N.A.        | VNAP1.9   |
   +---------+----------+----------+-------------+-----------+
   |AP3      |CE2,portX | 50 Gbps  | 45 Gbps     |           |
   | -VNAP3.9|          |  5 Gbps  | N.A.        | NONE      |
   +---------+----------+----------+-------------+-----------+

       Table 4: Dual-Homing - Customer View after VN Creation

7. Advanced ACTN Application: Multi-Destination Service

   A further advanced application of ACTN is in the case of Data
   Center selection, where the customer requires the Data Center
   selection to be based on the network status; this is referred to as
   Multi-Destination in [ACTN-REQ].  In terms of ACTN, a CNC could
   request a VNS between a set of source APs and destination APs and
   leave it up to the network (MDSC) to decide which source and
   destination access points should be used to set up the VNS.  The
   candidate list of source and destination APs is decided by a CNC
   (or an entity outside of ACTN) based on certain factors which are
   outside the scope of ACTN.
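   The selection left to the MDSC can be sketched as follows.  The
   metric values and names in this Python fragment are invented for
   illustration; in practice, the MDSC would derive them from the
   abstract topologies and path computations described in Section 5.

      # Candidate destination APs supplied by the CNC, and the path
      # metric the MDSC computed (with the PNCs) from AP1 to each.
      candidates = {"AP2 (DC-A)": 12, "AP3 (DC-B)": 30, "AP4 (DC-C)": 17}

      # The MDSC selects the best destination AP based on constraints,
      # optimization criteria, policies, etc.; here, the lowest metric.
      best_ap = min(candidates, key=candidates.get)
      print(best_ap)  # -> 'AP2 (DC-A)'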
   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions such as orchestration or service
   setup requirements.  These further actions are outside the scope of
   ACTN.

   Consider a case as shown in Figure 14, where three data centers are
   available, but the customer requires the data center selection to
   be based on the network status and the connectivity service setup
   between AP1 (CE1) and one of the destination APs (AP2 (DC-A), AP3
   (DC-B), and AP4 (DC-C)).  The MDSC (in coordination with PNCs)
   would select the best destination AP based on the constraints,
   optimization criteria, policies, etc., and set up the connectivity
   service (virtual network).

              -------            -------
             (       )          (       )
            -         -        -         -
   +---+   (           )      (           )   +----+
   |CE1|---+--( Domain X )----( Domain Y )--+--|DC-A|
   +---+   |   (           )  (           ) |  +----+
          AP1   -         -    -         - AP2
               (       )        (       )
                ---+---          ---+---
                   |                |
                  AP3-+            AP4-+
                   |                |
                +----+           +----+
                |DC-B|           |DC-C|
                +----+           +----+

       Figure 14: End-Point Selection Based on Network Status

7.1. Pre-Planned End Point Migration

   Furthermore, in the case of Data Center selection, the customer
   could request that a backup DC be selected such that, in case of
   failure, another DC site can provide hot stand-by protection.  As
   shown in Figure 15, DC-C is selected as a backup for DC-A.  Thus,
   the VN should be set up by the MDSC to include primary connectivity
   between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
   between AP1 (CE1) and AP4 (DC-C).

              -------            -------
             (       )          (       )
            -         -        -         -
   +---+   (           )      (           )   +----+
   |CE1|---+--( Domain X )----( Domain Y )--+--|DC-A|
   +---+   |   (           )  (           ) |  +----+
          AP1   -         -    -         - AP2   |
               (       )        (       )        |
                ---+---          ---+---         |
                   |                |            |
                  AP3-+            AP4-+    HOT STANDBY
                   |                |            |
                +----+           +----+          |
                |DC-B|           |DC-C|<----------
                +----+           +----+

           Figure 15: Pre-Planned End-Point Migration

7.2. On the Fly End-Point Migration

   Compared to pre-planned end point migration, on-the-fly end point
   selection is dynamic in that the migration is not pre-planned but
   decided based on network conditions.  Under this scenario, the MDSC
   would monitor the network (based on the VN Service Level Agreement
   (SLA)) and notify the CNC in cases where some other destination AP
   would be a better choice based on the network parameters.  The CNC
   should instruct the MDSC when it is suitable to update the VN with
   the new AP, if that is required.

8. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow customers to request virtual
   connectivity across server network resources.  As ACTN supports
   multiple customers, each with its own view of and control of a
   virtual network built on the server network, the network operator
   will need to partition (or "slice") their network resources and
   manage the resources accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservation of client and network layer connectivity.
   It will also need to provide performance monitoring and control of
   traffic engineered resources.  The management requirements may be
   categorized as follows:

   .  Management of external ACTN protocols
1359 8. Manageability Considerations

1361 The objective of ACTN is to manage traffic engineered resources and
1362 to provide a set of mechanisms to allow customers to request virtual
1363 connectivity across server network resources. Because ACTN supports
1364 multiple customers, each with its own view of and control over a
1365 virtual network built on the server network, the network operator
1366 will need to partition (or "slice") the network resources and manage
1367 those resources accordingly.

1369 The ACTN platform will, itself, need to support the request,
1370 response, and reservation of client and network layer connectivity.
1371 It will also need to provide performance monitoring and control of
1372 traffic engineered resources. The management requirements may be
1373 categorized as follows:

1375 o  Management of external ACTN protocols
1376 o  Management of internal ACTN interfaces/protocols
1377 o  Management and monitoring of ACTN components
1378 o  Configuration of policy to be applied across the ACTN system

1380 The ACTN framework and interfaces are defined to enable traffic
1381 engineering for virtual network services and connectivity services.
1382 Network operators may have other Operations, Administration, and
1383 Maintenance (OAM) tasks for service fulfillment, optimization, and
1384 assurance beyond traffic engineering. The realization of OAM beyond
1385 abstraction and control of traffic engineered networks is not
1386 considered in this document.

1388 8.1. Policy

1390 Policy is an important aspect of ACTN control and management.
1391 Policies are applied via the components and interfaces, during
1392 deployment of the service, to ensure that the service is compliant
1393 with agreed policy factors and variations (often described in SLAs);
1394 these include, but are not limited to: connectivity, bandwidth,
1395 geographical transit, technology selection, security, resilience,
1396 and economic cost.

1398 Depending on the deployment of the ACTN architecture, some policies
1399 may have local or global significance. That is, certain policies
1400 may be ACTN component specific in scope, while others may have
1401 broader scope and interact with multiple ACTN components. Two
1402 examples are provided below:

1404 o  A local policy might limit the number, type, size, and
1405    scheduling of virtual network services a customer may request
1406    via its CNC. This type of policy would be implemented locally
1407    on the MDSC.

1409 o  A global policy might constrain certain customer types (or
1410    specific customer applications) to only use certain MDSCs and
1411    be restricted to physical network types managed by the PNCs. A
1412    global policy agent would govern these types of policies.

1414 The objective of this section is to discuss the applicability of
1415 ACTN policy: requirements, components, interfaces, and examples.
1416 This section provides an analysis and does not mandate a specific
1417 method for enforcing policy or the type of policy agent that would
1418 be responsible for propagating policies across the ACTN components.
1419 It does highlight examples of how policy may be applied in the
1420 context of ACTN, but it is expected that further discussion in an
1421 applicability or solution-specific document will be required.

1423 8.2. Policy Applied to the Customer Network Controller

1425 A virtual network service for a customer application will be
1426 requested by the CNC. The request will reflect the application
1427 requirements and specific service needs, including bandwidth,
1428 traffic type, and survivability. Furthermore, application access
1429 and the type of virtual network service requested by the CNC will
1430 need to adhere to specific access control policies.
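   As a purely illustrative sketch of how such local policy might be
   enforced (this is not part of the ACTN specification; the record
   layout and limits are hypothetical), an MDSC could apply a simple
   admission check to each CNC request:

      from dataclasses import dataclass

      @dataclass
      class CncPolicy:
          """Hypothetical per-CNC policy record held by the MDSC."""
          max_active_vns: int
          max_bandwidth_gbps: float
          allowed_service_types: frozenset

      def admit_request(policy, active_vns, service_type,
                        bandwidth_gbps):
          """Toy admission check run before a CNC request is
          accepted; real deployments would also consult scheduling,
          access-control, and global policy agents."""
          return (service_type in policy.allowed_service_types
                  and active_vns < policy.max_active_vns
                  and bandwidth_gbps <= policy.max_bandwidth_gbps)

      policy = CncPolicy(max_active_vns=4, max_bandwidth_gbps=10.0,
                         allowed_service_types=frozenset({"p2p",
                                                          "p2mp"}))
      print(admit_request(policy, active_vns=2, service_type="p2p",
                          bandwidth_gbps=5.0))   # -> True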
1432 8.3. Policy Applied to the Multi-Domain Service Coordinator

1434 A key objective of the MDSC is to support the customer's expression
1435 of the application connectivity request via its CNC as a set of
1436 desired business needs; therefore, policy will play an important
1437 role.

1439 Once authorized, the virtual network service will be instantiated
1440 via the CNC-MDSC Interface (CMI); it will reflect the customer
1441 application and connectivity requirements and specific service
1442 transport needs. The CNC and the MDSC components will have agreed on
1443 connectivity end-points; use of these end-points should be defined
1444 as a policy expression when setting up or augmenting virtual network
1445 services. Ensuring that permissible end-points are defined for CNCs
1446 and applications will require the MDSC to maintain a registry of
1447 permissible connection points for CNCs and application types.

1449 Conflicts may occur when virtual network service optimization
1450 criteria are in competition. For example, to meet objectives for
1451 service reachability, a request may require an interconnection point
1452 between multiple physical networks; however, this might break a
1453 confidentiality policy requirement of a specific type of end-to-end
1454 service. Thus, an MDSC may have to balance a number of constraints
1455 on a service request and between different requested services. It
1456 may also have to balance requested services with operational norms
1457 for the underlying physical networks. This balancing may be resolved
1458 using configured policy and using hard and soft policy constraints.

1461 8.4. Policy Applied to the Provisioning Network Controller

1463 The PNC is responsible for configuring the network elements,
1464 monitoring physical network resources, and exposing connectivity
1465 (direct or abstracted) to the MDSC. It is therefore expected that
1466 policy will dictate what connectivity information will be exported
1467 from the PNC to the MDSC via the MDSC-PNC Interface (MPI).

1469 Policy interactions may arise when a PNC determines that it cannot
1470 compute a requested path from the MDSC or notices that (per a
1471 locally configured policy) the network is low on resources (for
1472 example, when the capacity of key links becomes exhausted). In
1473 either case, the PNC will be required to notify the MDSC, which may
1474 (again, per policy) act to construct a virtual network service
1475 across another physical network topology.

1477 Furthermore, additional forms of policy-based resource management
1478 will be required to provide virtual network service performance,
1479 security, and resilience guarantees. This will likely be implemented
1480 via a local policy agent and additional protocol methods.
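   A minimal sketch of the resource-low notification described above
   (purely illustrative; the threshold policy, data layout, and
   notifier are hypothetical stand-ins for an MPI message) might look
   as follows:

      def check_link_capacity(links, threshold_ratio, notify_mdsc):
          """Toy PNC-side policy: when the available bandwidth on a
          link falls below a locally configured fraction of its
          capacity, notify the MDSC so that it may re-plan virtual
          network services over other topologies.  'links' maps a
          link id to (available_gbps, capacity_gbps)."""
          for link_id, (available, capacity) in links.items():
              if available < threshold_ratio * capacity:
                  notify_mdsc(link_id, available, capacity)

      # Example with a stub notifier standing in for the MPI.
      check_link_capacity(
          {"PE1-P3": (0.5, 10.0), "P3-PE2": (6.0, 10.0)},
          threshold_ratio=0.1,
          notify_mdsc=lambda link, avail, cap: print(
              f"MPI notify: {link} low on resources "
              f"({avail}/{cap} Gbps)"))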
1482 9. Security Considerations

1484 The ACTN framework described in this document defines key components
1485 and interfaces for managed traffic engineered networks. Securing the
1486 request and control of resources, confidentiality of the
1487 information, and availability of function should all be critical
1488 security considerations when deploying and operating ACTN platforms.

1490 Several distributed ACTN functional components are required, and
1491 implementations should consider encrypting data that flows between
1492 components, especially when they are implemented at remote nodes,
1493 regardless of whether these data flows are on external or internal
1494 network interfaces.

1496 The ACTN security discussion is further split into two specific
1497 categories described in the following sub-sections:

1499 o  Interface between the Customer Network Controller and Multi-
1500    Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

1502 o  Interface between the Multi-Domain Service Coordinator and
1503    Provisioning Network Controller (PNC), MDSC-PNC Interface (MPI)

1505 From a security and reliability perspective, ACTN may encounter many
1506 risks, such as malicious attack and rogue elements attempting to
1507 connect to various ACTN components. Furthermore, some ACTN
1508 components represent a single point of failure and threat vector;
1509 policy conflicts and eavesdropping of communication between
1510 different ACTN components must also be managed.

1512 The conclusion is that all protocols used to realize the ACTN
1513 framework should have rich security features, and customer,
1514 application, and network data should be stored in encrypted data
1515 stores. Additional security risks may still exist. Therefore,
1516 discussion and applicability of specific security functions and
1517 protocols will be better described in documents that are use case
1518 and environment specific.

1520 9.1. CNC-MDSC Interface (CMI)

1522 Data stored by the MDSC will reveal details of the virtual network
1523 services and which CNC and customer/application is consuming the
1524 resource. The data stored must therefore be considered as a
1525 candidate for encryption.

1527 CNC access rights to an MDSC must be managed. The MDSC must allocate
1528 resources properly, and methods to prevent policy conflicts,
1529 resource wastage, and denial-of-service attacks on the MDSC by rogue
1530 CNCs should also be considered.

1532 The CMI will likely be an external protocol interface. Suitable
1533 authentication and authorization of each CNC connecting to the MDSC
1534 will be required, especially as these are likely to be implemented
1535 by different organizations and on separate functional nodes. Use of
1536 AAA-based mechanisms would also provide role-based authorization
1537 methods so that only authorized CNCs may access the different
1538 functions of the MDSC.

1540 9.2. MDSC-PNC Interface (MPI)

1542 Where the MDSC must interact with multiple (distributed) PNCs, a
1543 PKI-based mechanism is suggested, such as building a TLS or HTTPS
1544 connection between the MDSC and PNCs, to ensure trust between the
1545 physical network layer control components and the MDSC. Trust
1546 anchors for the PKI can be configured to use a smaller (and
1547 potentially non-intersecting) set of trusted Certificate Authorities
1548 (CAs) than in the Web PKI.

1550 The PNC should authenticate which MDSC it exports topology
1551 information to and at what level of detail (full or abstracted);
1552 specific access restrictions and topology views should be
1553 configurable and/or policy-based.
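   As a purely illustrative sketch of the PKI-based suggestion above
   (this is not a normative part of ACTN; the host name, port, and
   file names are hypothetical), an MDSC written in Python could pin
   its trust to an operator-run CA as follows:

      import socket
      import ssl

      def connect_to_pnc(host, port, ca_file, cert_file, key_file):
          """Open a mutually authenticated TLS connection from the
          MDSC to a PNC, trusting only the operator-run CA in
          'ca_file' rather than the general Web PKI."""
          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
          ctx.load_verify_locations(cafile=ca_file)   # private anchor
          ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
          sock = socket.create_connection((host, port))
          return ctx.wrap_socket(sock, server_hostname=host)

      # Hypothetical usage:
      # tls = connect_to_pnc("pnc1.example.net", 4189,
      #                      "operator-ca.pem", "mdsc.pem", "mdsc.key")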
1555 10. IANA Considerations

1557 This document has no actions for IANA.

1559 11. References

1561 11.1. Informative References

1563 [RFC2702] Awduche, D., et al., "Requirements for Traffic Engineering
1564           Over MPLS", RFC 2702, September 1999.

1566 [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
1567           Computation Element (PCE)-Based Architecture", RFC 4655,
1568           August 2006.

1570 [RFC5654] Niven-Jenkins, B., Ed., Brungard, D., Ed., and M. Betts,
1571           Ed., "Requirements of an MPLS Transport Profile",
1572           RFC 5654, September 2009.

1574 [RFC7149] Boucadair, M. and C. Jacquenet, "Software-Defined
1575           Networking: A Perspective from within a Service Provider
1576           Environment", RFC 7149, March 2014.

1578 [RFC7926] Farrel, A., Ed., "Problem Statement and Architecture for
1579           Information Exchange between Interconnected Traffic-
1580           Engineered Networks", RFC 7926, July 2016.

1582 [RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label
1583           Switching (GMPLS) Architecture", RFC 3945, October 2004.

1585 [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
1586           1.1, ONF TR-521, June 2016.

1588 [Centralized] Farrel, A., et al., "An Architecture for Use of PCE
1589           and PCEP in a Network with Central Control", draft-ietf-
1590           teas-pce-central-control, work in progress.

1592 [Service-YANG] Lee, Y., Dhody, D., and D. Ceccarelli, "Traffic
1593           Engineering and Service Mapping Yang Model", draft-lee-
1594           teas-te-service-mapping-yang, work in progress.

1596 [ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN
1597           Operation", draft-lee-teas-actn-vn-yang, work in progress.

1599 [ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and
1600           Control of TE Networks", draft-ietf-teas-actn-
1601           requirements, work in progress.

1603 [TE-Topo] Liu, X., et al., "YANG Data Model for TE Topologies",
1604           draft-ietf-teas-yang-te-topo, work in progress.

1606 12. Contributors

1608 Adrian Farrel
1609 Old Dog Consulting
1610 Email: adrian@olddog.co.uk

1612 Italo Busi
1613 Huawei
1614 Email: Italo.Busi@huawei.com

1616 Khuzema Pithewan
1617 Infinera
1618 Email: kpithewan@infinera.com

1620 Michael Scharf
1621 Nokia
1622 Email: michael.scharf@nokia.com

1624 Luyuan Fang
1625 eBay
1626 Email: luyuanf@gmail.com

1628 Diego Lopez
1629 Telefonica I+D
1630 Don Ramon de la Cruz, 82
1631 28006 Madrid, Spain
1632 Email: diego@tid.es

1634 Sergio Belotti
1635 Alcatel Lucent
1636 Via Trento, 30
1637 Vimercate, Italy
1638 Email: sergio.belotti@nokia.com

1640 Daniel King
1641 Lancaster University
1642 Email: d.king@lancaster.ac.uk

1644 Dhruv Dhody
1645 Huawei Technologies
1646 Divyashree Techno Park, Whitefield
1647 Bangalore, Karnataka 560066
1648 India
1649 Email: dhruv.ietf@gmail.com

1650 Gert Grammel
1651 Juniper Networks
1652 Email: ggrammel@juniper.net

1654 Authors' Addresses

1656 Daniele Ceccarelli
1657 Ericsson
1658 Torshamnsgatan 48
1659 Stockholm, Sweden
1660 Email: daniele.ceccarelli@ericsson.com

1662 Young Lee
1663 Huawei Technologies
1664 5340 Legacy Drive
1665 Plano, TX 75023, USA
1666 Phone: (469) 277-5838
1667 Email: leeyoung@huawei.com

1669 APPENDIX A - Example of MDSC and PNC Functions Integrated in a
1670 Service/Network Orchestrator

1672 This appendix provides an example of a possible deployment scenario
1673 in which a Service/Network Orchestrator includes several functions:
1674 in the example below, it includes the PNC functions for Domain 2 as
1675 well as the MDSC functions that coordinate the PNC1 functions
1676 (hosted in a separate domain controller) and the PNC2 functions
1677 (co-hosted in the network orchestrator).

1679          Customer
1680          +-------------------------------+
1681          |       +-----+                 |
1682          |       | CNC |                 |
1683          |       +-----+                 |
1684          +-------|-----------------------+
1685                  |
1686  Service/Network | CMI
1687  Orchestrator    |
1688          +-------|------------------------+
1689          |    +------+   MPI   +------+   |
1690          |    | MDSC |---------| PNC2 |   |
1691          |    +------+         +------+   |
1692          +-------|------------------|-----+
1693                  | MPI              |
1694  Domain Controller                  |
1695          +-------|-----+            |
1696          |    +-----+  |            | SBI
1697          |    |PNC1 |  |            |
1698          |    +-----+  |            |
1699          +-------|-----+            |
1700                  v  SBI             v
1701               -------            -------
1702              (       )          (       )
1703             -         -        -         -
1704            (           )      (           )
1705            ( Domain 1  )----( Domain 2  )
1706            (           )      (           )
1707             -         -        -         -
1708              (       )          (       )
1709               -------            -------
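   As a purely illustrative companion to the figure above (not part
   of the ACTN specification; the class and method names are
   hypothetical), the following Python sketch shows the MDSC and PNC2
   functions co-hosted in one orchestrator process, while PNC1 would
   in practice be reached over a remote MPI:

      class Pnc:
          """Hypothetical PNC functions for one domain."""
          def __init__(self, domain):
              self.domain = domain

          def provision(self, path):
              print(f"PNC({self.domain}): provisioning {path}")

      class Mdsc:
          """Hypothetical MDSC functions coordinating per-domain
          PNCs over the MPI."""
          def __init__(self, pncs):
              self.pncs = pncs

          def setup_vns(self, segments):
              # One path segment per domain, stitched end to end.
              for domain, path in segments.items():
                  self.pncs[domain].provision(path)

      # The orchestrator embeds the MDSC and PNC2; the PNC1 object
      # here is a local stand-in for a remote domain controller.
      mdsc = Mdsc({"Domain 1": Pnc("Domain 1"),
                   "Domain 2": Pnc("Domain 2")})
      mdsc.setup_vns({"Domain 1": "PE1 -> border X",
                      "Domain 2": "border X -> PE4"})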