TEAS Working Group                              Daniele Ceccarelli (Ed)
Internet Draft                                                 Ericsson
Intended status: Informational                           Young Lee (Ed)
Expires: November 11, 2018                                       Huawei

                                                           May 11, 2018

  Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-14

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane.  They
   also have a range of management and provisioning protocols to
   configure and activate network resources.
   These mechanisms represent key technologies for enabling flexible
   and dynamic networking.  The term "Traffic Engineered network"
   refers to a network that uses any connection-oriented technology
   under the control of a distributed or centralized control plane to
   support dynamic provisioning of end-to-end connectivity.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN) to support virtual network
   services and connectivity services.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on November 11, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Overview
      2.1. Terminology
      2.2. VNS Model of ACTN
         2.2.1. Customers
         2.2.2. Service Providers
         2.2.3. Network Operators
   3. ACTN Base Architecture
      3.1. Customer Network Controller
      3.2. Multi-Domain Service Coordinator
      3.3. Provisioning Network Controller
      3.4. ACTN Interfaces
   4. Advanced ACTN Architectures
      4.1. MDSC Hierarchy
      4.2. Functional Split of MDSC Functions in Orchestrators
   5. Topology Abstraction Methods
      5.1. Abstraction Factors
      5.2. Abstraction Types
         5.2.1. Native/White Topology
         5.2.2. Black Topology
         5.2.3. Grey Topology
      5.3. Methods of Building Grey Topologies
         5.3.1. Automatic Generation of Abstract Topology by
                Configuration
         5.3.2. On-demand Generation of Supplementary Topology via
                Path Compute Request/Reply
      5.4. Hierarchical Topology Abstraction Example
      5.5. VN Recursion with Network Layers
   6. Access Points and Virtual Network Access Points
      6.1. Dual-Homing Scenario
   7. Advanced ACTN Application: Multi-Destination Service
      7.1. Pre-Planned End Point Migration
      7.2. On the Fly End-Point Migration
   8. Manageability Considerations
      8.1. Policy
      8.2. Policy Applied to the Customer Network Controller
      8.3. Policy Applied to the Multi-Domain Service Coordinator
      8.4. Policy Applied to the Provisioning Network Controller
   9. Security Considerations
      9.1. CNC-MDSC Interface (CMI)
      9.2. MDSC-PNC Interface (MPI)
   10. IANA Considerations
   11. References
      11.1. Informative References
   12. Contributors
   Authors' Addresses
   APPENDIX A - Example of MDSC and PNC Functions Integrated in a
   Service/Network Orchestrator

1. Introduction

   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a
   distributed or centralized control plane to support dynamic
   provisioning of end-to-end connectivity.  Traffic Engineered (TE)
   networks have a variety of mechanisms to facilitate separation of
   data plane and control plane including distributed signaling for
   path setup and protection, centralized path computation for planning
   and traffic engineering, and a range of management and provisioning
   protocols to configure and activate network resources.  These
   mechanisms represent key technologies for enabling flexible and
   dynamic networking.  Some examples of networks that are in scope of
   this definition are optical networks, Multiprotocol Label Switching
   (MPLS) Transport Profile (MPLS-TP) networks [RFC5654], and MPLS-TE
   networks [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the data
   plane.  This separation has been achieved for TE networks with the
   development of MPLS/GMPLS [RFC3945] and the Path Computation Element
   (PCE) [RFC4655].  One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks.  Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control.  For TE-based networks, a PCE may serve as a logically
   centralized path computation function.

   This document describes a set of management and control functions
   used to operate one or more TE networks to construct virtual
   networks that can be represented to customers and that are built
   from abstractions of the underlying TE networks so that, for
   example, a link in the customer's network is constructed from a path
   or collection of paths in the underlying networks.
   We call this set of functions "Abstraction and Control of Traffic
   Engineered Networks" (ACTN).

2. Overview

   Three key aspects that need to be solved by SDN are:

   . Separation of service requests from service delivery so that the
     configuration and operation of a network is transparent from the
     point of view of the customer, but remains responsive to the
     customer's services and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a TE
     network to produce selective information that represents the
     potential ability to connect across the network.  The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies, capabilities,
     and topology so that the graph can be used to plan and deliver
     network services in a uniform way.

   . Coordination of resources across multiple independent networks
     and multiple technology layers to provide end-to-end services
     regardless of whether the networks use SDN or not.

   As networks evolve, the need to provide support for distinct
   services, separated service orchestration, and resource abstraction
   has emerged as a key requirement for operators.  In order to support
   multiple customers each with its own view of and control of the
   server network, a network operator needs to partition (or "slice")
   or manage sharing of the network resources.  Network slices can be
   assigned to each customer for guaranteed usage, which is a step
   further than shared use of common network resources.

   Furthermore, each network represented to a customer can be built
   from virtualization of the underlying networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying network.
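   The abstraction process summarized above (applying policy to a set
   of TE network information to produce selective connectivity
   information) can be illustrated with a small sketch.  This is not
   part of the ACTN framework itself; the data layout and the example
   policy below are invented purely for illustration.

```python
# Illustrative sketch only: applying a policy to TE topology
# information to produce an abstracted view (cf. [RFC7926]).
# The dictionary layout and the policy are hypothetical.

def abstract_topology(links, policy):
    """Return only the connectivity information that the policy
    allows to be exposed, hiding technology-specific details."""
    abstracted = []
    for link in links:
        if policy(link):
            # Expose only the potential to connect plus selected
            # TE parameters; drop vendor/technology specifics.
            abstracted.append({
                "ends": link["ends"],
                "available-bandwidth": link["available-bandwidth"],
            })
    return abstracted

links = [
    {"ends": ("A", "B"), "available-bandwidth": 10, "vendor": "x"},
    {"ends": ("B", "C"), "available-bandwidth": 2, "vendor": "y"},
]

# Example policy: only advertise links with at least 5 units spare.
view = abstract_topology(links, lambda l: l["available-bandwidth"] >= 5)
```

   Note how the resulting view carries only the potential ability to
   connect; the hidden "vendor" attribute stands in for all the
   technology detail that abstraction removes.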
   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.  This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) and presenting virtualized networks to
   their customers.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application, or service [ONF-ARCH].

   . TE Network slicing of infrastructure to meet specific customers'
     service requirements.

   . Creation of an abstract environment allowing operators to view
     and control multi-domain networks as a single abstract network.

   . The presentation to customers of networks as a virtual network
     via open and programmable interfaces.

2.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined, some others reference existing definitions:

   . Domain: A domain [RFC4655] is any collection of network elements
     within a common sphere of address management or path computation
     responsibility.  Specifically within this document we mean a part
     of an operator's network that is under common management.
     Network elements will often be grouped into domains based on
     technology types, vendor profiles, and geographic proximity.

   . Abstraction: This process is defined in [RFC7926].

   . TE Network Slicing: In the context of ACTN, a TE network slice is
     a collection of resources that is used to establish a logically
     dedicated virtual network over one or more TE networks.
     TE network slicing allows a network operator to provide dedicated
     virtual networks for applications/customers over a common network
     infrastructure.  The logically dedicated resources are a part of
     the larger common network infrastructure that is shared among
     various TE network slice instances.  A TE network slice instance
     is the end-to-end realization of TE network slicing, consisting
     of a combination of physically or logically dedicated resources.

   . Node: A node is a vertex on the graph representation of a TE
     topology.  In a physical network topology, a node corresponds to
     a physical network element (NE) such as a router.  In an abstract
     network topology, a node (sometimes called an abstract node) is a
     representation as a single vertex of one or more physical NEs and
     their connecting physical connections.  The concept of a node
     represents the ability to connect from any access to the node (a
     link end) to any other access to that node, although "limited
     cross-connect capabilities" may also be defined to restrict this
     functionality.  Network abstraction may be applied recursively,
     so a node in one topology may be created by applying abstraction
     to the nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology.  Two nodes connected by a link are said to be
     "adjacent" in the TE topology.  In a physical network topology, a
     link corresponds to a physical connection.  In an abstract
     network topology, a link (sometimes called an abstract link) is a
     representation of the potential to connect a pair of points with
     certain TE parameters (see [RFC7926] for details).  Network
     abstraction may be applied recursively, so a link in one topology
     may be created by applying abstraction to the links in the
     underlying topology.
   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . Virtual Network (VN): A VN is a network provided by a service
     provider to a customer for the customer to use in any way it
     wants as though it was a physical network.  There are two views
     of a VN as follows:

     a) The VN can be abstracted as a set of edge-to-edge links (a
        Type 1 VN).  Each link is referred to as a VN member and is
        formed as an end-to-end tunnel across the underlying networks.
        Such tunnels may be constructed by recursive slicing or
        abstraction of paths in the underlying networks and can
        encompass edge points of the customer's network, access links,
        intra-domain paths, and inter-domain links.

     b) The VN can also be abstracted as a topology of virtual nodes
        and virtual links (a Type 2 VN).  The operator needs to map
        the VN to actual resource assignment, which is known as
        virtual network embedding.  The nodes in this case include
        physical end points, border nodes, and internal nodes as well
        as abstracted nodes.  Similarly, the links include physical
        access links, inter-domain links, and intra-domain links as
        well as abstract links.

     Clearly, a Type 1 VN is a special case of a Type 2 VN.

   . Access link: A link between a customer node and an operator node.

   . Inter-domain link: A link between domains under distinct
     management administration.

   . Access Point (AP): An AP is a logical identifier shared between
     the customer and the operator used to identify an access link.
     The AP is used by the customer when requesting a VNS.  Note that
     the term "TE Link Termination Point" (LTP) defined in [TE-Topo]
     describes the end points of links, while an AP is a common
     identifier for the link itself.

   . VN Access Point (VNAP): A VNAP is the binding between an AP and a
     given VN.
   . Server Network: As defined in [RFC7926], a server network is a
     network that provides connectivity for another network (the
     Client Network) in a client-server relationship.

2.2. VNS Model of ACTN

   A Virtual Network Service (VNS) is the service agreement between a
   customer and operator to provide a VN.  When a VN is a simple
   connectivity between two points, the difference between VNS and
   connectivity service becomes blurred.  There are three types of VNS
   defined in this document.

   o Type 1 VNS refers to a VNS in which the customer is allowed to
     create and operate a Type 1 VN.

   o Type 2a and 2b VNS refer to VNSs in which the customer is allowed
     to create and operate a Type 2 VN.  With a Type 2a VNS, the VN is
     statically created at service configuration time and the customer
     is not allowed to change the topology (e.g., by adding or
     deleting abstract nodes and links).  A Type 2b VNS is the same as
     a Type 2a VNS except that the customer is allowed to make dynamic
     changes to the initial topology created at service configuration
     time.

   VN Operations are functions that a customer can exercise on a VN
   depending on the agreement between the customer and the operator.

   o VN Creation allows a customer to request the instantiation of a
     VN.  This could be through off-line pre-configuration or through
     dynamic requests specifying attributes in a Service Level
     Agreement (SLA) to satisfy the customer's objectives.

   o Dynamic Operations allow a customer to modify or delete the VN.
     The customer can further act upon the virtual network to
     create/modify/delete virtual links and nodes.  These changes will
     result in subsequent tunnel management in the operator's
     networks.

   There are three key entities in the ACTN VNS model:

   - Customers
   - Service Providers
   - Network Operators

   These entities are related in a three tier model as shown in
   Figure 1.
                     +----------------------+
                     |       Customer       |
                     +----------------------+
                                |
                 VNS    ||      |      /\  VNS
                 Request||      |      ||  Reply
                        \/      |      ||
                     +----------------------+
                     |   Service Provider   |
                     +----------------------+
                       /        |        \
                      /         |         \
                     /          |          \
                    /           |           \
  +------------------+ +------------------+ +------------------+
  |Network Operator 1| |Network Operator 2| |Network Operator 3|
  +------------------+ +------------------+ +------------------+

                   Figure 1: The Three Tier Model

   The commercial roles of these entities are described in the
   following sections.

2.2.1. Customers

   Basic customers include fixed residential users, mobile users, and
   small enterprises.  Each requires a small amount of resources and
   is characterized by steady requests (relatively time invariant).
   Basic customers do not modify their services themselves: if a
   service change is needed, it is performed by the provider as a
   proxy.

   Advanced customers include enterprises, governments, and utility
   companies.  Such customers ask for both point-to-point and
   multipoint connectivity with high resource demands varying
   significantly in time.  This is one of the reasons why a bundled
   service offering is not enough and it is desirable to provide each
   advanced customer with a customized virtual network service.
   Advanced customers may also have the ability to modify their
   service parameters within the scope of their virtualized
   environments.  The primary focus of ACTN is advanced customers.

   As customers are geographically spread over multiple network
   operator domains, they have to interface to multiple operators and
   may have to support multiple virtual network services with
   different underlying objectives set by the network operators.
   To enable these customers to support flexible and dynamic
   applications they need to control their allocated virtual network
   resources in a dynamic fashion, and that means that they need a
   view of the topology that spans all of the network operators.
   Customers of a given service provider can in turn offer a service
   to other customers in a recursive way.

2.2.2. Service Providers

   In the scope of ACTN, service providers deliver VNSs to their
   customers.  Service providers may or may not own physical network
   resources (i.e., may or may not be network operators as described
   in Section 2.2.3).  When a service provider is the same as the
   network operator, this is similar to existing VPN models applied to
   a single operator, although it may be hard to use this approach
   when the customer spans multiple independent network operator
   domains.

   When network operators supply only infrastructure, while distinct
   service providers interface to the customers, the service providers
   are themselves customers of the network infrastructure operators.
   One service provider may need to engage multiple independent
   network operators because its end-users span geographically across
   multiple network operator domains.  In some cases, a service
   provider is also a network operator when it owns the network
   infrastructure on which the service is provided.

2.2.3. Network Operators

   Network operators are the infrastructure operators that provision
   and provide network resources to their customers.  The layered
   model described in this architecture separates the concerns of
   network operators and customers, with service providers acting as
   aggregators of customer requests.

3. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.
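   As a purely illustrative sketch of this flow of control (not part
   of the framework: the classes, method names, and message formats
   are invented, and a real deployment would use the CMI and MPI
   protocols rather than in-process calls), a VNS request travels from
   the customer's controller to a coordinator, which in turn drives
   one or more per-domain controllers:

```python
# Hypothetical sketch of the ACTN flow of control: a VNS request
# travels from the CNC over the CMI to the MDSC, which drives one
# or more PNCs over the MPI.  All names are invented.

class PNC:
    def provision(self, endpoints):
        # A real PNC would configure network elements here (SBI).
        return {"endpoints": endpoints, "status": "up"}

class MDSC:
    def __init__(self, pncs):
        self.pncs = pncs

    def handle_cmi_request(self, vns_request):
        # Customer mapping: translate the VNS request into one
        # provisioning request per underlying domain (MPI).
        results = [pnc.provision(vns_request["endpoints"])
                   for pnc in self.pncs]
        return {"vns": vns_request["name"],
                "segments": [r["status"] for r in results]}

class CNC:
    def __init__(self, mdsc):
        self.mdsc = mdsc

    def request_vns(self, name, endpoints):
        # CMI: the customer expresses only service intent (APs, SLA).
        return self.mdsc.handle_cmi_request(
            {"name": name, "endpoints": endpoints})

cnc = CNC(MDSC([PNC(), PNC()]))
reply = cnc.request_vns("vns-1", ("AP1", "AP2"))
```

   The point of the sketch is the layering: the CNC never touches a
   PNC directly, mirroring the business boundary at the CMI.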
   The ACTN architecture is based on a 3-tier reference model and
   allows for hierarchy and recursion.  The main functionalities
   within an ACTN system are:

   . Multi-domain coordination: This function oversees the specific
     aspects of different domains and builds a single abstracted end-
     to-end network topology in order to coordinate end-to-end path
     computation and path/service provisioning.  Domain sequence path
     calculation/determination is also a part of this function.

   . Abstraction: This function provides an abstracted view of the
     underlying network resources for use by the customer - a customer
     may be the client or a higher level controller entity.  This
     function includes network path computation based on customer
     service connectivity request constraints, path computation based
     on the global network-wide abstracted topology, and the creation
     of an abstracted view of network resources allocated to each
     customer.  These operations depend on customer-specific network
     objective functions and customer traffic profiles.

   . Customer mapping/translation: This function is to map customer
     requests/commands into network provisioning requests that can be
     sent from the Multi-Domain Service Coordinator (MDSC) to the
     Provisioning Network Controller (PNC) according to business
     policies provisioned statically or dynamically at the OSS/NMS.
     Specifically, it provides mapping and translation of a customer's
     service request into a set of parameters that are specific to a
     network type and technology such that the network configuration
     process is made possible.

   . Virtual service coordination: This function translates customer
     service-related information into virtual network service
     operations in order to seamlessly operate virtual networks while
     meeting a customer's service requirements.
     In the context of ACTN, service/virtual service coordination
     includes a number of service orchestration functions such as
     multi-destination load balancing, guarantees of service quality,
     and bandwidth and throughput.  It also includes notifications for
     service fault and performance degradation and so forth.

   The base ACTN architecture defines three controller types and the
   corresponding interfaces between these controllers.  The following
   types of controller are shown in Figure 2:

   . CNC - Customer Network Controller
   . MDSC - Multi-Domain Service Coordinator
   . PNC - Provisioning Network Controller

   Figure 2 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface
   . SBI - Southbound Interface

          +---------+         +---------+         +---------+
          |   CNC   |         |   CNC   |         |   CNC   |
          +---------+         +---------+         +---------+
                \                  |                  /
                 \                 |                 /
   Boundary ======\================|================/==========
   Between         \               |               /
   Customer &       -----------    | CMI   -------
   Network Operator            \   |   /
                          +---------------+
                          |     MDSC      |
                          +---------------+
                         /         |        \
               ----------      MPI |         ----------
              /                    |                   \
         +-------+            +-------+            +-------+
         |  PNC  |            |  PNC  |            |  PNC  |
         +-------+            +-------+            +-------+
          | SBI  /               |                  /     \
          |     /                | SBI         SBI /       \
      ---------   -----          |                /         \
     (         ) (     )         |               /           \
    -  Control - ( Phys. )       |              /          -----
   (   Plane   ) (  Net  )       |             /          (     )
   (  Physical )  -----          |            /           ( Phys. )
   (  Network  )               -----       -----          (  Net  )
    -         -               (     )     (     )          -----
     (       )                ( Phys. )   ( Phys. )
      ---------               (  Net  )   (  Net  )
                               -----       -----

                  Figure 2: ACTN Base Architecture

   Note that this is a functional architecture: an implementation and
   deployment might collocate one or more of the functional
   components.

3.1. Customer Network Controller

   A Customer Network Controller (CNC) is responsible for
   communicating a customer's VNS requirements to the network operator
   over the CNC-MDSC Interface (CMI).  It has knowledge of the end-
   points associated with the VNS (expressed as APs), the service
   policy, and other QoS information related to the service.

   As the Customer Network Controller directly interfaces to the
   applications, it understands multiple application requirements and
   their service needs.  The capability of a CNC beyond its CMI role
   is outside the scope of ACTN and may be implemented in different
   ways.  For example, the CNC may in fact be a controller or part of
   a controller in the customer's domain, or the CNC functionality
   could also be implemented as part of a service provider's portal.

3.2. Multi-Domain Service Coordinator

   A Multi-Domain Service Coordinator (MDSC) is a functional block
   that implements all of the ACTN functions listed in Section 3 and
   described further in Section 4.2.  The two functions of the MDSC,
   namely, multi-domain coordination and virtualization/abstraction,
   are referred to as network-related functions while the other two
   functions, namely, customer mapping/translation and virtual service
   coordination, are referred to as service-related functions.  The
   MDSC sits at the center of the ACTN model between the CNC that
   issues connectivity requests and the Provisioning Network
   Controllers (PNCs) that manage the network resources.

   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs.  The MDSC envelopes the instantiation of the right
   technology and network control to meet business criteria.  In
   essence, it controls and manages the primitives to achieve
   functionalities as desired by the CNC.
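   The customer mapping function of the MDSC (translating one end-to-
   end request into per-domain provisioning requests along a computed
   domain sequence) might be sketched as follows.  This is not a
   normative algorithm; the function, field names, and domain-sequence
   format are hypothetical.

```python
# Illustrative sketch of MDSC customer mapping: one end-to-end
# VNS request is split into per-PNC segments following a
# pre-computed domain sequence.  All names are hypothetical.

def map_vns_to_provisioning(vns_request, domain_sequence):
    """Split an end-to-end request into per-PNC segment requests.

    domain_sequence lists, in order, each traversed domain's PNC
    and the point at which the path exits that domain (the last
    exit being the destination AP)."""
    hops = [vns_request["src"]] + [d["exit"] for d in domain_sequence]
    requests = []
    for domain, (a, b) in zip(domain_sequence, zip(hops, hops[1:])):
        requests.append({
            "pnc": domain["pnc"],
            "segment": (a, b),
            "bandwidth": vns_request["bandwidth"],
        })
    return requests

vns = {"src": "AP1", "dst": "AP2", "bandwidth": 10}
sequence = [
    {"pnc": "pnc-x", "exit": "border-1"},
    {"pnc": "pnc-y", "exit": "AP2"},
]
segments = map_vns_to_provisioning(vns, sequence)
```

   Each resulting segment request would be sent over the MPI to the
   corresponding PNC, with the inter-domain border point stitching the
   two segments into one end-to-end service.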
   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and PNCs.

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and PNCs to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers) but
   all using the resources of a common network infrastructure
   operator.

3.3. Provisioning Network Controller

   The Provisioning Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and collecting information about the topology (either
   raw or abstracted).

   The PNC functions can be implemented as part of an SDN domain
   controller, a Network Management System (NMS), an Element
   Management System (EMS), an active PCE-based controller
   [Centralized], or any other means to dynamically control a set of
   nodes that implements an NBI compliant with the ACTN
   specification.

   A PNC domain includes all the resources under the control of a
   single PNC.  It can be composed of different routing domains and
   administrative domains, and the resources may come from different
   layers.  The interconnection between PNC domains is illustrated in
   Figure 3.

          _______                            _______
        _(       )_                        _(       )_
      _(           )_                    _(           )_
     (               )  Border          (               )
    (    PNC     ------  Link   ------      PNC          )
    (  Domain X |Border|========|Border|  Domain Y       )
    (           | Node |        | Node |                 )
    (            ------          ------                  )
     (_         _)                      (_             _)
       (_     _)                          (_         _)
         (_______)                          (_______)

                    Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   operators due to security and policy concerns.
   In addition, some networks may operate a control plane and as such
   it is not practical for the customer to directly interface with the
   network elements.  Therefore, the network has to provide open,
   programmable interfaces, through which customer applications can
   create, replace, and modify virtual network resources and services
   in an interactive, flexible, and dynamic fashion.

   Three interfaces exist in the ACTN architecture as shown in
   Figure 2.

   . CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC
     and an MDSC.  The CMI is a business boundary between customer and
     network operator.  It is used to request a VNS for an
     application.  All service-related information is conveyed over
     this interface (such as the VNS type, topology, bandwidth, and
     service constraints).  Most of the information over this
     interface is agnostic of the technology used by network
     operators, but there are some cases (e.g., access link
     configuration) where it is necessary to specify technology-
     specific details.

   . MPI: The MDSC-PNC Interface (MPI) is an interface between an MDSC
     and a PNC.  It communicates requests for new connectivity or for
     bandwidth changes in the physical network.  In multi-domain
     environments, the MDSC needs to communicate with multiple PNCs,
     each responsible for control of a domain.  The MPI presents an
     abstracted topology to the MDSC hiding technology specific
     aspects of the network and hiding topology according to policy.

   . SBI: The Southbound Interface (SBI) is out of scope of ACTN.
     Many different SBIs have been defined for different environments,
     technologies, standards organizations, and vendors.  It is shown
     in Figure 2 for reference reasons only.

4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, or putting together
   different layers and technologies in the network.  In the case
   where there is a hierarchy of MDSCs, we introduce the terms higher-
   level MDSC (MDSC-H) and lower-level MDSC (MDSC-L).  The interface
   between them is a recursion of the MPI.  An implementation of an
   MDSC-H makes provisioning requests as normal using the MPI, but an
   MDSC-L must be able to receive requests as normal at the CMI and
   also at the MPI.  The hierarchy of MDSCs can be seen in Figure 4.

   Another implementation choice could foresee the usage of an MDSC-L
   for all the PNCs related to a given technology (e.g., Internet
   Protocol (IP)/Multiprotocol Label Switching (MPLS)) and a different
   MDSC-L for the PNCs related to another technology (e.g., Optical
   Transport Network (OTN)/Wavelength Division Multiplexing (WDM)),
   with an MDSC-H to coordinate them.

             +--------+
             |  CNC   |
             +--------+
                  |                  +-----+
                  | CMI              | CNC |
             +----------+            +-----+
       ------|  MDSC-H  |----           |
       |     +----------+   |           | CMI
   MPI |                MPI |           |
       |                    |           |
  +---------+          +---------+
  | MDSC-L  |          | MDSC-L  |
  +---------+          +---------+
   MPI |   |             |     |
       |   |             |     |
    ----- -----       ----- -----
    |PNC| |PNC|       |PNC| |PNC|
    ----- -----       ----- -----

                    Figure 4: MDSC Hierarchy

4.2. Functional Split of MDSC Functions in Orchestrators

   An implementation choice could separate the MDSC functions into two
   groups, one group for service-related functions and the other for
   network-related functions.  This enables the implementation of a
   service orchestrator that provides the service-related functions of
   the MDSC and a network orchestrator that provides the network-
   related functions of the MDSC.
This split is consistent with the 713 Yet Another Next Generation (YANG) service model architecture 714 described in [Service-YANG]. Figure 5 depicts this and shows how 715 the ACTN interfaces may map to YANG models. 717 +--------------------+ 718 | Customer | 719 | +-----+ | 720 | | CNC | | 721 | +-----+ | 722 +--------------------+ 723 CMI | Customer Service Model 724 | 725 +---------------------------------------+ 726 | Service | 727 ********|*********************** Orchestrator | 728 * MDSC | +-----------------+ * | 729 * | | Service-related | * | 730 * | | Functions | * | 731 * | +-----------------+ * | 732 * +----------------------*----------------+ 733 * * | Service Delivery Model 734 * * | 735 * +----------------------*----------------+ 736 * | * Network | 737 * | +-----------------+ * Orchestrator | 738 * | | Network-related | * | 739 * | | Functions | * | 740 * | +-----------------+ * | 741 ********|*********************** | 742 +---------------------------------------+ 743 MPI | Network Configuration Model 744 | 745 +------------------------+ 746 | Domain | 747 | +------+ Controller | 748 | | PNC | | 749 | +------+ | 750 +------------------------+ 751 SBI | Device Configuration Model 752 | 753 +--------+ 754 | Device | 755 +--------+ 757 Figure 5: ACTN Architecture in the Context of the YANG Service 758 Models 759 5. Topology Abstraction Methods 761 Topology abstraction is described in [RFC7926]. This section 762 discusses topology abstraction factors, types, and their context in 763 the ACTN architecture. 765 Abstraction in ACTN is performed by the PNC when presenting 766 available topology to the MDSC, or by an MDSC-L when presenting 767 topology to an MDSC-H. This function is different to the creation 768 of a VN (and particularly a Type 2 VN) which is not abstraction but 769 construction of virtual resources. 771 5.1. Abstraction Factors 773 As discussed in [RFC7926], abstraction is tied with policy of the 774 networks. 
For instance, per an operational policy, the PNC would 775 not provide any technology-specific details (e.g., optical 776 parameters for Wavelength Switched Optical Network (WSON)) in the 777 abstract topology it provides to the MDSC. Similarly, the policy of the 778 networks may determine the abstraction type as described in Section 779 5.2. 781 There are many factors that may impact the choice of abstraction: 783 - Abstraction depends on the nature of the underlying domain 784 networks. For instance, packet networks may be abstracted with 785 fine granularity, while abstraction of optical networks depends on 786 the switching units (such as wavelengths) and the end-to-end 787 continuity and cross-connect limitations within the network. 789 - Abstraction also depends on the capability of the PNCs. As 790 abstraction requires hiding details of the underlying network 791 resources, the PNC's capability to run algorithms impacts the 792 feasibility of abstraction. Some PNCs may not have the ability to 793 abstract native topology, while other PNCs may have the ability to 794 use sophisticated algorithms. 796 - Abstraction is a tool that can improve scalability. Where the 797 native network resource information is of large size, there is a 798 specific scaling benefit to abstraction. 800 - The proper abstraction level may depend on the frequency of 801 topology updates and vice versa. 803 - The nature of the MDSC's support for technology-specific 804 parameters impacts the degree/level of abstraction. If the MDSC 805 is not capable of handling such parameters, then a higher level of 806 abstraction is needed. 808 - In some cases, the PNC is required to hide key internal 809 topological data from the MDSC. Such confidentiality can be 810 achieved through abstraction. 812 5.2. Abstraction Types 814 This section defines the following three types of topology 815 abstraction: 817 . Native/White Topology (Section 5.2.1) 818 . Black Topology (Section 5.2.2) 819 .
Grey Topology (Section 5.2.3) 821 5.2.1. Native/White Topology 823 This is a case where the PNC provides the actual network topology to 824 the MDSC without any hiding or filtering of information, i.e., no 825 abstraction is performed. In this case, the MDSC has the full 826 knowledge of the underlying network topology and can operate on it 827 directly. 829 5.2.2. Black Topology 831 A black topology replaces a full network with a minimal 832 representation of the edge-to-edge topology without disclosing any 833 node internal connectivity information. The entire domain network 834 may be abstracted as a single abstract node with the network's 835 access/egress links appearing as the ports to the abstract node and 836 the implication that any port can be 'cross-connected' to any other. 837 Figure 6 depicts a native topology with the corresponding black 838 topology with one virtual node and inter-domain links. In this 839 case, the MDSC has to make a provisioning request to the PNCs to 840 establish the port-to-port connection. If there is a large number 841 of inter-connected domains, this abstraction method may impose a 842 heavy coordination load at the MDSC level in order to find an 843 optimal end-to-end path since the abstraction hides so much 844 information that it is not possible to determine whether an end-to- 845 end path is feasible without asking each PNC to set up each path 846 fragment. For this reason, the MPI might need to be enhanced to 847 allow the PNCs to be queried for the practicality and 848 characteristics of paths across the abstract node. 849 ..................................... 850 : PNC Domain : 851 : +--+ +--+ +--+ +--+ : 852 ------+ +-----+ +-----+ +-----+ +------ 853 : ++-+ ++-+ +-++ +-++ : 854 : | | | | : 855 : | | | | : 856 : | | | | : 857 : | | | | : 858 : ++-+ ++-+ +-++ +-++ : 859 ------+ +-----+ +-----+ +-----+ +------ 860 : +--+ +--+ +--+ +--+ : 861 :.................................... 
863 +----------+ 864 ---+ +--- 865 | Abstract | 866 | Node | 867 ---+ +--- 868 +----------+ 870 Figure 6: Native Topology with Corresponding Black Topology Expressed 871 as an Abstract Node 873 5.2.3. Grey Topology 875 A grey topology represents a compromise between black and white 876 topologies from a granularity point of view. In this case, the PNC 877 exposes an abstract topology containing all of the PNC domain's border nodes 878 and an abstraction of the connectivity between those border nodes. 879 This abstraction may contain either physical or abstract 880 nodes/links. 882 Two types of grey topology are identified: 883 . In a type A grey topology, border nodes are connected by a full 884 mesh of TE links (see Figure 7). 885 . In a type B grey topology, border nodes are connected over a 886 more detailed network comprising internal abstract nodes and 887 abstracted links. This mode of abstraction supplies the MDSC 888 with more information about the internals of the PNC domain and 889 allows it to make more informed choices about how to route 890 connectivity over the underlying network. 892 ..................................... 893 : PNC Domain : 894 : +--+ +--+ +--+ +--+ : 895 ------+ +-----+ +-----+ +-----+ +------ 896 : ++-+ ++-+ +-++ +-++ : 897 : | | | | : 898 : | | | | : 899 : | | | | : 900 : | | | | : 901 : ++-+ ++-+ +-++ +-++ : 902 ------+ +-----+ +-----+ +-----+ +------ 903 : +--+ +--+ +--+ +--+ : 904 :.................................... 906 .................... 907 : Abstract Network : 908 : : 909 : +--+ +--+ : 910 -------+ +----+ +------- 911 : ++-+ +-++ : 912 : | \ / | : 913 : | \/ | : 914 : | /\ | : 915 : | / \ | : 916 : ++-+ +-++ : 917 -------+ +----+ +------- 918 : +--+ +--+ : 919 :..................: 921 Figure 7: Native Topology with Corresponding Grey Topology 923 5.3. Methods of Building Grey Topologies 925 This section discusses two different methods of building a grey 926 topology: 928 .
Automatic generation of abstract topology by configuration 929 (Section 5.3.1) 930 . On-demand generation of supplementary topology via path 931 computation request/reply (Section 5.3.2) 933 5.3.1. Automatic Generation of Abstract Topology by Configuration 935 Automatic generation is based on the abstraction/summarization of 936 the whole domain by the PNC and its advertisement on the MPI. The 937 level of abstraction can be decided based on PNC configuration 938 parameters (e.g., "provide the potential connectivity between any PE 939 and any ASBR in an MPLS-TE network"). 941 Note that the configuration parameters for this abstract topology 942 can include available bandwidth, latency, or any combination of 943 defined parameters. How to generate such information is beyond the 944 scope of this document. 946 This abstract topology may need to be periodically or incrementally 947 updated when there is a change in the underlying network or the use 948 of the network resources that make connectivity more or less 949 available. 951 5.3.2. On-demand Generation of Supplementary Topology via Path Compute 952 Request/Reply 954 While abstract topology is generated and updated automatically by 955 configuration as explained in Section 5.3.1, additional 956 supplementary topology may be obtained by the MDSC via a path 957 compute request/reply mechanism. 959 The abstract topology advertisements from PNCs give the MDSC the 960 border node/link information for each domain. Under this scenario, 961 when the MDSC needs to create a new VN, the MDSC can issue path 962 computation requests to PNCs with constraints matching the VN 963 request as described in [ACTN-YANG]. An example is provided in 964 Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2. 
965 The MDSC could use two different inter-domain links to get from 966 Domain X to Domain Y, but in order to choose the best end-to-end 967 path it needs to know what domain X and Y can offer in terms of 968 connectivity and constraints between the PE nodes and the border 969 nodes. 971 ------- ------- 972 ( ) ( ) 973 - BrdrX.1------- BrdrY.1 - 974 (+---+ ) ( +---+) 975 -+---( |PE1| Dom.X ) ( Dom.Y |PE2| )---+- 976 | (+---+ ) ( +---+) | 977 AP1 - BrdrX.2------- BrdrY.2 - AP2 978 ( ) ( ) 979 ------- -------- 981 Figure 8: A Multi-Domain Example 982 The MDSC issues a path computation request to PNC.X asking for 983 potential connectivity between PE1 and border node BrdrX.1 and 984 between PE1 and BrdrX.2 with related objective functions and TE 985 metric constraints. A similar request for connectivity from the 986 border nodes in Domain Y to PE2 will be issued to PNC.Y. The MDSC 987 merges the results to compute the optimal end-to-end path including 988 the inter domain links. The MDSC can use the result of this 989 computation to request the PNCs to provision the underlying 990 networks, and the MDSC can then use the end-to-end path as a virtual 991 link in the VN it delivers to the customer. 993 5.4. Hierarchical Topology Abstraction Example 995 This section illustrates how topology abstraction operates in 996 different levels of a hierarchy of MDSCs as shown in Figure 9. 
998 +-----+ 999 | CNC | CNC wants to create a VN 1000 +-----+ between CE A and CE B 1001 | 1002 | 1003 +-----------------------+ 1004 | MDSC-H | 1005 +-----------------------+ 1006 / \ 1007 / \ 1008 +---------+ +---------+ 1009 | MDSC-L1 | | MDSC-L2 | 1010 +---------+ +---------+ 1011 / \ / \ 1012 / \ / \ 1013 +----+ +----+ +----+ +----+ 1014 CE A o----|PNC1| |PNC2| |PNC3| |PNC4|----o CE B 1015 +----+ +----+ +----+ +----+ 1017 Virtual Network Delivered to CNC 1019 CE A o==============o CE B 1021 Topology operated on by MDSC-H 1023 CE A o----o==o==o===o----o CE B 1025 Topology operated on by MDSC-L1 Topology operated on by MDSC-L2 1026 _ _ _ _ 1027 ( ) ( ) ( ) ( ) 1028 ( ) ( ) ( ) ( ) 1029 CE A o--(o---o)==(o---o)==Dom.3 Dom.2==(o---o)==(o---o)--o CE B 1030 ( ) ( ) ( ) ( ) 1031 (_) (_) (_) (_) 1033 Actual Topology 1034 ___ ___ ___ ___ 1035 ( ) ( ) ( ) ( ) 1036 ( o ) ( o ) ( o--o) ( o ) 1037 ( / \ ) ( |\ ) ( | | ) ( / \ ) 1038 CE A o---(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)---o CE B 1039 ( \ / ) ( | |/ ) ( | | ) ( \ / ) 1040 ( o ) (o-o ) ( o--o) ( o ) 1041 (___) (___) (___) (___) 1043 Domain 1 Domain 2 Domain 3 Domain 4 1045 Where 1046 o is a node 1047 --- is a link 1048 === is a border link 1050 Figure 9: Illustration of Hierarchical Topology Abstraction 1052 In the example depicted in Figure 9, there are four domains under 1053 control of PNCs PNC1, PNC2, PNC3, and PNC4. MDSC-L1 controls PNC1 1054 and PNC2, while MDSC-L2 controls PNC3 and PNC4. Each of the PNCs 1055 provides a grey topology abstraction that presents only border nodes 1056 and links across and outside the domain. The abstract topology 1057 that MDSC-L1 operates on is a combination of the two topologies from 1058 PNC1 and PNC2. Likewise, the abstract topology that MDSC-L2 1059 operates on is shown in Figure 9. Both MDSC-L1 and MDSC-L2 provide a 1060 black topology abstraction to MDSC-H in which each PNC domain is 1061 presented as a single virtual node.
MDSC-H combines these two 1062 topologies to create the abstraction topology on which it operates. 1063 MDSC-H sees the whole four domain networks as four virtual nodes 1064 connected via virtual links. 1066 5.5. VN Recursion with Network Layers 1068 In some cases the VN supplied to a customer may be built using 1069 resources from different technology layers operated by different 1070 operators. For example, one operator may run a packet TE network 1071 and use optical connectivity provided by another operator. 1073 As shown in Figure 10, a customer asks for end-to-end connectivity 1074 between CE A and CE B, a virtual network. The customer's CNC makes a 1075 request to Operator 1's MDSC. The MDSC works out which network 1076 resources need to be configured and sends instructions to the 1077 appropriate PNCs. However, the link between Q and R is a virtual 1078 link supplied by Operator 2: Operator 1 is a customer of Operator 2. 1080 To support this, Operator 1 has a CNC that communicates to Operator 1081 2's MDSC. Note that Operator 1's CNC in Figure 10 is a functional 1082 component that does not dictate implementation: it may be embedded 1083 in a PNC. 
1085 Virtual CE A o===============================o CE B 1086 Network 1088 ----- CNC wants to create a VN 1089 Customer | CNC | between CE A and CE B 1090 ----- 1091 : 1092 *********************************************** 1093 : 1094 Operator 1 --------------------------- 1095 | MDSC | 1096 --------------------------- 1097 : : : 1098 : : : 1099 ----- ------------- ----- 1100 | PNC | | PNC | | PNC | 1101 ----- ------------- ----- 1102 : : : : : 1104 Higher v v : v v 1105 Layer CE A o---P-----Q===========R-----S---o CE B 1106 Network | : | 1107 | : | 1108 | ----- | 1109 | | CNC | | 1110 | ----- | 1111 | : | 1112 *********************************************** 1113 | : | 1114 Operator 2 | ------ | 1115 | | MSDC | | 1116 | ------ | 1117 | : | 1118 | ------- | 1119 | | PNC | | 1120 | ------- | 1121 \ : : : / 1122 Lower \v v v/ 1123 Layer X--Y--Z 1124 Network 1126 Where 1127 --- is a link 1128 === is a virtual link 1130 Figure 10: VN recursion with Network Layers 1132 6. Access Points and Virtual Network Access Points 1134 In order to map identification of connections between the customer's 1135 sites and the TE networks and to scope the connectivity requested in 1136 the VNS, the CNC and the MDSC refer to the connections using the 1137 Access Point (AP) construct as shown in Figure 11. 1139 ------------- 1140 ( ) 1141 - - 1142 +---+ X ( ) Z +---+ 1143 |CE1|---+----( )---+---|CE2| 1144 +---+ | ( ) | +---+ 1145 AP1 - - AP2 1146 ( ) 1147 ------------- 1149 Figure 11: Customer View of APs 1151 Let's take as an example a scenario shown in Figure 11. CE1 is 1152 connected to the network via a 10 Gbps link and CE2 via a 40 Gbps 1153 link. Before the creation of any VN between AP1 and AP2 the 1154 customer view can be summarized as shown in Table 1. 
1156 +----------+------------------------+ 1157 |End Point | Access Link Bandwidth | 1158 +-----+----------+----------+-------------+ 1159 |AP id| CE,port | MaxResBw | AvailableBw | 1160 +-----+----------+----------+-------------+ 1161 | AP1 |CE1,portX | 10Gbps | 10Gbps | 1162 +-----+----------+----------+-------------+ 1163 | AP2 |CE2,portZ | 40Gbps | 40Gbps | 1164 +-----+----------+----------+-------------+ 1166 Table 1: AP - Customer View 1168 On the other hand, what the provider sees is shown in Figure 12. 1170 ------- ------- 1171 ( ) ( ) 1172 - - - - 1173 W (+---+ ) ( +---+) Y 1174 -+---( |PE1| Dom.X )---( Dom.Y |PE2| )---+- 1175 | (+---+ ) ( +---+) | 1176 AP1 - - - - AP2 1177 ( ) ( ) 1178 ------- ------- 1180 Figure 12: Provider View of the AP 1182 This results in the summarization shown in Table 2. 1184 +----------+------------------------+ 1185 |End Point | Access Link Bandwidth | 1186 +-----+----------+----------+-------------+ 1187 |AP id| PE,port | MaxResBw | AvailableBw | 1188 +-----+----------+----------+-------------+ 1189 | AP1 |PE1,portW | 10Gbps | 10Gbps | 1190 +-----+----------+----------+-------------+ 1191 | AP2 |PE2,portY | 40Gbps | 40Gbps | 1192 +-----+----------+----------+-------------+ 1194 Table 2: AP - Operator View 1196 A Virtual Network Access Point (VNAP) needs to be defined as the binding 1197 between an AP and a VN. It is used to allow for different VNs to 1198 start from the same AP. It also allows for traffic engineering on 1199 the access and/or inter-domain links (e.g., keeping track of 1200 bandwidth allocation). A different VNAP is created on an AP for 1201 each VN. 1203 In this simple scenario, suppose we want to create two virtual 1204 networks: the first with VN identifier 9 between AP1 and AP2 with a 1205 bandwidth of 1 Gbps, and the second with VN identifier 5, again 1206 between AP1 and AP2, with a bandwidth of 2 Gbps. 1208 The operator view would evolve as shown in Table 3.
1210 +----------+------------------------+ 1211 |End Point | Access Link/VNAP Bw | 1212 +---------+----------+----------+-------------+ 1213 |AP/VNAPid| PE,port | MaxResBw | AvailableBw | 1214 +---------+----------+----------+-------------+ 1215 |AP1 |PE1,portW | 10 Gbps | 7 Gbps | 1216 | -VNAP1.9| | 1 Gbps | N.A. | 1217 | -VNAP1.5| | 2 Gbps | N.A. | 1218 +---------+----------+----------+-------------+ 1219 |AP2 |PE2,portY | 40 Gbps | 37 Gbps | 1220 | -VNAP2.9| | 1 Gbps | N.A. | 1221 | -VNAP2.5| | 2 Gbps | N.A. | 1222 +---------+----------+----------+-------------+ 1223 Table 3: AP and VNAP - Operator View after VNS Creation 1225 6.1. Dual-Homing Scenario 1227 Often there is a dual-homing relationship between a CE and a pair of 1228 PEs. This case needs to be supported by the definition of VN, APs, 1229 and VNAPs. Suppose CE1 is connected to two different PEs in the 1230 operator domain via AP1 and AP2 and that the customer needs 5 Gbps 1231 of bandwidth between CE1 and CE2. This is shown in Figure 13. 1233 ____________ 1234 AP1 ( ) AP3 1235 -------(PE1) (PE3)------- 1236 W / ( ) \ X 1237 +---+/ ( ) \+---+ 1238 |CE1| ( ) |CE2| 1239 +---+\ ( ) /+---+ 1240 Y \ ( ) / Z 1241 -------(PE2) (PE4)------- 1242 AP2 (____________) 1244 Figure 13: Dual-Homing Scenario 1246 In this case, the customer will request a VN between AP1, AP2, 1247 and AP3 specifying a dual-homing relationship between AP1 and AP2. 1248 As a consequence, no traffic will flow between AP1 and AP2. The dual- 1249 homing relationship would then be mapped against the VNAPs (since 1250 other independent VNs might have AP1 and AP2 as end points). 1252 The customer view would then be as shown in Table 4.
1254 +----------+------------------------+ 1255 |End Point | Access Link/VNAP Bw | 1256 +---------+----------+----------+-------------+-----------+ 1257 |AP/VNAPid| CE,port | MaxResBw | AvailableBw |Dual Homing| 1258 +---------+----------+----------+-------------+-----------+ 1259 |AP1 |CE1,portW | 10 Gbps | 5 Gbps | | 1260 | -VNAP1.9| | 5 Gbps | N.A. | VNAP2.9 | 1261 +---------+----------+----------+-------------+-----------+ 1262 |AP2 |CE1,portY | 40 Gbps | 35 Gbps | | 1263 | -VNAP2.9| | 5 Gbps | N.A. | VNAP1.9 | 1264 +---------+----------+----------+-------------+-----------+ 1265 |AP3 |CE2,portX | 50 Gbps | 45 Gbps | | 1266 | -VNAP3.9| | 5 Gbps | N.A. | NONE | 1267 +---------+----------+----------+-------------+-----------+ 1269 Table 4: Dual-Homing - Customer View after VN Creation 1271 7. Advanced ACTN Application: Multi-Destination Service 1273 A further advanced application of ACTN is in the case of Data Center 1274 selection, where the customer requires the Data Center selection to 1275 be based on the network status; this is referred to as Multi- 1276 Destination in [ACTN-REQ]. In terms of ACTN, a CNC could request a 1277 VNS between a set of source APs and destination APs and leave it up 1278 to the network (MDSC) to decide which source and destination access 1279 points to be used to set up the VNS. The candidate list of source 1280 and destination APs is decided by a CNC (or an entity outside of 1281 ACTN) based on certain factors which are outside the scope of ACTN. 1283 Based on the AP selection as determined and returned by the network 1284 (MDSC), the CNC (or an entity outside of ACTN) should further take 1285 care of any subsequent actions such as orchestration or service 1286 setup requirements. These further actions are outside the scope of 1287 ACTN. 
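The destination-selection step described above can be sketched in code. This is a purely illustrative sketch, not part of the ACTN specification: the function name and the metric values are hypothetical, and path_metric stands in for whatever constraints, optimization criteria, and policies the MDSC actually applies.

```python
# Hypothetical sketch of an MDSC choosing among candidate destination
# APs for a multi-destination VNS; all names and values are illustrative.

def select_destination(source_ap, candidate_aps, path_metric):
    """Return the candidate AP with the lowest path metric from source_ap.

    path_metric(source, dest) abstracts the MDSC's optimization
    criterion (delay, cost, policy weight, ...); it returns None when
    no feasible path exists to that candidate.
    """
    best_ap, best_cost = None, None
    for ap in candidate_aps:
        cost = path_metric(source_ap, ap)
        if cost is None:                 # no feasible path to this AP
            continue
        if best_cost is None or cost < best_cost:
            best_ap, best_cost = ap, cost
    return best_ap

# Candidate APs as in the multi-destination scenario: AP2 (DC-A),
# AP3 (DC-B), AP4 (DC-C); the metric values here are invented.
metrics = {("AP1", "AP2"): 10, ("AP1", "AP3"): 7, ("AP1", "AP4"): None}
chosen = select_destination("AP1", ["AP2", "AP3", "AP4"],
                            lambda s, d: metrics.get((s, d)))
# chosen == "AP3", the feasible destination with the lowest metric
```

The same shape applies to a backup selection: running the function a second time with the primary choice excluded from the candidate list yields the next-best AP.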
1289 Consider a case as shown in Figure 14, where three data centers are 1290 available, but the customer requires the data center selection to be 1291 based on the network status and the connectivity service setup 1292 between AP1 (CE1) and one of the destination APs (AP2 (DC-A), 1293 AP3 (DC-B), and AP4 (DC-C)). The MDSC (in coordination with PNCs) 1294 would select the best destination AP based on the constraints, 1295 optimization criteria, policies, etc., and set up the connectivity 1296 service (virtual network). 1298 ------- ------- 1299 ( ) ( ) 1300 - - - - 1301 +---+ ( ) ( ) +----+ 1302 |CE1|---+---( Domain X )----( Domain Y )---+---|DC-A| 1303 +---+ | ( ) ( ) | +----+ 1304 AP1 - - - - AP2 1305 ( ) ( ) 1306 ---+--- ---+--- 1307 | | 1308 AP3-+ AP4-+ 1309 | | 1310 +----+ +----+ 1311 |DC-B| |DC-C| 1312 +----+ +----+ 1314 Figure 14: End-Point Selection Based on Network Status 1316 7.1. Pre-Planned End Point Migration 1318 Furthermore, in the case of Data Center selection, the customer could 1319 request that a backup DC be selected, such that in case of 1320 failure, another DC site could provide hot stand-by protection. As 1321 shown in Figure 15, DC-C is selected as a backup for DC-A. Thus, the 1322 VN should be set up by the MDSC to include primary connectivity 1323 between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity 1324 between AP1 (CE1) and AP4 (DC-C). 1326 ------- ------- 1327 ( ) ( ) 1328 - - - - 1329 +---+ ( ) ( ) +----+ 1330 |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A| 1331 +---+ | ( ) ( ) | +----+ 1332 AP1 - - - - AP2 | 1333 ( ) ( ) | 1334 ---+--- ---+--- | 1335 | | | 1336 AP3-+ AP4-+ HOT STANDBY 1337 | | | 1338 +----+ +----+ | 1339 |DC-D| |DC-C|<------------- 1340 +----+ +----+ 1342 Figure 15: Pre-planned End-Point Migration 1344 7.2.
On the Fly End-Point Migration 1346 Compared to pre-planned end-point migration, on-the-fly end-point 1347 selection is dynamic in that the migration is not pre-planned but 1348 decided based on network conditions. Under this scenario, the MDSC 1349 would monitor the network (based on the VN Service-Level Agreement 1350 (SLA)) and notify the CNC in cases where some other destination AP 1351 would be a better choice based on the network parameters. The CNC 1352 should instruct the MDSC when it is suitable to update the VN with 1353 the new AP, if an update is required. 1355 8. Manageability Considerations 1357 The objective of ACTN is to manage traffic engineered resources and 1358 provide a set of mechanisms to allow customers to request virtual 1359 connectivity across server network resources. Because ACTN supports 1360 multiple customers, each with its own view of and control of a 1361 virtual network built on the server network, the network operator 1362 will need to partition (or "slice") their network resources and 1363 manage the resources accordingly. 1365 The ACTN platform will, itself, need to support the request, 1366 response, and reservation of client and network layer connectivity. 1367 It will also need to provide performance monitoring and control of 1368 traffic engineered resources. The management requirements may be 1369 categorized as follows: 1371 . Management of external ACTN protocols 1372 . Management of internal ACTN interfaces/protocols 1373 . Management and monitoring of ACTN components 1374 . Configuration of policy to be applied across the ACTN system 1376 The ACTN framework and interfaces are defined to enable traffic 1377 engineering for virtual network services and connectivity services. 1378 Network operators may have other Operations, Administration, and 1379 Maintenance (OAM) tasks for service fulfillment, optimization, and 1380 assurance beyond traffic engineering. The realization of OAM beyond
The realization of OAM beyond 1381 abstraction and control of traffic engineered networks is not 1382 considered in this document. 1384 8.1. Policy 1386 Policy is an important aspect of ACTN control and management. 1387 Policies are used via the components and interfaces, during 1388 deployment of the service, to ensure that the service is compliant 1389 with agreed policy factors and variations (often described in SLAs), 1390 these include, but are not limited to: connectivity, bandwidth, 1391 geographical transit, technology selection, security, resilience, 1392 and economic cost. 1394 Depending on the deployment of the ACTN architecture, some policies 1395 may have local or global significance. That is, certain policies 1396 may be ACTN component specific in scope, while others may have 1397 broader scope and interact with multiple ACTN components. Two 1398 examples are provided below: 1400 . A local policy might limit the number, type, size, and 1401 scheduling of virtual network services a customer may request 1402 via its CNC. This type of policy would be implemented locally 1403 on the MDSC. 1405 . A global policy might constrain certain customer types (or 1406 specific customer applications) to only use certain MDSCs, and 1407 be restricted to physical network types managed by the PNCs. A 1408 global policy agent would govern these types of policies. 1410 The objective of this section is to discuss the applicability of 1411 ACTN policy: requirements, components, interfaces, and examples. 1412 This section provides an analysis and does not mandate a specific 1413 method for enforcing policy, or the type of policy agent that would 1414 be responsible for propagating policies across the ACTN components. 1415 It does highlight examples of how policy may be applied in the 1416 context of ACTN, but it is expected further discussion in an 1417 applicability or solution specific document, will be required. 1419 8.2. 
Policy Applied to the Customer Network Controller 1421 A virtual network service for a customer application will be 1422 requested by the CNC. The request will reflect the application 1423 requirements and specific service needs, including bandwidth, 1424 traffic type, and survivability. Furthermore, application access and 1425 the type of virtual network service requested by the CNC will need 1426 to adhere to specific access control policies. 1428 8.3. Policy Applied to the Multi-Domain Service Coordinator 1430 A key objective of the MDSC is to support the customer's expression 1431 of the application connectivity request via its CNC as a set of 1432 desired business needs; therefore, policy will play an important 1433 role. 1435 Once authorized, the virtual network service will be instantiated 1436 via the CNC-MDSC Interface (CMI); it will reflect the customer 1437 application and connectivity requirements, and specific service 1438 transport needs. The CNC and the MDSC components will have agreed on 1439 connectivity end-points; use of these end-points should be defined 1440 as a policy expression when setting up or augmenting virtual network 1441 services. Ensuring that permissible end-points are defined for CNCs 1442 and applications will require the MDSC to maintain a registry of 1443 permissible connection points for CNCs and application types. 1445 Conflicts may occur when virtual network service optimization 1446 criteria are in competition. For example, to meet objectives for 1447 service reachability, a request may require an interconnection point 1448 between multiple physical networks; however, this might break a 1449 confidentiality policy requirement of a specific type of end-to-end 1450 service. Thus, an MDSC may have to balance a number of the 1451 constraints on a service request and between different requested 1452 services. It may also have to balance requested services with 1453 operational norms for the underlying physical networks.
This 1454 balancing may be resolved using configured policy and using hard and 1455 soft policy constraints. 1457 8.4. Policy Applied to the Provisioning Network Controller 1459 The PNC is responsible for configuring the network elements, 1460 monitoring physical network resources, and exposing connectivity 1461 (direct or abstracted) to the MDSC. It is therefore expected that 1462 policy will dictate what connectivity information will be exported 1463 between the PNC, via the MDSC-PNC Interface (MPI), and the MDSC. 1465 Policy interactions may arise when a PNC determines that it cannot 1466 compute a requested path from the MDSC, or notices that (per a 1467 locally configured policy) the network is low on resources (for 1468 example, the capacity on key links becomes exhausted). In either 1469 case, the PNC will be required to notify the MDSC, which may (again, 1470 per policy) act to construct a virtual network service across 1471 another physical network topology. 1473 Furthermore, additional forms of policy-based resource management 1474 will be required to provide virtual network service performance, 1475 security, and resilience guarantees. This will likely be implemented 1476 via a local policy agent and additional protocol methods. 1478 9. Security Considerations 1480 The ACTN framework described in this document defines key components 1481 and interfaces for managed traffic engineered networks. Securing 1482 the request and control of resources, confidentiality of the 1483 information, and availability of function should all be critical 1484 security considerations when deploying and operating ACTN platforms. 1486 Several distributed ACTN functional components are required, and 1487 implementations should consider encrypting data that flows between 1488 components, especially when they are implemented at remote nodes, 1489 regardless of whether these data flows are on external or internal network 1490 interfaces.
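As a concrete illustration of the encryption point above, the sketch below builds a mutually authenticated TLS client context such as one ACTN component might use toward a remote component. This is an assumption-laden sketch, not part of the ACTN framework: the function name and file names are placeholders, and a real deployment would follow the operator's own PKI and protocol choices.

```python
# Illustrative only: a TLS client context for encrypting a data flow
# between ACTN components (e.g., MDSC to PNC). The function name and
# file names are placeholders, not values defined by ACTN.
import ssl

def make_actn_tls_context(cafile=None, certfile=None, keyfile=None):
    # PROTOCOL_TLS_CLIENT enables certificate verification and hostname
    # checking by default, so the peer's identity is always verified.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS
    if cafile:
        ctx.load_verify_locations(cafile)          # operator's trust anchor
    if certfile:
        # Presenting a client certificate lets the peer authenticate the
        # connecting component as well (mutual authentication).
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

# The context would then wrap an ordinary socket, e.g.:
#   ctx = make_actn_tls_context("ca.pem", "mdsc.pem", "mdsc.key")
#   secure = ctx.wrap_socket(sock, server_hostname="pnc.example.net")
```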
   The ACTN security discussion is further split into two specific
   categories described in the following sub-sections:

   . Interface between the Customer Network Controller and Multi-
     Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   . Interface between the Multi-Domain Service Coordinator and
     Provisioning Network Controller (PNC), MDSC-PNC Interface (MPI)

   From a security and reliability perspective, ACTN may encounter
   many risks, such as malicious attacks and rogue elements
   attempting to connect to various ACTN components.  Furthermore,
   some ACTN components represent a single point of failure and
   threat vector; they must also manage policy conflicts and guard
   against eavesdropping on communication between different ACTN
   components.

   The conclusion is that all protocols used to realize the ACTN
   framework should have rich security features, and customer,
   application, and network data should be stored in encrypted data
   stores.  Additional security risks may still exist.  Therefore,
   discussion and applicability of specific security functions and
   protocols will be better described in documents that are use case
   and environment specific.

9.1. CNC-MDSC Interface (CMI)

   Data stored by the MDSC will reveal details of the virtual network
   services, and which CNC and customer/application is consuming the
   resource.  The data stored must therefore be considered as a
   candidate for encryption.

   CNC access rights to an MDSC must be managed.  The MDSC must
   allocate resources properly, and methods to prevent policy
   conflicts, resource wastage, and denial-of-service attacks on the
   MDSC by rogue CNCs should also be considered.

   The CMI will likely be an external protocol interface.
   Suitable authentication and authorization of each CNC connecting
   to the MDSC will be required, especially as these are likely to be
   implemented by different organizations and on separate functional
   nodes.  Use of AAA-based mechanisms would also provide role-based
   authorization methods, so that only authorized CNCs may access the
   different functions of the MDSC.

9.2. MDSC-PNC Interface (MPI)

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the
   physical network layer control components and the MDSC.

   Which MDSC the PNC exports topology information to, and the level
   of detail (full or abstracted), should also be authenticated;
   specific access restrictions and topology views should be
   configurable and/or policy-based.

10. IANA Considerations

   This document has no actions for IANA.

11. References

11.1. Informative References

   [RFC2702]  Awduche, D., et al., "Requirements for Traffic
              Engineering Over MPLS", RFC 2702, September 1999.

   [RFC4655]  Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
              Computation Element (PCE)-Based Architecture", RFC
              4655, August 2006.

   [RFC5654]  Niven-Jenkins, B. (Ed.), Brungard, D. (Ed.), and M.
              Betts (Ed.), "Requirements of an MPLS Transport
              Profile", RFC 5654, September 2009.

   [RFC7149]  Boucadair, M. and Jacquenet, C., "Software-Defined
              Networking: A Perspective from within a Service
              Provider Environment", RFC 7149, March 2014.

   [RFC7926]  Farrel, A. (Ed.), "Problem Statement and Architecture
              for Information Exchange between Interconnected
              Traffic-Engineered Networks", RFC 7926, July 2016.

   [RFC3945]  Mannie, E. (Ed.), "Generalized Multi-Protocol Label
              Switching (GMPLS) Architecture", RFC 3945, October
              2004.
   [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
              1.1, ONF TR-521, June 2016.

   [Centralized] Farrel, A., et al., "An Architecture for Use of PCE
              and PCEP in a Network with Central Control", draft-
              ietf-teas-pce-central-control, work in progress.

   [Service-YANG] Lee, Y., Dhody, D., and Ceccarelli, D., "Traffic
              Engineering and Service Mapping Yang Model", draft-
              lee-teas-te-service-mapping-yang, work in progress.

   [ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN
              Operation", draft-lee-teas-actn-vn-yang, work in
              progress.

   [ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and
              Control of TE Networks", draft-ietf-teas-actn-
              requirements, work in progress.

   [TE-Topo]  Liu, X., et al., "YANG Data Model for TE Topologies",
              draft-ietf-teas-yang-te-topo, work in progress.

12. Contributors

   Adrian Farrel
   Old Dog Consulting
   Email: adrian@olddog.co.uk

   Italo Busi
   Huawei
   Email: Italo.Busi@huawei.com

   Khuzema Pithewan
   Infinera
   Email: kpithewan@infinera.com

   Michael Scharf
   Nokia
   Email: michael.scharf@nokia.com

   Luyuan Fang
   eBay
   Email: luyuanf@gmail.com

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid, Spain
   Email: diego@tid.es

   Sergio Belotti
   Alcatel Lucent
   Via Trento, 30
   Vimercate, Italy
   Email: sergio.belotti@nokia.com

   Daniel King
   Lancaster University
   Email: d.king@lancaster.ac.uk

   Dhruv Dhody
   Huawei Technologies
   Divyashree Techno Park, Whitefield
   Bangalore, Karnataka 560066
   India
   Email: dhruv.ietf@gmail.com

   Gert Grammel
   Juniper Networks
   Email: ggrammel@juniper.net

Authors' Addresses

   Daniele Ceccarelli
   Ericsson
   Torshamnsgatan 48
   Stockholm, Sweden
   Email: daniele.ceccarelli@ericsson.com

   Young Lee
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023, USA
   Phone: (469)277-5838
   Email: leeyoung@huawei.com

APPENDIX A - Example of MDSC and PNC Functions Integrated in a
             Service/Network Orchestrator

   This appendix provides an example of a possible deployment
   scenario in which a Service/Network Orchestrator includes a number
   of functions.  In the example below, these are the PNC functions
   for Domain 2 and the MDSC functions that coordinate the PNC1
   functions (hosted in a separate domain controller) and the PNC2
   functions (co-hosted in the network orchestrator).

                              Customer
                  +-------------------------------+
                  |          +-----+              |
                  |          | CNC |              |
                  |          +-----+              |
                  +-------|-----------------------+
                          |
         Service/Network  | CMI
         Orchestrator     |
                  +-------|------------------------+
                  | +------+   MPI   +------+      |
                  | | MDSC |---------| PNC2 |      |
                  | +------+         +------+      |
                  +-------|------------------|-----+
                          | MPI              |
         Domain Controller|                  |
                  +-------|-----+            |
                  | +-----+     |            | SBI
                  | |PNC1 |     |            |
                  | +-----+     |            |
                  +-------|-----+            |
                          v     SBI          v
                      -------            -------
                     (       )          (       )
                    -         -        -         -
                   (           )      (           )
                   (  Domain 1 )------(  Domain 2 )
                   (           )      (           )
                    -         -        -         -
                     (       )          (       )
                      -------            -------
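   The split of functions shown in the figure above might be
   sketched, purely illustratively, as follows.  The classes and
   method names are hypothetical and do not correspond to any
   protocol elements defined by ACTN:

```python
# Illustrative sketch only: the classes and method names below are
# hypothetical and are not protocol elements defined by ACTN.

class PNC:
    """Provisioning Network Controller for a single domain."""
    def __init__(self, domain):
        self.domain = domain
        self.provisioned = []

    def provision(self, endpoint):
        # A real PNC would configure network elements over the SBI;
        # here the request is simply recorded.
        self.provisioned.append(endpoint)
        return "%s:%s" % (self.domain, endpoint)

class MDSC:
    """Multi-Domain Service Coordinator dispatching over the MPI."""
    def __init__(self, pncs):
        # Maps a domain name to its PNC.  As in the figure, PNC2 may
        # be co-hosted (a direct reference inside the orchestrator)
        # while PNC1 sits in a separate domain controller.
        self.pncs = pncs

    def setup_vn(self, request):
        # Split the end-to-end request into per-domain segments and
        # hand each segment to the PNC that owns the domain.
        return [self.pncs[d].provision(ep) for d, ep in request]

pnc1 = PNC("Domain1")   # hosted in a separate domain controller
pnc2 = PNC("Domain2")   # co-hosted within the orchestrator
mdsc = MDSC({"Domain1": pnc1, "Domain2": pnc2})
result = mdsc.setup_vn([("Domain1", "CE-A"), ("Domain2", "CE-B")])
# result == ["Domain1:CE-A", "Domain2:CE-B"]
```

   The co-hosted and remote PNCs are deliberately given the same
   interface here: from the MDSC's point of view, only the transport
   of the MPI differs between the two deployment options.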