TEAS Working Group                               Daniele Ceccarelli (Ed)
Internet Draft                                                  Ericsson
Intended status: Informational                            Young Lee (Ed)
Expires: October 3, 2018                                          Huawei

                                                           April 3, 2018

   Framework for Abstraction and Control of Traffic Engineered Networks

                     draft-ietf-teas-actn-framework-13

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane.  They
   also have a range of management and provisioning protocols to
   configure and activate network resources.  These mechanisms represent
   key technologies for enabling flexible and dynamic networking.
   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a distributed
   or centralized control plane to support dynamic provisioning of
   end-to-end connectivity.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN) to support virtual network
   services and connectivity services.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on October 3, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction
   2. Overview
      2.1. Terminology
      2.2. VNS Model of ACTN
         2.2.1. Customers
         2.2.2. Service Providers
         2.2.3. Network Providers
   3. ACTN Base Architecture
      3.1. Customer Network Controller
      3.2. Multi-Domain Service Coordinator
      3.3. Provisioning Network Controller
      3.4. ACTN Interfaces
   4. Advanced ACTN Architectures
      4.1. MDSC Hierarchy
      4.2. Functional Split of MDSC Functions in Orchestrators
   5. Topology Abstraction Methods
      5.1. Abstraction Factors
      5.2. Abstraction Types
         5.2.1. Native/White Topology
         5.2.2. Black Topology
         5.2.3. Grey Topology
      5.3. Methods of Building Grey Topologies
         5.3.1. Automatic Generation of Abstract Topology by
                Configuration
         5.3.2. On-demand Generation of Supplementary Topology via
                Path Compute Request/Reply
      5.4. Hierarchical Topology Abstraction Example
      5.5. VN Recursion with Network Layers
   6. Access Points and Virtual Network Access Points
      6.1. Dual-Homing Scenario
   7. Advanced ACTN Application: Multi-Destination Service
      7.1. Pre-Planned End Point Migration
      7.2. On the Fly End-Point Migration
   8. Manageability Considerations
      8.1. Policy
      8.2. Policy Applied to the Customer Network Controller
      8.3. Policy Applied to the Multi Domain Service Coordinator
      8.4. Policy Applied to the Provisioning Network Controller
   9. Security Considerations
      9.1. CNC-MDSC Interface (CMI)
      9.2. MDSC-PNC Interface (MPI)
   10. IANA Considerations
   11. References
      11.1. Informative References
   12. Contributors
   Authors' Addresses
   APPENDIX A - Example of MDSC and PNC Functions Integrated in a
                Service/Network Orchestrator

1. Introduction

   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a
   distributed or centralized control plane to support dynamic
   provisioning of end-to-end connectivity.
   Traffic Engineered (TE) networks have a variety of mechanisms to
   facilitate separation of data plane and control plane including
   distributed signaling for path setup and protection, centralized
   path computation for planning and traffic engineering, and a range
   of management and provisioning protocols to configure and activate
   network resources.  These mechanisms represent key technologies for
   enabling flexible and dynamic networking.  Some examples of networks
   that are in scope of this definition are optical networks, MPLS
   Transport Profile (MPLS-TP) networks [RFC5654], and MPLS-TE networks
   [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the data
   plane.  This separation has been achieved for TE networks with the
   development of MPLS/GMPLS [RFC3945] and the Path Computation Element
   (PCE) [RFC4655].  One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks.  Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control.  For TE-based networks, a PCE may serve as a logically
   centralized path computation function.

   This document describes a set of management and control functions
   used to operate one or more TE networks to construct virtual
   networks that can be represented to customers and that are built
   from abstractions of the underlying TE networks so that, for
   example, a link in the customer's network is constructed from a path
   or collection of paths in the underlying networks.  We call this set
   of functions "Abstraction and Control of Traffic Engineered
   Networks" (ACTN).

2. Overview

   Three key aspects that need to be solved by SDN are:

   . Separation of service requests from service delivery so that the
     configuration and operation of a network are transparent from the
     point of view of the customer, but remain responsive to the
     customer's services and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a TE
     network to produce selective information that represents the
     potential ability to connect across the network.  The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies, capabilities,
     and topology so that the graph can be used to plan and deliver
     network services in a uniform way.

   . Coordination of resources across multiple independent networks
     and multiple technology layers to provide end-to-end services
     regardless of whether the networks use SDN or not.

   As networks evolve, the need to provide support for distinct
   services, separated service orchestration, and resource abstraction
   has emerged as a key requirement for operators.  In order to support
   multiple customers, each with its own view and control of the server
   network, a network operator needs to partition (or "slice") the
   network resources or manage their sharing.  Network slices can be
   assigned to each customer for guaranteed usage, which is a step
   further than shared use of common network resources.

   Furthermore, each network represented to a customer can be built
   from virtualization of the underlying networks so that, for example,
   a link in the customer's network is constructed from a path or
   collection of paths in the underlying network.

   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.
   This supports operators in viewing and controlling different domains
   (at any dimension: applied technology, administrative zones, or
   vendor-specific technology islands) and presenting virtualized
   networks to their customers.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application, or service [ONF-ARCH].

   . TE network slicing of infrastructure to meet specific customers'
     service requirements.

   . Creation of an abstract environment allowing operators to view
     and control multi-domain networks as a single abstract network.

   . The presentation to customers of networks as a virtual network
     via open and programmable interfaces.

2.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined, while others reference existing definitions:

   . Domain: A domain [RFC4655] is any collection of network elements
     within a common sphere of address management or path computation
     responsibility.  Specifically, within this document we mean a part
     of an operator's network that is under common management.  Network
     elements will often be grouped into domains based on technology
     types, vendor profiles, and geographic proximity.

   . Abstraction: This process is defined in [RFC7926].

   . TE Network Slicing: In the context of ACTN, a TE network slice is
     a collection of resources that is used to establish a logically
     dedicated virtual network over one or more TE networks.  TE
     network slicing allows a network provider to provide dedicated
     virtual networks for applications/customers over a common network
     infrastructure.
     The logically dedicated resources are a part of the larger common
     network infrastructures that are shared among various TE network
     slice instances, which are the end-to-end realization of TE
     network slicing, consisting of the combination of physically or
     logically dedicated resources.

   . Node: A node is a vertex on the graph representation of a TE
     topology.  In a physical network topology, a node corresponds to a
     physical network element (NE) such as a router.  In an abstract
     network topology, a node (sometimes called an abstract node) is a
     representation as a single vertex of one or more physical NEs and
     their connecting physical connections.  The concept of a node
     represents the ability to connect from any access to the node (a
     link end) to any other access to that node, although "limited
     cross-connect capabilities" may also be defined to restrict this
     functionality.  Network abstraction may be applied recursively, so
     a node in one topology may be created by applying abstraction to
     the nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology.  Two nodes connected by a link are said to be "adjacent"
     in the TE topology.  In a physical network topology, a link
     corresponds to a physical connection.  In an abstract network
     topology, a link (sometimes called an abstract link) is a
     representation of the potential to connect a pair of points with
     certain TE parameters (see [RFC7926] for details).  Network
     abstraction may be applied recursively, so a link in one topology
     may be created by applying abstraction to the links in the
     underlying topology.

   . Abstract Link: The term "abstract link" is defined in [RFC7926].

   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . Virtual Network (VN): A VN is a network provided by a service
     provider to a customer for the customer to use in any way it wants
     as though it were a physical network.  There are two views of a VN
     as follows:

     a) The VN can be abstracted as a set of edge-to-edge links (a
        Type 1 VN).  Each link is referred to as a VN member and is
        formed as an end-to-end tunnel across the underlying networks.
        Such tunnels may be constructed by recursive slicing or
        abstraction of paths in the underlying networks and can
        encompass edge points of the customer's network, access links,
        intra-domain paths, and inter-domain links.

     b) The VN can also be abstracted as a topology of virtual nodes
        and virtual links (a Type 2 VN).  The provider needs to map the
        VN to actual resource assignment, which is known as virtual
        network embedding.  The nodes in this case include physical end
        points, border nodes, and internal nodes as well as abstracted
        nodes.  Similarly, the links include physical access links,
        inter-domain links, and intra-domain links as well as abstract
        links.

     Clearly, a Type 1 VN is a special case of a Type 2 VN.

   . Access link: A link between a customer node and a provider node.

   . Inter-domain link: A link between domains under distinct
     management administration.

   . Access Point (AP): An AP is a logical identifier shared between
     the customer and the provider used to identify an access link.
     The AP is used by the customer when requesting a VNS.  Note that
     the term "TE Link Termination Point" (LTP) defined in [TE-Topo]
     describes the end points of links, while an AP is a common
     identifier for the link itself.

   . VN Access Point (VNAP): A VNAP is the binding between an AP and a
     given VN.

   . Server Network: As defined in [RFC7926], a server network is a
     network that provides connectivity for another network (the Client
     Network) in a client-server relationship.

2.2. VNS Model of ACTN

   A Virtual Network Service (VNS) is the service agreement between a
   customer and provider to provide a VN.  When a VN is a simple
   connectivity between two points, the difference between a VNS and a
   connectivity service becomes blurred.

   There are three types of VNS defined in this document.

   o Type 1 VNS refers to a VNS in which the customer is allowed to
     create and operate a Type 1 VN.

   o Type 2a and 2b VNS refer to VNSs in which the customer is allowed
     to create and operate a Type 2 VN.  With a Type 2a VNS, the VN is
     statically created at service configuration time and the customer
     is not allowed to change the topology (e.g., by adding or deleting
     abstract nodes and links).  A Type 2b VNS is the same as a Type 2a
     VNS except that the customer is allowed to make dynamic changes to
     the initial topology created at service configuration time.

   VN Operations are functions that a customer can exercise on a VN
   depending on the agreement between the customer and the provider.

   o VN Creation allows a customer to request the instantiation of a
     VN.  This could be through off-line pre-configuration or through
     dynamic requests specifying attributes in a Service Level
     Agreement (SLA) to satisfy the customer's objectives.

   o Dynamic Operations allow a customer to modify or delete the VN.
     The customer can further act upon the virtual network to
     create/modify/delete virtual links and nodes.  These changes will
     result in subsequent tunnel management in the operator's networks.

   There are three key entities in the ACTN VNS model:

   - Customers
   - Service Providers
   - Network Providers

   These entities are related in a three-tier model as shown in
   Figure 1.
                    +----------------------+
                    |       Customer       |
                    +----------------------+
                               |
                VNS    ||      |      /\  VNS
                Request||      |      ||  Reply
                       \/      |      ||
                    +----------------------+
                    |   Service Provider   |
                    +----------------------+
                      /        |        \
                     /         |         \
                    /          |          \
                   /           |           \
   +------------------+ +------------------+ +------------------+
   |Network Provider 1| |Network Provider 2| |Network Provider 3|
   +------------------+ +------------------+ +------------------+

                   Figure 1: The Three Tier Model.

   The commercial roles of these entities are described in the
   following sections.

2.2.1. Customers

   Basic customers include fixed residential users, mobile users, and
   small enterprises.  Each requires a small amount of resources and is
   characterized by steady requests (relatively time invariant).  Basic
   customers do not modify their services themselves: if a service
   change is needed, it is performed by the provider as a proxy.

   Advanced customers include enterprises, governments, and utility
   companies.  Such customers ask for both point-to-point and
   multipoint connectivity with high resource demands varying
   significantly in time.  This is one of the reasons why a bundled
   service offering is not enough and it is desirable to provide each
   advanced customer with a customized virtual network service.
   Advanced customers may also have the ability to modify their service
   parameters within the scope of their virtualized environments.  The
   primary focus of ACTN is advanced customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with different
   underlying objectives set by the network providers.
   To enable these customers to support flexible and dynamic
   applications they need to control their allocated virtual network
   resources in a dynamic fashion, and that means that they need a view
   of the topology that spans all of the network providers.  Customers
   of a given service provider can in turn offer a service to other
   customers in a recursive way.

2.2.2. Service Providers

   In the scope of ACTN, service providers deliver VNSs to their
   customers.  Service providers may or may not own physical network
   resources (i.e., may or may not be network providers as described in
   Section 2.2.3).  When a service provider is the same as the network
   provider, this is similar to existing VPN models applied to a single
   provider, although it may be hard to use this approach when the
   customer spans multiple independent network provider domains.

   When network providers supply only infrastructure, while distinct
   service providers interface to the customers, the service providers
   are themselves customers of the network infrastructure providers.
   One service provider may need to work with multiple independent
   network providers because its end-users span geographically across
   multiple network provider domains.

2.2.3. Network Providers

   Network providers are the infrastructure providers that own and
   provision the network resources and provide them to their customers.
   The layered model described in this architecture separates the
   concerns of network providers and customers, with service providers
   acting as aggregators of customer requests.

3. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is based on a 3-tier reference model and
   allows for hierarchy and recursion.  The main functionalities within
   an ACTN system are:

   . Multi-domain coordination: This function oversees the specific
     aspects of different domains and builds a single abstracted
     end-to-end network topology in order to coordinate end-to-end path
     computation and path/service provisioning.  Domain sequence path
     calculation/determination is also a part of this function.

   . Abstraction: This function provides an abstracted view of the
     underlying network resources for use by the customer - a customer
     may be the client or a higher-level controller entity.  This
     function includes network path computation based on customer
     service connectivity request constraints, path computation based
     on the global network-wide abstracted topology, and the creation
     of an abstracted view of network resources allocated to each
     customer.  These operations depend on customer-specific network
     objective functions and customer traffic profiles.

   . Customer mapping/translation: This function maps customer
     requests/commands into network provisioning requests that can be
     sent to the Provisioning Network Controller (PNC) according to
     business policies provisioned statically or dynamically at the
     OSS/NMS.  Specifically, it provides mapping and translation of a
     customer's service request into a set of parameters that are
     specific to a network type and technology such that the network
     configuration process is made possible.

   . Virtual service coordination: This function translates customer
     service-related information into virtual network service
     operations in order to seamlessly operate virtual networks while
     meeting a customer's service requirements.  In the context of
     ACTN, service/virtual service coordination includes a number of
     service orchestration functions such as multi-destination load
     balancing and guarantees of service quality, bandwidth, and
     throughput.
     It also includes notifications for service fault and performance
     degradation and so forth.

   The base ACTN architecture defines three controller types and the
   corresponding interfaces between these controllers.  The following
   types of controller are shown in Figure 2:

   . CNC - Customer Network Controller
   . MDSC - Multi Domain Service Coordinator
   . PNC - Provisioning Network Controller

   Figure 2 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface
   . SBI - South Bound Interface

            +---------+      +---------+      +---------+
            |   CNC   |      |   CNC   |      |   CNC   |
            +---------+      +---------+      +---------+
                 \                |                /
   Business       \               |               /
   Boundary =======\==============|==============/============
   Between          \             |             /
   Customer &        -------      | CMI  -------
   Network Provider         \     |     /
                        +---------------+
                        |     MDSC      |
                        +---------------+
                       /        |        \
            ------------        | MPI     -------------
           /                    |                      \
      +-------+            +-------+               +-------+
      |  PNC  |            |  PNC  |               |  PNC  |
      +-------+            +-------+               +-------+
        | SBI   /              |                   /      \
        |      /               | SBI          SBI /        \
     ---------    -----        |                 /          \
    (         )  (     )       |                /            \
    - Control -  (Phys.)       |               /            -----
   (   Plane   ) ( Net )       |              /            (     )
   (  Physical )  -----        |             /             (Phys.)
   (  Network  )            -----        -----             ( Net )
    -         -            (     )      (     )             -----
     (       )             (Phys.)      (Phys.)
      ---------            ( Net )      ( Net )
                            -----        -----

                  Figure 2: ACTN Base Architecture

   Note that this is a functional architecture: an implementation and
   deployment might collocate one or more of the functional components.

3.1. Customer Network Controller

   A Customer Network Controller (CNC) is responsible for communicating
   a customer's VNS requirements to the network provider over the
   CNC-MDSC Interface (CMI).  It has knowledge of the end-points
   associated with the VNS (expressed as APs), the service policy, and
   other QoS information related to the service.
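   The contents of such a VNS request can be illustrated with a small
   sketch (Python).  This is purely illustrative: ACTN does not define
   a concrete encoding for the CMI, and all field and class names below
   are assumptions, not protocol elements.

   ```python
   from dataclasses import dataclass, field
   from typing import List

   # Illustrative sketch only: names here are assumptions, since ACTN
   # leaves the CMI encoding to the models that realize it.

   @dataclass
   class VNMember:
       """One edge-to-edge link of a Type 1 VN, identified by its end APs."""
       src_ap: str
       dst_ap: str
       bandwidth_mbps: int
       max_latency_ms: float

   @dataclass
   class VNSRequest:
       """What a CNC might convey to the MDSC over the CMI."""
       customer: str
       vns_type: str                      # "type1", "type2a", or "type2b"
       members: List[VNMember] = field(default_factory=list)

   # A Type 1 VNS: end points are expressed as APs, with per-member
   # service parameters standing in for the SLA attributes.
   request = VNSRequest(
       customer="enterprise-42",
       vns_type="type1",
       members=[VNMember("AP1", "AP2", bandwidth_mbps=1000,
                         max_latency_ms=20.0)],
   )
   ```

   Note how the customer refers only to APs and service parameters;
   nothing in the request exposes the provider's internal topology.
   
   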
   As the Customer Network Controller directly interfaces to the
   applications, it understands multiple application requirements and
   their service needs.  The capability of a CNC beyond its CMI role is
   outside the scope of ACTN and may be implemented in different ways.
   For example, the CNC may in fact be a controller or part of a
   controller in the customer's domain, or the CNC functionality could
   also be implemented as part of a provisioning portal.

3.2. Multi-Domain Service Coordinator

   A Multi-Domain Service Coordinator (MDSC) is a functional block that
   implements all of the ACTN functions listed in Section 3 and
   described further in Section 4.2.  The two functions of the MDSC,
   namely multi-domain coordination and virtualization/abstraction, are
   referred to as network-related functions, while the other two
   functions, namely customer mapping/translation and virtual service
   coordination, are referred to as service-related functions.  The
   MDSC sits at the center of the ACTN model between the CNC that
   issues connectivity requests and the Provisioning Network
   Controllers (PNCs) that manage the network resources.

   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs.  The MDSC envelops the instantiation of the right
   technology and network control to meet business criteria.  In
   essence it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   In order to allow for multi-domain coordination a 1:N relationship
   must be allowed between MDSCs and PNCs.
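   The 1:N relationship can be sketched as follows (an illustrative
   Python sketch under assumed names, not part of the ACTN
   specification): one MDSC fans a single end-to-end request out to the
   PNC of each domain in the computed domain sequence.

   ```python
   # Illustrative sketch of the 1:N MDSC-to-PNC relationship.
   # Class and method names are assumptions for this example only.

   class PNC:
       def __init__(self, domain):
           self.domain = domain

       def provision_segment(self, ingress, egress):
           # A real PNC would configure network elements over its SBI;
           # here we just record the per-domain segment.
           return f"{self.domain}:{ingress}->{egress}"

   class MDSC:
       def __init__(self, pncs):
           self.pncs = pncs  # 1:N - one MDSC, a PNC per domain

       def provision(self, domain_sequence):
           # domain_sequence: [(domain, ingress, egress), ...] as
           # produced by the domain sequence path determination function
           return [self.pncs[d].provision_segment(i, e)
                   for d, i, e in domain_sequence]

   mdsc = MDSC({"X": PNC("X"), "Y": PNC("Y")})
   segments = mdsc.provision([("X", "AP1", "B1"), ("Y", "B2", "AP2")])
   # segments == ["X:AP1->B1", "Y:B2->AP2"]
   ```

   The MDSC alone sees the end-to-end picture; each PNC acts only
   within its own domain.
   
   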
   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers) but
   all using the resources of a common network infrastructure provider.

3.3. Provisioning Network Controller

   The Provisioning Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and collecting information about the topology (either
   raw or abstracted).

   The PNC functions can be implemented as part of an SDN domain
   controller, a Network Management System (NMS), an Element Management
   System (EMS), an active PCE-based controller [Centralized], or any
   other means to dynamically control a set of nodes that implements an
   NBI compliant with the ACTN specification.

   A PNC domain includes all the resources under the control of a
   single PNC.  It can be composed of different routing domains and
   administrative domains, and the resources may come from different
   layers.  The interconnection between PNC domains is illustrated in
   Figure 3.

           _______                            _______
         _(       )_                        _(       )_
       _(           )_                    _(           )_
      (               )      Border      (               )
     (      PNC        ------  Link ------      PNC       )
     (   Domain X     |Border|========|Border|  Domain Y  )
     (                | Node |        | Node |            )
     (                 ------          ------             )
      (_             _)                  (_             _)
        (_         _)                      (_         _)
          (_______)                          (_______)

                    Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   providers due to security and policy concerns.  In addition, some
   networks may operate a control plane and as such it is not practical
   for the customer to directly interface with network elements.
   Therefore, the network has to provide open, programmable interfaces,
   through which customer applications can create, replace, and modify
   virtual network resources and services in an interactive, flexible,
   and dynamic fashion.

   Three interfaces exist in the ACTN architecture as shown in
   Figure 2.

   . CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC
     and an MDSC.  The CMI is a business boundary between customer and
     network provider.  It is used to request a VNS for an application.
     All service-related information is conveyed over this interface
     (such as the VNS type, topology, bandwidth, and service
     constraints).  Most of the information over this interface is
     agnostic of the technology used by network providers, but there
     are some cases (e.g., access link configuration) where it is
     necessary to specify technology-specific details.

   . MPI: The MDSC-PNC Interface (MPI) is an interface between an MDSC
     and a PNC.  It communicates requests for new connectivity or for
     bandwidth changes in the physical network.  In multi-domain
     environments, the MDSC needs to communicate with multiple PNCs,
     each responsible for control of a domain.  The MPI presents an
     abstracted topology to the MDSC, hiding technology-specific
     aspects of the network and hiding topology according to policy.

   . SBI: The Southbound Interface (SBI) is out of scope of ACTN.
     Many different SBIs have been defined for different environments,
     technologies, standards organizations, and vendors.  It is shown
     in Figure 2 for reference only.

4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, or putting together
   different layers and technologies in the network.
In the case where 667 there is a hierarchy of MDSCs, we introduce the terms higher-level 668 MDSC (MDSC-H) and lower-level MDSC (MDSC-L). The interface between 669 them is a recursion of the MPI. An implementation of an MDSC-H 670 makes provisioning requests as normal using the MPI, but an MDSC-L 671 must be able to receive requests as normal at the CMI and also at 672 the MPI. The hierarchy of MDSCs can be seen in Figure 4. 674 Another implementation choice could foresee the usage of an MDSC-L 675 for all the PNCs related to a given technology (e.g. IP/MPLS) and a 676 different MDSC-L for the PNCs related to another technology (e.g. 677 OTN/WDM) and an MDSC-H to coordinate them. 679 +--------+ 680 | CNC | 681 +--------+ 682 | +-----+ 683 | CMI | CNC | 684 +----------+ +-----+ 685 -------| MDSC-H |---- | 686 | +----------+ | | CMI 688 MPI | MPI | | 689 | | | 690 +---------+ +---------+ 691 | MDSC-L | | MDSC-L | 692 +---------+ +---------+ 693 MPI | | | | 694 | | | | 695 ----- ----- ----- ----- 696 | PNC | | PNC | | PNC | | PNC | 697 ----- ----- ----- ----- 699 Figure 4: MDSC Hierarchy 701 4.2. Functional Split of MDSC Functions in Orchestrators 703 An implementation choice could separate the MDSC functions into two 704 groups, one group for service-related functions and the other for 705 network-related functions. This enables the implementation of a 706 service orchestrator that provides the service-related functions of 707 the MDSC and a network orchestrator that provides the network- 708 related functions of the MDSC. This split is consistent with the 709 YANG service model architecture described in [Service-YANG]. Figure 710 5 depicts this and shows how the ACTN interfaces may map to YANG 711 models. 
713 +--------------------+ 714 | Customer | 715 | +-----+ | 716 | | CNC | | 717 | +-----+ | 718 +--------------------+ 719 CMI | Customer Service Model 720 | 721 +---------------------------------------+ 722 | Service | 723 ********|*********************** Orchestrator | 724 * MDSC | +-----------------+ * | 725 * | | Service-related | * | 726 * | | Functions | * | 727 * | +-----------------+ * | 728 * +----------------------*----------------+ 729 * * | Service Delivery Model 730 * * | 731 * +----------------------*----------------+ 732 * | * Network | 733 * | +-----------------+ * Orchestrator | 734 * | | Network-related | * | 735 * | | Functions | * | 736 * | +-----------------+ * | 737 ********|*********************** | 738 +---------------------------------------+ 739 MPI | Network Configuration Model 740 | 741 +------------------------+ 742 | Domain | 743 | +------+ Controller | 744 | | PNC | | 745 | +------+ | 746 +------------------------+ 747 SBI | Device Configuration Model 748 | 749 +--------+ 750 | Device | 751 +--------+ 753 Figure 5: ACTN Architecture in the Context of the YANG Service 754 Models 755 5. Topology Abstraction Methods 757 Topology abstraction is described in [RFC7926]. This section 758 discusses topology abstraction factors, types, and their context in 759 the ACTN architecture. 761 Abstraction in ACTN is performed by the PNC when presenting 762 available topology to the MDSC, or by an MDSC-L when presenting 763 topology to an MDSC-H. This function is different to the creation 764 of a VN (and particularly a Type 2 VN) which is not abstraction but 765 construction of virtual resources. 767 5.1. Abstraction Factors 769 As discussed in [RFC7926], abstraction is tied with policy of the 770 networks. For instance, per an operational policy, the PNC would 771 not provide any technology specific details (e.g., optical 772 parameters for WSON) in the abstract topology it provides to the 773 MDSC. 
Similarly, the policy of the networks may determine the 774 abstraction type as described in Section 5.2. 776 There are many factors that may impact the choice of abstraction: 778 - Abstraction depends on the nature of the underlying domain 779 networks. For instance, packet networks may be abstracted with 780 fine granularity, while abstraction of optical networks depends on 781 the switching units (such as wavelengths) and the end-to-end 782 continuity and cross-connect limitations within the network. 784 - Abstraction also depends on the capability of the PNCs. As 785 abstraction requires hiding details of the underlying network 786 resources, the PNC's capability to run algorithms impacts the 787 feasibility of abstraction. Some PNCs may not have the ability to 788 abstract the native topology, while other PNCs may have the ability to 789 use sophisticated algorithms. 791 - Abstraction is a tool that can improve scalability. Where the 792 native network resource information is of large size, there is a 793 specific scaling benefit to abstraction. 795 - The proper abstraction level may depend on the frequency of 796 topology updates, and vice versa. 798 - The nature of the MDSC's support for technology-specific 799 parameters impacts the degree/level of abstraction. If the MDSC 800 is not capable of handling such parameters, then a higher level of 801 abstraction is needed. 803 - In some cases, the PNC is required to hide key internal 804 topological data from the MDSC. Such confidentiality can be 805 achieved through abstraction. 807 5.2. Abstraction Types 809 This section defines the following three types of topology 810 abstraction: 812 . Native/White Topology (Section 5.2.1) 813 . Black Topology (Section 5.2.2) 814 . Grey Topology (Section 5.2.3) 816 5.2.1. Native/White Topology 818 This is a case where the PNC provides the actual network topology to 819 the MDSC without any hiding or filtering of information. That is, no 820 abstraction is performed.
In this case, the MDSC has the full 821 knowledge of the underlying network topology and can operate on it 822 directly. 824 5.2.2. Black Topology 826 A black topology replaces a full network with a minimal 827 representation of the edge-to-edge topology without disclosing any 828 node internal connectivity information. The entire domain network 829 may be abstracted as a single abstract node with the network's 830 access/egress links appearing as the ports to the abstract node and 831 the implication that any port can be 'cross-connected' to any other. 832 Figure 6 depicts a native topology with the corresponding black 833 topology with one virtual node and inter-domain links. In this 834 case, the MDSC has to make a provisioning request to the PNCs to 835 establish the port-to-port connection. If there is a large number 836 of inter-connected domains, this abstraction method may impose a 837 heavy coordination load at the MDSC level in order to find an 838 optimal end-to-end path since the abstraction hides so much 839 information that it is not possible to determine whether an end-to- 840 end path is feasible without asking each PNC to set up each path 841 fragment. For this reason, the MPI might need to be enhanced to 842 allow the PNCs to be queried for the practicality and 843 characteristics of paths across the abstract node. 844 ..................................... 845 : PNC Domain : 846 : +--+ +--+ +--+ +--+ : 847 ------+ +-----+ +-----+ +-----+ +------ 848 : ++-+ ++-+ +-++ +-++ : 849 : | | | | : 850 : | | | | : 851 : | | | | : 852 : | | | | : 853 : ++-+ ++-+ +-++ +-++ : 854 ------+ +-----+ +-----+ +-----+ +------ 855 : +--+ +--+ +--+ +--+ : 856 :.................................... 858 +----------+ 859 ---+ +--- 860 | Abstract | 861 | Node | 862 ---+ +--- 863 +----------+ 865 Figure 6: Native Topology with Corresponding Black Topology Expressed 866 as an Abstract Node 868 5.2.3. 
Grey Topology 870 A grey topology represents a compromise between black and white 871 topologies from a granularity point of view. In this case, the PNC 872 exposes an abstract topology containing all of the PNC domain's border nodes 873 and an abstraction of the connectivity between those border nodes. 874 This abstraction may contain either physical or abstract 875 nodes/links. 877 Two modes of grey topology are identified: 878 . In a type A grey topology, border nodes are connected by a 879 full mesh of TE links (see Figure 7). 880 . In a type B grey topology, border nodes are connected over a 881 more detailed network comprising internal abstract nodes and 882 abstracted links. This mode of abstraction supplies the MDSC 883 with more information about the internals of the PNC domain and 884 allows it to make more informed choices about how to route 885 connectivity over the underlying network. 887 ..................................... 888 : PNC Domain : 889 : +--+ +--+ +--+ +--+ : 890 ------+ +-----+ +-----+ +-----+ +------ 891 : ++-+ ++-+ +-++ +-++ : 892 : | | | | : 893 : | | | | : 894 : | | | | : 895 : | | | | : 896 : ++-+ ++-+ +-++ +-++ : 897 ------+ +-----+ +-----+ +-----+ +------ 898 : +--+ +--+ +--+ +--+ : 899 :.................................... 901 .................... 902 : Abstract Network : 903 : : 904 : +--+ +--+ : 905 -------+ +----+ +------- 906 : ++-+ +-++ : 907 : | \ / | : 908 : | \/ | : 909 : | /\ | : 910 : | / \ | : 911 : ++-+ +-++ : 912 -------+ +----+ +------- 913 : +--+ +--+ : 914 :..................: 916 Figure 7: Native Topology with Corresponding Grey Topology 918 5.3. Methods of Building Grey Topologies 920 This section discusses two different methods of building a grey 921 topology: 923 . Automatic generation of abstract topology by configuration 924 (Section 5.3.1) 925 . On-demand generation of supplementary topology via path 926 computation request/reply (Section 5.3.2) 928 5.3.1.
Automatic Generation of Abstract Topology by Configuration 930 Automatic generation is based on the abstraction/summarization of 931 the whole domain by the PNC and its advertisement on the MPI. The 932 level of abstraction can be decided based on PNC configuration 933 parameters (e.g., "provide the potential connectivity between any PE 934 and any ASBR in an MPLS-TE network"). 936 Note that the configuration parameters for this abstract topology 937 can include available bandwidth, latency, or any combination of 938 defined parameters. How to generate such information is beyond the 939 scope of this document. 941 This abstract topology may need to be periodically or incrementally 942 updated when there is a change in the underlying network or in the use 943 of the network resources that makes connectivity more or less 944 available. 946 5.3.2. On-demand Generation of Supplementary Topology via Path Compute 947 Request/Reply 949 While the abstract topology is generated and updated automatically by 950 configuration as explained in Section 5.3.1, additional 951 supplementary topology may be obtained by the MDSC via a path 952 compute request/reply mechanism. 954 The abstract topology advertisements from PNCs give the MDSC the 955 border node/link information for each domain. Under this scenario, 956 when the MDSC needs to create a new VN, the MDSC can issue path 957 computation requests to PNCs with constraints matching the VN 958 request as described in [ACTN-YANG]. An example is provided in 959 Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2. 960 The MDSC could use two different inter-domain links to get from 961 Domain X to Domain Y, but in order to choose the best end-to-end 962 path, it needs to know what domains X and Y can offer in terms of 963 connectivity and constraints between the PE nodes and the border 964 nodes.
966 ------- -------- 967 ( ) ( ) 968 - BrdrX.1------- BrdrY.1 - 969 (+---+ ) ( +---+) 971 -+---( |PE1| Dom.X ) ( Dom.Y |PE2| )---+- 972 | (+---+ ) ( +---+) | 973 AP1 - BrdrX.2------- BrdrY.2 - AP2 974 ( ) ( ) 975 ------- -------- 977 Figure 8: A Multi-Domain Example 979 The MDSC issues a path computation request to PNC.X asking for 980 potential connectivity between PE1 and border node BrdrX.1 and 981 between PE1 and BrdrX.2 with related objective functions and TE 982 metric constraints. A similar request for connectivity from the 983 border nodes in Domain Y to PE2 will be issued to PNC.Y. The MDSC 984 merges the results to compute the optimal end-to-end path including 985 the inter domain links. The MDSC can use the result of this 986 computation to request the PNCs to provision the underlying 987 networks, and the MDSC can then use the end-to-end path as a virtual 988 link in the VN it delivers to the customer. 990 5.4. Hierarchical Topology Abstraction Example 992 This section illustrates how topology abstraction operates in 993 different levels of a hierarchy of MDSCs as shown in Figure 9. 
995 +-----+ 996 | CNC | CNC wants to create a VN 997 +-----+ between CE A and CE B 998 | 999 | 1000 +-----------------------+ 1001 | MDSC-H | 1002 +-----------------------+ 1003 / \ 1004 / \ 1005 +---------+ +---------+ 1006 | MDSC-L1 | | MDSC-L2 | 1007 +---------+ +---------+ 1008 / \ / \ 1009 / \ / \ 1010 +----+ +----+ +----+ +----+ 1011 CE A o----|PNC1| |PNC2| |PNC3| |PNC4|----o CE B 1012 +----+ +----+ +----+ +----+ 1014 Virtual Network Delivered to CNC 1016 CE A o==============o CE B 1018 Topology operated on by MDSC-H 1020 CE A o----o==o==o===o----o CE B 1022 Topology operated on by MDSC-L1 Topology operated on by MDSC-L2 1023 _ _ _ _ 1024 ( ) ( ) ( ) ( ) 1025 ( ) ( ) ( ) ( ) 1026 CE A o--(o---o)==(o---o)==Dom.3 Dom.2==(o---o)==(o---o)--o CE B 1027 ( ) ( ) ( ) ( ) 1028 (_) (_) (_) (_) 1030 Actual Topology 1031 ___ ___ ___ ___ 1032 ( ) ( ) ( ) ( ) 1033 ( o ) ( o ) ( o--o) ( o ) 1034 ( / \ ) ( |\ ) ( | | ) ( / \ ) 1035 CE A o---(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)---o CE B 1036 ( \ / ) ( | |/ ) ( | | ) ( \ / ) 1037 ( o ) (o-o ) ( o--o) ( o ) 1038 (___) (___) (___) (___) 1040 Domain 1 Domain 2 Domain 3 Domain 4 1042 Where 1043 o is a node 1044 --- is a link 1045 === border link 1047 Figure 9: Illustration of Hierarchical Topology Abstraction 1049 In the example depicted in Figure 9, there are four domains under 1050 control of PNCs PNC1, PNC2, PNC3, and PNC4. MDSC-L1 controls PNC1 1051 and PNC2, while MDSC-L2 controls PNC3 and PNC4. Each of the PNCs 1052 provides a grey topology abstraction that presents only border nodes 1053 and links across and outside the domain. The abstract topology 1054 that MDSC-L1 operates on is a combination of the two topologies from 1055 PNC1 and PNC2. Likewise, the abstract topology that MDSC-L2 1056 operates on is shown in Figure 9. Both MDSC-L1 and MDSC-L2 provide a 1057 black topology abstraction to MDSC-H in which each PNC domain is 1058 presented as a single virtual node.
MDSC-H combines these two 1059 topologies to create the abstract topology on which it operates. 1060 MDSC-H sees the whole four-domain network as four virtual nodes 1061 connected via virtual links. 1063 5.5. VN Recursion with Network Layers 1065 In some cases, the VN supplied to a customer may be built using 1066 resources from different technology layers operated by different 1067 providers. For example, one provider may run a packet TE network 1068 and use optical connectivity provided by another provider. 1070 As shown in Figure 10, a customer asks for end-to-end connectivity 1071 between CE A and CE B as a virtual network. The customer's CNC makes a 1072 request to Provider 1's MDSC. The MDSC works out which network 1073 resources need to be configured and sends instructions to the 1074 appropriate PNCs. However, the link between Q and R is a virtual 1075 link supplied by Provider 2: Provider 1 is a customer of Provider 2. 1077 To support this, Provider 1 has a CNC that communicates with Provider 1078 2's MDSC. Note that Provider 1's CNC in Figure 10 is a functional 1079 component that does not dictate implementation: it may be embedded 1080 in a PNC.
1082 Virtual CE A o===============================o CE B 1083 Network 1085 ----- CNC wants to create a VN 1086 Customer | CNC | between CE A and CE B 1087 ----- 1088 : 1089 *********************************************** 1090 : 1091 Provider 1 --------------------------- 1092 | MDSC | 1093 --------------------------- 1094 : : : 1095 : : : 1096 ----- ------------- ----- 1097 | PNC | | PNC | | PNC | 1098 ----- ------------- ----- 1099 : : : : : 1100 Higher v v : v v 1101 Layer CE A o---P-----Q===========R-----S---o CE B 1102 Network | : | 1103 | : | 1104 | ----- | 1105 | | CNC | | 1106 | ----- | 1107 | : | 1108 *********************************************** 1109 | : | 1111 Provider 2 | ------ | 1112 | | MSDC | | 1113 | ------ | 1114 | : | 1115 | ------- | 1116 | | PNC | | 1117 | ------- | 1118 \ : : : / 1119 Lower \v v v/ 1120 Layer X--Y--Z 1121 Network 1123 Figure 10: VN recursion with Network Layers 1125 6. Access Points and Virtual Network Access Points 1127 In order to map identification of connections between the customer's 1128 sites and the TE networks and to scope the connectivity requested in 1129 the VNS, the CNC and the MDSC refer to the connections using the 1130 Access Point (AP) construct as shown in Figure 11. 1132 ------------- 1133 ( ) 1134 - - 1135 +---+ X ( ) Z +---+ 1136 |CE1|---+----( )---+---|CE2| 1137 +---+ | ( ) | +---+ 1138 AP1 - - AP2 1139 ( ) 1140 ------------- 1142 Figure 11: Customer View of APs 1144 Let's take as an example a scenario shown in Figure 11. CE1 is 1145 connected to the network via a 10Gbps link and CE2 via a 40Gbps 1146 link. Before the creation of any VN between AP1 and AP2 the 1147 customer view can be summarized as shown in Table 1. 
1149 +----------+------------------------+ 1150 |End Point | Access Link Bandwidth | 1151 +-----+----------+----------+-------------+ 1152 |AP id| CE,port | MaxResBw | AvailableBw | 1153 +-----+----------+----------+-------------+ 1154 | AP1 |CE1,portX | 10Gbps | 10Gbps | 1155 +-----+----------+----------+-------------+ 1156 | AP2 |CE2,portZ | 40Gbps | 40Gbps | 1157 +-----+----------+----------+-------------+ 1159 Table 1: AP - Customer View 1161 On the other hand, what the provider sees is shown in Figure 12. 1163 ------- ------- 1164 ( ) ( ) 1165 - - - - 1166 W (+---+ ) ( +---+) Y 1167 -+---( |PE1| Dom.X )---( Dom.Y |PE2| )---+- 1168 | (+---+ ) ( +---+) | 1169 AP1 - - - - AP2 1170 ( ) ( ) 1171 ------- ------- 1173 Figure 12: Provider View of the APs 1175 This results in the summarization shown in Table 2. 1177 +----------+------------------------+ 1178 |End Point | Access Link Bandwidth | 1179 +-----+----------+----------+-------------+ 1180 |AP id| PE,port | MaxResBw | AvailableBw | 1181 +-----+----------+----------+-------------+ 1182 | AP1 |PE1,portW | 10Gbps | 10Gbps | 1183 +-----+----------+----------+-------------+ 1184 | AP2 |PE2,portY | 40Gbps | 40Gbps | 1185 +-----+----------+----------+-------------+ 1187 Table 2: AP - Provider View 1189 A Virtual Network Access Point (VNAP) needs to be defined as a binding 1190 between an AP and a VN. It is used to allow for different VNs to 1191 start from the same AP. It also allows for traffic engineering on 1192 the access and/or inter-domain links (e.g., keeping track of 1193 bandwidth allocation). A different VNAP is created on an AP for 1194 each VN. 1196 In this simple scenario, suppose we want to create two virtual 1197 networks. The first, with VN identifier 9, is between AP1 and AP2 with 1198 a bandwidth of 1Gbps, while the second, with VN identifier 5, is again 1199 between AP1 and AP2 but with a bandwidth of 2Gbps. 1201 The provider view would evolve as shown in Table 3.
1203 +----------+------------------------+ 1204 |End Point | Access Link/VNAP Bw | 1205 +---------+----------+----------+-------------+ 1206 |AP/VNAPid| PE,port | MaxResBw | AvailableBw | 1207 +---------+----------+----------+-------------+ 1208 |AP1 |PE1,portW | 10Gbps | 7Gbps | 1209 | -VNAP1.9| | 1Gbps | N.A. | 1210 | -VNAP1.5| | 2Gbps | N.A. | 1211 +---------+----------+----------+-------------+ 1212 |AP2 |PE2,portY | 40Gbps | 37Gbps | 1213 | -VNAP2.9| | 1Gbps | N.A. | 1214 | -VNAP2.5| | 2Gbps | N.A. | 1215 +---------+----------+----------+-------------+ 1216 Table 3: AP and VNAP - Provider View after VNS Creation 1218 6.1. Dual-Homing Scenario 1220 Often there is a dual-homing relationship between a CE and a pair of 1221 PEs. This case needs to be supported by the definition of VNs, APs, 1222 and VNAPs. Suppose CE1 is connected to two different PEs in the 1223 operator domain via AP1 and AP2 and that the customer needs 5Gbps of 1224 bandwidth between CE1 and CE2. This is shown in Figure 13. 1226 ____________ 1227 AP1 ( ) AP3 1228 -------(PE1) (PE3)------- 1229 W / ( ) \ X 1230 +---+/ ( ) \+---+ 1231 |CE1| ( ) |CE2| 1232 +---+\ ( ) /+---+ 1233 Y \ ( ) / Z 1234 -------(PE2) (PE4)------- 1235 AP2 (____________) 1237 Figure 13: Dual-Homing Scenario 1239 In this case, the customer will request a VN between AP1, AP2, 1240 and AP3, specifying a dual-homing relationship between AP1 and AP2. 1241 As a consequence, no traffic will flow between AP1 and AP2. The dual- 1242 homing relationship would then be mapped against the VNAPs (since 1243 other independent VNs might have AP1 and AP2 as end points). 1245 The resulting customer view is shown in Table 4.
1247 +----------+------------------------+ 1248 |End Point | Access Link/VNAP Bw | 1249 +---------+----------+----------+-------------+-----------+ 1250 |AP/VNAPid| CE,port | MaxResBw | AvailableBw |Dual Homing| 1251 +---------+----------+----------+-------------+-----------+ 1252 |AP1 |CE1,portW | 10Gbps | 5Gbps | | 1253 | -VNAP1.9| | 5Gbps | N.A. | VNAP2.9 | 1254 +---------+----------+----------+-------------+-----------+ 1255 |AP2 |CE1,portY | 40Gbps | 35Gbps | | 1256 | -VNAP2.9| | 5Gbps | N.A. | VNAP1.9 | 1257 +---------+----------+----------+-------------+-----------+ 1258 |AP3 |CE2,portX | 40Gbps | 35Gbps | | 1259 | -VNAP3.9| | 5Gbps | N.A. | NONE | 1260 +---------+----------+----------+-------------+-----------+ 1262 Table 4: Dual-Homing - Customer View after VN Creation 1264 7. Advanced ACTN Application: Multi-Destination Service 1266 A further advanced application of ACTN is the case of Data Center 1267 selection, where the customer requires the Data Center selection to 1268 be based on the network status; this is referred to as Multi- 1269 Destination in [ACTN-REQ]. In terms of ACTN, a CNC could request a 1270 VNS between a set of source APs and destination APs and leave it up 1271 to the network (MDSC) to decide which source and destination access 1272 points should be used to set up the VNS. The candidate list of 1273 source and destination APs is decided by a CNC (or an entity outside 1274 of ACTN) based on certain factors that are outside the scope of 1275 ACTN. 1277 Based on the AP selection as determined and returned by the network 1278 (MDSC), the CNC (or an entity outside of ACTN) should further take 1279 care of any subsequent actions such as orchestration or service 1280 setup requirements. These further actions are outside the scope of 1281 ACTN.
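The destination-selection behavior described above can be sketched briefly. The following is an illustrative, hypothetical example only (the function name and the static cost values are invented for this sketch); in a real deployment, the MDSC would derive per-candidate path costs from path computation request/reply exchanges with the PNCs over the MPI, applying the constraints, optimization criteria, and policies of the VNS request.

```python
# Hypothetical sketch of multi-destination AP selection by an MDSC.
# Path costs are static here for illustration; a real MDSC would
# obtain them from the PNCs via path computation over the MPI.

def select_destination_ap(source_ap, candidate_aps, path_cost):
    """Return the candidate destination AP with the lowest end-to-end
    path cost from source_ap, or None if no candidate is reachable."""
    best_ap, best_cost = None, float("inf")
    for ap in candidate_aps:
        cost = path_cost.get((source_ap, ap))  # None means no feasible path
        if cost is not None and cost < best_cost:
            best_ap, best_cost = ap, cost
    return best_ap

# Example: CE1 at AP1; candidate data centers DC-A/DC-B/DC-C at AP2/AP3/AP4.
costs = {("AP1", "AP2"): 30, ("AP1", "AP3"): 10, ("AP1", "AP4"): 25}
print(select_destination_ap("AP1", ["AP2", "AP3", "AP4"], costs))  # prints AP3
```

The same skeleton extends naturally to the pre-planned protection case by selecting both a best primary AP and a second-best backup AP.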
1283 Consider a case, as shown in Figure 14, where three data centers are 1284 available, but the customer requires the data center selection to be 1285 based on the network status and the connectivity service setup 1286 between AP1 (CE1) and one of the destination APs (AP2 (DC-A), 1287 AP3 (DC-B), and AP4 (DC-C)). The MDSC (in coordination with PNCs) 1288 would select the best destination AP based on the constraints, 1289 optimization criteria, policies, etc., and set up the connectivity 1290 service (virtual network). 1292 ------- ------- 1293 ( ) ( ) 1294 - - - - 1295 +---+ ( ) ( ) +----+ 1296 |CE1|---+---( Domain X )----( Domain Y )---+---|DC-A| 1297 +---+ | ( ) ( ) | +----+ 1298 AP1 - - - - AP2 1299 ( ) ( ) 1300 ---+--- ---+--- 1301 | | 1302 AP3-+ AP4-+ 1303 | | 1304 +----+ +----+ 1305 |DC-B| |DC-C| 1306 +----+ +----+ 1308 Figure 14: End-Point Selection Based on Network Status 1310 7.1. Pre-Planned End-Point Migration 1312 Furthermore, in the case of Data Center selection, the customer could 1313 request that a backup DC be selected such that, in case of 1314 failure, another DC site could provide hot stand-by protection. As 1315 shown in Figure 15, DC-C is selected as a backup for DC-A. Thus, the 1316 VN should be set up by the MDSC to include primary connectivity 1317 between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity 1318 between AP1 (CE1) and AP4 (DC-C). 1320 ------- ------- 1321 ( ) ( ) 1322 - - - - 1323 +---+ ( ) ( ) +----+ 1324 |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A| 1325 +---+ | ( ) ( ) | +----+ 1326 AP1 - - - - AP2 | 1327 ( ) ( ) | 1328 ---+--- ---+--- | 1329 | | | 1330 AP3-+ AP4-+ HOT STANDBY 1331 | | | 1332 +----+ +----+ | 1333 |DC-D| |DC-C|<------------- 1334 +----+ +----+ 1336 Figure 15: Pre-planned End-Point Migration 1338 7.2.
On the Fly End-Point Migration 1340 Compared to pre-planned end-point migration, on-the-fly end-point 1341 selection is dynamic in that the migration is not pre-planned but 1342 decided based on network conditions. Under this scenario, the MDSC 1343 would monitor the network (based on the VN SLA) and notify the CNC 1344 in cases where some other destination AP would be a better choice 1345 based on the network parameters. The CNC should instruct the MDSC 1346 when it is suitable to update the VN with the new AP if it is 1347 required. 1349 8. Manageability Considerations 1351 The objective of ACTN is to manage traffic engineered resources and 1352 provide a set of mechanisms to allow customers to request virtual 1353 connectivity across server network resources. Because ACTN supports 1354 multiple customers, each with its own view of, and control over, a 1355 virtual network built on the server network, the network operator 1356 will need to partition (or "slice") its network resources and 1357 manage those resources accordingly. 1359 The ACTN platform will, itself, need to support the request, 1360 response, and reservation of client and network layer connectivity. 1361 It will also need to provide performance monitoring and control of 1362 traffic engineered resources. The management requirements may be 1363 categorized as follows: 1365 . Management of external ACTN protocols 1366 . Management of internal ACTN interfaces/protocols 1367 . Management and monitoring of ACTN components 1368 . Configuration of policy to be applied across the ACTN system 1370 The ACTN framework and interfaces are defined to enable traffic 1371 engineering for virtual network services and connectivity services. 1372 Network operators may have other Operations, Administration, and 1373 Maintenance (OAM) tasks for service fulfillment, optimization, and 1374 assurance beyond traffic engineering.
The realization of OAM beyond 1375 abstraction and control of traffic engineered networks is not 1376 considered in this document. 1378 8.1. Policy 1380 Policy is an important aspect of ACTN control and management. 1381 Policies are used via the components and interfaces, during 1382 deployment of the service, to ensure that the service is compliant 1383 with agreed policy factors and variations (often described in SLAs). 1384 These include, but are not limited to: connectivity, bandwidth, 1385 geographical transit, technology selection, security, resilience, 1386 and economic cost. 1388 Depending on the deployment of the ACTN architecture, some policies 1389 may have local or global significance. That is, certain policies 1390 may be ACTN component-specific in scope, while others may have 1391 broader scope and interact with multiple ACTN components. Two 1392 examples are provided below: 1394 . A local policy might limit the number, type, size, and 1395 scheduling of virtual network services a customer may request 1396 via its CNC. This type of policy would be implemented locally 1397 on the MDSC. 1399 . A global policy might constrain certain customer types (or 1400 specific customer applications) to only use certain MDSCs and 1401 be restricted to physical network types managed by the PNCs. A 1402 global policy agent would govern these types of policies. 1404 The objective of this section is to discuss the applicability of 1405 ACTN policy: requirements, components, interfaces, and examples. 1406 This section provides an analysis and does not mandate a specific 1407 method for enforcing policy or the type of policy agent that would 1408 be responsible for propagating policies across the ACTN components. 1409 It does highlight examples of how policy may be applied in the 1410 context of ACTN, but it is expected that further discussion in an 1411 applicability or solution-specific document will be required. 1413 8.2.
Policy Applied to the Customer Network Controller 1415 A virtual network service for a customer application will be 1416 requested by the CNC. The request will reflect the application 1417 requirements and specific service needs, including bandwidth, 1418 traffic type, and survivability. Furthermore, application access and the 1419 type of virtual network service requested by the CNC will need to 1420 adhere to specific access control policies. 1422 8.3. Policy Applied to the Multi Domain Service Coordinator 1424 A key objective of the MDSC is to support the customer's expression 1425 of the application connectivity request via its CNC as a set 1426 of desired business needs; therefore, policy will play an important 1427 role. 1429 Once authorized, the virtual network service will be instantiated 1430 via the CNC-MDSC Interface (CMI); it will reflect the customer 1431 application and connectivity requirements and specific service 1432 transport needs. The CNC and the MDSC components will have agreed on 1433 connectivity end-points; use of these end-points should be defined 1434 as a policy expression when setting up or augmenting virtual network 1435 services. Ensuring that permissible end-points are defined for CNCs 1436 and applications will require the MDSC to maintain a registry of 1437 permissible connection points for CNCs and application types. 1439 Conflicts may occur when virtual network service optimization 1440 criteria are in competition. For example, to meet objectives for 1441 service reachability, a request may require an interconnection point 1442 between multiple physical networks; however, this might break a 1443 confidentiality policy requirement of a specific type of end-to-end 1444 service. Thus, an MDSC may have to balance a number of the 1445 constraints on a service request and between different requested 1446 services. It may also have to balance requested services with 1447 operational norms for the underlying physical networks.
This 1448 balancing may be resolved using configured policy and using hard and 1449 soft policy constraints. 1451 8.4. Policy Applied to the Provisioning Network Controller 1453 The PNC is responsible for configuring the network elements, 1454 monitoring physical network resources, and exposing connectivity 1455 (direct or abstracted) to the MDSC. It is therefore expected that 1456 policy will dictate what connectivity information will be exported 1457 from the PNC to the MDSC via the MDSC-PNC Interface (MPI). 1459 Policy interactions may arise when a PNC determines that it cannot 1460 compute a requested path from the MDSC or notices that (per a 1461 locally configured policy) the network is low on resources (for 1462 example, the capacity on key links becomes exhausted). In either 1463 case, the PNC will be required to notify the MDSC, which may (again, 1464 per policy) act to construct a virtual network service across 1465 another physical network topology. 1467 Furthermore, additional forms of policy-based resource management 1468 will be required to provide virtual network service performance, 1469 security, and resilience guarantees. This will likely be implemented 1470 via a local policy agent and additional protocol methods. 1472 9. Security Considerations 1474 The ACTN framework described in this document defines key components 1475 and interfaces for managed traffic engineered networks. Securing 1476 the request and control of resources, confidentiality of the 1477 information, and availability of function should all be critical 1478 security considerations when deploying and operating ACTN platforms. 1480 Several distributed ACTN functional components are required, and 1481 implementations should consider encrypting data that flows between 1482 components, especially when they are implemented at remote nodes, 1483 regardless of whether these data flows are on external or internal network 1484 interfaces.
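As a purely illustrative sketch of the encryption recommendation above (and of the TLS-based protection suggested for the MPI in Section 9.2), the following shows one way an implementation might build a TLS context for a component-to-component session such as MDSC to PNC. This is not part of the ACTN framework; the helper function, its parameters, and the certificate-handling choices are hypothetical.

```python
# Hypothetical sketch: a TLS client context an MDSC implementation
# might use when connecting to a PNC.  The peer's certificate is
# verified; a client certificate may be loaded for mutual TLS.
import ssl

def make_mpi_client_context(ca_file=None, cert_file=None, key_file=None):
    """Build a TLS client context that verifies the peer (PNC) certificate
    and optionally presents the MDSC's own certificate for mutual TLS."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    if cert_file:
        # Present an MDSC certificate so the PNC can authenticate the MDSC.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS versions
    return ctx
```

A deployment would additionally provision a private CA and per-component certificates so that both ends of the MPI (and, similarly, the CMI) are authenticated.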
1486 The ACTN security discussion is further split into two specific
1487 categories described in the following sub-sections:

1489 . Interface between the Customer Network Controller and Multi
1490 Domain Service Coordinator (MDSC), the CNC-MDSC Interface (CMI)

1492 . Interface between the Multi Domain Service Coordinator and
1493 Provisioning Network Controller (PNC), the MDSC-PNC Interface (MPI)

1495 From a security and reliability perspective, ACTN may encounter many
1496 risks, such as malicious attacks and rogue elements attempting to
1497 connect to various ACTN components. Furthermore, some ACTN
1498 components represent a single point of failure and threat vector,
1499 and must also manage policy conflicts and protect against
1500 eavesdropping on communication between different ACTN components.

1502 The conclusion is that all protocols used to realize the ACTN
1503 framework should have rich security features, and customer,
1504 application, and network data should be stored in encrypted data
1505 stores. Additional security risks may still exist. Therefore,
1506 discussion and applicability of specific security functions and
1507 protocols will be better described in documents that are use-case
1508 and environment specific.

1510 9.1. CNC-MDSC Interface (CMI)

1512 Data stored by the MDSC will reveal details of the virtual network
1513 services, and which CNC and customer/application is consuming the
1514 resource. The data stored must therefore be considered as a
1515 candidate for encryption.

1517 CNC access rights to an MDSC must be managed. The MDSC must
1518 allocate resources properly, and methods to prevent policy
1519 conflicts, resource wastage, and denial-of-service attacks on the
1520 MDSC by rogue CNCs should also be considered.

1522 The CMI will likely be an external protocol interface.
Suitable
1523 authentication and authorization of each CNC connecting to the MDSC
1524 will be required, especially as these are likely to be implemented
1525 by different organizations and on separate functional nodes. Use of
1526 AAA-based mechanisms would also provide role-based authorization
1527 methods, so that only authorized CNCs may access the different
1528 functions of the MDSC.

1530 9.2. MDSC-PNC Interface (MPI)

1532 Where the MDSC must interact with multiple (distributed) PNCs, a
1533 PKI-based mechanism is suggested, such as building a TLS or HTTPS
1534 connection between the MDSC and PNCs, to ensure trust between the
1535 physical network layer control components and the MDSC.

1537 Which MDSC the PNC exports topology information to, and the level of
1538 detail (full or abstracted), should also be authenticated; specific
1539 access restrictions and topology views should be
1540 configurable and/or policy-based.

1542 10. IANA Considerations

1544 This document has no actions for IANA.

1546 11. References

1548 11.1. Informative References

1550 [RFC2702] Awduche, D., et al., "Requirements for Traffic
1551 Engineering Over MPLS", RFC 2702, October 1999.

1553 [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
1554 Computation Element (PCE)-Based Architecture", RFC
1555 4655, August 2006.

1557 [RFC5654] Niven-Jenkins, B. (Ed.), D. Brungard (Ed.), and M. Betts
1558 (Ed.), "Requirements of an MPLS Transport Profile", RFC
1559 5654, October 2009.

1561 [RFC7149] Boucadair, M. and Jacquenet, C., "Software-Defined
1562 Networking: A Perspective from within a Service Provider
1563 Environment", RFC 7149, April 2014.

1565 [RFC7926] Farrel, A. (Ed.), "Problem Statement and Architecture for
1566 Information Exchange between Interconnected Traffic-
1567 Engineered Networks", RFC 7926, July 2016.

1569 [RFC3945] Mannie, E. (Ed.), "Generalized Multi-Protocol Label
1570 Switching (GMPLS) Architecture", RFC 3945, October 2004.
1572 [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
1573 1.1, ONF TR-521, June 2016.

1575 [Centralized] Farrel, A., et al., "An Architecture for Use of PCE
1576 and PCEP in a Network with Central Control", draft-ietf-
1577 teas-pce-central-control, work in progress.

1579 [Service-YANG] Lee, Y., Dhody, D., and Ceccarelli, D., "Traffic
1580 Engineering and Service Mapping Yang Model", draft-lee-
1581 teas-te-service-mapping-yang, work in progress.

1583 [ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN
1584 Operation", draft-lee-teas-actn-vn-yang, work in progress.

1586 [ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and
1587 Control of TE Networks", draft-ietf-teas-actn-
1588 requirements, work in progress.

1590 [TE-Topo] Liu, X., et al., "YANG Data Model for TE Topologies",
1591 draft-ietf-teas-yang-te-topo, work in progress.

1593 12. Contributors

1595 Adrian Farrel
1596 Old Dog Consulting
1597 Email: adrian@olddog.co.uk

1599 Italo Busi
1600 Huawei
1601 Email: Italo.Busi@huawei.com

1603 Khuzema Pithewan
1604 Infinera
1605 Email: kpithewan@infinera.com

1607 Michael Scharf
1608 Nokia
1609 Email: michael.scharf@nokia.com

1610 Luyuan Fang
1611 eBay
1612 Email: luyuanf@gmail.com

1614 Diego Lopez
1615 Telefonica I+D
1616 Don Ramon de la Cruz, 82
1617 28006 Madrid, Spain
1618 Email: diego@tid.es

1620 Sergio Belotti
1621 Alcatel Lucent
1622 Via Trento, 30
1623 Vimercate, Italy
1624 Email: sergio.belotti@nokia.com

1626 Daniel King
1627 Lancaster University
1628 Email: d.king@lancaster.ac.uk

1630 Dhruv Dhody
1631 Huawei Technologies
1632 Divyashree Techno Park, Whitefield
1633 Bangalore, Karnataka 560066
1634 India
1635 Email: dhruv.ietf@gmail.com

1637 Gert Grammel
1638 Juniper Networks
1639 Email: ggrammel@juniper.net

1641 Authors' Addresses

1643 Daniele Ceccarelli
1644 Ericsson
1645 Torshamnsgatan 48
1646 Stockholm, Sweden
1647 Email: daniele.ceccarelli@ericsson.com

1649 Young Lee
1650 Huawei Technologies
1651 5340 Legacy Drive
1652
Plano, TX 75023, USA
1653 Phone: (469)277-5838
1654 Email: leeyoung@huawei.com

1656 APPENDIX A - Example of MDSC and PNC Functions Integrated in a
1657 Service/Network Orchestrator

1659 This section provides an example of a possible deployment scenario
1660 in which a Service/Network Orchestrator includes a number of
1661 functions: in the example below, the PNC
1662 functionality for Domain 2, and the MDSC functionality coordinating
1663 the PNC1 functionality (hosted in a separate domain controller)
1664 and the PNC2 functionality (co-hosted in the network orchestrator).

1666                   Customer
1667                   +-------------------------------+
1668                   |    +-----+                    |
1669                   |    | CNC |                    |
1670                   |    +-----+                    |
1671                   +-------|-----------------------+
1672                           |
1673 Service/Network           | CMI
1674 Orchestrator              |
1675                   +-------|------------------------+
1676                   |  +------+   MPI   +------+     |
1677                   |  | MDSC |---------| PNC2 |     |
1678                   |  +------+         +------+     |
1679                   +-------|------------------|-----+
1680                           | MPI              |
1681 Domain Controller         |                  |
1682                   +-------|-----+            |
1683                   |   +-----+   |            | SBI
1684                   |   |PNC1 |   |            |
1685                   |   +-----+   |            |
1686                   +-------|-----+            |
1687                           v    SBI           v
1688                        -------            -------

1690                       (       )          (       )
1691                      -         -        -         -
1692                     (           )      (           )
1693                     (  Domain 1  )----(  Domain 2  )
1694                     (           )      (           )
1695                      -         -        -         -
1696                       (       )          (       )
1697                        -------            -------