1 TEAS Working Group Daniele Ceccarelli (Ed) 2 Internet Draft Ericsson 3 Intended status: Informational Young Lee (Ed) 4 Expires: April 27, 2018 Huawei 6 October 27, 2017 8 Framework for Abstraction and Control of Traffic Engineered Networks 10 draft-ietf-teas-actn-framework-11 12 Abstract 14 Traffic Engineered networks have a variety of mechanisms to 15 facilitate the separation of the data plane and control plane. They 16 also have a range of management and provisioning protocols to 17 configure and activate network resources. These mechanisms 18 represent key technologies for enabling flexible and dynamic 19 networking. 
21 Abstraction of network resources is a technique that can be applied 22 to a single network domain or across multiple domains to create a 23 single virtualized network that is under the control of a network 24 operator or the customer of the operator that actually owns 25 the network resources. 27 This document provides a framework for Abstraction and Control of 28 Traffic Engineered Networks (ACTN). 30 Status of this Memo 32 This Internet-Draft is submitted to IETF in full conformance with 33 the provisions of BCP 78 and BCP 79. 35 Internet-Drafts are working documents of the Internet Engineering 36 Task Force (IETF), its areas, and its working groups. Note that 37 other groups may also distribute working documents as Internet- 38 Drafts. 40 Internet-Drafts are draft documents valid for a maximum of six 41 months and may be updated, replaced, or obsoleted by other documents 42 at any time. It is inappropriate to use Internet-Drafts as 43 reference material or to cite them other than as "work in progress." 45 The list of current Internet-Drafts can be accessed at 46 http://www.ietf.org/ietf/1id-abstracts.txt 47 The list of Internet-Draft Shadow Directories can be accessed at 48 http://www.ietf.org/shadow.html. 50 This Internet-Draft will expire on April 27, 2018. 52 Copyright Notice 54 Copyright (c) 2017 IETF Trust and the persons identified as the 55 document authors. All rights reserved. 57 This document is subject to BCP 78 and the IETF Trust's Legal 58 Provisions Relating to IETF Documents 59 (http://trustee.ietf.org/license-info) in effect on the date of 60 publication of this document. Please review these documents 61 carefully, as they describe your rights and restrictions with 62 respect to this document. Code Components extracted from this 63 document must include Simplified BSD License text as described in 64 Section 4.e of the Trust Legal Provisions and are provided without 65 warranty as described in the Simplified BSD License. 
67 Table of Contents 69 1. Introduction...................................................3 70 2. Overview.......................................................4 71 2.1. Terminology...............................................5 72 2.2. VNS Model of ACTN.........................................8 73 2.2.1. Customers............................................9 74 2.2.2. Service Providers...................................10 75 2.2.3. Network Providers...................................10 76 3. ACTN Base Architecture........................................10 77 3.1. Customer Network Controller..............................12 78 3.2. Multi-Domain Service Coordinator.........................13 79 3.3. Provisioning Network Controller..........................13 80 3.4. ACTN Interfaces..........................................14 81 4. Advanced ACTN Architectures...................................15 82 4.1. MDSC Hierarchy...........................................15 83 4.2. Functional Split of MDSC Functions in Orchestrators......16 84 5. Topology Abstraction Methods..................................17 85 5.1. Abstraction Factors......................................17 86 5.2. Abstraction Types........................................18 87 5.2.1. Native/White Topology...............................18 88 5.2.2. Black Topology......................................18 89 5.2.3. Grey Topology.......................................19 90 5.3. Methods of Building Grey Topologies......................20 91 5.3.1. Automatic Generation of Abstract Topology by 92 Configuration..............................................21 93 5.3.2. On-demand Generation of Supplementary Topology via Path 94 Compute Request/Reply......................................21 95 5.4. Hierarchical Topology Abstraction Example................22 96 5.5. VN Recursion with Network Layers.........................23 97 6. Access Points and Virtual Network Access Points...............25 98 6.1. 
Dual-Homing Scenario.....................................27 99 7. Advanced ACTN Application: Multi-Destination Service..........28 100 7.1. Pre-Planned End Point Migration..........................29 101 7.2. On the Fly End-Point Migration...........................30 102 8. Manageability Considerations..................................30 103 8.1. Policy...................................................30 104 8.2. Policy Applied to the Customer Network Controller........31 105 8.3. Policy Applied to the Multi Domain Service Coordinator...31 106 8.4. Policy Applied to the Provisioning Network Controller....32 107 9. Security Considerations.......................................32 108 9.1. CNC-MDSC Interface (CMI).................................33 109 9.2. MDSC-PNC Interface (MPI).................................34 110 10. IANA Considerations..........................................34 111 11. References...................................................34 112 11.1. Informative References..................................34 113 12. Contributors.................................................35 114 Authors' Addresses...............................................36 115 APPENDIX A - Example of MDSC and PNC Functions Integrated in A 116 Service/Network Orchestrator.....................................37 118 1. Introduction 120 The term "Traffic Engineered network" refers to a network that uses 121 any connection-oriented technology under the control of a 122 distributed or centralized control plane to support dynamic 123 provisioning of end-to-end connectivity. Traffic Engineered (TE) 124 networks have a variety of mechanisms to facilitate separation of 125 data plane and control plane including distributed signaling for 126 path setup and protection, centralized path computation for planning 127 and traffic engineering, and a range of management and provisioning 128 protocols to configure and activate network resources. 
These 129 mechanisms represent key technologies for enabling flexible and 130 dynamic networking. Some examples of networks that are in scope of 131 this definition are optical networks, MPLS Transport Profile (MPLS- 132 TP) networks [RFC5654], and MPLS-TE networks [RFC2702]. 134 One of the main drivers for Software Defined Networking (SDN) 135 [RFC7149] is a decoupling of the network control plane from the data 136 plane. This separation has been achieved for TE networks with the 137 development of MPLS/GMPLS [RFC3945] and the Path Computation Element 138 (PCE) [RFC4655]. One of the advantages of SDN is its logically 139 centralized control regime that allows a global view of the 140 underlying networks. Centralized control in SDN helps improve 141 network resource utilization compared with distributed network 142 control. For TE-based networks, a PCE may serve as a logically 143 centralized path computation function. 145 This document describes a set of management and control functions 146 used to operate one or more TE networks to construct virtual 147 networks that can be represented to customers and that are built 148 from abstractions of the underlying TE networks so that, for 149 example, a link in the customer's network is constructed from a path 150 or collection of paths in the underlying networks. We call this set 151 of functions "Abstraction and Control of Traffic Engineered Networks" 152 (ACTN). 154 2. Overview 156 Three key aspects that need to be solved by SDN are: 158 . Separation of service requests from service delivery so that 159 the configuration and operation of a network is transparent 160 from the point of view of the customer, but remains responsive 161 to the customer's services and business needs. 163 . 
Network abstraction: As described in [RFC7926], abstraction is 164 the process of applying policy to a set of information about a 165 TE network to produce selective information that represents the 166 potential ability to connect across the network. The process 167 of abstraction presents the connectivity graph in a way that is 168 independent of the underlying network technologies, 169 capabilities, and topology so that the graph can be used to 170 plan and deliver network services in a uniform way. 172 . Coordination of resources across multiple independent networks 173 and multiple technology layers to provide end-to-end services 174 regardless of whether the networks use SDN or not. 176 As networks evolve, the need to provide support for distinct 177 services, separated service orchestration, and resource abstraction 178 has emerged as a key requirement for operators. In order to support 179 multiple customers, each with its own view and control of the 180 server network, a network operator needs to partition (or "slice") 181 or manage sharing of the network resources. Network slices can be 182 assigned to each customer for guaranteed usage, which is a step 183 further than shared use of common network resources. 185 Furthermore, each network represented to a customer can be built 186 from virtualization of the underlying networks so that, for example, 187 a link in the customer's network is constructed from a path or 188 collection of paths in the underlying network. 190 We call the set of management and control functions used to provide 191 these features Abstraction and Control of Traffic Engineered 192 Networks (ACTN). 194 ACTN can facilitate virtual network operation via the creation of a 195 single virtualized network or a seamless service. 
This supports 196 operators in viewing and controlling different domains (at any 197 dimension: applied technology, administrative zones, or vendor- 198 specific technology islands) and presenting virtualized networks to 199 their customers. 201 The ACTN framework described in this document facilitates: 203 . Abstraction of the underlying network resources to higher-layer 204 applications and customers [RFC7926]. 206 . Virtualization of particular underlying resources, whose 207 selection criterion is the allocation of those resources to a 208 particular customer, application or service [ONF-ARCH]. 210 . Network slicing of infrastructure to meet specific customers' 211 service requirements. 213 . Creation of a virtualized environment allowing operators to 214 view and control multi-domain networks as a single virtualized 215 network. 217 . The presentation to customers of networks as a virtual network 218 via open and programmable interfaces. 220 2.1. Terminology 222 The following terms are used in this document. Some of them are 223 newly defined, while others reference existing definitions: 224 . Domain: A domain [RFC4655] is any collection of network 225 elements within a common sphere of address management or path 226 computation responsibility. Specifically within this document 227 we mean a part of an operator's network that is under common 228 management. Network elements will often be grouped into 229 domains based on technology types, vendor profiles, and 230 geographic proximity. 232 . Abstraction: This process is defined in [RFC7926]. 234 . Network Slicing: In the context of ACTN, a network slice is a 235 collection of resources that is used to establish a logically 236 dedicated virtual network over one or more TE networks. Network 237 slicing allows a network provider to provide dedicated virtual 238 networks for applications/customers over a common network 239 infrastructure. 
The logically dedicated resources are a part 240 of the larger common network infrastructure that is shared 241 among various network slice instances. A network slice instance 242 is the end-to-end realization of network slicing, consisting of 243 the combination of physically or logically dedicated resources. 245 . Node: A node is a vertex on the graph representation of a TE 246 topology. In a physical network topology, a node corresponds 247 to a physical network element (NE) such as a router. In an 248 abstract network topology, a node (sometimes called an abstract 249 node) is a representation as a single vertex of one or more 250 physical NEs and their connecting physical connections. The 251 concept of a node represents the ability to connect from any 252 access to the node (a link end) to any other access to that 253 node, although "limited cross-connect capabilities" may also be 254 defined to restrict this functionality. Just as network 255 slicing and network abstraction may be applied recursively, so 256 a node in one topology may be created by applying slicing or 257 abstraction to the nodes in the underlying topology. 259 . Link: A link is an edge on the graph representation of a TE 260 topology. Two nodes connected by a link are said to be 261 "adjacent" in the TE topology. In a physical network topology, 262 a link corresponds to a physical connection. In an abstract 263 network topology, a link (sometimes called an abstract link) is 264 a representation of the potential to connect a pair of points 265 with certain TE parameters (see [RFC7926] for details). 266 Network slicing/virtualization and network abstraction may be 267 applied recursively, so a link in one topology may be created 268 by applying slicing and/or abstraction to the links in the 269 underlying topology. 271 . Abstract Link: The term "abstract link" is defined in 272 [RFC7926]. 274 . 
Abstract Topology: The topology of abstract nodes and abstract 275 links presented through the process of abstraction by a lower 276 layer network for use by a higher layer network. 278 . A Virtual Network (VN) is a network provided by a service 279 provider to a customer for the customer to use in any way it 280 wants as though it were a physical network. There are two views 281 of a VN as follows: 283 a) The VN can be seen as a set of edge-to-edge links (a Type 1 284 VN). Each link is referred to as a VN member and is formed as 285 an end-to-end tunnel across the underlying networks. Such 286 tunnels may be constructed by recursive slicing or 287 abstraction of paths in the underlying networks and can 288 encompass edge points of the customer's network, access 289 links, intra-domain paths, and inter-domain links. 291 b) The VN can also be seen as a topology of virtual nodes and 292 virtual links (a Type 2 VN). The provider needs to map the 293 VN to actual resource assignment, which is known as virtual 294 network embedding. The nodes in this case include physical 295 end points, border nodes, and internal nodes as well as 296 abstracted nodes. Similarly, the links include physical 297 access links, inter-domain links, and intra-domain links as 298 well as abstract links. 300 Clearly a Type 1 VN is a special case of a Type 2 VN. 302 . Access link: A link between a customer node and a provider 303 node. 305 . Inter-domain link: A link between domains under distinct 306 management administration. 308 . Access Point (AP): An AP is a logical identifier shared between 309 the customer and the provider used to identify an access link. 310 The AP is used by the customer when requesting a VNS. Note that 311 the term "TE Link Termination Point" (LTP) defined in [TE-Topo] 312 describes the end points of links, while an AP is a common 313 identifier for the link itself. 315 . VN Access Point (VNAP): A VNAP is the binding between an AP and 316 a given VN. 318 . 
Server Network: As defined in [RFC7926], a server network is a 319 network that provides connectivity for another network (the 320 Client Network) in a client-server relationship. 322 2.2. VNS Model of ACTN 324 A Virtual Network Service (VNS) is the service agreement between a 325 customer and provider to provide a VN. There are three types of VNS 326 defined in this document. 328 o Type 1 VNS refers to a VNS in which the customer is allowed 329 to create and operate a Type 1 VN. 331 o Type 2a and 2b VNS refer to VNSs in which the customer is 332 allowed to create and operate a Type 2 VN. With a Type 333 2a VNS, the VN is statically created at service 334 configuration time and the customer is not allowed to 335 change the topology (e.g., by adding or deleting abstract 336 nodes and links). A Type 2b VNS is the same as a Type 2a 337 VNS except that the customer is allowed to make dynamic 338 changes to the initial topology created at service 339 configuration time. 341 VN Operations are functions that a customer can exercise on a VN 342 depending on the agreement between the customer and the provider. 344 o VN Creation allows a customer to request the instantiation 345 of a VN. This could be through off-line pre-configuration 346 or through dynamic requests specifying attributes of a 347 Service Level Agreement (SLA) to satisfy the customer's 348 objectives. 350 o Dynamic Operations allow a customer to modify or delete the 351 VN. The customer can further act upon the virtual network 352 to create/modify/delete virtual links and nodes. These 353 changes will result in subsequent tunnel management in the 354 operator's networks. 356 There are three key entities in the ACTN VNS model: 358 - Customers 359 - Service Providers 360 - Network Providers 362 These entities are related in a three-tier model as shown in Figure 363 1. 
365 +----------------------+ 366 | Customer | 367 +----------------------+ 368 | 369 | 371 VNS || | /\ VNS 372 Request || | || Reply 373 \/ | || 374 +----------------------+ 375 | Service Provider | 376 +----------------------+ 377 / | \ 378 / | \ 379 / | \ 380 / | \ 381 +------------------+ +------------------+ +------------------+ 382 |Network Provider 1| |Network Provider 2| |Network Provider 3| 383 +------------------+ +------------------+ +------------------+ 385 Figure 1: The Three Tier Model. 387 The commercial roles of these entities are described in the 388 following sections. 390 2.2.1. Customers 392 Basic customers include fixed residential users, mobile users, and 393 small enterprises. Each requires a small amount of resources and is 394 characterized by steady requests (relatively time invariant). Basic 395 customers do not modify their services themselves: if a service 396 change is needed, it is performed by the provider as a proxy. 398 Advanced customers include enterprises, governments, and utility 399 companies. Such customers ask for both point-to-point and 400 multipoint connectivity with high resource demands varying 401 significantly in time. This is one of the reasons why a bundled 402 service offering is not enough, and it is desirable to provide each 403 advanced customer with a customized virtual network service. 404 Advanced customers may also have the ability to modify their service 405 parameters within the scope of their virtualized environments. The 406 primary focus of ACTN is on advanced customers. 408 As customers are geographically spread over multiple network 409 provider domains, they have to interface to multiple providers and 410 may have to support multiple virtual network services with different 411 underlying objectives set by the network providers. 
To enable these 412 customers to support flexible and dynamic applications, they need to 413 control their allocated virtual network resources in a dynamic 414 fashion, and that means that they need a view of the topology that 415 spans all of the network providers. Customers of a given service 416 provider can in turn offer a service to other customers in a 417 recursive way. 419 2.2.2. Service Providers 421 In the scope of ACTN, service providers deliver VNSs to their 422 customers. Service providers may or may not own physical network 423 resources (i.e., may or may not be network providers as described in 424 Section 2.2.3). When a service provider is the same as the network 425 provider, this is similar to existing VPN models applied to a single 426 provider, although it may be hard to use this approach when the 427 customer spans multiple independent network provider domains. 429 When network providers supply only infrastructure, while distinct 430 service providers interface to the customers, the service providers 431 are themselves customers of the network infrastructure providers. 432 A single service provider may need to work with multiple independent network 433 providers because its end-users span geographically across multiple 434 network provider domains. 436 2.2.3. Network Providers 438 Network Providers are the infrastructure providers that own the 439 physical network resources and provide network resources to their 440 customers. The network operated by a network provider may be a 441 virtual network created by a service provider and supplied to the 442 network provider in its role as a customer. The layered model 443 described in this architecture separates the concerns of network 444 providers and customers, with service providers acting as 445 aggregators of customer requests. 447 3. ACTN Base Architecture 449 This section provides a high-level model of ACTN showing the 450 interfaces and the flow of control between components. 
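As a non-normative illustration of the VNS model of Section 2.2, the three VNS types and the topology-modification rights they grant can be captured in a few lines. The names below are invented for illustration only; ACTN does not define a data model or API for this.

```python
from enum import Enum

class VnsType(Enum):
    TYPE_1 = "type-1"    # customer creates and operates a Type 1 VN
    TYPE_2A = "type-2a"  # Type 2 VN; topology fixed at configuration time
    TYPE_2B = "type-2b"  # Type 2 VN; customer may change the topology

def may_modify_topology(vns_type):
    """Only a Type 2b VNS allows the customer to add or delete
    abstract nodes and links after service configuration time
    (Section 2.2); Type 1 and Type 2a VNS topologies are fixed."""
    return vns_type is VnsType.TYPE_2B
```

For example, a request to add an abstract link to an established VN would be refused under a Type 2a VNS but permitted under the equivalent Type 2b VNS.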
452 The ACTN architecture is based on a 3-tier reference model and 453 allows for hierarchy and recursion. The main functionalities within 454 an ACTN system are: 456 . Multi-domain coordination: This function oversees the specific 457 aspects of different domains and builds a single abstracted 458 end-to-end network topology in order to coordinate end-to-end 459 path computation and path/service provisioning. Domain 460 sequence path calculation/determination is also a part of this 461 function. 463 . Virtualization/Abstraction: This function provides an 464 abstracted view of the underlying network resources for use by 465 the customer - a customer may be the client or a higher-level 466 controller entity. This function includes network path 467 computation based on customer service connectivity request 468 constraints, path computation based on the global network-wide 469 abstracted topology, and the creation of an abstracted view of 470 network resources allocated to each customer. These operations 471 depend on customer-specific network objective functions and 472 customer traffic profiles. 474 . Customer mapping/translation: This function maps customer 475 requests/commands into network provisioning requests that can 476 be sent to the Provisioning Network Controller (PNC) according 477 to business policies provisioned statically or dynamically at 478 the OSS/NMS. Specifically, it provides mapping and translation 479 of a customer's service request into a set of parameters that 480 are specific to a network type and technology such that the network 481 configuration process is made possible. 483 . Virtual service coordination: This function translates customer 484 service-related information into virtual network service 485 operations in order to seamlessly operate virtual networks 486 while meeting a customer's service requirements. 
In the 487 context of ACTN, service/virtual service coordination includes 488 a number of service orchestration functions such as multi- 489 destination load balancing, guarantees of service quality, 490 bandwidth and throughput. It also includes notifications for 491 service fault and performance degradation and so forth. 493 The base ACTN architecture defines three controller types and the 494 corresponding interfaces between these controllers. The following 495 types of controller are shown in Figure 2: 497 . CNC - Customer Network Controller 498 . MDSC - Multi Domain Service Coordinator 499 . PNC - Provisioning Network Controller 501 Figure 2 also shows the following interfaces: 503 . CMI - CNC-MDSC Interface 504 . MPI - MDSC-PNC Interface 505 . SBI - South Bound Interface 506 +---------+ +---------+ +---------+ 507 | CNC | | CNC | | CNC | 508 +---------+ +---------+ +---------+ 509 \ | / 510 Business \ | / 511 Boundary =============\==============|==============/============ 512 Between \ | / 513 Customer & ------- | CMI ------- 514 Network Provider \ | / 515 +---------------+ 516 | MDSC | 517 +---------------+ 518 / | \ 519 ------------ | MPI ------------- 520 / | \ 521 +-------+ +-------+ +-------+ 522 | PNC | | PNC | | PNC | 523 +-------+ +-------+ +-------+ 524 | SBI / | / \ 525 | / | SBI / \ 526 --------- ----- | / \ 527 ( ) ( ) | / \ 528 - Control - ( Phys. ) | / ----- 529 ( Plane ) ( Net ) | / ( ) 530 ( Physical ) ----- | / ( Phys. ) 531 ( Network ) ----- ----- ( Net ) 532 - - ( ) ( ) ----- 533 ( ) ( Phys. ) ( Phys. ) 534 --------- ( Net ) ( Net ) 535 ----- ----- 537 Figure 2: ACTN Base Architecture 539 Note that this is a functional architecture: an implementation and 540 deployment might collocate one or more of the functional components. 542 3.1. Customer Network Controller 544 A Customer Network Controller (CNC) is responsible for communicating 545 a customer's VNS requirements to the network provider over the CNC- 546 MDSC Interface (CMI). 
It has knowledge of the end-points associated 547 with the VNS (expressed as APs), the service policy, and other QoS 548 information related to the service. 550 As the Customer Network Controller directly interfaces to the 551 applications, it understands multiple application requirements and 552 their service needs. 554 The capability of a CNC beyond its CMI role is outside the scope of 555 ACTN and may be implemented in different ways. For example, the CNC 556 may in fact be a controller or part of a controller in the customer's 557 domain, or the CNC functionality could also be implemented as part of 558 a provisioning portal. 560 3.2. Multi-Domain Service Coordinator 562 A Multi-Domain Service Coordinator (MDSC) is a functional block that 563 implements all of the ACTN functions listed in Section 3 and 564 described further in Section 4.2. The two functions of the MDSC, 565 namely, multi-domain coordination and virtualization/abstraction, are 566 referred to as network-related functions, while the other two 567 functions, namely, customer mapping/translation and virtual service 568 coordination, are referred to as service-related functions. The MDSC 569 sits at the center of the ACTN model between the CNC that issues 570 connectivity requests and the Provisioning Network Controllers 571 (PNCs) that manage the network resources. 572 The key point of the MDSC (and of the whole ACTN framework) is 573 detaching the network and service control from underlying technology 574 to help the customer express the network as desired by business 575 needs. The MDSC envelops the instantiation of the right technology 576 and network control to meet business criteria. In essence, it 577 controls and manages the primitives to achieve functionalities as 578 desired by the CNC. 580 In order to allow for multi-domain coordination, a 1:N relationship 581 must be allowed between MDSCs and PNCs. 
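The 1:N relationship can be sketched as follows: a single MDSC holds a reference to one PNC per domain and delegates each per-domain segment of an end-to-end path over the MPI. This is a minimal illustration only; the class and method names are assumptions, not anything defined by ACTN.

```python
# Non-normative sketch: one MDSC coordinating N PNCs over the MPI.

class Pnc:
    """Controls a single PNC domain (a real PNC would configure
    network elements via its SBI)."""
    def __init__(self, domain):
        self.domain = domain
        self.provisioned = []

    def provision(self, segment):
        self.provisioned.append(segment)
        return f"{self.domain}:{segment}"

class Mdsc:
    """Holds the 1:N MDSC-to-PNC relationship: one PNC per domain."""
    def __init__(self, pncs):
        self.pncs = {p.domain: p for p in pncs}

    def provision_path(self, domain_sequence):
        # Domain sequence determination is an MDSC function
        # (Section 3); each segment goes to the owning PNC.
        return [self.pncs[d].provision(seg) for d, seg in domain_sequence]

mdsc = Mdsc([Pnc("domain-X"), Pnc("domain-Y")])
result = mdsc.provision_path([("domain-X", "A->border1"),
                              ("domain-Y", "border2->Z")])
```

An M:1 arrangement (several MDSCs sharing one PNC) would simply mean several `Mdsc` instances holding a reference to the same `Pnc` object.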
583 In addition to that, it could also be possible to have an M:1 584 relationship between MDSCs and a PNC to allow for network resource 585 partitioning/sharing among different customers not necessarily 586 connected to the same MDSC (e.g., different service providers) but 587 all using the resources of a common network infrastructure provider. 589 3.3. Provisioning Network Controller 591 The Provisioning Network Controller (PNC) oversees configuring the 592 network elements, monitoring the topology (physical or virtual) of 593 the network, and collecting information about the topology (either 594 raw or abstracted). 596 The PNC functions can be implemented as part of an SDN domain 597 controller, a Network Management System (NMS), an Element Management 598 System (EMS), an active PCE-based controller [Centralized], or any 599 other means of dynamically controlling a set of nodes that 600 implements an NBI compliant with the ACTN specification. 602 A PNC domain includes all the resources under the control of a 603 single PNC. It can be composed of different routing domains and 604 administrative domains, and the resources may come from different 605 layers. The interconnection between PNC domains is illustrated in 606 Figure 3. 608 _______ _______ 609 _( )_ _( )_ 610 _( )_ _( )_ 611 ( ) Border ( ) 612 ( PNC ------ Link ------ PNC ) 613 ( Domain X |Border|========|Border| Domain Y ) 614 ( | Node | | Node | ) 615 ( ------ ------ ) 616 (_ _) (_ _) 617 (_ _) (_ _) 618 (_______) (_______) 620 Figure 3: PNC Domain Borders 622 3.4. ACTN Interfaces 624 Direct customer control of transport network elements and 625 virtualized services is not a viable proposition for network 626 providers due to security and policy concerns. In addition, some 627 networks may operate a control plane and as such it is not practical 628 for the customer to directly interface with network elements. 
629 Therefore, the network has to provide open, programmable interfaces, 630 through which customer applications can create, replace and modify 631 virtual network resources and services in an interactive, flexible 632 and dynamic fashion while having no impact on other customers. 634 Three interfaces exist in the ACTN architecture as shown in Figure 635 2. 637 . CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC 638 and an MDSC. The CMI is a business boundary between customer 639 and network provider. It is used to request a VNS for an 640 application. All service-related information is conveyed over 641 this interface (such as the VNS type, topology, bandwidth, and 642 service constraints). Most of the information over this 643 interface is technology agnostic (the customer is unaware of 644 the network technologies used to deliver the service), but 645 there are some cases (e.g., access link configuration) where it 646 is necessary to specify technology-specific details. 648 . MPI: The MDSC-PNC Interface (MPI) is an interface between an 649 MDSC and a PNC. It communicates requests for new connectivity 650 or for bandwidth changes in the physical network. In multi- 651 domain environments, the MDSC needs to communicate with 652 multiple PNCs, each responsible for control of a domain. The 653 MPI presents an abstracted topology to the MDSC, hiding 654 technology-specific aspects of the network and hiding topology 655 according to policy. 657 . SBI: The Southbound Interface (SBI) is out of scope of ACTN. 658 Many different SBIs have been defined for different 659 environments, technologies, standards organizations, and 660 vendors. It is shown in Figure 2 for reference only. 662 4. Advanced ACTN Architectures 664 This section describes advanced configurations of the ACTN 665 architecture. 667 4.1. 
MDSC Hierarchy

669 A hierarchy of MDSCs can be foreseen for many reasons, among which
670 are scalability, administrative choices, and the combination of
671 different layers and technologies in the network. In the case where
672 there is a hierarchy of MDSCs, we introduce the terms higher-level
673 MDSC (MDSC-H) and lower-level MDSC (MDSC-L). The interface between
674 them is a recursion of the MPI. An implementation of an MDSC-H
675 makes provisioning requests as normal using the MPI, but an MDSC-L
676 must be able to receive requests as normal at the CMI and also at
677 the MPI. The hierarchy of MDSCs can be seen in Figure 4.

679 Another implementation choice could use one MDSC-L for all the
680 PNCs related to a given technology (e.g., IP/MPLS) and a
681 different MDSC-L for the PNCs related to another technology (e.g.,
682 OTN/WDM), with an MDSC-H to coordinate them.

684                    +--------+
685                    |  CNC   |
686                    +--------+
687                        |              +-----+
688                    CMI |              | CNC |
689                  +----------+         +-----+
690           -------|  MDSC-H  |----        |
691          |       +----------+    |       | CMI
692      MPI |                   MPI |       |
693          |                       |       |
694     +---------+             +---------+
695     | MDSC-L  |             | MDSC-L  |
696     +---------+             +---------+
697  MPI  |     |                 |     |
698       |     |                 |     |
699     -----  -----            -----  -----
700    | PNC || PNC |          | PNC || PNC |
701     -----  -----            -----  -----

703                Figure 4: MDSC Hierarchy

705 4.2. Functional Split of MDSC Functions in Orchestrators

707 An implementation choice could separate the MDSC functions into two
708 groups: one group for service-related functions and the other for
709 network-related functions. This enables the implementation of a
710 service orchestrator that provides the service-related functions of
711 the MDSC and a network orchestrator that provides the network-
712 related functions of the MDSC. This split is consistent with the
713 YANG service model architecture described in [Service-YANG]. Figure
714 5 depicts this and shows how the ACTN interfaces may map to YANG
715 models.
717 +--------------------+ 718 | Customer | 719 | +-----+ | 720 | | CNC | | 721 | +-----+ | 722 +--------------------+ 723 CMI | Customer Service Model 724 | 725 +---------------------------------------+ 726 | Service | 727 ********|*********************** Orchestrator | 728 * MDSC | +-----------------+ * | 729 * | | Service-related | * | 730 * | | Functions | * | 731 * | +-----------------+ * | 732 * +----------------------*----------------+ 733 * * | Service Delivery Model 734 * * | 735 * +----------------------*----------------+ 736 * | * Network | 737 * | +-----------------+ * Orchestrator | 738 * | | Network-related | * | 739 * | | Functions | * | 740 * | +-----------------+ * | 741 ********|*********************** | 742 +---------------------------------------+ 743 MPI | Network Configuration Model 744 | 745 +------------------------+ 746 | Domain | 747 | +------+ Controller | 748 | | PNC | | 749 | +------+ | 750 +------------------------+ 751 SBI | Device Configuration Model 752 | 753 +--------+ 754 | Device | 755 +--------+ 757 Figure 5: ACTN Architecture in the Context of the YANG Service 758 Models 759 5. Topology Abstraction Methods 761 Topology abstraction is described in [RFC7926]. This section 762 discusses topology abstraction factors, types, and their context in 763 the ACTN architecture. 765 Abstraction in ACTN is performed by the PNC when presenting 766 available topology to the MDSC, or by an MDSC-L when presenting 767 topology to an MDSC-H. This function is different to the creation 768 of a VN (and particularly a Type 2 VN) which is not abstraction but 769 construction of virtual resources. 771 5.1. Abstraction Factors 773 As discussed in [RFC7926], abstraction is tied with policy of the 774 networks. For instance, per an operational policy, the PNC would 775 not provide any technology specific details (e.g., optical 776 parameters for WSON) in the abstract topology it provides to the 777 MDSC. 
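The policy-driven filtering described above can be illustrated with a small sketch. The dictionary layout and the attribute names ("available-bw", "wson-wavelengths", and so on) are assumptions made purely for this example; they are not part of the ACTN specification or of any YANG model.

```python
# Illustrative sketch only: a PNC-side filter that applies an operational
# policy before topology export over the MPI.  Technology-specific
# attributes (e.g., optical parameters for WSON) are stripped, leaving
# only technology-agnostic TE attributes for the MDSC.
TECH_AGNOSTIC_ATTRS = {"available-bw", "delay", "te-metric"}  # example policy

def export_topology(native_links, allowed=TECH_AGNOSTIC_ATTRS):
    """Return a copy of the link attributes with only policy-permitted keys."""
    return {
        link_id: {k: v for k, v in attrs.items() if k in allowed}
        for link_id, attrs in native_links.items()
    }

native = {
    "link-1": {"available-bw": 10, "delay": 3, "wson-wavelengths": [1530.3]},
}
# The MDSC receives only the technology-agnostic view:
print(export_topology(native))  # {'link-1': {'available-bw': 10, 'delay': 3}}
```

The same filter function, configured with a different allow-list, could implement a different operational policy without changing the export logic.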
779 There are many factors that may impact the choice of abstraction:

781 - Abstraction depends on the nature of the underlying domain
782   networks. For instance, packet networks may be abstracted with
783   fine granularity, while abstraction of optical networks depends on
784   the switching units (such as wavelengths) and the end-to-end
785   continuity and cross-connect limitations within the network.

787 - Abstraction also depends on the capability of the PNCs. As
788   abstraction requires hiding details of the underlying network
789   resources, the PNC's capability to run algorithms impacts the
790   feasibility of abstraction. Some PNCs may not have the ability to
791   abstract the native topology, while other PNCs may be able to
792   use sophisticated algorithms.

794 - Abstraction is a tool that can improve scalability. Where the
795   native network resource information is of large size, there is a
796   specific scaling benefit to abstraction.

798 - The proper abstraction level may depend on the frequency of
799   topology updates and vice versa.

801 - The nature of the MDSC's support for technology-specific
802   parameters impacts the degree/level of abstraction. If the MDSC
803   is not capable of handling such parameters, then a higher level of
804   abstraction is needed.

806 - In some cases, the PNC is required to hide key internal
807   topological data from the MDSC. Such confidentiality can be
808   achieved through abstraction.

810 5.2. Abstraction Types

812 This section defines the following three types of topology
813 abstraction:

815  . Native/White Topology (Section 5.2.1)
816  . Black Topology (Section 5.2.2)
817  . Grey Topology (Section 5.2.3)

819 5.2.1. Native/White Topology

821 This is a case where the PNC provides the actual network topology to
822 the MDSC without any hiding or filtering of information. That is, no
823 abstraction is performed. In this case, the MDSC has full
824 knowledge of the underlying network topology and can operate on it
825 directly.
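Because a white topology gives the MDSC full knowledge of the underlying network, the MDSC can run path computation directly on the exported graph. A toy sketch, using an assumed adjacency representation with invented node names and TE metrics (none of this is defined by ACTN):

```python
# Toy illustration: with a white topology the PNC exports the native graph
# unchanged, so the MDSC can run a shortest-path search on it directly.
from heapq import heappush, heappop

def shortest_path(links, src, dst):
    """links: dict mapping (node, node) -> TE metric; returns (cost, path)."""
    adj = {}
    for (a, b), cost in links.items():
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))  # links assumed bidirectional
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heappush(queue, (cost + c, nxt, path + [nxt]))
    return None  # destination unreachable

full_topology = {("PE1", "P"): 1, ("P", "Q"): 2, ("PE1", "Q"): 5, ("Q", "PE2"): 1}
print(shortest_path(full_topology, "PE1", "PE2"))  # (4, ['PE1', 'P', 'Q', 'PE2'])
```

With a black or grey topology the same computation is not possible at the MDSC, which is precisely the trade-off the following subsections describe.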
826 5.2.2. Black Topology 828 A black topology replaces a full network with a minimal 829 representation of the edge-to-edge topology without disclosing any 830 node internal connectivity information. The entire domain network 831 may be abstracted as a single abstract node with the network's 832 access/egress links appearing as the ports to the abstract node and 833 the implication that any port can be 'cross-connected' to any other. 834 Figure 6 depicts a native topology with the corresponding black 835 topology with one virtual node and inter-domain links. In this 836 case, the MDSC has to make a provisioning request to the PNCs to 837 establish the port-to-port connection. If there is a large number 838 of inter-connected domains, this abstraction method may impose a 839 heavy coordination load at the MDSC level in order to find an 840 optimal end-to-end path since the abstraction hides so much 841 information that it is not possible to determine whether an end-to- 842 end path is feasible without asking each PNC to set up each path 843 fragment. For this reason, the MPI might need to be enhanced to 844 allow the PNCs to be queried for the practicality and 845 characteristics of paths across the abstract node. 846 ..................................... 847 : PNC Domain : 848 : +--+ +--+ +--+ +--+ : 849 ------+ +-----+ +-----+ +-----+ +------ 850 : ++-+ ++-+ +-++ +-++ : 851 : | | | | : 852 : | | | | : 853 : | | | | : 854 : | | | | : 855 : ++-+ ++-+ +-++ +-++ : 856 ------+ +-----+ +-----+ +-----+ +------ 857 : +--+ +--+ +--+ +--+ : 858 :.................................... 860 +----------+ 861 ---+ +--- 862 | Abstract | 863 | Node | 864 ---+ +--- 865 +----------+ 867 Figure 6: Native Topology with Corresponding Black Topology Expressed 868 as an Abstract Node 870 5.2.3. Grey Topology 872 A grey topology represents a compromise between black and white 873 topologies from a granularity point of view. 
In this case the PNC
874 exposes an abstract topology that comprises nodes and links. The
875 nodes and links may be physical or abstract, while the abstract
876 topology represents the potential of connectivity across the PNC
877 domain.

878 Two modes of grey topology are identified:

879  . In a type A grey topology, border nodes are connected by a
880    full mesh of TE links (see Figure 7).

882  . In a type B grey topology, border nodes are connected over a
883    more detailed network comprising internal abstract nodes and
884    abstracted links. This mode of abstraction supplies the MDSC
885    with more information about the internals of the PNC domain and
886    allows it to make more informed choices about how to route
887    connectivity over the underlying network.

889        .....................................
890        :          PNC Domain               :
891        :  +--+    +--+    +--+    +--+     :
892   ------+  +-----+  +-----+  +-----+  +------
893        :  ++-+    ++-+    +-++    +-++     :
894        :   |       |       |       |      :
895        :   |       |       |       |      :
896        :   |       |       |       |      :
897        :   |       |       |       |      :
898        :  ++-+    ++-+    +-++    +-++     :
899   ------+  +-----+  +-----+  +-----+  +------
900        :  +--+    +--+    +--+    +--+     :
901        :....................................

903           ....................
904           : Abstract Network :
905           :                  :
906           :   +--+    +--+   :
907    -------+  +----+  +-------
908           :   ++-+    +-++   :
909           :    | \    / |    :
910           :    |  \  /  |    :
911           :    |   /\   |    :
912           :    |  /  \  |    :
913           :   ++-+    +-++   :
914    -------+  +----+  +-------
915           :   +--+    +--+   :
916           :..................:

918    Figure 7: Native Topology with Corresponding Grey Topology

920 5.3. Methods of Building Grey Topologies

922 This section discusses two different methods of building a grey
923 topology:

925  . Automatic generation of abstract topology by configuration
926    (Section 5.3.1)
927  . On-demand generation of supplementary topology via path
928    computation request/reply (Section 5.3.2)

930 5.3.1. Automatic Generation of Abstract Topology by Configuration

932 Automatic generation is based on the abstraction/summarization of
933 the whole domain by the PNC and its advertisement on the MPI.
The 934 level of abstraction can be decided based on PNC configuration 935 parameters (e.g., "provide the potential connectivity between any PE 936 and any ASBR in an MPLS-TE network"). 938 Note that the configuration parameters for this abstract topology 939 can include available bandwidth, latency, or any combination of 940 defined parameters. How to generate such information is beyond the 941 scope of this document. 943 This abstract topology may need to be periodically or incrementally 944 updated when there is a change in the underlying network or the use 945 of the network resources that make connectivity more or less 946 available. 948 5.3.2. On-demand Generation of Supplementary Topology via Path Compute 949 Request/Reply 951 While abstract topology is generated and updated automatically by 952 configuration as explained in Section 5.3.1, additional 953 supplementary topology may be obtained by the MDSC via a path 954 compute request/reply mechanism. 956 The abstract topology advertisements from PNCs give the MDSC the 957 border node/link information for each domain. Under this scenario, 958 when the MDSC needs to create a new VN, the MDSC can issue path 959 computation requests to PNCs with constraints matching the VN 960 request as described in [ACTN-YANG]. An example is provided in 961 Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2. 962 The MDSC could use two different inter-domain links to get from 963 Domain X to Domain Y, but in order to choose the best end-to-end 964 path it needs to know what domain X and Y can offer in terms of 965 connectivity and constraints between the PE nodes and the border 966 nodes. 
968          -------                 --------
969         (       )               (        )
970        - BrdrX.1-------  -------BrdrY.1   -
971       (+---+        )          (        +---+)
972   -+---(|PE1| Dom.X  )          (  Dom.Y |PE2|)---+-
973    |   (+---+        )          (        +---+)   |
974   AP1   - BrdrX.2-------  -------BrdrY.2  -      AP2
975         (       )               (        )
976          -------                 --------

978            Figure 8: A Multi-Domain Example

979 The MDSC issues a path computation request to PNC.X asking for
980 potential connectivity between PE1 and border node BrdrX.1 and
981 between PE1 and BrdrX.2 with related objective functions and TE
982 metric constraints. A similar request for connectivity from the
983 border nodes in Domain Y to PE2 will be issued to PNC.Y. The MDSC
984 merges the results to compute the optimal end-to-end path, including
985 the inter-domain links. The MDSC can use the result of this
986 computation to request the PNCs to provision the underlying
987 networks, and the MDSC can then use the end-to-end path as a virtual
988 link in the VN it delivers to the customer.

990 5.4. Hierarchical Topology Abstraction Example

992 This section illustrates how topology abstraction operates at
993 different levels of a hierarchy of MDSCs, as shown in Figure 9.
995                          +-----+
996                          | CNC |    CNC wants to create a VN
997                          +-----+    between CE A and CE B
998                             |
999                             |
1000                +-----------------------+
1001                |        MDSC-H         |
1002                +-----------------------+
1003                    /             \
1004                   /               \
1005            +---------+        +---------+
1006            | MDSC-L1 |        | MDSC-L2 |
1007            +---------+        +---------+
1008              /     \            /     \
1009             /       \          /       \
1010         +----+    +----+   +----+    +----+
1011  CE A o----|PNC1|    |PNC2|   |PNC3|    |PNC4|----o CE B
1012         +----+    +----+   +----+    +----+

1014      Virtual Network Delivered to CNC

1016           CE A o==============o CE B

1018      Topology operated on by MDSC-H

1020           CE A o----o==o==o===o----o CE B

1022  Topology operated on by MDSC-L1    Topology operated on by MDSC-L2
1023         _       _                          _       _
1024        ( )     ( )                        ( )     ( )
1025        ( )     ( )                        ( )     ( )
1026  CE A o--(o---o)==(o---o)==Dom.3    Dom.2==(o---o)==(o---o)--o CE B
1027        ( )     ( )                        ( )     ( )
1028        (_)     (_)                        (_)     (_)

1030                        Actual Topology
1031        ___         ___          ___          ___
1032       (   )       (   )        (   )        (   )
1033      (  o  )     (  o  )      ( o--o )     (  o  )
1034      ( / \ )     ( |\  )      ( |  | )     ( / \ )
1035 CE A o---(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)---o CE B
1036      ( \ / )     ( | |/ )     ( |  | )     ( \ / )
1037      (  o  )     (o-o   )     ( o--o )     (  o  )
1038       (___)       (___)        (___)        (___)

1040     Domain 1      Domain 2     Domain 3     Domain 4

1042  Where
1043       o  is a node
1044      --- is a link
1045      === border link

1047     Figure 9: Illustration of Hierarchical Topology Abstraction

1049 In the example depicted in Figure 9, there are four domains under
1050 control of PNCs PNC1, PNC2, PNC3, and PNC4. MDSC-L1 controls PNC1
1051 and PNC2, while MDSC-L2 controls PNC3 and PNC4. Each of the PNCs
1052 provides a grey topology abstraction that presents only border nodes
1053 and links across and outside the domain. The abstract topology
1054 that MDSC-L1 operates on is a combination of the two topologies from
1055 PNC1 and PNC2. Likewise, the abstract topology that MDSC-L2
1056 operates on is shown in Figure 9. Both MDSC-L1 and MDSC-L2 provide a
1057 black topology abstraction to MDSC-H in which each PNC domain is
1058 presented as a single virtual node.
MDSC-H combines these two
1059 topologies to create the abstract topology on which it operates.
1060 MDSC-H sees the whole four-domain network as four virtual nodes
1061 connected via virtual links.

1063 5.5. VN Recursion with Network Layers

1065 In some cases the VN supplied to a customer may be built using
1066 resources from different technology layers operated by different
1067 providers. For example, one provider may run a packet TE network
1068 and use optical connectivity provided by another provider.

1070 As shown in Figure 10, a customer asks for end-to-end connectivity
1071 between CE A and CE B, a virtual network. The customer's CNC makes a
1072 request to Provider 1's MDSC. The MDSC works out which network
1073 resources need to be configured and sends instructions to the
1074 appropriate PNCs. However, the link between Q and R is a virtual
1075 link supplied by Provider 2: Provider 1 is a customer of Provider 2.

1077 To support this, Provider 1 has a CNC that communicates with Provider
1078 2's MDSC. Note that Provider 1's CNC in Figure 10 is a functional
1079 component that does not dictate implementation: it may be embedded
1080 in a PNC.
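The delegation just described can be sketched in a few lines. All class and method names below are illustrative assumptions rather than ACTN-defined constructs: a higher-layer provider's controller provisions each hop of a path, and when it reaches a virtual link it acts as a customer (CNC) of the lower-layer provider, asking that provider's MDSC to realize the link.

```python
# Hedged sketch of VN recursion across providers (names are invented for
# this example): the Q-R link in Provider 1's path is a virtual link whose
# realization is delegated to Provider 2's MDSC.
class Mdsc:
    def __init__(self, name, virtual_links=None):
        self.name = name
        # maps a (node, node) hop to the lower-layer provider's MDSC
        self.virtual_links = virtual_links or {}

    def provision(self, path):
        actions = []
        for hop in zip(path, path[1:]):
            lower = self.virtual_links.get(hop)
            if lower:  # recurse: act as a customer of the lower provider
                actions += lower.provision(["X", "Y", "Z"])
            else:
                actions.append(f"{self.name} provisions link {hop}")
        return actions

provider2 = Mdsc("Provider2-MDSC")
provider1 = Mdsc("Provider1-MDSC", virtual_links={("Q", "R"): provider2})
for action in provider1.provision(["P", "Q", "R", "S"]):
    print(action)
```

The recursion mirrors the statement above that the interface between the layers is, functionally, another instance of the customer-provider relationship.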
1082  Virtual  CE A o===============================o CE B
1083  Network

1085                    -----        CNC wants to create a VN
1086  Customer         | CNC |       between CE A and CE B
1087                    -----
1088                      :
1089  ***********************************************
1090                      :
1091  Provider 1     ---------------------------
1092                |           MDSC            |
1093                 ---------------------------
1094                   :          :          :
1095                   :          :          :
1096                 -----   -------------   -----
1097                | PNC | |     PNC     | | PNC |
1098                 -----   -------------   -----
1099                  : :      :       :      : :
1100  Higher          v v      :       v      v v
1101  Layer   CE A o---P-----Q===========R-----S---o CE B
1102  Network              |       :       |
1103                       |       :       |
1104                       |     -----     |
1105                       |    | CNC |    |
1106                       |     -----     |
1107                       |       :       |
1108  ***********************************************
1109                       |       :       |
1110  Provider 2           |     ------    |
1111                       |    | MDSC |   |
1112                       |     ------    |
1113                       |       :       |
1114                       |    -------    |
1115                       |   |  PNC  |   |
1116                       |    -------    |
1117                        \   : : :     /
1118  Lower                  \ v v v     /
1119  Layer                    X--Y--Z
1120  Network

1122          Figure 10: VN Recursion with Network Layers

1124 6. Access Points and Virtual Network Access Points

1126 In order to map identification of connections between the customer's
1127 sites and the TE networks and to scope the connectivity requested in
1128 the VNS, the CNC and the MDSC refer to the connections using the
1129 Access Point (AP) construct, as shown in Figure 11.

1131                   -------------
1132                  (             )
1133                 -               -
1134   +---+   X    (                 )    Z   +---+
1135   |CE1|---+----(                 )----+---|CE2|
1136   +---+   |    (                 )    |   +---+
1137    AP1     -                     -   AP2
1138                  (             )
1139                   -------------

1141              Figure 11: Customer View of APs

1143 Let us take as an example the scenario shown in Figure 11. CE1 is
1144 connected to the network via a 10Gb link and CE2 via a 40Gb link.
1145 Before the creation of any VN between AP1 and AP2, the customer view
1146 can be summarized as shown in Table 1.
1148        +----------+------------------------+
1149        |End Point | Access Link Bandwidth  |
1150  +-----+----------+----------+-------------+
1151  |AP id| CE,port  | MaxResBw | AvailableBw |
1152  +-----+----------+----------+-------------+
1153  | AP1 |CE1,portX |   10Gb   |    10Gb     |
1154  +-----+----------+----------+-------------+
1155  | AP2 |CE2,portZ |   40Gb   |    40Gb     |
1156  +-----+----------+----------+-------------+

1158            Table 1: AP - Customer View

1160 On the other hand, what the provider sees is shown in Figure 12.

1162           -------                -------
1163          (       )              (       )
1164         -         -            -         -
1165    W   (+---+      )          (      +---+)   Y
1166   -+---(|PE1| Dom.X )--------( Dom.Y |PE2|)---+-
1167    |   (+---+      )          (      +---+)   |
1168   AP1   -         -            -         -   AP2
1169          (       )              (       )
1170           -------                -------

1172          Figure 12: Provider view of the AP

1174 This results in the summarization shown in Table 2.

1176        +----------+------------------------+
1177        |End Point | Access Link Bandwidth  |
1178  +-----+----------+----------+-------------+
1179  |AP id| PE,port  | MaxResBw | AvailableBw |
1180  +-----+----------+----------+-------------+
1181  | AP1 |PE1,portW |   10Gb   |    10Gb     |
1182  +-----+----------+----------+-------------+
1183  | AP2 |PE2,portY |   40Gb   |    40Gb     |
1184  +-----+----------+----------+-------------+

1186            Table 2: AP - Provider View

1188 A Virtual Network Access Point (VNAP) is defined as the binding
1189 between an AP and a VN; it is used to allow different VNs to start
1190 from the same AP. It also allows for traffic
1191 engineering on the access and/or inter-domain links (e.g., keeping
1192 track of bandwidth allocation). A different VNAP is created on an
1193 AP for each VN.

1195 In this simple scenario, suppose we want to create two virtual
1196 networks: the first with VN identifier 9 between AP1 and AP2 with a
1197 bandwidth of 1 Gbps, and the second with VN identifier 5, again
1198 between AP1 and AP2, with a bandwidth of 2 Gbps.

1200 The provider view would evolve as shown in Table 3.
1202           +----------+------------------------+
1203           |End Point | Access Link/VNAP Bw    |
1204  +---------+----------+----------+-------------+
1205  |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
1206  +---------+----------+----------+-------------+
1207  |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
1208  | -VNAP1.9|          |   1Gbps  |    N.A.     |
1209  | -VNAP1.5|          |   2Gbps  |    N.A.     |
1210  +---------+----------+----------+-------------+
1211  |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
1212  | -VNAP2.9|          |   1Gbps  |    N.A.     |
1213  | -VNAP2.5|          |   2Gbps  |    N.A.     |
1214  +---------+----------+----------+-------------+

1215    Table 3: AP and VNAP - Provider View after VNS Creation

1217 6.1. Dual-Homing Scenario

1219 Often there is a dual-homing relationship between a CE and a pair of
1220 PEs. This case needs to be supported by the definition of VNs, APs,
1221 and VNAPs. Suppose CE1 is connected to two different PEs in the
1222 operator domain via AP1 and AP2, and that the customer needs 5 Gbps
1223 of bandwidth between CE1 and CE2. This is shown in Figure 13.

1225              ____________
1226         AP1 (            ) AP3
1227       -------(PE1)    (PE3)-------
1228      W /    (            )    \ X
1229  +---+/     (            )     \+---+
1230  |CE1|      (            )      |CE2|
1231  +---+\     (            )     /+---+
1232      Y \    (            )    / Z
1233       -------(PE2)    (PE4)-------
1234         AP2 (____________)

1236            Figure 13: Dual-Homing Scenario

1238 In this case, the customer will request a VN between AP1, AP2, and
1239 AP3, specifying a dual-homing relationship between AP1 and AP2. As
1240 a consequence, no traffic will flow between AP1 and AP2. The dual-
1241 homing relationship would then be mapped against the VNAPs (since
1242 other independent VNs might have AP1 and AP2 as end points).

1244 The resulting customer view is shown in Table 4.
1246            +----------+------------------------+
1247            |End Point | Access Link/VNAP Bw    |
1248  +---------+----------+----------+-------------+-----------+
1249  |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
1250  +---------+----------+----------+-------------+-----------+
1251  |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
1252  | -VNAP1.9|          |   5Gbps  |    N.A.     |  VNAP2.9  |
1253  +---------+----------+----------+-------------+-----------+
1254  |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
1255  | -VNAP2.9|          |   5Gbps  |    N.A.     |  VNAP1.9  |
1256  +---------+----------+----------+-------------+-----------+
1257  |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
1258  | -VNAP3.9|          |   5Gbps  |    N.A.     |   NONE    |
1259  +---------+----------+----------+-------------+-----------+

1261     Table 4: Dual-Homing - Customer View after VN Creation

1263 7. Advanced ACTN Application: Multi-Destination Service

1265 A further advanced application of ACTN is in the case of Data Center
1266 selection, where the customer requires the Data Center selection to
1267 be based on the network status; this is referred to as Multi-
1268 Destination in [ACTN-REQ]. In terms of ACTN, a CNC could request a
1269 connectivity service (virtual network) between a set of source APs
1270 and destination APs and leave it up to the network (MDSC) to decide
1271 which source and destination access points should be used to set up
1272 the connectivity service (virtual network). The candidate list of
1273 source and destination APs is decided by a CNC (or an entity outside
1274 of ACTN) based on certain factors which are outside the scope of
1275 ACTN.

1277 Based on the AP selection as determined and returned by the network
1278 (MDSC), the CNC (or an entity outside of ACTN) should further take
1279 care of any subsequent actions such as orchestration or service
1280 setup requirements. These further actions are outside the scope of
1281 ACTN.
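The selection logic left to the MDSC can be sketched as follows. This is a minimal illustration: the metric names and values are assumptions made for the example, and a real MDSC would apply the full set of constraints, optimization criteria, and policies rather than a single metric.

```python
# Illustrative sketch of multi-destination selection by the MDSC: given
# candidate destination APs supplied by the CNC, pick the destination whose
# feasible path best satisfies the optimization criterion (here, lowest
# delay among destinations with enough available bandwidth).
def select_destination(candidates, required_bw):
    """candidates: dict AP -> {'bw': available bandwidth, 'delay': path delay}."""
    feasible = {ap: m for ap, m in candidates.items() if m["bw"] >= required_bw}
    if not feasible:
        return None  # no destination can satisfy the request
    return min(feasible, key=lambda ap: feasible[ap]["delay"])

# Example network status toward three candidate data centers:
status = {
    "AP2(DC-A)": {"bw": 10, "delay": 12},
    "AP3(DC-B)": {"bw": 2,  "delay": 5},   # too little bandwidth
    "AP4(DC-C)": {"bw": 10, "delay": 8},
}
print(select_destination(status, required_bw=5))  # AP4(DC-C)
```

Once the destination is selected, the MDSC would proceed to set up the connectivity service (virtual network) toward that AP, as described above.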
1283 Consider a case as shown in Figure 14, where three data centers are
1284 available, but the customer requires the data center selection to be
1285 based on the network status and the connectivity service set up
1286 between AP1 (CE1) and one of the destination APs (AP2 (DC-A),
1287 AP3 (DC-B), and AP4 (DC-C)). The MDSC (in coordination with PNCs)
1288 would select the best destination AP based on the constraints,
1289 optimization criteria, policies, etc., and set up the connectivity
1290 service (virtual network).

1292           -------                -------
1293          (       )              (       )
1294         -         -            -         -
1295   +---+ (           )          (           ) +----+
1296   |CE1|---+---( Domain X )----( Domain Y )---+---|DC-A|
1297   +---+   | (           )      (           ) |   +----+
1298    AP1     -         -          -         -  AP2
1299          (       )              (       )
1300           ---+---                ---+---
1301              |                      |
1302          AP3-+                  AP4-+
1303              |                      |
1304           +----+                 +----+
1305           |DC-B|                 |DC-C|
1306           +----+                 +----+

1308      Figure 14: End-Point Selection Based on Network Status

1310 7.1. Pre-Planned End-Point Migration

1312 Furthermore, in the case of Data Center selection, the customer
1313 could request that a backup DC be selected such that, in case of
1314 failure, another DC site could provide hot stand-by protection. As
1315 shown in Figure 15, DC-C is selected as a backup for DC-A. Thus, the
1316 VN should be set up by the MDSC to include primary connectivity
1317 between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
1318 between AP1 (CE1) and AP4 (DC-C).

1320           -------                -------
1321          (       )              (       )
1322         -         -            -         -
1323   +---+ (           )          (           ) +----+
1324   |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A|
1325   +---+   | (           )      (           ) |   +----+
1326    AP1     -         -          -         -  AP2    |
1327          (       )              (       )           |
1328           ---+---                ---+---            |
1329              |                      |               |
1330          AP3-+                  AP4-+          HOT STANDBY
1331              |                      |               |
1332           +----+                 +----+             |
1333           |DC-D|                 |DC-C|<-------------
1334           +----+                 +----+

1336          Figure 15: Pre-planned End-Point Migration

1338 7.2.
On-the-Fly End-Point Migration

1340 Compared to pre-planned end-point migration, on-the-fly end-point
1341 selection is dynamic in that the migration is not pre-planned but
1342 is decided based on network conditions. Under this scenario, the
1343 MDSC would monitor the network (based on the VN SLA) and notify the
1344 CNC in cases where some other destination AP would be a better choice
1345 based on the network parameters. The CNC should instruct the MDSC
1346 when it is suitable to update the VN with the new AP, if required.

1349 8. Manageability Considerations

1351 The objective of ACTN is to manage traffic engineered resources and
1352 provide a set of mechanisms to allow customers to request virtual
1353 connectivity across server network resources. Because ACTN supports
1354 multiple customers, each with its own view of, and control over, a
1355 virtual network built on the server network, the network operator
1356 will need to partition (or "slice") its network resources and
1357 manage them accordingly.

1359 The ACTN platform will, itself, need to support the request,
1360 response, and reservations of client and network layer connectivity.
1361 It will also need to provide performance monitoring and control of
1362 traffic engineered resources. The management requirements may be
1363 categorized as follows:

1365  . Management of external ACTN protocols
1366  . Management of internal ACTN interfaces/protocols
1367  . Management and monitoring of ACTN components
1368  . Configuration of policy to be applied across the ACTN system

1370 The ACTN framework and interfaces are defined to enable traffic
1371 engineering for virtual networks. Network operators may have other
1372 Operations, Administration, and Maintenance (OAM) tasks for service
1373 fulfillment, optimization, and assurance beyond traffic engineering.
1374 The realization of OAM beyond abstraction and control of traffic
1375 engineered networks is not considered in this document.

1377 8.1.
Policy

1379 Policy is an important aspect of ACTN control and management.
1380 Policies are used via the components and interfaces, during
1381 deployment of the service, to ensure that the service is compliant
1382 with agreed policy factors and variations (often described in SLAs).
1383 These include, but are not limited to: connectivity, bandwidth,
1384 geographical transit, technology selection, security, resilience,
1385 and economic cost.

1387 Depending on the deployment of the ACTN architecture, some policies
1388 may have local or global significance. That is, certain policies
1389 may be specific to an individual ACTN component in scope, while
1390 others may have broader scope and interact with multiple ACTN
1391 components. Two examples are provided below:

1393  . A local policy might limit the number, type, size, and
1394    scheduling of virtual network services a customer may request
1395    via its CNC. This type of policy would be implemented locally
1396    on the MDSC.

1398  . A global policy might constrain certain customer types (or
1399    specific customer applications) to only use certain MDSCs, and
1400    be restricted to physical network types managed by the PNCs. A
1401    global policy agent would govern these types of policies.

1403 The objective of this section is to discuss the applicability of
1404 ACTN policy: requirements, components, interfaces, and examples.
1405 This section provides an analysis and does not mandate a specific
1406 method for enforcing policy or the type of policy agent that would
1407 be responsible for propagating policies across the ACTN components.
1408 It does highlight examples of how policy may be applied in the
1409 context of ACTN, but it is expected that further discussion in an
1410 applicability or solution-specific document will be required.

1412 8.2. Policy Applied to the Customer Network Controller

1414 A virtual network service for a customer application will be
1415 requested by the CNC.
The request will reflect the application
1416 requirements and specific service needs, including bandwidth,
1417 traffic type, and survivability. Furthermore, application access and
1418 the type of virtual network service requested by the CNC will need
1419 to adhere to specific access control policies.

1421 8.3. Policy Applied to the Multi Domain Service Coordinator

1423 A key objective of the MDSC is to support the customer's expression
1424 of the application connectivity request via its CNC as a set of
1425 desired business needs; therefore, policy will play an important
1426 role.

1428 Once authorized, the virtual network service will be instantiated
1429 via the CNC-MDSC Interface (CMI). It will reflect the customer
1430 application and connectivity requirements and specific service
1431 transport needs. The CNC and the MDSC components will have agreed
1432 on connectivity end-points; use of these end-points should be defined
1433 as a policy expression when setting up or augmenting virtual network
1434 services. Ensuring that permissible end-points are defined for CNCs
1435 and applications will require the MDSC to maintain a registry of
1436 permissible connection points for CNCs and application types.

1438 Conflicts may occur when virtual network service optimization
1439 criteria are in competition. For example, to meet objectives for
1440 service reachability, a request may require an interconnection point
1441 between multiple physical networks; however, this might break a
1442 confidentiality policy requirement of a specific type of end-to-end
1443 service. Thus, an MDSC may have to balance a number of constraints
1444 on a service request and between different requested services. It
1445 may also have to balance requested services with operational norms
1446 for the underlying physical networks. This balancing may be
1447 achieved using configured policy and hard and soft policy
1448 constraints.

1450 8.4.
Policy Applied to the Provisioning Network Controller

1452 The PNC is responsible for configuring the network elements,
1453 monitoring physical network resources, and exposing connectivity
1454 (direct or abstracted) to the MDSC. It is therefore expected that
1455 policy will dictate what connectivity information will be exported
1456 between the PNC and the MDSC via the MDSC-PNC Interface (MPI).

1458 Policy interactions may arise when a PNC determines that it cannot
1459 compute a requested path from the MDSC, or notices that (per a
1460 locally configured policy) the network is low on resources (for
1461 example, the capacity on key links becomes exhausted). In either
1462 case, the PNC will be required to notify the MDSC, which may (again
1463 per policy) act to construct a virtual network service across
1464 another physical network topology.

1466 Furthermore, additional forms of policy-based resource management
1467 will be required to provide virtual network service performance,
1468 security, and resilience guarantees. This will likely be implemented
1469 via a local policy agent and additional protocol methods.

1471 9. Security Considerations

1473 The ACTN framework described in this document defines key components
1474 and interfaces for managed traffic engineered networks. Securing
1475 the request and control of resources, confidentiality of the
1476 information, and availability of function should all be critical
1477 security considerations when deploying and operating ACTN platforms.

1479 Several distributed ACTN functional components are required, and
1480 implementations should consider encrypting data that flows between
1481 components, especially when they are implemented at remote nodes,
1482 regardless of whether these data flows are on external or internal
1483 network interfaces.

1485 The ACTN security discussion is further split into two specific
1486 categories described in the following sub-sections:

1488  .
   .  Interface between the Customer Network Controller and Multi
      Domain Service Coordinator (MDSC), the CNC-MDSC Interface (CMI)

   .  Interface between the Multi Domain Service Coordinator and
      Provisioning Network Controller (PNC), the MDSC-PNC Interface
      (MPI)

   From a security and reliability perspective, ACTN may encounter
   many risks, such as malicious attacks and rogue elements attempting
   to connect to various ACTN components.  Furthermore, some ACTN
   components represent a single point of failure and threat vector,
   and must also manage policy conflicts and eavesdropping of
   communication between different ACTN components.

   The conclusion is that all protocols used to realize the ACTN
   framework should have rich security features, and customer,
   application, and network data should be stored in encrypted data
   stores.  Additional security risks may still exist.  Therefore,
   discussion and applicability of specific security functions and
   protocols will be better described in documents that are use-case
   and environment specific.

9.1. CNC-MDSC Interface (CMI)

   Data stored by the MDSC will reveal details of the virtual network
   services, and which CNC and customer/application is consuming the
   resource.  The data stored must therefore be considered as a
   candidate for encryption.

   CNC access rights to an MDSC must be managed.  The MDSC must
   allocate resources properly, and methods to prevent policy
   conflicts, resource wastage, and denial-of-service attacks on the
   MDSC by rogue CNCs should also be considered.

   The CMI will likely be an external protocol interface.  Suitable
   authentication and authorization of each CNC connecting to the MDSC
   will be required, especially as these are likely to be implemented
   by different organizations and on separate functional nodes.
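   As a minimal sketch of such per-CNC authorization (assuming a
   simple role model; the CNC identifiers, roles, and MDSC function
   names are invented for illustration, and a real deployment would
   delegate authentication and role assignment to an AAA system):

```python
# Hypothetical sketch of role-based authorization on the CMI: each
# authenticated CNC maps to a role, and each MDSC function lists the
# roles allowed to invoke it.  All names here are illustrative only.

CNC_ROLES = {
    "cnc-ops":  "operator",
    "cnc-cust": "customer",
}

# MDSC functions and the roles permitted to call them
FUNCTION_POLICY = {
    "create_vn_service": {"operator", "customer"},
    "delete_vn_service": {"operator"},
    "read_topology":     {"operator"},
}


def cmi_authorize(cnc_id, function):
    """Return True only if the CNC's role may invoke the MDSC function."""
    role = CNC_ROLES.get(cnc_id)
    if role is None:  # unknown CNC: authentication failed upstream
        return False
    return role in FUNCTION_POLICY.get(function, set())
```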
   Use of AAA-based mechanisms would also provide role-based
   authorization methods, so that only authorized CNCs may access the
   different functions of the MDSC.

9.2. MDSC-PNC Interface (MPI)

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the
   physical network layer control components and the MDSC.

   Which MDSC the PNC exports topology information to, and the level
   of detail (full or abstracted), should also be authenticated;
   specific access restrictions and topology views should be
   configurable and/or policy-based.

10. IANA Considerations

   This document has no actions for IANA.

11. References

11.1. Informative References

   [RFC2702]  Awduche, D., et al., "Requirements for Traffic
              Engineering Over MPLS", RFC 2702, September 1999.

   [RFC3945]  Mannie, E., Ed., "Generalized Multi-Protocol Label
              Switching (GMPLS) Architecture", RFC 3945, October 2004.

   [RFC4655]  Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
              Computation Element (PCE)-Based Architecture", RFC 4655,
              August 2006.

   [RFC5654]  Niven-Jenkins, B., Ed., Brungard, D., Ed., and M. Betts,
              Ed., "Requirements of an MPLS Transport Profile",
              RFC 5654, September 2009.

   [RFC7149]  Boucadair, M. and C. Jacquenet, "Software-Defined
              Networking: A Perspective from within a Service Provider
              Environment", RFC 7149, March 2014.

   [RFC7926]  Farrel, A., Ed., "Problem Statement and Architecture for
              Information Exchange between Interconnected Traffic-
              Engineered Networks", RFC 7926, July 2016.

   [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
              1.1, ONF TR-521, June 2016.
   [Centralized]
              Farrel, A., et al., "An Architecture for Use of PCE and
              PCEP in a Network with Central Control", draft-ietf-
              teas-pce-central-control, work in progress.

   [Service-YANG]
              Lee, Y., Dhody, D., and Ceccarelli, D., "Traffic
              Engineering and Service Mapping Yang Model", draft-lee-
              teas-te-service-mapping-yang, work in progress.

   [ACTN-YANG]
              Lee, Y., et al., "A Yang Data Model for ACTN VN
              Operation", draft-lee-teas-actn-vn-yang, work in
              progress.

   [ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and
              Control of TE Networks", draft-ietf-teas-actn-
              requirements, work in progress.

   [TE-Topo]  Liu, X., et al., "YANG Data Model for TE Topologies",
              draft-ietf-teas-yang-te-topo, work in progress.

12. Contributors

   Adrian Farrel
   Old Dog Consulting
   Email: adrian@olddog.co.uk

   Italo Busi
   Huawei
   Email: Italo.Busi@huawei.com

   Khuzema Pithewan
   Infinera
   Email: kpithewan@infinera.com

   Michael Scharf
   Nokia
   Email: michael.scharf@nokia.com

   Luyuan Fang
   eBay
   Email: luyuanf@gmail.com

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid, Spain
   Email: diego@tid.es

   Sergio Belotti
   Alcatel Lucent
   Via Trento, 30
   Vimercate, Italy
   Email: sergio.belotti@nokia.com

   Daniel King
   Lancaster University
   Email: d.king@lancaster.ac.uk

   Dhruv Dhody
   Huawei Technologies
   Divyashree Techno Park, Whitefield
   Bangalore, Karnataka 560066
   India
   Email: dhruv.ietf@gmail.com

   Gert Grammel
   Juniper Networks
   Email: ggrammel@juniper.net

Authors' Addresses

   Daniele Ceccarelli
   Ericsson
   Torshamnsgatan 48
   Stockholm, Sweden
   Email: daniele.ceccarelli@ericsson.com

   Young Lee
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023, USA
   Phone: (469) 277-5838
   Email: leeyoung@huawei.com
APPENDIX A - Example of MDSC and PNC Functions Integrated in a
             Service/Network Orchestrator

   This section provides an example of a possible deployment scenario,
   in which a Service/Network Orchestrator can include a number of
   functionalities.  In the example below, these are the PNC
   functionalities for Domain 2 and the MDSC functionalities needed to
   coordinate the PNC1 functionalities (hosted in a separate domain
   controller) with the PNC2 functionalities (co-hosted in the network
   orchestrator).

                     Customer
            +-------------------------------+
            |    +-----+                    |
            |    | CNC |                    |
            |    +-----+                    |
            +-------|-----------------------+
                    |
    Service/Network | CMI
    Orchestrator    |
            +-------|------------------------+
            | +------+   MPI   +------+      |
            | | MDSC |---------| PNC2 |      |
            | +------+         +------+      |
            +-------|------------------|-----+
                    | MPI              |
    Domain          |                  |
    Controller      |                  |
            +-------|-----+            |
            | +-----+     |            | SBI
            | |PNC1 |     |            |
            | +-----+     |            |
            +-------|-----+            |
                    v SBI              v
                 -------            -------
                (       )          (       )
               -         -        -         -
              (           )      (           )
              (  Domain 1 )------(  Domain 2 )
              (           )      (           )
               -         -        -         -
                (       )          (       )
                 -------            -------
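   The delegation implied by this deployment example might be sketched
   as follows.  This is an illustrative model only: the class names,
   the per-domain split of the request into segments, and the segment
   identifiers are assumptions made for the sketch, not part of the
   ACTN framework.

```python
# Hypothetical sketch of the deployment above: the orchestrator
# co-hosts the MDSC and PNC2, while PNC1 lives in a separate domain
# controller and is reached over the MPI.  A multi-domain request is
# split by the MDSC into per-domain provisioning calls.

class PNC:
    def __init__(self, name, domain):
        self.name, self.domain = name, domain

    def provision(self, segment):
        # A real PNC would configure network elements over the SBI;
        # here we just record the action taken.
        return f"{self.name} provisioned {segment} in {self.domain}"


class MDSC:
    def __init__(self):
        self.pncs = {}  # domain -> PNC, reached over the MPI

    def attach_pnc(self, pnc):
        self.pncs[pnc.domain] = pnc

    def create_service(self, segments):
        """segments: mapping of domain -> path segment to provision."""
        return [self.pncs[d].provision(s) for d, s in segments.items()]


# Orchestrator: MDSC plus co-hosted PNC2; PNC1 is external.
mdsc = MDSC()
mdsc.attach_pnc(PNC("PNC1", "Domain 1"))  # separate domain controller
mdsc.attach_pnc(PNC("PNC2", "Domain 2"))  # co-hosted in the orchestrator

results = mdsc.create_service({"Domain 1": "seg-A", "Domain 2": "seg-B"})
```

   The key point of the figure is preserved here: whether a PNC is
   co-hosted or remote is invisible to the MDSC logic, which addresses
   both uniformly over the MPI.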