TEAS Working Group                             Daniele Ceccarelli (Ed)
Internet Draft                                                Ericsson
Intended status: Informational                          Young Lee (Ed)
Expires: April 18, 2018                                         Huawei

                                                      October 18, 2017

 Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-10

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane.
   They also have a range of management and provisioning protocols to
   configure and activate network resources.  These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 18, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Overview.......................................................4
      2.1. Terminology...............................................5
      2.2. VNS Model of ACTN.........................................8
         2.2.1. Customers............................................9
         2.2.2. Service Providers...................................10
         2.2.3. Network Providers...................................10
   3. ACTN Base Architecture........................................10
      3.1. Customer Network Controller..............................12
      3.2. Multi-Domain Service Coordinator.........................13
      3.3. Provisioning Network Controller..........................13
      3.4. ACTN Interfaces..........................................14
   4. Advanced ACTN Architectures...................................15
      4.1. MDSC Hierarchy...........................................15
      4.2. Functional Split of MDSC Functions in Orchestrators......16
   5. Topology Abstraction Methods..................................17
      5.1. Abstraction Factors......................................17
      5.2. Abstraction Types........................................18
         5.2.1. Native/White Topology...............................18
         5.2.2. Black Topology......................................18
         5.2.3. Grey Topology.......................................19
      5.3. Methods of Building Grey Topologies......................20
         5.3.1. Automatic Generation of Abstract Topology by
                Configuration.......................................21
         5.3.2. On-demand Generation of Supplementary Topology via
                Path Compute Request/Reply..........................21
      5.4. Hierarchical Topology Abstraction Example................22
      5.5. VN Recursion with Network Layers.........................23
   6. Access Points and Virtual Network Access Points...............25
      6.1. Dual-Homing Scenario.....................................27
   7. Advanced ACTN Application: Multi-Destination Service..........28
      7.1. Pre-Planned End Point Migration..........................29
      7.2. On the Fly End-Point Migration...........................30
   8. Manageability Considerations..................................30
      8.1. Policy...................................................30
      8.2. Policy Applied to the Customer Network Controller........31
      8.3. Policy Applied to the Multi Domain Service Coordinator...31
      8.4. Policy Applied to the Provisioning Network Controller....32
   9. Security Considerations.......................................32
      9.1. CNC-MDSC Interface (CMI).................................33
      9.2. MDSC-PNC Interface (MPI).................................34
   10. IANA Considerations..........................................34
   11. References...................................................34
      11.1. Informative References..................................34
   12. Contributors.................................................35
   Authors' Addresses...............................................36
   APPENDIX A - Example of MDSC and PNC Functions Integrated in A
   Service/Network Orchestrator.....................................37

1. Introduction

   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a
   distributed or centralized control plane to support dynamic
   provisioning of end-to-end connectivity.  Traffic Engineered (TE)
   networks have a variety of mechanisms to facilitate the separation
   of data plane and control plane, including distributed signaling
   for path setup and protection, centralized path computation for
   planning and traffic engineering, and a range of management and
   provisioning protocols to configure and activate network resources.
   These mechanisms represent key technologies for enabling flexible
   and dynamic networking.  Some examples of networks that are in
   scope of this definition are optical networks, MPLS Transport
   Profile (MPLS-TP) networks [RFC5654], and MPLS-TE networks
   [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane.  This separation has been achieved for TE networks with
   the development of MPLS/GMPLS [RFC3945] and the Path Computation
   Element (PCE) [RFC4655].  One of the advantages of SDN is its
   logically centralized control regime that allows a global view of
   the underlying networks.  Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control.  For TE-based networks, a PCE may serve as a logically
   centralized path computation function.

   This document describes a set of management and control functions
   used to operate one or more TE networks to construct virtual
   networks that can be represented to customers and that are built
   from abstractions of the underlying TE networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying networks.  We call
   this set of functions "Abstraction and Control of Traffic
   Engineered Networks" (ACTN).

2. Overview

   Three key aspects that need to be addressed by SDN are:

   . Separation of service requests from service delivery so that the
     configuration and operation of a network is transparent from the
     point of view of the customer, but remains responsive to the
     customer's services and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a
     TE network to produce selective information that represents the
     potential ability to connect across the network.  The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies,
     capabilities, and topology so that the graph can be used to plan
     and deliver network services in a uniform way.

   . Coordination of resources across multiple independent networks
     and multiple technology layers to provide end-to-end services
     regardless of whether the networks use SDN or not.

   As networks evolve, the need to provide support for distinct
   services, separated service orchestration, and resource abstraction
   has emerged as a key requirement for operators.
   In order to support multiple customers, each with its own view of
   and control of the server network, a network operator needs to
   partition (or "slice") the network resources or manage sharing of
   them.  Network slices can be assigned to each customer for
   guaranteed usage, which is a step further than shared use of common
   network resources.

   Furthermore, each network represented to a customer can be built
   from virtualization of the underlying networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying network.

   We call the set of management and control functions used to provide
   these features Abstraction and Control of Traffic Engineered
   Networks (ACTN).

   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.  This supports
   operators in viewing and controlling different domains (in any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) and presenting virtualized networks to
   their customers.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application, or service [ONF-ARCH].

   . Network slicing of infrastructure to meet specific customers'
     service requirements.

   . Creation of a virtualized environment allowing operators to view
     and control multi-domain networks as a single virtualized
     network.

   . The presentation to customers of networks as a virtual network
     via open and programmable interfaces.

2.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined; some others reference existing definitions:

   . Domain: A domain [RFC4655] is any collection of network elements
     within a common sphere of address management or path computation
     responsibility.  Specifically within this document we mean a
     part of an operator's network that is under common management.
     Network elements will often be grouped into domains based on
     technology types, vendor profiles, and geographic proximity.

   . Abstraction: This process is defined in [RFC7926].

   . Network Slicing: In the context of ACTN, a network slice is a
     collection of resources that is used to establish a logically
     dedicated virtual network over one or more TE networks.  Network
     slicing allows a network provider to provide dedicated virtual
     networks for applications/customers over a common network
     infrastructure.  The logically dedicated resources are a part of
     the larger common network infrastructure that is shared among
     various network slice instances; a network slice instance is the
     end-to-end realization of network slicing, consisting of a
     combination of physically or logically dedicated resources.

   . Node: A node is a vertex on the graph representation of a TE
     topology.  In a physical network topology, a node corresponds to
     a physical network element (NE) such as a router.  In an
     abstract network topology, a node (sometimes called an abstract
     node) is a representation as a single vertex of one or more
     physical NEs and their connecting physical connections.
     The concept of a node represents the ability to connect from any
     access to the node (a link end) to any other access to that
     node, although "limited cross-connect capabilities" may also be
     defined to restrict this functionality.  Just as network slicing
     and network abstraction may be applied recursively, so a node in
     one topology may be created by applying slicing or abstraction
     to the nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology.  Two nodes connected by a link are said to be
     "adjacent" in the TE topology.  In a physical network topology,
     a link corresponds to a physical connection.  In an abstract
     network topology, a link (sometimes called an abstract link) is
     a representation of the potential to connect a pair of points
     with certain TE parameters (see [RFC7926] for details).  Network
     slicing/virtualization and network abstraction may be applied
     recursively, so a link in one topology may be created by
     applying slicing and/or abstraction to the links in the
     underlying topology.

   . Abstract Link: The term "abstract link" is defined in [RFC7926].

   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . A Virtual Network (VN) is a network provided by a service
     provider to a customer for the customer to use in any way it
     wants as though it were a physical network.  There are two views
     of a VN as follows:

     a) The VN can be seen as a set of edge-to-edge links (a Type 1
        VN).  Each link is referred to as a VN member and is formed
        as an end-to-end tunnel across the underlying networks.  Such
        tunnels may be constructed by recursive slicing or
        abstraction of paths in the underlying networks and can
        encompass edge points of the customer's network, access
        links, intra-domain paths, and inter-domain links.

     b) The VN can also be seen as a topology of virtual nodes and
        virtual links (a Type 2 VN).  The provider needs to map the
        VN to actual resource assignment, which is known as virtual
        network embedding.  The nodes in this case include physical
        end points, border nodes, and internal nodes as well as
        abstracted nodes.  Similarly, the links include physical
        access links, inter-domain links, and intra-domain links as
        well as abstract links.

     Clearly a Type 1 VN is a special case of a Type 2 VN.

   . Access link: A link between a customer node and a provider node.

   . Inter-domain link: A link between domains under distinct
     management administration.

   . Access Point (AP): An AP is a logical identifier shared between
     the customer and the provider used to identify an access link.
     The AP is used by the customer when requesting a VNS.  Note that
     the term "TE Link Termination Point" (LTP) defined in [TE-Topo]
     describes the end points of links, while an AP is a common
     identifier for the link itself.

   . VN Access Point (VNAP): A VNAP is the binding between an AP and
     a given VN.

   . Server Network: As defined in [RFC7926], a server network is a
     network that provides connectivity for another network (the
     Client Network) in a client-server relationship.

2.2. VNS Model of ACTN

   A Virtual Network Service (VNS) is the service agreement between a
   customer and provider to provide a VN.  There are three types of
   VNS defined in this document.

   o Type 1 VNS refers to a VNS in which the customer is allowed to
     create and operate a Type 1 VN.

   o Type 2a and 2b VNS refer to VNSs in which the customer is
     allowed to create and operate a Type 2 VN.  With a Type 2a VNS,
     the VN is statically created at service configuration time and
     the customer is not allowed to change the topology (e.g., by
     adding or deleting abstract nodes and links).  A Type 2b VNS is
     the same as a Type 2a VNS except that the customer is allowed to
     make dynamic changes to the initial topology created at service
     configuration time.

   VN Operations are functions that a customer can exercise on a VN
   depending on the agreement between the customer and the provider.

   o VN Creation allows a customer to request the instantiation of a
     VN.  This could be through off-line pre-configuration or through
     dynamic requests specifying attributes of a Service Level
     Agreement (SLA) that satisfy the customer's objectives.

   o Dynamic Operations allow a customer to modify or delete the VN.
     The customer can further act upon the virtual network to
     create/modify/delete virtual links and nodes.  These changes
     will result in subsequent tunnel management in the operator's
     networks.
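   The distinctions above can be summarized in a small data model.
   The following Python sketch is purely illustrative (the class and
   field names are invented for this document and are not part of any
   ACTN protocol or YANG model; see [ACTN-YANG] for actual models):
   it shows how a VNS request might record its type and how the type
   constrains later topology changes.

      from dataclasses import dataclass, field
      from enum import Enum
      from typing import List

      class VnsType(Enum):
          TYPE_1 = "type-1"    # customer operates a set of VN members
          TYPE_2A = "type-2a"  # topology fixed at configuration time
          TYPE_2B = "type-2b"  # customer may change topology dynamically

      @dataclass
      class VnMember:
          # An edge-to-edge tunnel between two Access Points (Type 1 view).
          src_ap: str
          dst_ap: str
          bandwidth_gbps: float

      @dataclass
      class VnsRequest:
          vns_type: VnsType
          members: List[VnMember] = field(default_factory=list)

          def modify_topology(self, member: VnMember) -> None:
              # Per the definitions above, a Type 2a VNS does not allow
              # the customer to change the topology after creation.
              if self.vns_type is VnsType.TYPE_2A:
                  raise PermissionError("topology is fixed for a Type 2a VNS")
              self.members.append(member)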
   There are three key entities in the ACTN VNS model:

   - Customers
   - Service Providers
   - Network Providers

   These entities are related in a three tier model as shown in
   Figure 1.

                     +----------------------+
                     |       Customer       |
                     +----------------------+
                                |
            VNS  ||             |             /\ VNS
         Request ||             |             || Reply
                 \/             |             ||
                     +----------------------+
                     |   Service Provider   |
                     +----------------------+
                        /       |       \
                       /        |        \
                      /         |         \
                     /          |          \
   +------------------+ +------------------+ +------------------+
   |Network Provider 1| |Network Provider 2| |Network Provider 3|
   +------------------+ +------------------+ +------------------+

                  Figure 1: The Three Tier Model

   The commercial roles of these entities are described in the
   following sections.

2.2.1. Customers

   Basic customers include fixed residential users, mobile users, and
   small enterprises.  Each requires a small amount of resources and
   is characterized by steady requests (relatively time invariant).
   Basic customers do not modify their services themselves: if a
   service change is needed, it is performed by the provider as a
   proxy.

   Advanced customers include enterprises, governments, and utility
   companies.  Such customers ask for both point-to-point and
   multipoint connectivity with high resource demands varying
   significantly in time.  This is one of the reasons why a bundled
   service offering is not enough and it is desirable to provide each
   advanced customer with a customized virtual network service.
   Advanced customers may also have the ability to modify their
   service parameters within the scope of their virtualized
   environments.  The primary focus of ACTN is advanced customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with
   different underlying objectives set by the network providers.
   To enable these customers to support flexible and dynamic
   applications, they need to control their allocated virtual network
   resources in a dynamic fashion, and that means that they need a
   view of the topology that spans all of the network providers.
   Customers of a given service provider can in turn offer a service
   to other customers in a recursive way.

2.2.2. Service Providers

   In the scope of ACTN, service providers deliver VNSs to their
   customers.  Service providers may or may not own physical network
   resources (i.e., may or may not be network providers as described
   in Section 2.2.3).  When a service provider is the same as the
   network provider, this is similar to existing VPN models applied
   to a single provider, although it may be hard to use this approach
   when the customer spans multiple independent network provider
   domains.

   When network providers supply only infrastructure, while distinct
   service providers interface to the customers, the service
   providers are themselves customers of the network infrastructure
   providers.  A service provider may need relationships with
   multiple independent network providers because its end-users are
   spread geographically across multiple network provider domains.

2.2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers.  The network operated by a network provider may be a
   virtual network created by a service provider and supplied to the
   network provider in its role as a customer.  The layered model
   described in this architecture separates the concerns of network
   providers and customers, with service providers acting as
   aggregators of customer requests.

3. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is based on a 3-tier reference model and
   allows for hierarchy and recursion.  The main functionalities
   within an ACTN system are:

   . Multi-domain coordination: This function oversees the specific
     aspects of different domains and builds a single abstracted end-
     to-end network topology in order to coordinate end-to-end path
     computation and path/service provisioning.  Domain sequence path
     calculation/determination is also a part of this function.

   . Virtualization/Abstraction: This function provides an abstracted
     view of the underlying network resources for use by the customer
     - a customer may be the client or a higher level controller
     entity.  This function includes network path computation based
     on customer service connectivity request constraints, path
     computation based on the global network-wide abstracted
     topology, and the creation of an abstracted view of network
     resources allocated to each customer.  These operations depend
     on customer-specific network objective functions and customer
     traffic profiles.

   . Customer mapping/translation: This function maps customer
     requests/commands into network provisioning requests that can be
     sent to the Provisioning Network Controller (PNC) according to
     business policies provisioned statically or dynamically at the
     OSS/NMS.
     Specifically, it provides mapping and translation of a
     customer's service request into a set of parameters that are
     specific to a network type and technology such that the network
     configuration process is made possible.

   . Virtual service coordination: This function translates customer
     service-related information into virtual network service
     operations in order to seamlessly operate virtual networks while
     meeting a customer's service requirements.  In the context of
     ACTN, service/virtual service coordination includes a number of
     service orchestration functions such as multi-destination load
     balancing and guarantees of service quality, bandwidth, and
     throughput.  It also includes notifications for service fault
     and performance degradation and so forth.

   The base ACTN architecture defines three controller types and the
   corresponding interfaces between these controllers.  The following
   types of controller are shown in Figure 2:

   . CNC - Customer Network Controller
   . MDSC - Multi Domain Service Coordinator
   . PNC - Provisioning Network Controller

   Figure 2 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface
   . SBI - South Bound Interface

                +---------+    +---------+    +---------+
                |   CNC   |    |   CNC   |    |   CNC   |
                +---------+    +---------+    +---------+
   Business          \              |              /
   Boundary   ========\=============|=============/========
   Between             \            |            /
   Customer &           ----        | CMI    ----
   Network Provider         \       |       /
                         +---------------+
                         |     MDSC      |
                         +---------------+
                         /       |       \
                ---------        | MPI    ---------
               /                 |                 \
          +-------+          +-------+          +-------+
          |  PNC  |          |  PNC  |          |  PNC  |
          +-------+          +-------+          +-------+
           | SBI /             |  \                  \
           |    /              |   \ SBI              \
      ---------    -----       |    \                  \
     (         )  (     )      |     \                -----
    ( Control   )( Phys. )     |      \              (     )
    (  Plane    )(  Net  )     |       \             ( Phys.)
    ( Physical  ) -----        |        \            (  Net )
    (  Network )             -----     -----          -----
     (        )             (     )   (     )
      --------              ( Phys.)  ( Phys.)
                            (  Net )  (  Net )
                             -----     -----

                  Figure 2: ACTN Base Architecture

   Note that this is a functional architecture: an implementation and
   deployment might collocate one or more of the functional
   components.

3.1. Customer Network Controller

   A Customer Network Controller (CNC) is responsible for
   communicating a customer's VNS requirements to the network
   provider over the CNC-MDSC Interface (CMI).  It has knowledge of
   the end-points associated with the VNS (expressed as APs), the
   service policy, and other QoS information related to the service.

   As the Customer Network Controller directly interfaces to the
   applications, it understands multiple application requirements and
   their service needs.

3.2. Multi-Domain Service Coordinator

   A Multi-Domain Service Coordinator (MDSC) is a functional block
   that implements all of the ACTN functions listed in Section 3 and
   described further in Section 4.2.  The two functions of the MDSC,
   namely multi-domain coordination and virtualization/abstraction,
   are referred to as network-related functions, while the other two
   functions, namely customer mapping/translation and virtual service
   coordination, are referred to as service-related functions.  The
   MDSC sits at the center of the ACTN model between the CNC that
   issues connectivity requests and the Provisioning Network
   Controllers (PNCs) that manage the network resources.
   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs.  The MDSC envelopes the instantiation of the right
   technology and network control to meet business criteria.  In
   essence it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   In order to allow for multi-domain coordination, a 1:N
   relationship must be allowed between an MDSC and PNCs.

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers) but
   all using the resources of a common network infrastructure
   provider.

3.3. Provisioning Network Controller

   The Provisioning Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and collecting information about the topology (either
   raw or abstracted).

   The PNC functions can be implemented as part of an SDN domain
   controller, a Network Management System (NMS), an Element
   Management System (EMS), an active PCE-based controller
   [Centralized], or any other means of dynamically controlling a set
   of nodes that implements a northbound interface (NBI) compliant
   with the ACTN specification.

   A PNC domain includes all the resources under the control of a
   single PNC.  It can be composed of different routing domains and
   administrative domains, and the resources may come from different
   layers.  The interconnection between PNC domains is illustrated in
   Figure 3.

          _______                               _______
       _(       )_                           _(       )_
      (           )                         (           )
     (             )  ------  Border ------ (            )
     (    PNC      ) |Border|  Link |Border| (    PNC    )
     (  Domain X   )-| Node |=======| Node |-( Domain Y  )
     (             )  ------         ------ (            )
      (           )                         (           )
       (_       _)                           (_       _)
         (_____)                               (_____)

                   Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   providers due to security and policy concerns.  In addition, some
   networks may operate a control plane, and as such it is not
   practical for the customer to directly interface with network
   elements.  Therefore, the network has to provide open,
   programmable interfaces, through which customer applications can
   create, replace, and modify virtual network resources and services
   in an interactive, flexible, and dynamic fashion while having no
   impact on other customers.

   Three interfaces exist in the ACTN architecture as shown in
   Figure 2.

   . CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC
     and an MDSC.  The CMI is a business boundary between customer
     and network provider.  It is used to request a VNS for an
     application.  All service-related information is conveyed over
     this interface (such as the VNS type, topology, bandwidth, and
     service constraints).  Most of the information over this
     interface is technology agnostic (the customer is unaware of the
     network technologies used to deliver the service), but there are
     some cases (e.g., access link configuration) where it is
     necessary to specify technology-specific details.

   . MPI: The MDSC-PNC Interface (MPI) is an interface between an
     MDSC and a PNC.  It communicates requests for new connectivity
     or for bandwidth changes in the physical network.  In multi-
     domain environments, the MDSC needs to communicate with multiple
     PNCs, each responsible for control of a domain.  The MPI
     presents an abstracted topology to the MDSC, hiding technology-
     specific aspects of the network and hiding topology according to
     policy.

   . SBI: The Southbound Interface (SBI) is out of scope of ACTN.
     Many different SBIs have been defined for different
     environments, technologies, standards organizations, and
     vendors.  It is shown in Figure 2 for reference only.
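   The division of labor between the three controller types can be
   illustrated with a short sketch.  The Python below is purely
   illustrative (the class and method names are invented for this
   document; the actual CMI and MPI are realized by protocols and
   YANG models, not by this API): a CNC submits a VNS request over
   the CMI, and the MDSC coordinates the PNC of each domain over the
   MPI.

      from typing import Dict, List, Tuple

      class Pnc:
          """A per-domain controller reached over the MPI (stub)."""
          def __init__(self, abstract_topology: dict):
              self._topo = abstract_topology

          def get_abstract_topology(self) -> dict:
              # What is exposed here is governed by abstraction policy
              # (see Section 5); internals of the domain stay hidden.
              return self._topo

          def provision_path(self, src: str, dst: str, gbps: float) -> str:
              return f"tunnel({src}->{dst}@{gbps}G)"

      class Mdsc:
          """Coordinates several PNC domains; serves CNC requests (CMI)."""
          def __init__(self, pncs: Dict[str, Pnc]):
              self.pncs = pncs

          def create_vns(self, segments: List[Tuple[str, str, str]],
                         gbps: float) -> List[str]:
              # 'segments' is a pre-computed domain sequence of
              # (domain, ingress, egress) triples.  Domain sequence
              # computation itself is elided in this sketch.
              return [self.pncs[d].provision_path(s, t, gbps)
                      for d, s, t in segments]

      mdsc = Mdsc({"X": Pnc({}), "Y": Pnc({})})
      print(mdsc.create_vns([("X", "PE1", "BrdrX.2"),
                             ("Y", "BrdrY.2", "PE2")], gbps=1.0))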
4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, or putting together
   different layers and technologies in the network.  In the case
   where there is a hierarchy of MDSCs, we introduce the terms
   higher-level MDSC (MDSC-H) and lower-level MDSC (MDSC-L).  The
   interface between them is a recursion of the MPI.  An
   implementation of an MDSC-H makes provisioning requests as normal
   using the MPI, but an MDSC-L must be able to receive requests as
   normal at the CMI and also at the MPI.  The hierarchy of MDSCs can
   be seen in Figure 4.

   Another implementation choice could use one MDSC-L for all the
   PNCs related to a given technology (e.g., IP/MPLS), a different
   MDSC-L for the PNCs related to another technology (e.g., OTN/WDM),
   and an MDSC-H to coordinate them.

                    +--------+
                    |  CNC   |
                    +--------+
                         |            +-----+
                         | CMI        | CNC |
                   +----------+       +-----+
            -------|  MDSC-H  |----      |
            |      +----------+   |      | CMI
        MPI |                 MPI |      |
            |                     |      |
       +---------+            +---------+
       | MDSC-L  |            | MDSC-L  |
       +---------+            +---------+
     MPI |     |                |     |
         |     |                |     |
       -----   -----          -----   -----
      | PNC | | PNC |        | PNC | | PNC |
       -----   -----          -----   -----

                  Figure 4: MDSC Hierarchy

4.2. Functional Split of MDSC Functions in Orchestrators

   An implementation choice could separate the MDSC functions into
   two groups, one group for service-related functions and the other
   for network-related functions.  This enables the implementation of
   a service orchestrator that provides the service-related functions
   of the MDSC and a network orchestrator that provides the network-
   related functions of the MDSC.  This split is consistent with the
   YANG service model architecture described in [Service-YANG].
   Figure 5 depicts this and shows how the ACTN interfaces may map to
   YANG models.
           +--------------------+
           |      Customer      |
           |      +-----+       |
           |      | CNC |       |
           |      +-----+       |
           +--------------------+
              CMI |  Customer Service Model
                  |
           +---------------------------------------+
           |                          Service      |
     ******|***********************   Orchestrator |
     * MDSC|  +-----------------+  *                |
     *     |  | Service-related |  *                |
     *     |  |    Functions    |  *                |
     *     |  +-----------------+  *                |
     *     +-----------------------*----------------+
     *     *       | Service Delivery Model
     *     *       |
     *     +-----------------------*----------------+
     *     |                       *  Network       |
     *     |  +-----------------+  *  Orchestrator  |
     *     |  | Network-related |  *                |
     *     |  |    Functions    |  *                |
     *     |  +-----------------+  *                |
     ******|************************                |
           +---------------------------------------+
              MPI |  Network Configuration Model
                  |
           +------------------------+
           |  Domain                |
           |  +------+  Controller  |
           |  | PNC  |              |
           |  +------+              |
           +------------------------+
              SBI |  Device Configuration Model
                  |
              +--------+
              | Device |
              +--------+

    Figure 5: ACTN Architecture in the Context of the YANG Service
                               Models

5. Topology Abstraction Methods

   Topology abstraction is described in [RFC7926].  This section
   discusses topology abstraction factors, types, and their context
   in the ACTN architecture.

   Abstraction in ACTN is performed by the PNC when presenting
   available topology to the MDSC, or by an MDSC-L when presenting
   topology to an MDSC-H.  This function is different from the
   creation of a VN (and particularly a Type 2 VN), which is not
   abstraction but construction of virtual resources.

5.1. Abstraction Factors

   As discussed in [RFC7926], abstraction is tied to the policy of
   the networks.  For instance, per an operational policy, the PNC
   would not provide any technology-specific details (e.g., optical
   parameters for WSON) in the abstract topology it provides to the
   MDSC.

   There are many factors that may impact the choice of abstraction:

   - Abstraction depends on the nature of the underlying domain
     networks.  For instance, packet networks may be abstracted with
     fine granularity, while abstraction of optical networks depends
     on the switching units (such as wavelengths) and the end-to-end
     continuity and cross-connect limitations within the network.

   - Abstraction also depends on the capability of the PNCs.  As
     abstraction requires hiding details of the underlying network
     resources, the PNC's capability to run algorithms impacts the
     feasibility of abstraction.  Some PNCs may not have the ability
     to abstract native topology, while other PNCs may have the
     ability to use sophisticated algorithms.

   - Abstraction is a tool that can improve scalability.  Where the
     native network resource information is of large size, there is
     a specific scaling benefit to abstraction.

   - The proper abstraction level may depend on the frequency of
     topology updates, and vice versa.

   - The nature of the MDSC's support for technology-specific
     parameters impacts the degree/level of abstraction.  If the
     MDSC is not capable of handling such parameters, then a higher
     level of abstraction is needed.

   - In some cases, the PNC is required to hide key internal
     topological data from the MDSC.  Such confidentiality can be
     achieved through abstraction.

5.2. Abstraction Types

   This section defines the following three types of topology
   abstraction:

   . Native/White Topology (Section 5.2.1)
   . Black Topology (Section 5.2.2)
   . Grey Topology (Section 5.2.3)
5.2.1. Native/White Topology

   This is a case where the PNC provides the actual network topology
   to the MDSC without any hiding or filtering of information; that
   is, no abstraction is performed.  In this case, the MDSC has the
   full knowledge of the underlying network topology and can operate
   on it directly.

5.2.2. Black Topology

   A black topology replaces a full network with a minimal
   representation of the edge-to-edge topology without disclosing any
   node internal connectivity information.  The entire domain network
   may be abstracted as a single abstract node with the network's
   access/egress links appearing as the ports to the abstract node
   and the implication that any port can be 'cross-connected' to any
   other.  Figure 6 depicts a native topology with the corresponding
   black topology with one virtual node and inter-domain links.  In
   this case, the MDSC has to make a provisioning request to the PNCs
   to establish the port-to-port connection.  If there is a large
   number of inter-connected domains, this abstraction method may
   impose a heavy coordination load at the MDSC level in order to
   find an optimal end-to-end path, since the abstraction hides so
   much information that it is not possible to determine whether an
   end-to-end path is feasible without asking each PNC to set up each
   path fragment.  For this reason, the MPI might need to be enhanced
   to allow the PNCs to be queried for the practicality and
   characteristics of paths across the abstract node.

      .....................................
      :            PNC Domain             :
      :   +--+    +--+    +--+    +--+    :
    ------+  +----+  +----+  +----+  +------
      :   ++-+    ++-+    +-++    +-++    :
      :    |       |        |       |     :
      :    |       |        |       |     :
      :    |       |        |       |     :
      :    |       |        |       |     :
      :   ++-+    ++-+    +-++    +-++    :
    ------+  +----+  +----+  +----+  +------
      :   +--+    +--+    +--+    +--+    :
      :...................................:

                  +----------+
               ---+          +---
                  | Abstract |
                  |   Node   |
               ---+          +---
                  +----------+

     Figure 6: Native Topology with Corresponding Black Topology
                   Expressed as an Abstract Node
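   The effect of this kind of abstraction can be shown in a few lines
   of code.  The following Python sketch is illustrative only (the
   function name and topology encoding are invented for this
   document): it collapses a native domain topology into a single
   abstract node whose ports are the domain's border nodes, hiding
   everything else.

      def black_topology(native_links, border_nodes):
          """Export a domain as one abstract node.

          native_links: iterable of (node_a, node_b) internal TE links.
          border_nodes: nodes that terminate access/inter-domain links.
          The internal links are deliberately discarded: the only
          implication advertised is that any port may be cross-
          connected to any other.
          """
          return {"node": "abstract-node-1",
                  "ports": sorted(border_nodes)}

      links = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]
      print(black_topology(links, {"A", "D"}))
      # -> {'node': 'abstract-node-1', 'ports': ['A', 'D']}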
5.2.3. Grey Topology

   A grey topology represents a compromise between black and white
   topologies from a granularity point of view.  In this case, the
   PNC exposes an abstract topology that comprises nodes and links.
   The nodes and links may be physical or abstract, while the
   abstract topology represents the potential of connectivity across
   the PNC domain.

   Two modes of grey topology are identified:

   . In a Type A grey topology, border nodes are connected by a full
     mesh of TE links (see Figure 7).

   . In a Type B grey topology, border nodes are connected over a
     more detailed network comprising internal abstract nodes and
     abstracted links.  This mode of abstraction supplies the MDSC
     with more information about the internals of the PNC domain and
     allows it to make more informed choices about how to route
     connectivity over the underlying network.

      .....................................
      :            PNC Domain             :
      :   +--+    +--+    +--+    +--+    :
    ------+  +----+  +----+  +----+  +------
      :   ++-+    ++-+    +-++    +-++    :
      :    |       |        |       |     :
      :    |       |        |       |     :
      :    |       |        |       |     :
      :    |       |        |       |     :
      :   ++-+    ++-+    +-++    +-++    :
    ------+  +----+  +----+  +----+  +------
      :   +--+    +--+    +--+    +--+    :
      :...................................:

             ....................
             : Abstract Network :
             :                  :
             :   +--+    +--+   :
           ------+  +----+  +------
             :   ++-+    +-++   :
             :    | \    / |    :
             :    |  \  /  |    :
             :    |   \/   |    :
             :    |   /\   |    :
             :    |  /  \  |    :
             :    | /    \ |    :
             :   ++-+    +-++   :
           ------+  +----+  +------
             :   +--+    +--+   :
             :..................:

      Figure 7: Native Topology with Corresponding Grey Topology

5.3. Methods of Building Grey Topologies

   This section discusses two different methods of building a grey
   topology:

   . Automatic generation of abstract topology by configuration
     (Section 5.3.1)

   . On-demand generation of supplementary topology via path
     computation request/reply (Section 5.3.2)

5.3.1. Automatic Generation of Abstract Topology by Configuration

   Automatic generation is based on the abstraction/summarization of
   the whole domain by the PNC and its advertisement on the MPI.  The
   level of abstraction can be decided based on PNC configuration
   parameters (e.g., "provide the potential connectivity between any
   PE and any ASBR in an MPLS-TE network").

   Note that the configuration parameters for this abstract topology
   can include available bandwidth, latency, or any combination of
   defined parameters.  How to generate such information is beyond
   the scope of this document.

   This abstract topology may need to be periodically or
   incrementally updated when there is a change in the underlying
   network or in the use of the network resources that makes
   connectivity more or less available.

5.3.2. On-demand Generation of Supplementary Topology via Path
       Compute Request/Reply

   While abstract topology is generated and updated automatically by
   configuration as explained in Section 5.3.1, additional
   supplementary topology may be obtained by the MDSC via a path
   compute request/reply mechanism.

   The abstract topology advertisements from PNCs give the MDSC the
   border node/link information for each domain.  Under this
   scenario, when the MDSC needs to create a new VN, the MDSC can
   issue path computation requests to PNCs with constraints matching
   the VN request as described in [ACTN-YANG].  An example is
   provided in Figure 8, where the MDSC is creating a P2P VN between
   AP1 and AP2.  The MDSC could use two different inter-domain links
   to get from Domain X to Domain Y, but in order to choose the best
   end-to-end path it needs to know what Domains X and Y can offer in
   terms of connectivity and constraints between the PE nodes and the
   border nodes.

            -------                   --------
          (       )                 (        )
         - BrdrX.1-------------------BrdrY.1 -
       (+---+      )               (      +---+)
     -+--(|PE1| Dom.X )            ( Dom.Y |PE2|)--+-
      |  (+---+      )             (      +---+)  |
     AP1  - BrdrX.2-------------------BrdrY.2 -  AP2
          (       )                 (        )
            -------                   --------

                Figure 8: A Multi-Domain Example

   The MDSC issues a path computation request to PNC.X asking for
   potential connectivity between PE1 and border node BrdrX.1 and
   between PE1 and BrdrX.2, with related objective functions and TE
   metric constraints.  A similar request for connectivity from the
   border nodes in Domain Y to PE2 will be issued to PNC.Y.  The MDSC
   merges the results to compute the optimal end-to-end path,
   including the inter-domain links.  The MDSC can use the result of
   this computation to request the PNCs to provision the underlying
   networks, and the MDSC can then use the end-to-end path as a
   virtual link in the VN it delivers to the customer.
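   The merge step can be illustrated with a small worked example.
   The Python sketch below is illustrative only (the cost values and
   data layout are invented for this document): it combines the
   per-domain costs returned by the path-compute replies with the
   inter-domain link costs and selects the cheapest end-to-end
   candidate.

      # Hypothetical TE metrics returned by PNC.X and PNC.Y in their
      # path-compute replies (see Figure 8).
      domain_x = {("PE1", "BrdrX.1"): 10, ("PE1", "BrdrX.2"): 14}
      domain_y = {("BrdrY.1", "PE2"): 12, ("BrdrY.2", "PE2"): 7}
      inter_domain = {("BrdrX.1", "BrdrY.1"): 5,
                      ("BrdrX.2", "BrdrY.2"): 5}

      def best_end_to_end():
          candidates = []
          for (bx, by), idc in inter_domain.items():
              cx = domain_x.get(("PE1", bx))
              cy = domain_y.get((by, "PE2"))
              if cx is not None and cy is not None:
                  candidates.append((cx + idc + cy,
                                     ["PE1", bx, by, "PE2"]))
          # Cheapest total metric wins; the MDSC would then ask each
          # PNC to provision its fragment of this path.
          return min(candidates)

      print(best_end_to_end())
      # -> (26, ['PE1', 'BrdrX.2', 'BrdrY.2', 'PE2'])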
5.4. Hierarchical Topology Abstraction Example

   This section illustrates how topology abstraction operates in
   different levels of a hierarchy of MDSCs, as shown in Figure 9.

                              +-----+
                              | CNC |  CNC wants to create a VN
                              +-----+  between CE A and CE B
                                 |
                                 |
                     +-----------------------+
                     |        MDSC-H         |
                     +-----------------------+
                           /           \
                          /             \
                  +---------+        +---------+
                  | MDSC-L1 |        | MDSC-L2 |
                  +---------+        +---------+
                    /     \            /     \
                   /       \          /       \
               +----+    +----+    +----+    +----+
     CE A o----|PNC1|    |PNC2|    |PNC3|    |PNC4|----o CE B
               +----+    +----+    +----+    +----+

   Virtual Network Delivered to CNC

     CE A o==============o CE B

   Topology operated on by MDSC-H

     CE A o----o==o==o===o----o CE B

   Topology operated on by MDSC-L1    Topology operated on by MDSC-L2

              _       _                        _       _
             ( )     ( )                      ( )     ( )
             ( )     ( )                      ( )     ( )
   CE A o--(o---o)==(o---o)==Dom.3    Dom.2==(o---o)==(o---o)--o CE B
             ( )     ( )                      ( )     ( )
             (_)     (_)                      (_)     (_)

   Actual Topology

         ___           ___           ___           ___
        (   )         (   )         (   )         (   )
       (  o  )       (  o  )       ( o--o )      (  o  )
       ( / \ )       (  |\ )       ( |  | )      ( / \ )
   CE A o--(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)--o CE B
       ( \ / )       (  | |/ )     ( |  | )      ( \ / )
       (  o  )       ( o-o  )      ( o--o )      (  o  )
        (___)         (___)         (___)         (___)

       Domain 1      Domain 2      Domain 3      Domain 4

   Where
      o   is a node
     ---  is a link
     ===  is a border link

     Figure 9: Illustration of Hierarchical Topology Abstraction

   In the example depicted in Figure 9, there are four domains under
   control of PNCs PNC1, PNC2, PNC3, and PNC4.  MDSC-L1 controls PNC1
   and PNC2, while MDSC-L2 controls PNC3 and PNC4.  Each of the PNCs
   provides a grey topology abstraction that presents only border
   nodes and links across and outside the domain.  The abstract
   topology that MDSC-L1 operates on is a combination of the two
   topologies from PNC1 and PNC2.  Likewise, the abstract topology
   that MDSC-L2 operates on is shown in Figure 9.  Both MDSC-L1 and
   MDSC-L2 provide a black topology abstraction to MDSC-H in which
   each PNC domain is presented as a single virtual node.  MDSC-H
   combines these two topologies to create the abstract topology on
   which it operates.  MDSC-H sees the four domain networks as four
   virtual nodes connected via virtual links.
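   The recursion involved here is the same abstraction applied twice:
   each MDSC-L merges the grey topologies of its PNCs, and each
   domain is then re-exported upward as a single virtual node.  As a
   purely illustrative sketch (function names and encodings are
   invented for this document):

      def merge_views(pnc_topologies):
          """What an MDSC-L operates on: the union of the grey
          topologies received from its subtended PNCs."""
          nodes, links = set(), set()
          for topo in pnc_topologies:
              nodes |= set(topo["nodes"])
              links |= set(map(frozenset, topo["links"]))
          return {"nodes": nodes, "links": links}

      def export_black(domain_name, border_nodes):
          """What an MDSC-L advertises to the MDSC-H: one virtual
          node per PNC domain (compare Section 5.2.2)."""
          return {"node": domain_name, "ports": sorted(border_nodes)}

      # MDSC-L1 merges PNC1 and PNC2 and exports two virtual nodes;
      # MDSC-H would combine these with MDSC-L2's exports.
      view_l1 = merge_views([
          {"nodes": {"P1", "P2"}, "links": [("P1", "P2")]},
          {"nodes": {"P3", "P4"}, "links": [("P3", "P4")]},
      ])
      uplevel = [export_black("Domain1", {"P1", "P2"}),
                 export_black("Domain2", {"P3", "P4"})]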
5.5. VN Recursion with Network Layers

   In some cases, the VN supplied to a customer may be built using
   resources from different technology layers operated by different
   providers.  For example, one provider may run a packet TE network
   and use optical connectivity provided by another provider.

   As shown in Figure 10, a customer asks for end-to-end connectivity
   between CE A and CE B, a virtual network.  The customer's CNC
   makes a request to Provider 1's MDSC.  The MDSC works out which
   network resources need to be configured and sends instructions to
   the appropriate PNCs.  However, the link between Q and R is a
   virtual link supplied by Provider 2: Provider 1 is a customer of
   Provider 2.

   To support this, Provider 1 has a CNC that communicates to
   Provider 2's MDSC.  Note that Provider 1's CNC in Figure 10 is a
   functional component that does not dictate implementation: it may
   be embedded in a PNC.

   Virtual     CE A o===============================o CE B
   Network
                          -----     CNC wants to create a VN
   Customer              | CNC |    between CE A and CE B
                          -----
                            :
   ***********************************************
                            :
   Provider 1    ---------------------------
                |           MDSC           |
                 ---------------------------
                    :         :         :
                    :         :         :
                 -----   -------------   -----
                | PNC | |     PNC     | | PNC |
                 -----   -------------   -----
                   :      :    :    :      :
   Higher          v      v    :    v      v
   Layer    CE A o---P-----Q===========R-----S---o CE B
   Network                |    :    |
                          |    :    |
                          |  -----  |
                          | | CNC | |
                          |  -----  |
                          |    :    |
   ***********************************************
                          |    :    |
   Provider 2             |  ------ |
                          | | MDSC ||
                          |  ------ |
                          |    :    |
                          | ------- |
                          || PNC |  |
                          | ------- |
                           \ : : : /
   Lower                    \v v v/
   Layer                     X--Y--Z
   Network

           Figure 10: VN Recursion with Network Layers

6. Access Points and Virtual Network Access Points

   In order to identify the connections between the customer's sites
   and the TE networks, and to scope the connectivity requested in
   the VNS, the CNC and the MDSC refer to the connections using the
   Access Point (AP) construct shown in Figure 11.

                        -------------
                      (               )
                     -                 -
          +---+  X  (                   )  Z  +---+
          |CE1|---+---(                 )---+---|CE2|
          +---+   |  (                   )  |  +---+
                 AP1  -                 -  AP2
                      (               )
                        -------------

                 Figure 11: Customer View of APs

   Let's take as an example the scenario shown in Figure 11.  CE1 is
   connected to the network via a 10 Gbps link and CE2 via a 40 Gbps
   link.  Before the creation of any VN between AP1 and AP2, the
   customer view can be summarized as shown in Table 1.

            +----------+------------------------+
            |End Point | Access Link Bandwidth  |
      +-----+----------+----------+-------------+
      |AP id| CE,port  | MaxResBw | AvailableBw |
      +-----+----------+----------+-------------+
      | AP1 |CE1,portX | 10 Gbps  | 10 Gbps     |
      +-----+----------+----------+-------------+
      | AP2 |CE2,portZ | 40 Gbps  | 40 Gbps     |
      +-----+----------+----------+-------------+

               Table 1: AP - Customer View

   On the other hand, what the provider sees is shown in Figure 12.

             -------                 -------
           (       )               (       )
          -         -             -         -
     W  (+---+       )           (       +---+)  Y
    -+---(|PE1| Dom.X )---------( Dom.Y |PE2|)---+-
     |  (+---+       )           (       +---+)  |
    AP1   -         -             -         -   AP2
           (       )               (       )
             -------                 -------

              Figure 12: Provider View of the AP

   This results in the summarization shown in Table 2.

            +----------+------------------------+
            |End Point | Access Link Bandwidth  |
      +-----+----------+----------+-------------+
      |AP id| PE,port  | MaxResBw | AvailableBw |
      +-----+----------+----------+-------------+
      | AP1 |PE1,portW | 10 Gbps  | 10 Gbps     |
      +-----+----------+----------+-------------+
      | AP2 |PE2,portY | 40 Gbps  | 40 Gbps     |
      +-----+----------+----------+-------------+

               Table 2: AP - Provider View

   A Virtual Network Access Point (VNAP) is defined as the binding
   between an AP and a given VN; it is needed to allow different VNs
   to start from the same AP.  It also allows for traffic engineering
   on the access and/or inter-domain links (e.g., keeping track of
   bandwidth allocation).  A different VNAP is created on an AP for
   each VN.

   In this simple scenario, suppose we want to create two virtual
   networks: the first with VN identifier 9 between AP1 and AP2 with
   a bandwidth of 1 Gbps, and the second with VN identifier 5, again
   between AP1 and AP2 but with a bandwidth of 2 Gbps.

   The provider view would evolve as shown in Table 3.

                +----------+------------------------+
                |End Point | Access Link/VNAP Bw    |
      +---------+----------+----------+-------------+
      |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
      +---------+----------+----------+-------------+
      |AP1      |PE1,portW | 10 Gbps  | 7 Gbps      |
      | -VNAP1.9|          |  1 Gbps  | N.A.        |
      | -VNAP1.5|          |  2 Gbps  | N.A.        |
      +---------+----------+----------+-------------+
      |AP2      |PE2,portY | 40 Gbps  | 37 Gbps     |
      | -VNAP2.9|          |  1 Gbps  | N.A.        |
      | -VNAP2.5|          |  2 Gbps  | N.A.        |
      +---------+----------+----------+-------------+

      Table 3: AP and VNAP - Provider View after VNS Creation
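   The bookkeeping behind Table 3 is straightforward and can be
   sketched in a few lines.  The Python below is illustrative only
   (class and attribute names are invented for this document): each
   AP tracks the VNAPs allocated on it, and the available bandwidth
   is the access link capacity minus the VNAP reservations.

      class AccessPoint:
          def __init__(self, ap_id, pe_port, max_res_gbps):
              self.ap_id = ap_id
              self.pe_port = pe_port
              self.max_res_gbps = max_res_gbps
              self.vnaps = {}          # VN identifier -> reserved Gbps

          @property
          def available_gbps(self):
              return self.max_res_gbps - sum(self.vnaps.values())

          def allocate_vnap(self, vn_id, gbps):
              # A distinct VNAP is created on this AP for each VN.
              if gbps > self.available_gbps:
                  raise ValueError("insufficient access link bandwidth")
              self.vnaps[vn_id] = gbps

      ap1 = AccessPoint("AP1", ("PE1", "portW"), max_res_gbps=10)
      ap1.allocate_vnap(vn_id=9, gbps=1)    # VNAP1.9
      ap1.allocate_vnap(vn_id=5, gbps=2)    # VNAP1.5
      print(ap1.available_gbps)             # -> 7, as in Table 3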
6.1. Dual-Homing Scenario

   Often there is a dual-homing relationship between a CE and a pair
   of PEs.  This case needs to be supported by the definition of VNs,
   APs, and VNAPs.  Suppose CE1 is connected to two different PEs in
   the operator domain via AP1 and AP2, and that the customer needs
   5 Gbps of bandwidth between CE1 and CE2.  This is shown in
   Figure 13.

                       ____________
            AP1      (            )      AP3
          -------(PE1)            (PE3)-------
        W /     (                  )     \ X
     +---+/     (                  )     \+---+
     |CE1|      (                  )      |CE2|
     +---+\     (                  )     /+---+
        Y \     (                  )     / Z
          -------(PE2)            (PE4)-------
            AP2      (____________)

                Figure 13: Dual-Homing Scenario

   In this case, the customer will request a VN between AP1, AP2,
   and AP3, specifying a dual-homing relationship between AP1 and
   AP2.  As a consequence, no traffic will flow between AP1 and AP2.
   The dual-homing relationship would then be mapped against the
   VNAPs (since other independent VNs might have AP1 and AP2 as end
   points).

   The customer view is shown in Table 4.

                +----------+------------------------+
                |End Point | Access Link/VNAP Bw    |
      +---------+----------+----------+-------------+-----------+
      |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
      +---------+----------+----------+-------------+-----------+
      |AP1      |CE1,portW | 10 Gbps  | 5 Gbps      |           |
      | -VNAP1.9|          |  5 Gbps  | N.A.        | VNAP2.9   |
      +---------+----------+----------+-------------+-----------+
      |AP2      |CE1,portY | 40 Gbps  | 35 Gbps     |           |
      | -VNAP2.9|          |  5 Gbps  | N.A.        | VNAP1.9   |
      +---------+----------+----------+-------------+-----------+
      |AP3      |CE2,portX | 40 Gbps  | 35 Gbps     |           |
      | -VNAP3.9|          |  5 Gbps  | N.A.        | NONE      |
      +---------+----------+----------+-------------+-----------+

        Table 4: Dual-Homing - Customer View after VN Creation

7. Advanced ACTN Application: Multi-Destination Service

   A further advanced application of ACTN is in the case of Data
   Center selection, where the customer requires the Data Center
   selection to be based on the network status; this is referred to
   as Multi-Destination in [ACTN-REQ].  In terms of ACTN, a CNC could
   request a connectivity service (virtual network) between a set of
   source APs and destination APs and leave it up to the network
   (MDSC) to decide which source and destination access points should
   be used to set up the connectivity service (virtual network).  The
   candidate list of source and destination APs is decided by a CNC
   (or an entity outside of ACTN) based on certain factors which are
   outside the scope of ACTN.
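   The selection logic left to the MDSC can be sketched as follows.
   The Python below is illustrative only (the function, the cost
   values, and the callback are invented for this document): given a
   source AP and the candidate destination APs, the MDSC picks the
   destination with the cheapest feasible end-to-end path.

      def select_destination(path_cost, src_ap, candidate_dst_aps):
          # path_cost(src, dst) returns a TE metric computed by the
          # MDSC across its PNCs, or None if no path satisfies the
          # constraints of the requested service.
          feasible = [(cost, ap) for ap in candidate_dst_aps
                      if (cost := path_cost(src_ap, ap)) is not None]
          if not feasible:
              raise RuntimeError("no destination AP is reachable")
          return min(feasible)[1]

      # Illustrative costs for AP2 (DC-A), AP3 (DC-B), AP4 (DC-C):
      costs = {("AP1", "AP2"): 20, ("AP1", "AP3"): 35,
               ("AP1", "AP4"): 25}
      best = select_destination(lambda s, d: costs.get((s, d)),
                                "AP1", ["AP2", "AP3", "AP4"])
      print(best)   # -> "AP2", i.e., DC-A is selected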
   Based on the AP selection as determined and returned by the
   network (MDSC), the CNC (or an entity outside of ACTN) should
   further take care of any subsequent actions such as orchestration
   or service setup requirements.  These further actions are outside
   the scope of ACTN.

   Consider a case as shown in Figure 14, where three data centers
   are available, but the customer requires the data center selection
   to be based on the network status and the connectivity service
   setup between AP1 (CE1) and one of the destination APs (AP2
   (DC-A), AP3 (DC-B), and AP4 (DC-C)).  The MDSC (in coordination
   with PNCs) would select the best destination AP based on the
   constraints, optimization criteria, policies, etc., and set up the
   connectivity service (virtual network).

                 -------            -------
               (       )          (       )
              -         -        -         -
    +---+    (           )      (           )    +----+
    |CE1|---+--( Domain X )----( Domain Y )---+---|DC-A|
    +---+   |  (           )    (           )  |  +----+
           AP1  -         -      -         -  AP2
               (       )          (       )
                ---+---            ---+---
                   |                  |
               AP3-+              AP4-+
                   |                  |
                +----+             +----+
                |DC-B|             |DC-C|
                +----+             +----+

      Figure 14: End-Point Selection Based on Network Status

7.1. Pre-Planned End Point Migration

   Furthermore, in the case of Data Center selection, the customer
   could request that a backup DC be selected such that, in case of
   failure, another DC site could provide hot stand-by protection.
   As shown in Figure 15, DC-C is selected as a backup for DC-A.
   Thus, the VN should be set up by the MDSC to include primary
   connectivity between AP1 (CE1) and AP2 (DC-A) as well as
   protection connectivity between AP1 (CE1) and AP4 (DC-C).

                 -------            -------
               (       )          (       )
              -         -        -         -
    +---+    (           )      (           )    +----+
    |CE1|---+--( Domain X )----( Domain Y )---+---|DC-A|
    +---+   |  (           )    (           )  |  +----+
           AP1  -         -      -         -  AP2    |
               (       )          (       )          |
                ---+---            ---+---           |
                   |                  |              |
               AP3-+              AP4-+         HOT STANDBY
                   |                  |              |
                +----+             +----+            |
                |DC-D|             |DC-C|<-----------+
                +----+             +----+

           Figure 15: Pre-planned End-Point Migration

7.2. On the Fly End-Point Migration

   Compared to pre-planned end point migration, on-the-fly end point
   selection is dynamic in that the migration is not pre-planned but
   decided based on network conditions.  Under this scenario, the
   MDSC would monitor the network (based on the VN SLA) and notify
   the CNC in cases where some other destination AP would be a better
   choice based on the network parameters.  The CNC should then
   instruct the MDSC if and when it is suitable to update the VN with
   the new AP.
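   A minimal sketch of this monitor-and-notify loop is shown below.
   The Python is illustrative only (the metric callback, threshold,
   and notification format are invented for this document); note that
   the MDSC only notifies, and the decision to migrate remains with
   the CNC.

      def check_vn_endpoints(metric, current_ap, candidate_aps,
                             improvement_threshold, notify_cnc):
          # metric(ap) returns the current performance figure the VN
          # SLA is monitored against (lower is better in this sketch).
          best_ap = min(candidate_aps, key=metric)
          if (best_ap != current_ap and
                  metric(current_ap) - metric(best_ap)
                  > improvement_threshold):
              # The CNC may later instruct the MDSC to update the VN.
              notify_cnc({"event": "better-endpoint-available",
                          "current": current_ap,
                          "suggested": best_ap})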
. Management and monitoring of ACTN components
. Configuration of policy to be applied across the ACTN system

The ACTN framework and interfaces are defined to enable traffic
engineering for virtual networks. Network operators may have other
Operations, Administration, and Maintenance (OAM) tasks for service
fulfillment, optimization, and assurance beyond traffic engineering.
The realization of OAM beyond abstraction and control of traffic
engineered networks is not considered in this document.

8.1. Policy

Policy is an important aspect of ACTN control and management.
Policies are used via the components and interfaces, during
deployment of the service, to ensure that the service is compliant
with agreed policy factors and variations (often described in SLAs).
These include, but are not limited to: connectivity, bandwidth,
geographical transit, technology selection, security, resilience,
and economic cost.

Depending on the deployment of the ACTN architecture, some policies
may have local or global significance. That is, certain policies
may be ACTN component specific in scope, while others may have
broader scope and interact with multiple ACTN components. Two
examples are provided below:

. A local policy might limit the number, type, size, and
  scheduling of virtual network services a customer may request
  via its CNC. This type of policy would be implemented locally
  on the MDSC.

. A global policy might constrain certain customer types (or
  specific customer applications) to only use certain MDSCs and
  be restricted to physical network types managed by the PNCs. A
  global policy agent would govern these types of policies.

The objective of this section is to discuss the applicability of
ACTN policy: requirements, components, interfaces, and examples.
This section provides an analysis and does not mandate a specific
method for enforcing policy or the type of policy agent that would
be responsible for propagating policies across the ACTN components.
It does highlight examples of how policy may be applied in the
context of ACTN, but it is expected that further discussion in an
applicability or solution-specific document will be required.

8.2. Policy Applied to the Customer Network Controller

A virtual network service for a customer application will be
requested by the CNC. The request will reflect the application
requirements and specific service needs, including bandwidth,
traffic type, and survivability. Furthermore, application access and
the type of virtual network service requested by the CNC will need
to adhere to specific access control policies.

8.3. Policy Applied to the Multi Domain Service Coordinator

A key objective of the MDSC is to support the customer's expression
of the application connectivity request via its CNC as a set of
desired business needs; therefore, policy will play an important
role.

Once authorized, the virtual network service will be instantiated
via the CNC-MDSC Interface (CMI); it will reflect the customer
application and connectivity requirements and specific service
transport needs. The CNC and the MDSC components will have agreed on
connectivity end-points; use of these end-points should be defined
as a policy expression when setting up or augmenting virtual network
services, as sketched below.
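As an informal illustration (not part of the ACTN architecture),
such a policy expression might be as simple as a registry lookup.
The structure and names below (PERMITTED_ENDPOINTS,
authorize_vn_request, the CNC identifiers) are purely hypothetical.

   # Hypothetical sketch of an end-point policy check on the CMI.
   # ACTN does not define this data structure; it only requires that
   # permissible end-points be expressible as policy.

   # Registry kept by the MDSC: which APs each CNC may use in VN
   # setup or augmentation requests.
   PERMITTED_ENDPOINTS = {
       "cnc-blue": {"AP1", "AP2"},
       "cnc-red":  {"AP1", "AP3"},
   }

   def authorize_vn_request(cnc_id, requested_endpoints):
       """Reject a VN setup/augmentation using unregistered end-points."""
       allowed = PERMITTED_ENDPOINTS.get(cnc_id, set())
       violations = set(requested_endpoints) - allowed
       if violations:
           raise PermissionError(
               f"{cnc_id} may not use end-points: {sorted(violations)}")
       return True

   authorize_vn_request("cnc-blue", ["AP1", "AP2"])  # accepted
   # authorize_vn_request("cnc-red", ["AP2"])        # would raise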
Ensuring that permissible end-points are defined for CNCs and
applications will require the MDSC to maintain a registry of
permissible connection points for CNCs and application types.

Conflicts may occur when virtual network service optimization
criteria are in competition. For example, to meet objectives for
service reachability a request may require an interconnection point
between multiple physical networks; however, this might break a
confidentiality policy requirement of a specific type of end-to-end
service. Thus, an MDSC may have to balance a number of constraints
on a service request and between different requested services. It
may also have to balance requested services with operational norms
for the underlying physical networks. This balancing may be resolved
using configured policy and using hard and soft policy constraints.

8.4. Policy Applied to the Provisioning Network Controller

The PNC is responsible for configuring the network elements,
monitoring physical network resources, and exposing connectivity
(direct or abstracted) to the MDSC. It is therefore expected that
policy will dictate what connectivity information will be exported
from the PNC to the MDSC via the MDSC-PNC Interface (MPI).

Policy interactions may arise when a PNC determines that it cannot
compute a requested path from the MDSC, or notices that (per a
locally configured policy) the network is low on resources (for
example, the capacity of key links becomes exhausted). In either
case, the PNC will be required to notify the MDSC, which may (again
per policy) act to construct a virtual network service across
another physical network topology.

Furthermore, additional forms of policy-based resource management
will be required to provide virtual network service performance,
security, and resilience guarantees. This will likely be implemented
via a local policy agent and additional protocol methods.

9. Security Considerations

The ACTN framework described in this document defines key components
and interfaces for managed traffic engineered networks. Securing the
request and control of resources, confidentiality of the
information, and availability of function should all be critical
security considerations when deploying and operating ACTN platforms.

Several distributed ACTN functional components are required, and
implementations should consider encrypting data that flows between
components, especially when they are implemented at remote nodes and
regardless of whether these data flows are on external or internal
network interfaces.

The ACTN security discussion is further split into two specific
categories described in the following sub-sections:

. Interface between the Customer Network Controller and Multi
  Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

. Interface between the Multi Domain Service Coordinator and
  Provisioning Network Controller (PNC), MDSC-PNC Interface (MPI)

From a security and reliability perspective, ACTN may encounter many
risks, such as malicious attacks and rogue elements attempting to
connect to various ACTN components. Furthermore, some ACTN
components represent a single point of failure and threat vector and
must also manage policy conflicts and guard against eavesdropping on
communication between different ACTN components.
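As a minimal sketch of one possible mitigation, the fragment below
shows how an inter-component session (for example, an MDSC
connecting to a PNC) could be protected with mutually authenticated
TLS using Python's standard ssl module. The certificate file names,
host name, port, and message payload are illustrative assumptions;
ACTN does not mandate any particular mechanism.

   # Hedged sketch: mutually authenticated TLS for an ACTN
   # inter-component session. All file names, the host, and the
   # port are illustrative.

   import socket
   import ssl

   context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   context.load_verify_locations("actn-ca.pem")    # trust anchor for PNCs
   context.load_cert_chain("mdsc-cert.pem", "mdsc-key.pem")  # MDSC identity

   with socket.create_connection(("pnc1.example.net", 8443)) as sock:
       with context.wrap_socket(sock,
                                server_hostname="pnc1.example.net") as tls:
           # All MPI messages on this channel are now encrypted, and
           # the PNC's certificate has been validated against the CA.
           tls.sendall(b"<abstract-topology-request/>")
           reply = tls.recv(4096)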
The conclusion is that all protocols used to realize the ACTN
framework should have rich security features, and customer,
application, and network data should be stored in encrypted data
stores. Additional security risks may still exist. Therefore,
discussion and applicability of specific security functions and
protocols will be better described in documents that are use case
and environment specific.

9.1. CNC-MDSC Interface (CMI)

Data stored by the MDSC will reveal details of the virtual network
services and of which CNC and customer/application is consuming the
resource. The stored data must therefore be considered as a
candidate for encryption.

CNC access rights to an MDSC must be managed. The MDSC must allocate
resources properly, and methods to prevent policy conflicts,
resource wastage, and denial-of-service attacks on the MDSC by rogue
CNCs should also be considered.

The CMI will likely be an external protocol interface. Suitable
authentication and authorization of each CNC connecting to the MDSC
will be required, especially as these are likely to be implemented
by different organizations and on separate functional nodes. Use of
AAA-based mechanisms would also provide role-based authorization
methods so that only authorized CNCs may access the different
functions of the MDSC.

9.2. MDSC-PNC Interface (MPI)

Where the MDSC must interact with multiple (distributed) PNCs, a
PKI-based mechanism is suggested, such as building a TLS or HTTPS
connection between the MDSC and PNCs, to ensure trust between the
physical network layer control components and the MDSC.

Which MDSC a PNC exports topology information to, and the level of
detail (full or abstracted), should also be authenticated; specific
access restrictions and topology views should be configurable and/or
policy-based.

10. IANA Considerations

This document has no actions for IANA.

11. References

11.1. Informative References

[RFC2702] Awduche, D., et al., "Requirements for Traffic
          Engineering Over MPLS", RFC 2702, September 1999.

[RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
          Computation Element (PCE)-Based Architecture", RFC 4655,
          August 2006.

[RFC5654] Niven-Jenkins, B. (Ed.), Brungard, D. (Ed.), and M. Betts
          (Ed.), "Requirements of an MPLS Transport Profile", RFC
          5654, September 2009.

[RFC7149] Boucadair, M. and Jacquenet, C., "Software-Defined
          Networking: A Perspective from within a Service Provider
          Environment", RFC 7149, March 2014.

[RFC7926] Farrel, A. (Ed.), "Problem Statement and Architecture for
          Information Exchange between Interconnected Traffic-
          Engineered Networks", RFC 7926, July 2016.

[RFC3945] Mannie, E. (Ed.), "Generalized Multi-Protocol Label
          Switching (GMPLS) Architecture", RFC 3945, October 2004.

[ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
          1.1, ONF TR-521, June 2016.

[Centralized] Farrel, A., et al., "An Architecture for Use of PCE
          and PCEP in a Network with Central Control", draft-ietf-
          teas-pce-central-control, work in progress.

[Service-YANG] Lee, Y., Dhody, D., and Ceccarelli, D., "Traffic
          Engineering and Service Mapping Yang Model", draft-lee-
          teas-te-service-mapping-yang, work in progress.
[ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN
          Operation", draft-lee-teas-actn-vn-yang, work in progress.

[ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and
          Control of TE Networks", draft-ietf-teas-actn-
          requirements, work in progress.

[TE-Topo] Liu, X., et al., "YANG Data Model for TE Topologies",
          draft-ietf-teas-yang-te-topo, work in progress.

12. Contributors

Adrian Farrel
Old Dog Consulting
Email: adrian@olddog.co.uk

Italo Busi
Huawei
Email: Italo.Busi@huawei.com

Khuzema Pithewan
Infinera
Email: kpithewan@infinera.com

Michael Scharf
Nokia
Email: michael.scharf@nokia.com

Luyuan Fang
eBay
Email: luyuanf@gmail.com

Diego Lopez
Telefonica I+D
Don Ramon de la Cruz, 82
28006 Madrid, Spain
Email: diego@tid.es

Sergio Belotti
Alcatel Lucent
Via Trento, 30
Vimercate, Italy
Email: sergio.belotti@nokia.com

Daniel King
Lancaster University
Email: d.king@lancaster.ac.uk

Dhruv Dhody
Huawei Technologies
Divyashree Techno Park, Whitefield
Bangalore, Karnataka 560066
India
Email: dhruv.ietf@gmail.com

Gert Grammel
Juniper Networks
Email: ggrammel@juniper.net

Authors' Addresses

Daniele Ceccarelli
Ericsson
Torshamnsgatan 48
Stockholm, Sweden
Email: daniele.ceccarelli@ericsson.com

Young Lee
Huawei Technologies
5340 Legacy Drive
Plano, TX 75023, USA
Phone: (469)277-5838
Email: leeyoung@huawei.com

APPENDIX A - Example of MDSC and PNC Functions Integrated in a
Service/Network Orchestrator

This appendix provides an example of a possible deployment scenario
in which a Service/Network Orchestrator includes a number of
functions. In the example below, the orchestrator hosts the PNC
functions for Domain 2 as well as the MDSC functions that coordinate
the PNC1 functions (hosted in a separate domain controller) and the
PNC2 functions (co-hosted in the network orchestrator).

               Customer
               +-------------------------------+
               |    +-----+                    |
               |    | CNC |                    |
               |    +-----+                    |
               +-------|-----------------------+
                       |
      Service/Network  | CMI
      Orchestrator     |
               +-------|------------------------+
               |    +------+   MPI   +------+   |
               |    | MDSC |---------| PNC2 |   |
               |    +------+         +------+   |
               +-------|----------------|-------+
                       | MPI            |
      Domain           |                |
      Controller       |                |
               +-------|-----+          |
               |    +-----+  |          | SBI
               |    |PNC1 |  |          |
               |    +-----+  |          |
               +-------|-----+          |
                       v  SBI           v
                    -------          -------
                   (       )        (       )
                  -         -      -         -
                 (           )    (           )
                 ( Domain 1  )----( Domain 2  )
                 (           )    (           )
                  -         -      -         -
                   (       )        (       )
                    -------          -------
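To make the functional split concrete, the sketch below mirrors the
figure in code. It is only an illustration under stated assumptions:
every class, method, and identifier (Pnc, RemotePncProxy, Mdsc,
"ENNI-12", and so on) is hypothetical, and the point is simply that
the MDSC and PNC2 functions can be co-hosted in one orchestrator
process while PNC1 is reached over an external MPI.

   # Illustrative composition sketch for the deployment above; all
   # names are hypothetical and no real MPI/SBI protocol is used.

   class Pnc:
       """PNC functions for one domain (provisioning via its SBI)."""
       def __init__(self, domain):
           self.domain = domain
       def setup_segment(self, a_end, z_end):
           print(f"PNC({self.domain}): provisioning {a_end}->{z_end} via SBI")

   class RemotePncProxy:
       """Stand-in for PNC1, hosted in a separate domain controller
       and reached over an external MPI session (not implemented)."""
       def __init__(self, domain, mpi_endpoint):
           self.domain, self.mpi_endpoint = domain, mpi_endpoint
       def setup_segment(self, a_end, z_end):
           print(f"MPI->{self.mpi_endpoint}: request {a_end}->{z_end} "
                 f"in {self.domain}")

   class Mdsc:
       """MDSC function: split a multi-domain service across PNCs."""
       def __init__(self, pncs):
           self.pncs = pncs
       def setup_service(self, segments):
           for domain, a_end, z_end in segments:
               self.pncs[domain].setup_segment(a_end, z_end)

   # The orchestrator co-hosts the MDSC and PNC2; PNC1 is external.
   orchestrator_mdsc = Mdsc({
       "Domain 1": RemotePncProxy("Domain 1", "domain1-controller"),
       "Domain 2": Pnc("Domain 2"),
   })
   orchestrator_mdsc.setup_service(
       [("Domain 1", "AP1", "ENNI-12"), ("Domain 2", "ENNI-12", "AP2")])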