TEAS Working Group                              Daniele Ceccarelli (Ed)
Internet Draft                                                  Ericsson
Intended status: Informational                            Young Lee (Ed)
Expires: April 16, 2018                                           Huawei

                                                        October 16, 2017

  Framework for Abstraction and Control of Traffic Engineered Networks

                    draft-ietf-teas-actn-framework-09

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane.  They
   also have a range of management and provisioning protocols to
   configure and activate network resources.  These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 16, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Overview.......................................................4
      2.1. Terminology...............................................5
      2.2. VNS Model of ACTN.........................................8
         2.2.1. Customers............................................9
         2.2.2. Service Providers...................................10
         2.2.3. Network Providers...................................10
   3. ACTN Base Architecture........................................10
      3.1. Customer Network Controller..............................12
      3.2. Multi-Domain Service Coordinator.........................13
      3.3. Provisioning Network Controller..........................13
      3.4. ACTN Interfaces..........................................14
   4. Advanced ACTN Architectures...................................15
      4.1. MDSC Hierarchy...........................................15
      4.2. Functional Split of MDSC Functions in Orchestrators......16
   5. Topology Abstraction Methods..................................17
      5.1. Abstraction Factors......................................17
      5.2. Abstraction Types........................................18
         5.2.1. Native/White Topology...............................18
         5.2.2. Black Topology......................................18
         5.2.3. Grey Topology.......................................19
      5.3. Methods of Building Grey Topologies......................20
         5.3.1. Automatic Generation of Abstract Topology by
                Configuration.......................................21
         5.3.2. On-demand Generation of Supplementary Topology via
                Path Compute Request/Reply..........................21
      5.4. Hierarchical Topology Abstraction Example................22
   6. Access Points and Virtual Network Access Points...............23
      6.1. Dual-Homing Scenario.....................................25
   7. Advanced ACTN Application: Multi-Destination Service..........26
      7.1. Pre-Planned End Point Migration..........................27
      7.2. On the Fly End-Point Migration...........................28
   8. Manageability Considerations..................................28
      8.1. Policy...................................................29
      8.2. Policy Applied to the Customer Network Controller........30
      8.3. Policy Applied to the Multi Domain Service Coordinator...30
      8.4. Policy Applied to the Provisioning Network Controller....31
   9. Security Considerations.......................................31
      9.1. CNC-MDSC Interface (CMI).................................32
      9.2. MDSC-PNC Interface (MPI).................................32
   10. IANA Considerations..........................................32
   11. References...................................................33
      11.1. Informative References..................................33
   12. Contributors.................................................34
   Authors' Addresses...............................................35
   APPENDIX A - Example of MDSC and PNC Functions Integrated in A
   Service/Network Orchestrator.....................................35

1. Introduction

   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a
   distributed or centralized control plane to support dynamic
   provisioning of end-to-end connectivity.  Traffic Engineered (TE)
   networks have a variety of mechanisms to facilitate the separation
   of data plane and control plane including distributed signaling for
   path setup and protection, centralized path computation for planning
   and traffic engineering, and a range of management and provisioning
   protocols to configure and activate network resources.  These
   mechanisms represent key technologies for enabling flexible and
   dynamic networking.  Some examples of networks that are in scope of
   this definition are optical networks, MPLS Transport Profile
   (MPLS-TP) networks [RFC5654], and MPLS-TE networks [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the data
   plane.  This separation has been achieved for TE networks with the
   development of MPLS/GMPLS [RFC3945] and the Path Computation Element
   (PCE) [RFC4655].  One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks.  Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control.  For TE-based networks, a PCE may serve as a logically
   centralized path computation function.

   This document describes a set of management and control functions
   used to operate one or more TE networks to construct virtual
   networks that can be represented to customers and that are built
   from abstractions of the underlying TE networks so that, for
   example, a link in the customer's network is constructed from a path
   or collection of paths in the underlying networks.  We call this set
   of functions "Abstraction and Control of Traffic Engineered
   Networks" (ACTN).

2. Overview

   Three key aspects that need to be solved by SDN are:

   . Separation of service requests from service delivery so that the
     configuration and operation of a network are transparent from the
     point of view of the customer, but remain responsive to the
     customer's services and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a TE
     network to produce selective information that represents the
     potential ability to connect across the network.  The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies, capabilities,
     and topology so that the graph can be used to plan and deliver
     network services in a uniform way.

   . Coordination of resources across multiple independent networks
     and multiple technology layers to provide end-to-end services
     regardless of whether the networks use SDN or not.

   As networks evolve, the need to support distinct services,
   separated service orchestration, and resource abstraction has
   emerged as a key requirement for operators.
   In order to support multiple customers, each with its own view of
   and control of the server network, a network operator needs to
   partition (or "slice") or manage sharing of the network resources.
   Network slices can be assigned to each customer for guaranteed
   usage, which is a step beyond the shared use of common network
   resources.

   Furthermore, each network represented to a customer can be built
   from virtualization of the underlying networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying network.

   We call the set of management and control functions used to provide
   these features Abstraction and Control of Traffic Engineered
   Networks (ACTN).

   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.  This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) and presenting virtualized networks to
   their customers.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application, or service [ONF-ARCH].

   . Network slicing of infrastructure to meet specific customers'
     service requirements.

   . Creation of a virtualized environment allowing operators to view
     and control multi-domain networks as a single virtualized
     network.

   . The presentation to customers of networks as a virtual network
     via open and programmable interfaces.

2.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined, while others reference existing definitions:

   . Domain: A domain [RFC4655] is any collection of network elements
     within a common sphere of address management or path computation
     responsibility.  Specifically, within this document we mean a
     part of an operator's network that is under common management.
     Network elements will often be grouped into domains based on
     technology types, vendor profiles, and geographic proximity.

   . Abstraction: This process is defined in [RFC7926].

   . Network Slicing: In the context of ACTN, a network slice is a
     collection of resources that is used to establish a logically
     dedicated virtual network over one or more TE networks.  Network
     slicing allows a network provider to provide dedicated virtual
     networks for applications/customers over a common network
     infrastructure.  The logically dedicated resources are a part of
     the larger common network infrastructures that are shared among
     various network slice instances, which are the end-to-end
     realization of network slicing consisting of the combination of
     physically or logically dedicated resources.

   . Node: A node is a vertex on the graph representation of a TE
     topology.  In a physical network topology, a node corresponds to
     a physical network element (NE) such as a router.  In an abstract
     network topology, a node (sometimes called an abstract node) is a
     representation as a single vertex of one or more physical NEs and
     their connecting physical connections.
     The concept of a node represents the ability to connect from any
     access to the node (a link end) to any other access to that node,
     although "limited cross-connect capabilities" may also be defined
     to restrict this functionality.  Just as network slicing and
     network abstraction may be applied recursively, so a node in one
     topology may be created by applying slicing or abstraction to the
     nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology.  Two nodes connected by a link are said to be
     "adjacent" in the TE topology.  In a physical network topology, a
     link corresponds to a physical connection.  In an abstract
     network topology, a link (sometimes called an abstract link) is a
     representation of the potential to connect a pair of points with
     certain TE parameters (see [RFC7926] for details).  Network
     slicing/virtualization and network abstraction may be applied
     recursively, so a link in one topology may be created by applying
     slicing and/or abstraction to the links in the underlying
     topology.

   . Abstract Link: The term "abstract link" is defined in [RFC7926].

   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . A Virtual Network (VN) is a network provided by a service
     provider to a customer for the customer to use in any way it
     wants as though it was a physical network.  There are two views
     of a VN as follows:

     a) The VN can be seen as a set of edge-to-edge links (a Type 1
        VN).  Each link is referred to as a VN member and is formed as
        an end-to-end tunnel across the underlying networks.  Such
        tunnels may be constructed by recursive slicing or abstraction
        of paths in the underlying networks and can encompass edge
        points of the customer's network, access links, intra-domain
        paths, and inter-domain links.

     b) The VN can also be seen as a topology of virtual nodes and
        virtual links (a Type 2 VN).  The provider needs to map the VN
        to actual resource assignment, which is known as virtual
        network embedding.  The nodes in this case include physical
        end points, border nodes, and internal nodes as well as
        abstracted nodes.  Similarly, the links include physical
        access links, inter-domain links, and intra-domain links as
        well as abstract links.

     Clearly, a Type 1 VN is a special case of a Type 2 VN.

   . Access link: A link between a customer node and a provider node.

   . Inter-domain link: A link between domains under distinct
     management administration.

   . Access Point (AP): An AP is a logical identifier shared between
     the customer and the provider used to identify an access link.
     The AP is used by the customer when requesting a VNS.  Note that
     the term "TE Link Termination Point" (LTP) defined in [TE-Topo]
     describes the end points of links, while an AP is a common
     identifier for the link itself.

   . VN Access Point (VNAP): A VNAP is the binding between an AP and a
     given VN.

   . Server Network: As defined in [RFC7926], a server network is a
     network that provides connectivity for another network (the
     Client Network) in a client-server relationship.

2.2. VNS Model of ACTN

   A Virtual Network Service (VNS) is the service agreement between a
   customer and a provider to provide a VN.
   There are three types of VNS defined in this document:

   o Type 1 VNS refers to a VNS in which the customer is allowed to
     create and operate a Type 1 VN.

   o Type 2a and 2b VNS refer to VNSs in which the customer is allowed
     to create and operate a Type 2 VN.  With a Type 2a VNS, the VN is
     statically created at service configuration time and the customer
     is not allowed to change the topology (e.g., by adding or
     deleting abstract nodes and links).  A Type 2b VNS is the same as
     a Type 2a VNS except that the customer is allowed to make dynamic
     changes to the initial topology created at service configuration
     time.

   VN Operations are functions that a customer can exercise on a VN
   depending on the agreement between the customer and the provider; a
   non-normative sketch of these distinctions follows the list.

   o VN Creation allows a customer to request the instantiation of a
     VN.  This could be through off-line pre-configuration or through
     dynamic requests specifying attributes of a Service Level
     Agreement (SLA) to satisfy the customer's objectives.

   o Dynamic Operations allow a customer to modify or delete the VN.
     The customer can further act upon the virtual network to
     create/modify/delete virtual links and nodes.  These changes will
     result in subsequent tunnel management in the operator's
     networks.
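   As a non-normative illustration (not part of any ACTN protocol or
   data model), the distinction between the VNS types and the
   operations they permit might be captured in a controller
   implementation along the following lines; all names are
   hypothetical.

      from dataclasses import dataclass, field
      from enum import Enum

      class VnsType(Enum):
          TYPE_1 = "type-1"    # customer operates edge-to-edge links
          TYPE_2A = "type-2a"  # node/link topology, fixed at creation
          TYPE_2B = "type-2b"  # node/link topology, may be modified

      @dataclass
      class VirtualNetworkService:
          vns_id: int
          vns_type: VnsType
          # Type 1: VN members (edge-to-edge links);
          # Type 2: virtual nodes and links.
          members: list = field(default_factory=list)

          def may_modify_topology(self) -> bool:
              # Only a Type 2b VNS allows dynamic changes to the
              # initial topology created at service configuration.
              return self.vns_type is VnsType.TYPE_2B

      vns = VirtualNetworkService(vns_id=9, vns_type=VnsType.TYPE_2A)
      assert not vns.may_modify_topology()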
   There are three key entities in the ACTN VNS model:

   - Customers
   - Service Providers
   - Network Providers

   These entities are related in a three tier model as shown in
   Figure 1.

                   +----------------------+
                   |       Customer       |
                   +----------------------+
                              |
                              |
          VNS     ||          |          /\  VNS
          Request ||          |          ||  Reply
                  \/          |          ||
                   +----------------------+
                   |   Service Provider   |
                   +----------------------+
                        /     |     \
                       /      |      \
                      /       |       \
                     /        |        \
   +------------------+ +------------------+ +------------------+
   |Network Provider 1| |Network Provider 2| |Network Provider 3|
   +------------------+ +------------------+ +------------------+

                  Figure 1: The Three Tier Model.

   The commercial roles of these entities are described in the
   following sections.

2.2.1. Customers

   Basic customers include fixed residential users, mobile users, and
   small enterprises.  Each requires a small amount of resources and
   is characterized by steady requests (relatively time invariant).
   Basic customers do not modify their services themselves: if a
   service change is needed, it is performed by the provider as a
   proxy.

   Advanced customers include enterprises, governments, and utility
   companies.  Such customers ask for both point-to-point and
   multipoint connectivity with high resource demands varying
   significantly in time.  This is one of the reasons why a bundled
   service offering is not enough and it is desirable to provide each
   advanced customer with a customized virtual network service.
   Advanced customers may also have the ability to modify their
   service parameters within the scope of their virtualized
   environments.  The primary focus of ACTN is advanced customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface with multiple providers
   and may have to support multiple virtual network services with
   different underlying objectives set by the network providers.  To
   enable these customers to support flexible and dynamic
   applications, they need to control their allocated virtual network
   resources in a dynamic fashion, and that means that they need a
   view of the topology that spans all of the network providers.
   Customers of a given service provider can in turn offer a service
   to other customers in a recursive way.

2.2.2. Service Providers

   In the scope of ACTN, service providers deliver VNSs to their
   customers.  Service providers may or may not own physical network
   resources (i.e., may or may not be network providers as described
   in Section 2.2.3).  When a service provider is the same as the
   network provider, this is similar to existing VPN models applied to
   a single provider, although it may be hard to use this approach
   when the customer spans multiple independent network provider
   domains.

   When network providers supply only infrastructure, while distinct
   service providers interface with the customers, the service
   providers are themselves customers of the network infrastructure
   providers.  One service provider may need relationships with
   multiple independent network providers because its end-users are
   spread geographically across multiple network provider domains.

2.2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers.  The network operated by a network provider may be a
   virtual network created by a service provider and supplied to the
   network provider in its role as a customer.  The layered model
   described in this architecture separates the concerns of network
   providers and customers, with service providers acting as
   aggregators of customer requests.

3. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is based on a 3-tier reference model and
   allows for hierarchy and recursion.  The main functionalities
   within an ACTN system are:

   . Multi-domain coordination: This function oversees the specific
     aspects of different domains and builds a single abstracted end-
     to-end network topology in order to coordinate end-to-end path
     computation and path/service provisioning.  Domain sequence path
     calculation/determination is also a part of this function.

   . Virtualization/Abstraction: This function provides an abstracted
     view of the underlying network resources for use by the customer
     - a customer may be the client or a higher-level controller
     entity.  This function includes network path computation based on
     customer service connectivity request constraints, path
     computation based on the global network-wide abstracted topology,
     and the creation of an abstracted view of network resources
     allocated to each customer.  These operations depend on customer-
     specific network objective functions and customer traffic
     profiles.

   . Customer mapping/translation: This function maps customer
     requests/commands into network provisioning requests that can be
     sent to the Provisioning Network Controller (PNC) according to
     business policies provisioned statically or dynamically at the
     OSS/NMS.
     Specifically, it provides mapping and translation of a customer's
     service request into a set of parameters that are specific to a
     network type and technology such that the network configuration
     process is made possible.

   . Virtual service coordination: This function translates customer
     service-related information into virtual network service
     operations in order to seamlessly operate virtual networks while
     meeting a customer's service requirements.  In the context of
     ACTN, service/virtual service coordination includes a number of
     service orchestration functions such as multi-destination load
     balancing, guarantees of service quality, bandwidth, and
     throughput.  It also includes notifications for service fault and
     performance degradation and so forth.

   The base ACTN architecture defines three controller types and the
   corresponding interfaces between these controllers.  The following
   types of controller are shown in Figure 2:

   . CNC - Customer Network Controller
   . MDSC - Multi Domain Service Coordinator
   . PNC - Provisioning Network Controller

   Figure 2 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface
   . SBI - South Bound Interface

            +---------+      +---------+      +---------+
            |   CNC   |      |   CNC   |      |   CNC   |
            +---------+      +---------+      +---------+
                   \              |              /
   Business         \             |             /
   Boundary  ========\============|============/========
   Between            \           |           /
   Customer &          -------    | CMI  -------
   Network Provider           \   |   /
                          +---------------+
                          |     MDSC      |
                          +---------------+
                            /     |     \
                ------------      | MPI  -------------
               /                  |                   \
          +-------+          +-------+            +-------+
          |  PNC  |          |  PNC  |            |  PNC  |
          +-------+          +-------+            +-------+
             | SBI   /          |                  /     \
             |      /           | SBI             /       \
         ---------   -----      |                /         \
        (         ) (     )     |               /           \
       -  Control  -( Phys.)    |              /          -----
      (   Plane     )( Net )    |             /          (     )
       ( Physical  ) -----      |            /           ( Phys.)
        ( Network )          -----        -----          ( Net )
       -           -        (     )      (     )          -----
      (             )       ( Phys.)     ( Phys.)
        ---------           ( Net )      ( Net )
                             -----        -----

                  Figure 2: ACTN Base Architecture

   Note that this is a functional architecture: an implementation and
   deployment might collocate one or more of the functional
   components.

3.1. Customer Network Controller

   A Customer Network Controller (CNC) is responsible for
   communicating a customer's VNS requirements to the network provider
   over the CNC-MDSC Interface (CMI).  It has knowledge of the end-
   points associated with the VNS (expressed as APs), the service
   policy, and other QoS information related to the service.

   As the Customer Network Controller directly interfaces with the
   applications, it understands multiple application requirements and
   their service needs.

3.2. Multi-Domain Service Coordinator

   A Multi-Domain Service Coordinator (MDSC) is a functional block
   that implements all of the ACTN functions listed in Section 3 and
   described further in Section 4.2.  The two functions of the MDSC,
   namely multi-domain coordination and virtualization/abstraction,
   are referred to as network-related functions, while the other two
   functions, namely customer mapping/translation and virtual service
   coordination, are referred to as service-related functions.  The
   MDSC sits at the center of the ACTN model between the CNC that
   issues connectivity requests and the Provisioning Network
   Controllers (PNCs) that manage the network resources.
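   As a non-normative illustration of these roles, the following
   sketch shows an MDSC fanning a single VNS request received over the
   CMI out to multiple PNCs over the MPI.  All class and method names
   are hypothetical, and the domain-sequence computation is a
   placeholder.

      # Minimal sketch of the MDSC role: translate one customer VNS
      # request (received over the CMI) into per-domain provisioning
      # requests sent over the MPI.

      class Pnc:
          def __init__(self, domain):
              self.domain = domain

          def provision(self, segment):
              # A real PNC would configure network elements via its
              # SBI; here we only record the request.
              print(f"PNC {self.domain}: provisioning {segment}")

      class Mdsc:
          def __init__(self, pncs):
              # 1:N relationship between an MDSC and its PNCs.
              self.pncs = {pnc.domain: pnc for pnc in pncs}

          def handle_vns_request(self, request):
              # Service-related functions: customer mapping/
              # translation of the request into network terms.
              segments = self.compute_domain_sequence(request)
              # Network-related functions: multi-domain coordination.
              for domain, segment in segments:
                  self.pncs[domain].provision(segment)

          def compute_domain_sequence(self, request):
              # Placeholder for end-to-end path computation over the
              # abstracted multi-domain topology.
              return [(d, (request["src"], request["dst"]))
                      for d in self.pncs]

      mdsc = Mdsc([Pnc("X"), Pnc("Y")])
      mdsc.handle_vns_request({"src": "AP1", "dst": "AP2"})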
   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs.  The MDSC envelops the instantiation of the right
   technology and network control to meet business criteria.  In
   essence, it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and PNCs.

   In addition, it could also be possible to have an M:1 relationship
   between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers) but
   all using the resources of a common network infrastructure
   provider.

3.3. Provisioning Network Controller

   The Provisioning Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and collecting information about the topology (either
   raw or abstracted).

   The PNC functions can be implemented as part of an SDN domain
   controller, a Network Management System (NMS), an Element
   Management System (EMS), an active PCE-based controller
   [Centralized], or any other means of dynamically controlling a set
   of nodes that implements a Northbound Interface (NBI) compliant
   with the ACTN specification.

   A PNC domain includes all the resources under the control of a
   single PNC.  It can be composed of different routing domains and
   administrative domains, and the resources may come from different
   layers.  The interconnection between PNC domains is illustrated in
   Figure 3.

         _______                                _______
       _(       )_                            _(       )_
     _(           )_                        _(           )_
    (               )        Border        (               )
   (  PNC     ------ )        Link        ( ------     PNC  )
   ( Domain X |Border|====================|Border| Domain Y )
   (          | Node |                    | Node |          )
   (           ------ )                  ( ------           )
    (_             _)                      (_             _)
      (_         _)                          (_         _)
        (_______)                              (_______)

                  Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   providers due to security and policy concerns.  In addition, some
   networks may operate a control plane, and as such it is not
   practical for the customer to directly interface with network
   elements.  Therefore, the network has to provide open, programmable
   interfaces, through which customer applications can create,
   replace, and modify virtual network resources and services in an
   interactive, flexible, and dynamic fashion while having no impact
   on other customers.

   Three interfaces exist in the ACTN architecture as shown in
   Figure 2.

   . CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC
     and an MDSC.  The CMI is a business boundary between customer and
     network provider.  It is used to request a VNS for an
     application.  All service-related information is conveyed over
     this interface (such as the VNS type, topology, bandwidth, and
     service constraints).  Most of the information over this
     interface is technology agnostic (the customer is unaware of the
     network technologies used to deliver the service), but there are
     some cases (e.g., access link configuration) where it is
     necessary to specify technology-specific details.

   . MPI: The MDSC-PNC Interface (MPI) is an interface between an MDSC
     and a PNC.
     It communicates requests for new connectivity or for bandwidth
     changes in the physical network.  In multi-domain environments,
     the MDSC needs to communicate with multiple PNCs, each
     responsible for control of a domain.  The MPI presents an
     abstracted topology to the MDSC, hiding technology-specific
     aspects of the network and hiding topology according to policy.

   . SBI: The Southbound Interface (SBI) is out of scope of ACTN.
     Many different SBIs have been defined for different environments,
     technologies, standards organizations, and vendors.  It is shown
     in Figure 2 for reference only.

4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, including
   scalability, administrative choices, and the combination of
   different layers and technologies in the network.  In the case
   where there is a hierarchy of MDSCs, we introduce the terms higher-
   level MDSC (MDSC-H) and lower-level MDSC (MDSC-L).  The interface
   between them is a recursion of the MPI.  An implementation of an
   MDSC-H makes provisioning requests as normal using the MPI, but an
   MDSC-L must be able to receive requests as normal at the CMI and
   also at the MPI.  The hierarchy of MDSCs can be seen in Figure 4.

   Another implementation choice could use one MDSC-L for all the PNCs
   related to a given technology (e.g., IP/MPLS), a different MDSC-L
   for the PNCs related to another technology (e.g., OTN/WDM), and an
   MDSC-H to coordinate them.

               +--------+
               |  CNC   |
               +--------+
                    |            +-----+
                    | CMI        | CNC |
               +----------+      +-----+
        -------|  MDSC-H  |----     |
        |      +----------+   |     | CMI
        | MPI            MPI  |     |
        |                     |     |
   +---------+           +---------+
   | MDSC-L  |           | MDSC-L  |
   +---------+           +---------+
    MPI |   |             |       |
        |   |             |       |
      -----   -----     -----   -----
     | PNC | | PNC |   | PNC | | PNC |
      -----   -----     -----   -----

              Figure 4: MDSC Hierarchy

4.2. Functional Split of MDSC Functions in Orchestrators

   An implementation choice could separate the MDSC functions into two
   groups: one group for service-related functions and the other for
   network-related functions.  This enables the implementation of a
   service orchestrator that provides the service-related functions of
   the MDSC and a network orchestrator that provides the network-
   related functions of the MDSC.  This split is consistent with the
   YANG service model architecture described in [Service-YANG].
   Figure 5 depicts this and shows how the ACTN interfaces may map to
   YANG models.
               +--------------------+
               |      Customer      |
               |      +-----+       |
               |      | CNC |       |
               |      +-----+       |
               +--------------------+
           CMI      |  Customer Service Model
                    |
        +---------------------------------------+
        |                          Service      |
   *****|*************************  Orchestrator|
   *MDSC|   +-----------------+   *             |
   *    |   | Service-related |   *             |
   *    |   |    Functions    |   *             |
   *    |   +-----------------+   *             |
   *    +-------------------------*-------------+
   *    *           | Service Delivery Model
   *    *           |
   *    +-------------------------*-------------+
   *    |   *                        Network    |
   *    |   +-----------------+   * Orchestrator|
   *    |   | Network-related |   *             |
   *    |   |    Functions    |   *             |
   *    |   +-----------------+   *             |
   *****|**************************             |
        +---------------------------------------+
           MPI      |  Network Configuration Model
                    |
        +------------------------+
        |  Domain                |
        |  +------+  Controller  |
        |  | PNC  |              |
        |  +------+              |
        +------------------------+
           SBI      |  Device Configuration Model
                    |
               +--------+
               | Device |
               +--------+

     Figure 5: ACTN Architecture in the Context of the YANG Service
                                Models

5. Topology Abstraction Methods

   Topology abstraction is described in [RFC7926].  This section
   discusses topology abstraction factors, types, and their context in
   the ACTN architecture.

   Abstraction in ACTN is performed by the PNC when presenting
   available topology to the MDSC, or by an MDSC-L when presenting
   topology to an MDSC-H.  This function is different from the
   creation of a VN (and particularly a Type 2 VN), which is not
   abstraction but construction of virtual resources.

5.1. Abstraction Factors

   As discussed in [RFC7926], abstraction is tied to the policy of the
   networks.  For instance, per an operational policy, the PNC would
   not provide any technology-specific details (e.g., optical
   parameters for WSON) in the abstract topology it provides to the
   MDSC.

   There are many factors that may impact the choice of abstraction:

   - Abstraction depends on the nature of the underlying domain
     networks.  For instance, packet networks may be abstracted with
     fine granularity, while abstraction of optical networks depends
     on the switching units (such as wavelengths) and the end-to-end
     continuity and cross-connect limitations within the network.

   - Abstraction also depends on the capability of the PNCs.  As
     abstraction requires hiding details of the underlying network
     resources, the PNC's capability to run algorithms impacts the
     feasibility of abstraction.  Some PNCs may not have the ability
     to abstract native topology, while other PNCs may be able to
     apply sophisticated algorithms.

   - Abstraction is a tool that can improve scalability.  Where the
     native network resource information is of large size, there is a
     specific scaling benefit to abstraction.

   - The proper abstraction level may depend on the frequency of
     topology updates and vice versa.

   - The nature of the MDSC's support for technology-specific
     parameters impacts the degree/level of abstraction.  If the MDSC
     is not capable of handling such parameters, then a higher level
     of abstraction is needed.

   - In some cases, the PNC is required to hide key internal
     topological data from the MDSC.  Such confidentiality can be
     achieved through abstraction.

5.2. Abstraction Types

   This section defines the following three types of topology
   abstraction:

   . Native/White Topology (Section 5.2.1)
   . Black Topology (Section 5.2.2)
   . Grey Topology (Section 5.2.3)

5.2.1. Native/White Topology

   This is a case where the PNC provides the actual network topology
   to the MDSC without any hiding or filtering of information; that
   is, no abstraction is performed.  In this case, the MDSC has full
   knowledge of the underlying network topology and can operate on it
   directly.

5.2.2. Black Topology

   A black topology replaces a full network with a minimal
   representation of the edge-to-edge topology without disclosing any
   node internal connectivity information.  The entire domain network
   may be abstracted as a single abstract node with the network's
   access/egress links appearing as the ports to the abstract node and
   the implication that any port can be 'cross-connected' to any
   other.  Figure 6 depicts a native topology and the corresponding
   black topology with one abstract node and inter-domain links.  In
   this case, the MDSC has to make a provisioning request to the PNCs
   to establish the port-to-port connection.  If there is a large
   number of inter-connected domains, this abstraction method may
   impose a heavy coordination load at the MDSC level in order to find
   an optimal end-to-end path, since the abstraction hides so much
   information that it is not possible to determine whether an end-to-
   end path is feasible without asking each PNC to set up each path
   fragment.  For this reason, the MPI might need to be enhanced to
   allow the PNCs to be queried for the practicality and
   characteristics of paths across the abstract node.

      .....................................
      :            PNC Domain             :
      :  +--+     +--+     +--+     +--+  :
   ------+  +-----+  +-----+  +-----+  +------
      :  ++-+     ++-+     +-++     +-++  :
      :   |        |        |        |    :
      :   |        |        |        |    :
      :   |        |        |        |    :
      :   |        |        |        |    :
      :  ++-+     ++-+     +-++     +-++  :
   ------+  +-----+  +-----+  +-----+  +------
      :  +--+     +--+     +--+     +--+  :
      :...................................:

                  +----------+
               ---+          +---
                  | Abstract |
                  |   Node   |
               ---+          +---
                  +----------+

     Figure 6: Native Topology with Corresponding Black Topology
                   Expressed as an Abstract Node

5.2.3. Grey Topology

   A grey topology represents a compromise between black and white
   topologies from a granularity point of view.  In this case, the PNC
   exposes an abstract topology that comprises nodes and links.  The
   nodes and links may be physical or abstract, while the abstract
   topology represents the potential of connectivity across the PNC
   domain.

   Two modes of grey topology are identified:

   . In a type A grey topology, border nodes are connected by a full
     mesh of TE links (see Figure 7).

   . In a type B grey topology, border nodes are connected over a more
     detailed network comprising internal abstract nodes and
     abstracted links.  This mode of abstraction supplies the MDSC
     with more information about the internals of the PNC domain and
     allows it to make more informed choices about how to route
     connectivity over the underlying network.

      .....................................
      :            PNC Domain             :
      :  +--+     +--+     +--+     +--+  :
   ------+  +-----+  +-----+  +-----+  +------
      :  ++-+     ++-+     +-++     +-++  :
      :   |        |        |        |    :
      :   |        |        |        |    :
      :   |        |        |        |    :
      :   |        |        |        |    :
      :  ++-+     ++-+     +-++     +-++  :
   ------+  +-----+  +-----+  +-----+  +------
      :  +--+     +--+     +--+     +--+  :
      :...................................:

             ....................
             : Abstract Network :
             :                  :
             :  +--+      +--+  :
          -------+  +------+  +-------
             :  ++-+      +-++  :
             :   | \      / |   :
             :   |  \    /  |   :
             :   |   \  /   |   :
             :   |    \/    |   :
             :   |    /\    |   :
             :   |   /  \   |   :
             :  ++-+      +-++  :
          -------+  +------+  +-------
             :  +--+      +--+  :
             :..................:

     Figure 7: Native Topology with Corresponding Grey Topology
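   As a non-normative illustration of how a PNC might derive a type A
   grey topology, the sketch below connects the border nodes of a
   domain with a full mesh of abstract TE links, taking the TE metric
   of each abstract link from a shortest-path computation over the
   native topology.  It assumes the networkx graph library; node names
   and metrics are illustrative, and a real PNC could apply any
   operator policy when computing the abstract link attributes.

      import itertools
      import networkx as nx

      def type_a_grey_topology(native, border_nodes):
          """Full mesh of abstract links between border nodes."""
          grey = nx.Graph()
          grey.add_nodes_from(border_nodes)
          for a, b in itertools.combinations(border_nodes, 2):
              # TE metric of the abstract link a--b: here, the
              # shortest-path metric across the native topology.
              cost = nx.shortest_path_length(native, a, b,
                                             weight="metric")
              grey.add_edge(a, b, metric=cost)
          return grey

      native = nx.Graph()
      native.add_edge("B1", "N1", metric=1)   # interior node N1
      native.add_edge("N1", "N2", metric=1)   # interior node N2
      native.add_edge("N2", "B2", metric=1)
      native.add_edge("N1", "B3", metric=2)

      grey = type_a_grey_topology(native, ["B1", "B2", "B3"])
      print(sorted(grey.edges(data=True)))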
5.3. Methods of Building Grey Topologies

   This section discusses two different methods of building a grey
   topology:

   . Automatic generation of abstract topology by configuration
     (Section 5.3.1)

   . On-demand generation of supplementary topology via path
     computation request/reply (Section 5.3.2)

5.3.1. Automatic Generation of Abstract Topology by Configuration

   Automatic generation is based on the abstraction/summarization of
   the whole domain by the PNC and its advertisement on the MPI.  The
   level of abstraction can be decided based on PNC configuration
   parameters (e.g., "provide the potential connectivity between any
   PE and any ASBR in an MPLS-TE network").

   Note that the configuration parameters for this abstract topology
   can include available bandwidth, latency, or any combination of
   defined parameters.  How to generate such information is beyond the
   scope of this document.

   This abstract topology may need to be periodically or incrementally
   updated when there is a change in the underlying network or in the
   use of the network resources that makes connectivity more or less
   available.

5.3.2. On-demand Generation of Supplementary Topology via Path Compute
       Request/Reply

   While abstract topology is generated and updated automatically by
   configuration as explained in Section 5.3.1, additional
   supplementary topology may be obtained by the MDSC via a path
   compute request/reply mechanism.

   The abstract topology advertisements from PNCs give the MDSC the
   border node/link information for each domain.  Under this scenario,
   when the MDSC needs to create a new VN, the MDSC can issue path
   computation requests to PNCs with constraints matching the VN
   request as described in [ACTN-YANG].  An example is provided in
   Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2.
   The MDSC could use two different inter-domain links to get from
   Domain X to Domain Y, but in order to choose the best end-to-end
   path it needs to know what Domains X and Y can offer in terms of
   connectivity and constraints between the PE nodes and the border
   nodes.

           -------                        --------
         (         )                    (          )
        -  BrdrX.1 -------------------- BrdrY.1    -
       (+---+       )                  (       +---+)
     -+--(|PE1| Dom.X )                ( Dom.Y |PE2|)--+-
      |  (+---+       )                (       +---+)  |
     AP1  -  BrdrX.2 -------------------- BrdrY.2  -  AP2
         (         )                    (          )
           -------                        --------

                 Figure 8: A Multi-Domain Example

   The MDSC issues a path computation request to PNC.X asking for
   potential connectivity between PE1 and border node BrdrX.1 and
   between PE1 and BrdrX.2, with related objective functions and TE
   metric constraints.  A similar request for connectivity from the
   border nodes in Domain Y to PE2 will be issued to PNC.Y.  The MDSC
   merges the results to compute the optimal end-to-end path,
   including the inter-domain links.  The MDSC can use the result of
   this computation to request the PNCs to provision the underlying
   networks, and the MDSC can then use the end-to-end path as a
   virtual link in the VN it delivers to the customer.
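   As a non-normative illustration of the merge step for the example
   of Figure 8, the sketch below combines per-domain path computation
   replies with the known inter-domain links to select the best border
   crossing.  The costs are illustrative stand-ins for the replies
   from PNC.X and PNC.Y.

      # Replies from PNC.X: cost from PE1 to each border node of
      # Domain X.
      pnc_x = {"BrdrX.1": 10, "BrdrX.2": 12}

      # Replies from PNC.Y: cost from each border node of Domain Y
      # to PE2.
      pnc_y = {"BrdrY.1": 7, "BrdrY.2": 5}

      # Inter-domain links known to the MDSC from the abstract
      # topology advertisements.
      inter_domain = {("BrdrX.1", "BrdrY.1"): 2,
                      ("BrdrX.2", "BrdrY.2"): 3}

      # The MDSC merges the results to compute the optimal
      # end-to-end path, including the inter-domain links.
      crossing, cost = min(
          (((x, y), pnc_x[x] + c + pnc_y[y])
           for (x, y), c in inter_domain.items()),
          key=lambda item: item[1])

      print("selected crossing %s--%s, end-to-end cost %d"
            % (crossing[0], crossing[1], cost))
      # selected crossing BrdrX.1--BrdrY.1, end-to-end cost 19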
5.4. Hierarchical Topology Abstraction Example

   This section illustrates how topology abstraction operates at
   different levels of a hierarchy of MDSCs, as shown in Figure 9.

                               +-----+
                               | CNC |   CNC wants to create a VN
                               +-----+   between CE A and CE B
                                  |
                                  |
                      +-----------------------+
                      |        MDSC-H         |
                      +-----------------------+
                           /             \
                          /               \
                    +---------+       +---------+
                    | MDSC-L1 |       | MDSC-L2 |
                    +---------+       +---------+
                     /       \         /       \
                    /         \       /         \
                +----+      +----+ +----+      +----+
       CE A o---|PNC1|      |PNC2| |PNC3|      |PNC4|---o CE B
                +----+      +----+ +----+      +----+

       Virtual Network Delivered to CNC

            CE A o==============o CE B

       Topology operated on by MDSC-H

            CE A o----o==o==o===o----o CE B

   Topology operated on by MDSC-L1   Topology operated on by MDSC-L2

           _       _                            _       _
          ( )     ( )                          ( )     ( )
          ( )     ( )                          ( )     ( )
   CE A o--(o---o)==(o---o)==Dom.3      Dom.2==(o---o)==(o---o)--o CE B
          ( )     ( )                          ( )     ( )
          (_)     (_)                          (_)     (_)

                           Actual Topology
          ___           ___           ___           ___
        (     )       (     )       (     )       (     )
       (   o   )     (   o   )     ( o--o  )     (   o   )
       (  / \  )     (   |\  )     ( |  |  )     (  / \  )
  CE A o---(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)---o CE B
       (  \ /  )     (   | |/ )    (  |  |  )    (  \ /   )
       (   o   )     (  o-o   )    (  o--o  )    (   o    )
        (_____)       (_____)       (_____)       (_____)

       Domain 1       Domain 2      Domain 3      Domain 4

   Where
      o    is a node
      ---  is a link
      ===  is a border link

        Figure 9: Illustration of Hierarchical Topology Abstraction

   In the example depicted in Figure 9, there are four domains under
   the control of PNCs PNC1, PNC2, PNC3, and PNC4.  MDSC-L1 controls
   PNC1 and PNC2, while MDSC-L2 controls PNC3 and PNC4.  Each of the
   PNCs provides a grey topology abstraction that presents only border
   nodes and links across and outside the domain.  The abstract
   topology that MDSC-L1 operates on is a combination of the two
   topologies from PNC1 and PNC2.  Likewise, the abstract topology
   that MDSC-L2 operates on is shown in Figure 9.  Both MDSC-L1 and
   MDSC-L2 provide a black topology abstraction to MDSC-H in which
   each PNC domain is presented as a single virtual node.  MDSC-H
   combines these two topologies to create the abstract topology on
   which it operates.  MDSC-H sees the whole four-domain network as
   four virtual nodes connected via virtual links.
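   The composition of topologies up the hierarchy might be sketched,
   non-normatively, as follows: each controller merges the topologies
   advertised by its children and re-abstracts them for its parent.
   The node and link names are illustrative only.

      def merge(*topologies):
          # An MDSC combines the topologies advertised by its
          # children into one abstract topology to operate on.
          nodes, links = set(), set()
          for n, l in topologies:
              nodes |= n
              links |= l
          return nodes, links

      # Grey topologies from the PNCs: border nodes plus links
      # across and outside each domain.
      pnc1 = ({"D1.b1", "D1.b2"}, {("D1.b1", "D1.b2")})
      pnc2 = ({"D2.b1", "D2.b2"},
              {("D2.b1", "D2.b2"), ("D1.b2", "D2.b1")})

      # MDSC-L1 operates on the combination of the PNC topologies.
      mdsc_l1_topology = merge(pnc1, pnc2)

      # Towards MDSC-H, each domain is re-abstracted as a single
      # virtual node (black topology); only inter-domain
      # connectivity remains visible.
      mdsc_h_view = ({"Dom1", "Dom2"}, {("Dom1", "Dom2")})

      print(mdsc_l1_topology)
      print(mdsc_h_view)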
6. Access Points and Virtual Network Access Points

   In order to identify the connections between the customer's sites
   and the TE networks, and to scope the connectivity requested in the
   VNS, the CNC and the MDSC refer to the connections using the Access
   Point (AP) construct as shown in Figure 10.

                          -------------
                        (               )
                       -                 -
      +---+ X         (                   )         Z +---+
      |CE1|---+------(                     )------+---|CE2|
      +---+   |       (                   )       |   +---+
             AP1       -                 -       AP2
                        (               )
                          -------------

                  Figure 10: Customer View of APs

   Consider the scenario shown in Figure 10.  CE1 is connected to the
   network via a 10 Gb link and CE2 via a 40 Gb link.  Before the
   creation of any VN between AP1 and AP2, the customer view can be
   summarized as shown in Table 1.

            +----------+------------------------+
            |End Point | Access Link Bandwidth  |
      +-----+----------+----------+-------------+
      |AP id| CE,port  | MaxResBw | AvailableBw |
      +-----+----------+----------+-------------+
      | AP1 |CE1,portX |   10Gb   |    10Gb     |
      +-----+----------+----------+-------------+
      | AP2 |CE2,portZ |   40Gb   |    40Gb     |
      +-----+----------+----------+-------------+

                  Table 1: AP - Customer View

   On the other hand, what the provider sees is shown in Figure 11.

            -------                        -------
          (         )                    (         )
         -           -                  -           -
     W  (+---+        )                (        +---+)  Y
    -+--( |PE1| Dom.X  )--------------(  Dom.Y |PE2| )--+-
     |  (+---+        )                (        +---+)  |
    AP1  -           -                  -           -  AP2
          (         )                    (         )
            -------                        -------

                Figure 11: Provider View of the AP

   This results in the summarization shown in Table 2.

            +----------+------------------------+
            |End Point | Access Link Bandwidth  |
      +-----+----------+----------+-------------+
      |AP id| PE,port  | MaxResBw | AvailableBw |
      +-----+----------+----------+-------------+
      | AP1 |PE1,portW |   10Gb   |    10Gb     |
      +-----+----------+----------+-------------+
      | AP2 |PE2,portY |   40Gb   |    40Gb     |
      +-----+----------+----------+-------------+

                  Table 2: AP - Provider View

   A Virtual Network Access Point (VNAP) is defined as the binding
   between an AP and a given VN; it is used to allow different VNs to
   start from the same AP.  It also allows for traffic engineering on
   the access and/or inter-domain links (e.g., keeping track of
   bandwidth allocation).  A different VNAP is created on an AP for
   each VN.

   In this simple scenario, suppose we want to create two virtual
   networks: the first with VN identifier 9 between AP1 and AP2 with a
   bandwidth of 1 Gbps, and the second with VN identifier 5, again
   between AP1 and AP2, with a bandwidth of 2 Gbps.

   The provider view would evolve as shown in Table 3.

                +----------+------------------------+
                |End Point | Access Link/VNAP Bw    |
      +---------+----------+----------+-------------+
      |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
      +---------+----------+----------+-------------+
      |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
      | -VNAP1.9|          |   1Gbps  |    N.A.     |
      | -VNAP1.5|          |   2Gbps  |    N.A.     |
      +---------+----------+----------+-------------+
      |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
      | -VNAP2.9|          |   1Gbps  |    N.A.     |
      | -VNAP2.5|          |   2Gbps  |    N.A.     |
      +---------+----------+----------+-------------+

        Table 3: AP and VNAP - Provider View after VNS Creation
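   As a non-normative illustration of the bookkeeping behind Table 3,
   the sketch below creates one VNAP per VN on an AP and derives the
   available bandwidth of the access link from the VNAP reservations.
   Class and attribute names are hypothetical; bandwidths are in Gbps.

      class AccessPoint:
          def __init__(self, ap_id, pe_port, max_resv_bw):
              self.ap_id = ap_id
              self.pe_port = pe_port
              self.max_resv_bw = max_resv_bw
              self.vnaps = {}  # one VNAP per VN using this AP

          @property
          def available_bw(self):
              # AvailableBw = MaxResBw minus all VNAP reservations.
              return self.max_resv_bw - sum(self.vnaps.values())

          def add_vnap(self, vn_id, bw):
              if bw > self.available_bw:
                  raise ValueError("insufficient access link bw")
              self.vnaps[vn_id] = bw  # e.g., VNAP1.9 on AP1 for VN 9

      ap1 = AccessPoint("AP1", ("PE1", "portW"), max_resv_bw=10)
      ap1.add_vnap(vn_id=9, bw=1)   # VNAP1.9
      ap1.add_vnap(vn_id=5, bw=2)   # VNAP1.5
      print(ap1.available_bw)       # 7, as in Table 3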
6.1. Dual-Homing Scenario

   Often there is a dual-homing relationship between a CE and a pair
   of PEs.  This case needs to be supported by the definition of VNs,
   APs, and VNAPs.  Suppose CE1 is connected to two different PEs in
   the operator domain via AP1 and AP2, and that the customer needs
   5 Gbps of bandwidth between CE1 and CE2.  This is shown in
   Figure 12.

                          ____________
                AP1      (            )      AP3
              -------(PE1)            (PE3)-------
            W /          (            )          \ X
         +---+/          (            )           \+---+
         |CE1|           (            )            |CE2|
         +---+\          (            )           /+---+
            Y \          (            )          / Z
              -------(PE2)            (PE4)-------
                AP2      (____________)

                  Figure 12: Dual-Homing Scenario

   In this case, the customer will request a VN between AP1, AP2, and
   AP3, specifying a dual-homing relationship between AP1 and AP2.  As
   a consequence, no traffic will flow between AP1 and AP2.  The dual-
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).

   The customer view is shown in Table 4.

                +----------+------------------------+
                |End Point | Access Link/VNAP Bw    |
      +---------+----------+----------+-------------+-----------+
      |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
      +---------+----------+----------+-------------+-----------+
      |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
      | -VNAP1.9|          |   5Gbps  |    N.A.     |  VNAP2.9  |
      +---------+----------+----------+-------------+-----------+
      |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
      | -VNAP2.9|          |   5Gbps  |    N.A.     |  VNAP1.9  |
      +---------+----------+----------+-------------+-----------+
      |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
      | -VNAP3.9|          |   5Gbps  |    N.A.     |   NONE    |
      +---------+----------+----------+-------------+-----------+

         Table 4: Dual-Homing - Customer View after VN Creation

7. Advanced ACTN Application: Multi-Destination Service

   A further advanced application of ACTN is in the case of Data
   Center selection, where the customer requires the Data Center
   selection to be based on the network status; this is referred to as
   Multi-Destination in [ACTN-REQ].  In terms of ACTN, a CNC could
   request a connectivity service (virtual network) between a set of
   source APs and destination APs and leave it up to the network
   (MDSC) to decide which source and destination access points should
   be used to set up the connectivity service (virtual network).  The
   candidate list of source and destination APs is decided by a CNC
   (or an entity outside of ACTN) based on certain factors which are
   outside the scope of ACTN.

   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions such as orchestration or service
   setup requirements.  These further actions are outside the scope of
   ACTN.

   Consider a case as shown in Figure 13, where three data centers are
   available, but the customer requires the data center selection to
   be based on the network status and the connectivity service setup
   between AP1 (CE1) and one of the destination APs (AP2 (DC-A), AP3
   (DC-B), and AP4 (DC-C)).  The MDSC (in coordination with PNCs)
   would select the best destination AP based on the constraints,
   optimization criteria, policies, etc., and set up the connectivity
   service (virtual network).

               -------                 -------
             (         )             (         )
            -           -           -           -
    +---+  (             )         (             )   +----+
    |CE1|---+--( Domain X )--------( Domain Y    )---+---|DC-A|
    +---+   |  (          )        (             )   |   +----+
           AP1  -        -          -           -   AP2
                 (      )            (       )
                  --+---              ---+---
                    |                    |
                AP3-+                AP4-+
                    |                    |
                 +----+               +----+
                 |DC-B|               |DC-C|
                 +----+               +----+

       Figure 13: End-Point Selection Based on Network Status

7.1. Pre-Planned End Point Migration

   Furthermore, in the case of Data Center selection, the customer
   could request that a backup DC be selected, such that in case of
   failure, another DC site could provide hot standby protection.  As
   shown in Figure 14, DC-C is selected as a backup for DC-A.  Thus,
   the VN should be set up by the MDSC to include primary connectivity
   between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
   between AP1 (CE1) and AP4 (DC-C).

               -------                 -------
             (         )             (         )
            -           -           -           -
    +---+  (             )         (             )   +----+
    |CE1|---+--( Domain X )--------( Domain Y    )---+---|DC-A|
    +---+   |  (          )        (             )   |   +----+
           AP1  -        -          -           -   AP2    |
                 (      )            (       )             |
                  --+---              ---+---              |
                    |                    |                 |
                AP3-+                AP4-+            HOT STANDBY
                    |                    |                 |
                 +----+               +----+               |
                 |DC-B|               |DC-C|<---------------
                 +----+               +----+

             Figure 14: Pre-planned End-Point Migration
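   As a non-normative illustration of the selection in Figures 13 and
   14, the sketch below ranks the candidate destination APs by a
   network cost that the MDSC is assumed to have computed in
   coordination with the PNCs, keeping the runner-up as the hot
   standby backup.  The costs are illustrative only.

      # Network cost computed by the MDSC for each candidate
      # destination AP (illustrative values).
      costs = {"AP2 (DC-A)": 12, "AP3 (DC-B)": 17, "AP4 (DC-C)": 14}

      ranked = sorted(costs, key=costs.get)
      primary, backup = ranked[0], ranked[1]
      print(f"primary: {primary}, hot standby backup: {backup}")
      # primary: AP2 (DC-A), hot standby backup: AP4 (DC-C)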
7.2. On the Fly End-Point Migration

   Compared to pre-planned end-point migration, on-the-fly end-point
   selection is dynamic in that the migration is not pre-planned but
   is decided based on network conditions.  Under this scenario, the
   MDSC would monitor the network (based on the VN SLA) and notify the
   CNC in cases where some other destination AP would be a better
   choice based on the network parameters.  The CNC should instruct
   the MDSC when it is suitable to update the VN with the new AP, if
   that is required.

8. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow customers to request virtual
   connectivity across server network resources.  As ACTN supports
   multiple customers, each with its own view of and control of a
   virtual network built on the server network, the network operator
   will need to partition (or "slice") their network resources and
   manage the resources accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservation of client and network layer connectivity.
   It will also need to provide performance monitoring and control of
   traffic engineered resources.  The management requirements may be
   categorized as follows:

   . Management of external ACTN protocols

   . Management of internal ACTN interfaces/protocols

   . Management and monitoring of ACTN components

   . Configuration of policy to be applied across the ACTN system

   The ACTN framework and interfaces are defined to enable traffic
   engineering for virtual networks.  Network operators may have other
   Operations, Administration, and Maintenance (OAM) tasks for service
   fulfillment, optimization, and assurance beyond traffic
   engineering.  The realization of OAM beyond abstraction and control
   of traffic engineered networks is not considered in this document.

8.1. Policy

   Policy is an important aspect of ACTN control and management.
   Policies are used via the components and interfaces, during
   deployment of the service, to ensure that the service is compliant
   with agreed policy factors and variations (often described in
   SLAs); these include, but are not limited to: connectivity,
   bandwidth, geographical transit, technology selection, security,
   resilience, and economic cost.

   Depending on the deployment of the ACTN architecture, some policies
   may have local or global significance.  That is, certain policies
   may be ACTN component specific in scope, while others may have
   broader scope and interact with multiple ACTN components.  Two
   examples are provided below; a sketch of how the first might be
   enforced follows the list.

   . A local policy might limit the number, type, size, and scheduling
     of virtual network services a customer may request via its CNC.
     This type of policy would be implemented locally on the MDSC.

   . A global policy might constrain certain customer types (or
     specific customer applications) to only use certain MDSCs and be
     restricted to physical network types managed by the PNCs.  A
     global policy agent would govern these types of policies.
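   As a non-normative illustration of the first example, the sketch
   below shows an MDSC-local admission check against per-customer
   limits on the number and size of virtual network services.  The
   policy fields and limits are hypothetical.

      # Local policy on the MDSC, keyed by customer (illustrative).
      LOCAL_POLICY = {
          "customer-1": {"max_vns": 4, "max_bw_gbps": 10},
      }

      # Virtual network services currently active per customer.
      active_vns = {"customer-1": [{"vns_id": 9, "bw_gbps": 1}]}

      def admit(customer, requested_bw_gbps):
          policy = LOCAL_POLICY[customer]
          current = active_vns.get(customer, [])
          if len(current) >= policy["max_vns"]:
              return False  # too many VNS instances for this customer
          if requested_bw_gbps > policy["max_bw_gbps"]:
              return False  # request exceeds the per-VNS bandwidth cap
          return True

      print(admit("customer-1", requested_bw_gbps=2))   # True
      print(admit("customer-1", requested_bw_gbps=40))  # False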
The objective of this section is to discuss the applicability of
ACTN policy: requirements, components, interfaces, and examples.
This section provides an analysis and does not mandate a specific
method for enforcing policy or the type of policy agent that would
be responsible for propagating policies across the ACTN components.
It does highlight examples of how policy may be applied in the
context of ACTN, but it is expected that further discussion will be
required in an applicability or solution-specific document.

8.2. Policy Applied to the Customer Network Controller

A virtual network service for a customer application will be
requested by the CNC.  The request will reflect the application
requirements and specific service needs, including bandwidth,
traffic type, and survivability.  Furthermore, application access
and the type of virtual network service requested by the CNC will
need to adhere to specific access control policies.

8.3. Policy Applied to the Multi Domain Service Coordinator

A key objective of the MDSC is to support the customer's expression
of the application connectivity request, via its CNC, as a set of
desired business needs; therefore, policy will play an important
role.

Once authorized, the virtual network service will be instantiated
via the CNC-MDSC Interface (CMI).  It will reflect the customer
application and connectivity requirements and the specific service
transport needs.  The CNC and the MDSC components will have agreed
on connectivity end-points; use of these end-points should be
defined as a policy expression when setting up or augmenting virtual
network services.  Ensuring that permissible end-points are defined
for CNCs and applications will require the MDSC to maintain a
registry of permissible connection points for CNCs and application
types.

Conflicts may occur when virtual network service optimization
criteria are in competition.  For example, to meet objectives for
service reachability, a request may require an interconnection point
between multiple physical networks; however, this might break a
confidentiality policy requirement of a specific type of end-to-end
service.  Thus, an MDSC may have to balance a number of constraints
on a service request and between different requested services.  It
may also have to balance requested services against operational
norms for the underlying physical networks.  This balancing may be
resolved using configured policy and using hard and soft policy
constraints.

8.4. Policy Applied to the Provisioning Network Controller

The PNC is responsible for configuring the network elements,
monitoring physical network resources, and exposing connectivity
(direct or abstracted) to the MDSC.  It is therefore expected that
policy will dictate what connectivity information will be exported
from the PNC to the MDSC via the MDSC-PNC Interface (MPI).
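A non-normative sketch of such an export policy is shown below.  The
link attributes, the "confidential" marking, and the abstraction
behavior are assumptions made for illustration; actual MPI topology
exchange would use data models such as those referenced in
[TE-Topo].

   # Non-normative sketch: policy-driven filtering of the
   # connectivity a PNC exports to the MDSC over the MPI.

   def export_topology(links, mpi_policy):
       # Apply a locally configured export policy before topology
       # is sent to the MDSC: drop confidential links and, if the
       # policy requires abstraction, hide internal node identities.
       exported = []
       for link in links:
           if link.get("confidential"):
               continue                  # never exported over the MPI
           view = dict(link)
           if mpi_policy.get("abstracted"):
               view["src"] = "abstract-node"   # hide internal detail
               view["dst"] = "abstract-node"
           exported.append(view)
       return exported

   # Example: one exportable link and one confidential link.
   links = [{"src": "n1", "dst": "n2", "bw_gbps": 100},
            {"src": "n2", "dst": "n3", "bw_gbps": 10,
             "confidential": True}]
   print(export_topology(links, {"abstracted": True}))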
Policy interactions may arise when a PNC determines that it cannot
compute a requested path from the MDSC, or notices that (per a
locally configured policy) the network is low on resources (for
example, the capacity on key links becomes exhausted).  In either
case, the PNC will be required to notify the MDSC, which may (again
per policy) act to construct a virtual network service across
another physical network topology.

Furthermore, additional forms of policy-based resource management
will be required to provide virtual network service performance,
security, and resilience guarantees.  This will likely be
implemented via a local policy agent and additional protocol
methods.

9. Security Considerations

The ACTN framework described in this document defines key components
and interfaces for managed traffic engineered networks.  Securing
the request and control of resources, the confidentiality of the
information, and the availability of function should all be critical
security considerations when deploying and operating ACTN platforms.

Several distributed ACTN functional components are required, and
implementations should consider encrypting data that flows between
components, especially when they are implemented at remote nodes and
regardless of whether these data flows are on external or internal
network interfaces.

The ACTN security discussion is further split into two specific
categories described in the following sub-sections:

. Interface between the Customer Network Controller and Multi
  Domain Service Coordinator (MDSC), the CNC-MDSC Interface (CMI)

. Interface between the Multi Domain Service Coordinator and
  Provisioning Network Controller (PNC), the MDSC-PNC Interface
  (MPI)

From a security and reliability perspective, ACTN may encounter many
risks, such as malicious attacks and rogue elements attempting to
connect to the various ACTN components.  Furthermore, some ACTN
components represent a single point of failure and threat vector,
and the eavesdropping of communication between different ACTN
components, as well as policy conflicts, must also be managed.

The conclusion is that all protocols used to realize the ACTN
framework should have rich security features, and customer,
application, and network data should be stored in encrypted data
stores.  Additional security risks may still exist.  Therefore,
discussion and applicability of specific security functions and
protocols will be better described in documents that are use case
and environment specific.

9.1. CNC-MDSC Interface (CMI)

Data stored by the MDSC will reveal details of the virtual network
services and of which CNC and customer/application is consuming the
resource.  The data stored must therefore be considered as a
candidate for encryption.

CNC access rights to an MDSC must be managed.  The MDSC must
allocate resources properly, and methods to prevent policy
conflicts, resource wastage, and denial-of-service attacks on the
MDSC by rogue CNCs should also be considered.

The CMI will likely be an external protocol interface.  Suitable
authentication and authorization of each CNC connecting to the MDSC
will be required, especially as these are likely to be implemented
by different organizations and on separate functional nodes.  Use of
AAA-based mechanisms would also provide role-based authorization
methods, so that only authorized CNCs may access the different
functions of the MDSC.
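The following non-normative Python sketch illustrates the kind of
role-based authorization check described above.  The role names,
operations, and credential check are placeholders; a deployment
would integrate a real AAA infrastructure rather than this toy
logic.

   # Non-normative sketch: role-based authorization of CNCs at the
   # CMI, in the spirit of the AAA-based mechanisms noted above.

   ROLE_PERMISSIONS = {
       "vn-operator": {"create-vn", "modify-vn", "delete-vn",
                       "read-vn"},
       "vn-viewer": {"read-vn"},
   }

   def authorize_cmi_call(cnc_identity, role, operation,
                          authenticate):
       # Authenticate the CNC, then check the requested MDSC
       # operation against the role's permission set.
       if not authenticate(cnc_identity):
           raise PermissionError("CNC authentication failed")
       if operation not in ROLE_PERMISSIONS.get(role, set()):
           raise PermissionError(
               f"role {role!r} may not perform {operation!r}")
       return True

   # Example: a read-only CNC may query but not modify a VN.
   authorize_cmi_call("cnc-17", "vn-viewer", "read-vn",
                      authenticate=lambda ident: ident == "cnc-17")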
9.2. MDSC-PNC Interface (MPI)

Where the MDSC must interact with multiple (distributed) PNCs, a
PKI-based mechanism is suggested, such as building a TLS or HTTPS
connection between the MDSC and the PNCs, to ensure trust between
the physical network layer control components and the MDSC.

Which MDSC a PNC exports topology information to, and the level of
detail (full or abstracted), should also be authenticated, and
specific access restrictions and topology views should be
configurable and/or policy-based.

10. IANA Considerations

This document has no actions for IANA.

11. References

11.1. Informative References

[RFC2702] Awduche, D., et al., "Requirements for Traffic Engineering
          Over MPLS", RFC 2702, September 1999.

[RFC3945] Mannie, E. (Ed.), "Generalized Multi-Protocol Label
          Switching (GMPLS) Architecture", RFC 3945, October 2004.

[RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
          Computation Element (PCE)-Based Architecture", RFC 4655,
          August 2006.

[RFC5654] Niven-Jenkins, B. (Ed.), Brungard, D. (Ed.), and M. Betts
          (Ed.), "Requirements of an MPLS Transport Profile", RFC
          5654, September 2009.

[RFC7149] Boucadair, M. and C. Jacquenet, "Software-Defined
          Networking: A Perspective from within a Service Provider
          Environment", RFC 7149, March 2014.

[RFC7926] Farrel, A. (Ed.), "Problem Statement and Architecture for
          Information Exchange between Interconnected Traffic-
          Engineered Networks", RFC 7926, July 2016.

[ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
          1.1, ONF TR-521, June 2016.

[Centralized] Farrel, A., et al., "An Architecture for Use of PCE
          and PCEP in a Network with Central Control", draft-ietf-
          teas-pce-central-control, work in progress.

[Service-YANG] Lee, Y., Dhody, D., and D. Ceccarelli, "Traffic
          Engineering and Service Mapping Yang Model", draft-lee-
          teas-te-service-mapping-yang, work in progress.

[ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN
          Operation", draft-lee-teas-actn-vn-yang, work in progress.

[ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and
          Control of TE Networks", draft-ietf-teas-actn-
          requirements, work in progress.

[TE-Topo] Liu, X., et al., "YANG Data Model for TE Topologies",
          draft-ietf-teas-yang-te-topo, work in progress.
12. Contributors

Adrian Farrel
Old Dog Consulting
Email: adrian@olddog.co.uk

Italo Busi
Huawei
Email: Italo.Busi@huawei.com

Khuzema Pithewan
Infinera
Email: kpithewan@infinera.com

Michael Scharf
Nokia
Email: michael.scharf@nokia.com

Luyuan Fang
eBay
Email: luyuanf@gmail.com

Diego Lopez
Telefonica I+D
Don Ramon de la Cruz, 82
28006 Madrid, Spain
Email: diego@tid.es

Sergio Belotti
Alcatel Lucent
Via Trento, 30
Vimercate, Italy
Email: sergio.belotti@nokia.com

Daniel King
Lancaster University
Email: d.king@lancaster.ac.uk

Dhruv Dhody
Huawei Technologies
Divyashree Techno Park, Whitefield
Bangalore, Karnataka 560066
India
Email: dhruv.ietf@gmail.com

Gert Grammel
Juniper Networks
Email: ggrammel@juniper.net

Authors' Addresses

Daniele Ceccarelli
Ericsson
Torshamnsgatan 48
Stockholm, Sweden
Email: daniele.ceccarelli@ericsson.com

Young Lee
Huawei Technologies
5340 Legacy Drive
Plano, TX 75023, USA
Phone: (469)277-5838
Email: leeyoung@huawei.com

APPENDIX A - Example of MDSC and PNC Functions Integrated in a
Service/Network Orchestrator

This appendix provides an example of a possible deployment scenario
in which a Service/Network Orchestrator includes a number of
functions.  In the example below, the orchestrator hosts the PNC
functions for Domain 2 together with the MDSC functions that
coordinate PNC1 (hosted in a separate domain controller) and PNC2
(co-hosted in the network orchestrator).

                    Customer
                    +-------------------------------+
                    |          +-----+              |
                    |          | CNC |              |
                    |          +-----+              |
                    +-------------|-----------------+
                                  |
   Service/Network                | CMI
   Orchestrator                   |
                    +-------------|------------------+
                    |      +------+   MPI   +------+ |
                    |      | MDSC |---------| PNC2 | |
                    |      +------+         +------+ |
                    +----------|----------------|----+
                               | MPI            |
   Domain Controller           |                |
                    +----------|-----+          |
                    |       +-----+  |          | SBI
                    |       |PNC1 |  |          |
                    |       +-----+  |          |
                    +----------|-----+          |
                               v  SBI           v
                          -------            -------
                         (       )          (       )
                        -         -        -         -
                       (           )      (           )
                       ( Domain 1  )----( Domain 2  )
                       (           )      (           )
                        -         -        -         -
                         (       )          (       )
                          -------            -------
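To make the division of labor concrete, the following non-normative
Python sketch models the deployment in the figure above: the MDSC
function and PNC2 are co-hosted in the orchestrator, while PNC1 is
reached over the MPI in a separate domain controller.  All class and
method names are illustrative assumptions.

   # Non-normative sketch of the deployment above: an orchestrator
   # that co-hosts the MDSC function and PNC2, while PNC1 runs in a
   # separate domain controller reached over the MPI.

   class PNC:
       def __init__(self, domain):
           self.domain = domain

       def setup_segment(self, a_end, z_end):
           # Provision the intra-domain segment via the SBI (stub).
           return f"{self.domain}: segment {a_end}->{z_end} provisioned"

   class RemotePNC(PNC):
       def setup_segment(self, a_end, z_end):
           # In a real deployment this would be an MPI protocol call
           # to the external domain controller hosting PNC1.
           return f"MPI call -> {super().setup_segment(a_end, z_end)}"

   class Orchestrator:
       # MDSC function coordinating one remote and one co-hosted PNC.
       def __init__(self):
           self.pncs = {"Domain 1": RemotePNC("Domain 1"),
                        "Domain 2": PNC("Domain 2")}   # co-hosted

       def setup_vn(self, segments):
           return [self.pncs[dom].setup_segment(a, z)
                   for dom, a, z in segments]

   orch = Orchestrator()
   for msg in orch.setup_vn([("Domain 1", "CE1", "border-X-Y"),
                             ("Domain 2", "border-X-Y", "DC-A")]):
       print(msg)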