TEAS Working Group                               Daniele Ceccarelli (Ed)
Internet Draft                                                  Ericsson
Intended status: Informational                            Young Lee (Ed)
Expires: August 2017                                              Huawei

                                                        February 2, 2017

  Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-03

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane. They
   also have a range of management and provisioning protocols to
   configure and activate network resources. These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 2, 2017.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology
   2. Business Model of ACTN
      2.1. Customers
      2.2. Service Providers
      2.3. Network Providers
   3. ACTN Architecture
      3.1. Customer Network Controller
      3.2. Multi Domain Service Coordinator
      3.3. Physical Network Controller
      3.4. ACTN Interfaces
   4. VN Creation Process
      4.1. VN Creation Example
   5. Access Points and Virtual Network Access Points
      5.1. Dual Homing Scenario
   6. End Point Selection Based On Network Status
      6.1. Pre-Planned End Point Migration
      6.2. On the Fly End Point Migration
   7. Manageability Considerations
      7.1. Policy
      7.2. Policy applied to the Customer Network Controller
      7.3. Policy applied to the Multi Domain Service Coordinator
      7.4. Policy applied to the Physical Network Controller
   8. Security Considerations
      8.1. Interface between the Customer Network Controller and Multi
           Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)
      8.2. Interface between the Multi Domain Service Coordinator and
           Physical Network Controller (PNC), MDSC-PNC Interface (MPI)
   9. IANA Considerations
   10. References
      10.1. Informative References
   11. Contributors
   Authors' Addresses

1. Introduction

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane,
   including distributed signaling for path setup and protection,
   centralized path computation for planning and traffic engineering,
   and a range of management and provisioning protocols to configure
   and activate network resources. These mechanisms represent key
   technologies for enabling flexible and dynamic networking.
   The term Traffic Engineered network is used in this document to
   refer to a network that uses any connection-oriented technology
   under the control of a distributed or centralized control plane to
   support dynamic provisioning of end-to-end connectivity. Some
   examples of networks that are in scope of this definition are
   optical networks, MPLS Transport Profile (MPLS-TP) networks
   [RFC5654], and MPLS Traffic Engineering (MPLS-TE) networks
   [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane. This separation of the control plane from the data
   plane has already been achieved with the development of MPLS/GMPLS
   [GMPLS] and the Path Computation Element (PCE) [RFC4655] for
   TE-based networks. One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks. Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control. For TE-based networks, PCE is essentially equivalent to a
   logically centralized path computation function.

   Three key aspects that need to be solved by SDN are:

   . Separation of service requests from service delivery so that the
     orchestration of a network is transparent from the point of view
     of the customer but remains responsive to the customer's services
     and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a
     TE network to produce selective information that represents the
     potential ability to connect across the domain. The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies, capabilities,
     and topology so that it can be used to plan and deliver network
     services in a uniform way.

   . Coordination of resources across multiple domains and multiple
     layers to provide end-to-end services regardless of whether the
     domains use SDN or not.

   As networks evolve, the need to provide separated service
   request/orchestration and resource abstraction has emerged as a key
   requirement for operators. In order to support multiple clients,
   each with its own view of and control of the server network, a
   network operator needs to partition (or "slice") the network
   resources. The resulting slices can be assigned to each client for
   guaranteed usage, which is a step further than shared use of common
   network resources.

   Furthermore, each network represented to a client can be built from
   abstractions of the underlying networks so that, for example, a
   link in the client's network is constructed from a path or
   collection of paths in the underlying network.

   We call the set of management and control functions used to provide
   these features Abstraction and Control of Traffic Engineered
   Networks (ACTN).

   Particular attention needs to be paid to the multi-domain case:
   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service. This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) as a single virtualized network.
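   As a purely illustrative, non-normative sketch of the abstraction
   described above, the following Python fragment shows how a
   multi-hop path in an underlying network might be presented to a
   client as a single abstract link. The data structures and function
   names are hypothetical assumptions and are not defined by ACTN.

      # Non-normative sketch: presenting an underlying path as a
      # single abstract link. All names are illustrative only.

      underlying_links = {
          ("A", "B"): 10,  # available bandwidth, in Gbps
          ("B", "C"): 40,
          ("C", "D"): 10,
      }

      def abstract_link(path):
          """Collapse a path of underlying links into one
          client-visible link. Intermediate nodes and technologies
          are hidden; only the end points and the end-to-end
          capability (here, the bottleneck bandwidth) are exposed.
          """
          hops = list(zip(path, path[1:]))
          bandwidth = min(underlying_links[h] for h in hops)
          return {"ends": (path[0], path[-1]),
                  "available_bw_gbps": bandwidth}

      # The client sees a single A-D link with 10 Gbps; nodes B and
      # C and the individual hops are not visible.
      print(abstract_link(["A", "B", "C", "D"]))

   Applied recursively, the same collapsing step can be used at each
   layer of a controller hierarchy.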
   Network virtualization refers to allowing the customers of network
   operators (see Section 2.1) to utilize a certain amount of network
   resources as if they own them, and thus to control their allocated
   resources with higher layer or application processes that enable
   the resources to be used optimally. More flexible, dynamic customer
   control capabilities are added to the traditional VPN along with a
   customer-specific virtual network view. Customers control a view of
   the virtual network resources specifically allocated to each of
   them. This view is called a virtual network topology. Such a view
   may be specific to a service, to the set of consumed resources, or
   to a particular customer.

   Network abstraction refers to presenting a customer with a view of
   the operator's network in such a way that the links and nodes in
   that view constitute an aggregation or abstraction of the real
   resources in the operator's network in a way that is independent of
   the underlying network technologies, capabilities, and topology.
   The customer operates an abstract network as if it were their own
   network, but the operational commands are mapped onto the
   underlying network through coordination among the domains.

   The customer controller for a virtual or abstract network is
   envisioned to support many distinct applications. This means that
   there may be a further level of virtualization that provides a view
   of resources in the customer's virtual network for use by an
   individual application.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application or service [ONF-ARCH].

   . Slicing of infrastructure to meet specific customers' service
     requirements.

   . Creation of a virtualized environment allowing operators to view
     and control multi-domain networks as a single virtualized
     network.

   . The possibility of providing a customer with a virtualized
     network.

   . A virtualization/mapping network function that adapts the
     customer's requests for control of the virtual resources that
     have been allocated to the customer to control commands applied
     to the underlying network resources. Such a function performs the
     necessary mapping, translation, isolation and security/policy
     enforcement, etc.

   . The presentation to customers of networks as a virtualized
     topology via open and programmable interfaces. This allows for
     the recursion of controllers in a customer-provider relationship.

1.1. Terminology

   The following terms are used in this document. Some of them are
   newly defined, while others reference existing definitions:

   . Node: A node is a vertex on the graph representation of a TE
     topology. In a physical network, a node corresponds to a network
     element (NE). In a sliced network, a node is some subset of the
     capabilities of a physical network element. In an abstract
     network, a node (sometimes called an abstract node) is a
     representation as a single vertex in the topology of the abstract
     network of one or more nodes and their connecting links from the
     physical network.
     The concept of a node represents the ability to connect from any
     access to the node (a link end) to any other access to that node,
     although "limited cross-connect capabilities" may also be defined
     to restrict this functionality. Just as network slicing and
     network abstraction may be applied recursively, so a node in a
     topology may be created by applying slicing or abstraction on the
     nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology. Two nodes connected by a link are said to be "adjacent"
     in the TE topology. In a physical network, a link corresponds to
     a physical connection. In a sliced topology, a link is some
     subset of the capabilities of a physical connection. In an
     abstract network, a link (sometimes called an abstract link) is a
     representation as an edge in the topology of the abstract network
     of one or more links and the nodes they connect from the physical
     network. Abstract links may be realized by Label Switched Paths
     (LSPs) across the physical network that may be pre-established or
     could be only potentially achievable. Just as network slicing and
     network abstraction may be applied recursively, so a link in a
     topology may be created by applying slicing or abstraction on the
     links in the underlying topology. While most links are point-to-
     point, connecting just two nodes, the concept of a multi-access
     link exists where more than two nodes are collectively adjacent
     and data sent on the link by one node will be equally delivered
     to all other nodes connected by the link.

   . PNC: A Physical Network Controller is a domain controller that is
     responsible for controlling devices or NEs under its direct
     control. This can be an SDN controller, a Network Management
     System (NMS), an Element Management System (EMS), an active PCE,
     or any other means of dynamically controlling a set of nodes that
     implements a north-bound interface (NBI) compliant with the ACTN
     specification.

   . PNC domain: A PNC domain includes all the resources under the
     control of a single PNC. It can be composed of different routing
     domains and administrative domains, and the resources may come
     from different layers. The interconnection between PNC domains
     can be a link or a node.

             _______      Border Link      _______
           _(       )==================(       )_
          (           )    ------    (           )
         (     PNC     )  |      |  (     PNC     )
         (  Domain X   )  |      |  (  Domain Y   )
         (             )  |      |  (             )
          (_         _)    ------    (_         _)
            (_______)      Border      (_______)
                            Node

                  Figure 1: PNC Domain Borders

   . Virtual Network (VN): A VN is a customer view of the TE network.
     It is presented by the provider as a set of physical and/or
     abstracted resources. Depending on the agreement between client
     and provider, various VN operations and VN views are possible, as
     follows:

     o VN Creation - A VN could be pre-configured and created via
       offline negotiation between customer and provider. In other
       cases, the VN could also be created dynamically based on a
       request from the customer with given SLA attributes which
       satisfy the customer's objectives.

     o Dynamic Operations - The VN could be further modified or
       deleted based on customer requests. The customer can further
       act upon the virtual network resources to perform end-to-end
       tunnel management (set-up/release/modify). These changes will
       result in subsequent LSP management at the operator's level.
     o VN View:

       a. The VN can be seen as a set of end-to-end tunnels from a
          customer point of view, where each tunnel is referred to as
          a VN member. Each VN member can then be formed by recursive
          slicing or abstraction of paths in underlying networks. Such
          end-to-end tunnels may comprise customer end points, access
          links, intra-domain paths, and inter-domain links. In this
          view, a VN is thus a set of VN members.

       b. The VN can also be seen as a topology comprising physical,
          sliced, and abstract nodes and links. The nodes in this case
          include physical customer end points, border nodes, and
          internal nodes, as well as abstracted nodes. Similarly, the
          links include physical access links, inter-domain links, and
          intra-domain links, as well as abstract links. The abstract
          nodes and links in this view can be pre-negotiated or
          created dynamically.

   . Abstraction: This process is defined in [RFC7926].

   . Abstract Link: The term "abstract link" is defined in [RFC7926].

   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . Access link: A link between a customer node and a provider node.

   . Inter-domain link: A link between domains managed by different
     PNCs. The MDSC is in charge of managing inter-domain links.

   . Access Point (AP): An access point is used to keep
     confidentiality between the customer and the provider. It is a
     logical identifier shared between the customer and the provider,
     used to map the end points of the border nodes in both the
     customer and the provider networks. The AP can be used by the
     customer when requesting a VN service from the provider.

   . VN Access Point (VNAP): A VNAP is defined as the binding between
     an AP and a given VN, and is used to identify the portion of the
     access and/or inter-domain link dedicated to a given VN.

2. Business Model of ACTN

   The Virtual Private Network (VPN) [RFC4026] and Overlay Network
   (ON) models [RFC4208] are built on the premise that the network
   provider provides all virtual private or overlay networks to its
   customers. These models are simple to operate but have some
   disadvantages in accommodating the increasing need for flexible and
   dynamic network virtualization capabilities.

   There are three key entities in the ACTN model:

   - Customers
   - Service Providers
   - Network Providers

   These are described in the following sections.

2.1. Customers

   Within the ACTN framework, different types of customers may be
   taken into account depending on the type of their resource needs,
   and on their number and type of access. For example, it is possible
   to group them into two main categories:

   Basic Customer: Basic customers include fixed residential users,
   mobile users and small enterprises. Usually, the number of basic
   customers for a service provider is high: they require small
   amounts of resources and are characterized by steady requests
   (relatively time invariant). A typical request for a basic customer
   is for a bundle of voice services and internet access.
   Moreover, basic customers do not modify their services themselves:
   if a service change is needed, it is performed by the provider as a
   proxy, and the services generally have very few dedicated resources
   (such as for subscriber drop), with everything else shared on the
   basis of some Service Level Agreement (SLA), which is usually
   best-effort.

   Advanced Customer: Advanced customers typically include
   enterprises, governments and utilities. Such customers can ask for
   both point-to-point and multipoint connectivity with high resource
   demands varying significantly in time and from customer to
   customer. This is one of the reasons why a bundled service offering
   is not enough and it is desirable to provide each advanced customer
   with a customized virtual network service.

   Advanced customers may own dedicated virtual resources, or share
   resources. They may also have the ability to modify their service
   parameters within the scope of their virtualized environments. The
   primary focus of ACTN is advanced customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with
   different underlying objectives set by the network providers. To
   enable these customers to support flexible and dynamic
   applications, they need to control their allocated virtual network
   resources in a dynamic fashion, and that means that they need a
   view of the topology that spans all of the network providers.
   Customers of a given service provider can in turn offer a service
   to other customers in a recursive way.

2.2. Service Providers

   Service providers are the providers of virtual network services to
   their customers. Service providers may or may not own physical
   network resources (i.e., they may or may not be network providers
   as described in Section 2.3). When a service provider is the same
   as the network provider, this is similar to existing VPN models
   applied to a single provider. This approach works well when the
   customer maintains a single interface with a single provider. When
   a customer spans multiple independent network provider domains, it
   becomes hard to facilitate the creation of end-to-end virtual
   network services with this model.

   A more interesting case arises when network providers only provide
   infrastructure, while distinct service providers interface to the
   customers. In this case, service providers are themselves customers
   of the network infrastructure providers. One service provider may
   need relationships with multiple independent network providers, as
   its end-users span geographically across multiple network provider
   domains.

   The ACTN network model is predicated upon this three-tier model, as
   summarized in Figure 2:

                   +----------------------+
                   |       customer       |
                   +----------------------+
                              |
                              |  /\ Service/Customer specific
                              |  || Abstract Topology
                              |  ||
                   +----------------------+  E2E abstract
                   |   Service Provider   |  topology creation
                   +----------------------+
                      /        |        \
                     /         |         \  Network Topology
                    /          |          \ (raw or abstract)
                   /           |           \
 +------------------+ +------------------+ +------------------+
 |Network Provider 1| |Network Provider 2| |Network Provider 3|
 +------------------+ +------------------+ +------------------+

                   Figure 2: Three-tier model
   There can be multiple service providers to which a customer may
   interface.

   There are multiple types of service providers:

   . Data Center providers can be viewed as a service provider type as
     they own and operate data center resources for various WAN
     customers, and they can lease physical network resources from
     network providers.

   . Internet Service Providers (ISPs) are service providers of
     internet services to their customers while leasing physical
     network resources from network providers.

   . Mobile Virtual Network Operators (MVNOs) provide mobile services
     to their end-users without owning the physical network
     infrastructure.

2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers. The layered model described in this architecture
   separates the concerns of network providers and customers, with
   service providers acting as aggregators of customer requests.

3. ACTN Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is aligned with the ONF SDN architecture
   [ONF-ARCH] and presents a three-tier reference model. It allows for
   hierarchy and recursiveness not only of SDN controllers but also of
   traditionally controlled domains that use a control plane. It
   defines three types of controllers depending on the functionalities
   they implement. The main functionalities that are identified are:

   . Multi-domain coordination function: This function oversees the
     specific aspects of the different domains and builds a single
     abstracted end-to-end network topology in order to coordinate
     end-to-end path computation and path/service provisioning. Domain
     sequence path calculation/determination is also a part of this
     function.

   . Virtualization/Abstraction function: This function provides an
     abstracted view of the underlying network resources for use by
     the customer - a customer may be the client or a higher level
     controller entity. This function includes network path
     computation based on customer service connectivity request
     constraints, path computation based on the global network-wide
     abstracted topology, and the creation of an abstracted view of
     network slices allocated to each customer. These operations
     depend on customer-specific network objective functions and
     customer traffic profiles.

   . Customer mapping/translation function: This function maps
     customer requests/commands into network provisioning requests
     that can be sent to the Physical Network Controller (PNC)
     according to business policies provisioned statically or
     dynamically at the OSS/NMS. Specifically, it provides mapping and
     translation of a customer's service request into a set of
     parameters that are specific to a network type and technology
     such that the network configuration process is made possible.

   . Virtual service coordination function: This function translates
     customer service-related information into virtual network service
     operations in order to seamlessly operate virtual networks while
     meeting a customer's service requirements.
     In the context of ACTN, service/virtual service coordination
     includes a number of service orchestration functions such as
     multi-destination load balancing, guarantees of service quality,
     bandwidth and throughput. It also includes notifications for
     service fault and performance degradation and so forth.

   The types of controller defined in the ACTN architecture are shown
   in Figure 3 below and are as follows:

   . CNC - Customer Network Controller
   . MDSC - Multi Domain Service Coordinator
   . PNC - Physical Network Controller

   Figure 3 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface

   VPN customer NW      Mobile Customer      ISP NW service Customer
          |                    |                     |
      +-------+            +-------+             +-------+
      | CNC-A |            | CNC-B |             | CNC-C |
      +-------+            +-------+             +-------+
           \                   |                    /
            -----------        | CMI I/F -----------
                       \       |        /
                   +-----------------------+
                   |         MDSC          |
                   +-----------------------+
                       /       |        \
            -----------        | MPI I/F -----------
           /                   |                    \
      +-------+            +-------+             +-------+
      |  PNC  |            |  PNC  |             |  PNC  |
      +-------+            +-------+             +-------+
        | GMPLS  \             |                  /     \
        | trigger \            |                 /       \
     --------      ----        |                /         \
    (        )    (    )       |             -----       -----
   (  GMPLS   )   ( Phys)      |            (     )     (     )
   (  Physical)   ( Netw)    -----          ( Phys.)    ( Phys )
   (  Network )    ----     (     )         (  Net )    (  Net )
    (        )              ( Phys.)         -----       -----
     --------               (  Net )
                             -----

                  Figure 3: ACTN Control Hierarchy

3.1. Customer Network Controller

   A Virtual Network Service is instantiated by the Customer Network
   Controller via the CNC-MDSC Interface (CMI). As the Customer
   Network Controller directly interfaces to the applications, it
   understands multiple application requirements and their service
   needs. It is assumed that the Customer Network Controller and the
   MDSC have a common knowledge of the end-point interfaces based on
   their business negotiations prior to service instantiation.
   End-point interfaces refer to customer-network physical interfaces
   that connect customer premise equipment to network provider
   equipment.

3.2. Multi Domain Service Coordinator

   The Multi Domain Service Coordinator (MDSC) sits between the CNC
   that issues connectivity requests and the Physical Network
   Controllers (PNCs) that manage the physical network resources. The
   MDSC can be collocated with the PNC, especially in those cases
   where the service provider and the network provider are the same
   entity.

   The internal system architecture and building blocks of the MDSC
   are out of the scope of ACTN. Some examples can be found in the
   Application Based Network Operations (ABNO) architecture [RFC7491]
   and the ONF SDN architecture [ONF-ARCH].

   The MDSC is the only building block of the architecture that can
   implement all four main ACTN functions, i.e., multi-domain
   coordination, virtualization/abstraction, customer
   mapping/translation, and virtual service coordination. The first
   two functions of the MDSC, namely multi-domain coordination and
   virtualization/abstraction, are referred to as network
   control/coordination functions, while the last two functions,
   namely customer mapping/translation and virtual service
   coordination, are referred to as service control/coordination
   functions.
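   The following non-normative Python skeleton is included purely as
   an illustration of how the four MDSC functions, and their grouping
   into network and service control/coordination functions, might be
   organized. All class, method, and field names are hypothetical
   assumptions; ACTN does not define an implementation or data model.

      # Hypothetical MDSC skeleton; names are illustrative only.

      class PNC:
          """Stand-in for a per-domain Physical Network Controller."""
          def __init__(self, domain, topology):
              self.domain = domain
              self.topology = topology  # abstracted view over the MPI

          def provision(self, request):
              print(f"PNC {self.domain}: provisioning {request}")

      class MDSC:
          def __init__(self, pncs):
              self.pncs = pncs

          # ----- network control/coordination functions -----
          def coordinate_domains(self, src_domain, dst_domain):
              """Multi-domain coordination: choose a domain sequence."""
              domains = [p.domain for p in self.pncs]
              return domains[domains.index(src_domain):
                             domains.index(dst_domain) + 1]

          def abstract_topology(self):
              """Virtualization/abstraction: merge per-domain views."""
              return {p.domain: p.topology for p in self.pncs}

          # ----- service control/coordination functions -----
          def map_customer_request(self, vn_request):
              """Customer mapping/translation: CMI request to per-PNC
              provisioning requests."""
              sequence = self.coordinate_domains(
                  vn_request["src_domain"], vn_request["dst_domain"])
              for pnc in (p for p in self.pncs if p.domain in sequence):
                  pnc.provision({"vn_id": vn_request["vn_id"],
                                 "bandwidth": vn_request["bandwidth"]})

          def coordinate_virtual_service(self, vn_id, event):
              """Virtual service coordination: e.g., fault reporting."""
              print(f"notify CNC: VN {vn_id} event: {event}")

      mdsc = MDSC([PNC("X", {}), PNC("Y", {})])
      mdsc.map_customer_request({"vn_id": 9, "src_domain": "X",
                                 "dst_domain": "Y",
                                 "bandwidth": "1Gbps"})

   In this sketch the customer-facing request is deliberately more
   abstract than the per-domain requests sent to the PNCs, mirroring
   the levels of abstraction discussed in Section 3.4.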
   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   its business needs. The MDSC envelopes the instantiation of the
   right technology and network control to meet business criteria. In
   essence, it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   A hierarchy of MDSCs can be foreseen for scalability and
   administrative choices. In this case, another interface needs to be
   defined: the MMI (MDSC-MDSC Interface), as shown in Figure 4.

      +-------+            +-------+            +-------+
      | CNC-A |            | CNC-B |            | CNC-C |
      +-------+            +-------+            +-------+
           \                   |                   /
            ----------         | CMI I/F ----------
                      \        |        /
                  +-----------------------+
                  |         MDSC          |
                  +-----------------------+
                      /        |        \
            ----------         | MMI I/F ----------
           /                   |                   \
   +----------+           +----------+          +--------+
   |   MDSC   |           |   MDSC   |          |  MDSC  |
   +----------+           +----------+          +--------+
      |     \                | MPI I/F            /    \
      |      \               |                   /      \
   +-----+  +-----+       +-----+           +-----+  +-----+
   | PNC |  | PNC |       | PNC |           | PNC |  | PNC |
   +-----+  +-----+       +-----+           +-----+  +-----+

                Figure 4: Controller recursiveness

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and between MDSCs and PNCs (i.e., one
   parent MDSC and N child MDSCs, or one MDSC and N PNCs).

   In the case where there is a hierarchy of MDSCs, the interface
   above the top MDSC (i.e., the CMI) and the interface below the
   bottom MDSCs (i.e., the MPI) remain the same. The recursion of
   MDSCs in the middle layers within this hierarchy of MDSCs may take
   place via the MMI. Please see Section 3.4 for details of the ACTN
   interfaces.

   In addition, it could also be possible to have an M:1 relationship
   between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers).

3.3. Physical Network Controller

   The Physical Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and passing information about the topology (either raw
   or abstracted) to the MDSC.

   The internal architecture of the PNC, its building blocks, and the
   way it controls its domain are out of the scope of ACTN. Some
   examples can be found in the Application Based Network Operations
   (ABNO) architecture [RFC7491] and the ONF SDN architecture
   [ONF-ARCH].

   The PNC, in addition to being in charge of controlling the physical
   network, is able to implement two of the four main ACTN functions:
   the multi-domain coordination and virtualization/abstraction
   functions.

3.4. ACTN Interfaces

   To allow virtualization and multi-domain coordination, the network
   has to provide open, programmable interfaces, through which
   customer applications can create, replace and modify virtual
   network resources and services in an interactive, flexible and
   dynamic fashion while having no impact on other customers. Direct
   customer control of transport network elements and virtualized
   services is not perceived as a viable proposition for transport
   network providers due to security and policy concerns among other
   reasons.
   In addition, as discussed in Section 3.3, the network control plane
   for transport networks has been separated from the data plane and,
   as such, it is not viable for the customer to directly interface
   with transport network elements.

   Figure 5 depicts a high-level control and interface architecture
   for ACTN. A number of key ACTN interfaces exist for deployment and
   operation of ACTN-based networks. These are highlighted in Figure 5
   (ACTN Interfaces).

        --------------
       | Application  |
        --------------
              ^
              | I/F A                --------
              v                     (        )
        --------------             -          -
       |   Customer   |           (  Customer  )
       |   Network    |---------->(  Network   )
       |  Controller  |           (            )
        --------------             -          -
              ^                     (        )
              | I/F B                --------
              v
        --------------
       | Multi Domain |
       |   Service    |              --------
       |  Coordinator |             (        )
        --------------             -          -
              ^                   (  Physical  )
              | I/F C             (  Network   )
              v                   (            )
        --------------             -          -      --------
       |   Physical   |<--------->  (        )      (        )
       |   Network    |              --------      -          -
       |  Controller  |<------------------------->(  Physical  )
        --------------          I/F D             (  Network   )
                                                   (            )
                                                    -          -
                                                     (        )
                                                      --------

                     Figure 5: ACTN Interfaces

   The interfaces and functions are described below:

   . Interface A: A north-bound interface (NBI) that communicates the
     service request or application demand. A request includes
     specific service properties, including service type, topology,
     bandwidth, and constraint information.

   . Interface B: The CNC-MDSC Interface (CMI) is an interface between
     a CNC and an MDSC. It is used to request the creation of network
     resources, topology or services for the applications. Note that
     all service-related information conveyed via Interface A (i.e.,
     specific service properties, including service type, topology,
     bandwidth, and constraint information) needs to be transparently
     carried over this interface. The MDSC may also report potential
     network topology availability if queried for current capability
     by the CNC. The CMI is the interface with the highest level of
     abstraction, where the Virtual Networks are modelled and
     presented to the customer/CNC. Most of the information over this
     interface is technology agnostic, even if in some cases it should
     be possible to explicitly request a VN to be created at a given
     layer in the network (e.g., an ODU VN or an MPLS VN).

   . Interface C: The MDSC-PNC Interface (MPI) is an interface between
     an MDSC and a PNC. It communicates the creation requests for new
     connectivity or for bandwidth changes in the physical network. In
     multi-domain environments, the MDSC needs to establish multiple
     MPIs, one for each PNC, as there is one PNC responsible for
     control of each domain. The MPI could have different degrees of
     abstraction and present an abstracted topology hiding
     technology-specific aspects of the network, or convey
     technology-specific parameters to allow for path computation at
     the MDSC level. Please refer to the CCAMP Transport NBI work for
     the latter case [Transport NBI].

   . Interface D: The provisioning interface for creating forwarding
     state in the physical network, requested via the Physical Network
     Controller.

   The interfaces within the ACTN scope are B and C, while interfaces
   A and D are out of the scope of ACTN and are only shown in Figure 5
   to give a complete context of ACTN.
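   As a non-normative illustration of the service-related information
   that Interface A supplies and that the CMI must carry
   transparently, the sketch below encodes a VN creation request as a
   simple Python structure. The field names and values are
   hypothetical assumptions; ACTN does not define an encoding or data
   model for these interfaces.

      # Hypothetical CMI payload: a VN creation request from a CNC to
      # an MDSC. Field names are illustrative only.
      vn_request = {
          "vn_id": 9,
          "service_type": "ethernet-private-line",  # service type
          "layer": "ODU",                # optional layer hint (ODU VN)
          "end_points": ["AP1", "AP2"],  # APs agreed with the MDSC
          "bandwidth_gbps": 1,           # requested bandwidth
          "constraints": {
              "max_latency_ms": 20,      # constraint information
              "diversity": "node-disjoint",
          },
      }

      # The MDSC would translate this into one MPI provisioning
      # request per PNC domain crossed by the VN, at a lower level of
      # abstraction than the CMI request itself.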
   As previously stated in Section 3.2, there might be a third
   interface in the ACTN scope: the MMI. The MMI is a special case of
   the MPI and behaves similarly to an MPI to support general
   functions performed by the MDSCs, such as the abstraction function
   and the provisioning function. From an abstraction point of view,
   the top-level MDSC, which interfaces with the CNC, operates on a
   higher level of abstraction (i.e., a less granular level) than the
   lower-level MDSCs. As such, the MMI carries more abstract TE
   information than the MPI.

   Please note that, for all three interfaces, when technology-
   specific information needs to be included, this information is
   added on top of the general abstract topology. From the standpoint
   of general topology abstraction, all interfaces are still recursive
   in nature.

4. VN Creation Process

   The provider can present different levels of network abstraction to
   the customer, spanning from one extreme (say "black") where nothing
   except the Access Points (APs) is shown, to the other extreme (say
   "white") where an actual network topology is shown to the customer.
   There are shades of "grey" in between, where a number of abstract
   links and nodes can be shown.

   VN creation is composed of two phases: negotiation and
   implementation.

   Negotiation: In the case of grey/white topology abstraction, there
   is an initial phase in which the customer agrees with the provider
   on the type of topology to be shown (e.g., 10 virtual links and 5
   virtual nodes) with a given interconnectivity. This topology is
   assumed to be preconfigured off-line by the operator; what happens
   on-line is the ability to modify or delete elements of it (e.g., a
   virtual link). In the case of "black" abstraction, this negotiation
   phase does not happen because there is nothing to negotiate: the
   customer can only see the APs of the network.

   Implementation: In the case of black topology abstraction, the
   customer can ask for connectivity with given constraints/SLA
   between the APs, and LSPs/tunnels are created by the provider to
   satisfy the request. What the customer sees is only that its CEs
   are connected with a given SLA. In the case of grey/white topology
   abstraction, the customer creates its own LSPs according to the
   topology that was presented to it.

4.1. VN Creation Example

   This section illustrates how a VN creation process is conducted
   over a hierarchy of MDSCs via MMIs and MPIs, as shown in Figure 6.

                         +-----+
                         | CNC |  CNC wants to create a VN
                         +-----+  between CE A and CE B
                            |
                            |
                +-----------------------+
                |        MDSC 1         |      --o-o---o-o--
                +-----------------------+
                      /           \
      .. ..          /             \           .. ..
     (     ) (     )+--------+  +--------+    (     ) (     )
   --(o--o)-(o--o)--| MDSC 2 |  | MDSC 3 |  --(o--o)-(o--o)--
     (     ) (     )+--------+  +--------+    (     ) (     )
      .. ..           /    \      /    \       .. ..
                     /      \    /      \
                +-----+ +-----+ +-----+ +-----+
                |PNC 1| |PNC 2| |PNC 3| |PNC 4|
                +-----+ +-----+ +-----+ +-----+
                   |       |       |       |
                  ...     ...     ...     ...
                 (   )   (   )   (   )   (   )
    CE A o------(o-o-o)-(o-o-o)-(o-o-o)-(o-o-o)------o CE B
                 (   )   (   )   (   )   (   )
                  ...     ...     ...     ...

               Domain 1 Domain 2 Domain 3 Domain 4

     Figure 6: Illustration of topology abstraction granularity
                   levels in the MDSC hierarchy

   In the example depicted in Figure 6, there are four domains under
   the control of the respective PNCs, namely PNC 1, PNC 2, PNC 3 and
   PNC 4.
   Assume that MDSC 2 is controlling PNC 1 and PNC 2, while MDSC 3 is
   controlling PNC 3 and PNC 4. Let us assume that each of the PNCs
   provides a grey topology abstraction that presents only border
   nodes and border links. The abstract topology on which MDSC 2
   operates is shown on the left-hand side of MDSC 2 in Figure 6. It
   is basically a combination of the two topologies provided by PNC 1
   and PNC 2. Likewise, the abstract topology on which MDSC 3 operates
   is shown on the right-hand side of MDSC 3 in Figure 6. Both MDSC 2
   and MDSC 3 provide a grey topology abstraction in which each PNC
   domain is presented as one virtual node to the top-level MDSC 1.
   MDSC 1 then combines these two topologies provided by MDSC 2 and
   MDSC 3 to create the abstract topology on which it operates. MDSC 1
   sees the whole four-domain network as four virtual nodes connected
   via virtual links. This illustrates the point discussed in Section
   3.4: the top-level MDSC operates on a higher level of abstraction
   (i.e., a less granular level) than the lower-level MDSCs. As such,
   the MMI carries more abstract TE information than the MPI.

   In the process of creating a VN, the same principle applies. Let us
   assume that a customer wants to create a virtual network that
   connects its CE A and CE B, as depicted in Figure 6. Upon receipt
   of this request generated by the CNC, MDSC 1, based on the abstract
   topology at hand, determines that CE A is connected to a virtual
   node in domain 1 and that CE B is connected to a virtual node in
   domain 4. MDSC 1 further determines that domain 2 and domain 3 are
   interconnected to domain 1 and domain 4, respectively. MDSC 1 then
   partitions the original VN request from the CNC into two separate
   VN requests and makes a VN creation request to MDSC 2 and to MDSC
   3, respectively. For instance, MDSC 1 makes a VN request to MDSC 2
   to connect two virtual nodes. When MDSC 2 receives this VN request
   from MDSC 1, it further partitions it into two separate requests to
   PNC 1 and PNC 2, respectively. This illustrates that the VN
   creation request process takes place recursively over the MMI and
   the MPI.

5. Access Points and Virtual Network Access Points

   In order not to share unwanted topological information between the
   customer domain and the provider domain, a new entity is defined
   which is referred to as the Access Point (AP). See the definition
   of AP in Section 1.1.

   A customer node will use APs as the end points for the request of
   VNs, as shown in Figure 7.

                        -------------
                       (             )
                      -               -
           +---+  X  (                 )  Z  +---+
           |CE1|---+-(                 )-+---|CE2|
           +---+   | (                 ) |   +---+
                  AP1 -               - AP2
                       (             )
                        -------------

              Figure 7: APs definition customer view

   Let us take as an example the scenario shown in Figure 7. CE1 is
   connected to the network via a 10 Gbps link and CE2 via a 40 Gbps
   link. Before the creation of any VN between AP1 and AP2, the
   customer view can be summarized as shown in Table 1:

         +----------+------------------------+
         |End Point | Access Link Bandwidth  |
   +-----+----------+----------+-------------+
   |AP id| CE,port  | MaxResBw | AvailableBw |
   +-----+----------+----------+-------------+
   | AP1 |CE1,portX |  10Gbps  |    10Gbps   |
   +-----+----------+----------+-------------+
   | AP2 |CE2,portZ |  40Gbps  |    40Gbps   |
   +-----+----------+----------+-------------+

              Table 1: AP - customer view

   On the other hand, what the provider sees is shown in Figure 8.
             -------                -------
            (       )              (       )
           -         -            -         -
       W  ( +---+     )          (     +---+ )  Y
      -+--( |PE1| Dom.X )------( Dom.Y |PE2| )--+-
       |  ( +---+     )          (     +---+ )  |
      AP1  -         -            -         -  AP2
            (       )              (       )
             -------                -------

              Figure 8: Provider view of the AP

   This results in the summarization shown in Table 2.

         +----------+------------------------+
         |End Point | Access Link Bandwidth  |
   +-----+----------+----------+-------------+
   |AP id| PE,port  | MaxResBw | AvailableBw |
   +-----+----------+----------+-------------+
   | AP1 |PE1,portW |  10Gbps  |    10Gbps   |
   +-----+----------+----------+-------------+
   | AP2 |PE2,portY |  40Gbps  |    40Gbps   |
   +-----+----------+----------+-------------+

              Table 2: AP - provider view

   A Virtual Network Access Point (VNAP) is defined as the binding
   between an AP and a given VN; it is used to allow different VNs to
   start from the same AP. It also allows for traffic engineering on
   the access and/or inter-domain links (e.g., keeping track of
   bandwidth allocation). A different VNAP is created on an AP for
   each VN.

   In the simple scenario depicted above, suppose we want to create
   two virtual networks: the first with VN identifier 9 between AP1
   and AP2 with a bandwidth of 1 Gbps, and the second with VN
   identifier 5, again between AP1 and AP2, with a bandwidth of
   2 Gbps.

   The provider view would evolve as shown in Table 3.

             +----------+------------------------+
             |End Point | Access Link/VNAP Bw    |
   +---------+----------+----------+-------------+
   |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
   +---------+----------+----------+-------------+
   |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
   | -VNAP1.9|          |   1Gbps  |    N.A.     |
   | -VNAP1.5|          |   2Gbps  |    N.A.     |
   +---------+----------+----------+-------------+
   |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
   | -VNAP2.9|          |   1Gbps  |    N.A.     |
   | -VNAP2.5|          |   2Gbps  |    N.A.     |
   +---------+----------+----------+-------------+

     Table 3: AP and VNAP - provider view after VN creation

5.1. Dual Homing Scenario

   Often there is a dual homing relationship between a CE and a pair
   of PEs. This case needs to be supported by the definition of VNs,
   APs and VNAPs. Suppose CE1 is connected to two different PEs in the
   operator domain via AP1 and AP2, and that the customer needs
   5 Gbps of bandwidth between CE1 and CE2. This is shown in Figure 9.

                           ____________
                AP1       (            )       AP3
               -------(PE1)            (PE3)-------
            W /           (            )           \ X
         +---+/           (            )            \+---+
         |CE1|            (            )             |CE2|
         +---+\           (            )            /+---+
            Y \           (            )           / Z
               -------(PE2)            (PE4)-------
                AP2       (____________)

                    Figure 9: Dual homing scenario

   In this case, the customer will request a VN between AP1, AP2 and
   AP3, specifying a dual homing relationship between AP1 and AP2. As
   a consequence, no traffic will flow between AP1 and AP2. The dual
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).

   The customer view after VN creation is shown in Table 4.
             +----------+------------------------+
             |End Point | Access Link/VNAP Bw    |
   +---------+----------+----------+-------------+-----------+
   |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
   +---------+----------+----------+-------------+-----------+
   |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
   | -VNAP1.9|          |   5Gbps  |    N.A.     |  VNAP2.9  |
   +---------+----------+----------+-------------+-----------+
   |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
   | -VNAP2.9|          |   5Gbps  |    N.A.     |  VNAP1.9  |
   +---------+----------+----------+-------------+-----------+
   |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
   | -VNAP3.9|          |   5Gbps  |    N.A.     |   NONE    |
   +---------+----------+----------+-------------+-----------+

      Table 4: Dual homing - customer view after VN creation

6. End Point Selection Based On Network Status

   A further advanced application of ACTN is the case of Data Center
   selection, where the customer requires the Data Center selection to
   be based on the network status; this is referred to as
   Multi-Destination in [ACTN-REQ]. In terms of ACTN, a CNC could
   request a connectivity service (virtual network) between a set of
   source APs and destination APs and leave it up to the network
   (MDSC) to decide which source and destination access points should
   be used to set up the connectivity service (virtual network). The
   candidate list of source and destination APs is decided by a CNC
   (or an entity outside of ACTN) based on certain factors which are
   outside the scope of ACTN.

   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions such as orchestration or service
   setup requirements. These further actions are outside the scope of
   ACTN.

   Consider the case shown in Figure 10, where three data centers are
   available, but the customer requires the data center selection to
   be based on the network status and the connectivity service set up
   between AP1 (CE1) and one of the destination APs (AP2 (DC-A), AP3
   (DC-B), and AP4 (DC-C)). The MDSC (in coordination with the PNCs)
   would select the best destination AP based on the constraints,
   optimization criteria, policies, etc., and set up the connectivity
   service (virtual network).

              ---------             ---------
             (         )           (         )
     +---+  (           )         (           )  +----+
     |CE1|--+( Domain X )---------( Domain Y )+--|DC-A|
     +---+  |(           )         (           )| +----+
           AP1(         )           (         )AP2
              ----+----             ----+----
             AP3  |                AP4  |
               +----+                +----+
               |DC-B|                |DC-C|
               +----+                +----+

      Figure 10: End point selection based on network status

6.1. Pre-Planned End Point Migration

   Further, in the case of Data Center selection, the customer could
   request that a backup DC be selected, such that in case of failure
   another DC site could provide hot stand-by protection. As shown in
   Figure 11, DC-C is selected as a backup for DC-A. Thus, the VN
   should be set up by the MDSC to include primary connectivity
   between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
   between AP1 (CE1) and AP4 (DC-C).

              ---------             ---------
             (         )           (         )
     +---+  (           )         (           )  +----+
     |CE1|--+( Domain X )---------( Domain Y )+--|DC-A|
     +---+  |(           )         (           )| +----+
           AP1(         )           (         )AP2   |
              ----+----             ----+----        |
             AP3  |                AP4  |            | HOT
               +----+                +----+          | STANDBY
               |DC-B|                |DC-C|<---------
               +----+                +----+

          Figure 11: Pre-planned end point migration

6.2. On the Fly End Point Migration

   Compared to pre-planned end point migration, on-the-fly end point
   selection is dynamic in that the migration is not pre-planned but
   is decided based on network conditions. Under this scenario, the
   MDSC would monitor the network (based on the VN SLA) and notify the
   CNC in case some other destination AP would be a better choice
   based on the network parameters. The CNC should instruct the MDSC
   when it is suitable to update the VN with the new AP, if that is
   required.
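   The following non-normative sketch illustrates the kind of
   selection logic the MDSC might apply for the multi-destination case
   of this section. The cost metric, names, and figures are
   hypothetical assumptions; ACTN does not prescribe a selection
   algorithm.

      # Hypothetical sketch: MDSC picking the best destination AP
      # among the candidates supplied by the CNC. The cost metric is
      # an assumption; it could be latency, load, or any policy value.

      candidate_aps = {
          "AP2 (DC-A)": {"path_cost": 10, "available_bw_gbps": 40},
          "AP3 (DC-B)": {"path_cost": 25, "available_bw_gbps": 40},
          "AP4 (DC-C)": {"path_cost": 15, "available_bw_gbps": 40},
      }

      def select_destination(candidates, required_bw_gbps):
          """Return the feasible candidate with the lowest path cost."""
          feasible = {ap: m for ap, m in candidates.items()
                      if m["available_bw_gbps"] >= required_bw_gbps}
          return min(feasible, key=lambda ap: feasible[ap]["path_cost"])

      print(select_destination(candidate_aps, required_bw_gbps=5))
      # -> "AP2 (DC-A)"

   For the pre-planned migration of Section 6.1, the same ranking
   could also be used to pick the hot stand-by AP (here, AP4 (DC-C))
   for the protection connectivity.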
7. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow clients to request virtual
   connectivity across server network resources. As ACTN will support
   multiple clients, each with its own view of and control of the
   server network, the network operator will need to partition (or
   "slice") its network resources and manage those resources
   accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservation of client and network layer connectivity.
   It will also need to provide performance monitoring and control of
   traffic engineered resources. The management requirements may be
   categorized as follows:

   . Management of external ACTN protocols
   . Management of internal ACTN protocols
   . Management and monitoring of ACTN components
   . Configuration of policy to be applied across the ACTN system

7.1. Policy

   It is expected that policy will be an important aspect of ACTN
   control and management. Typically, policies are used via the
   components and interfaces, during deployment of the service, to
   ensure that the service is compliant with agreed policy factors
   (often described in Service Level Agreements - SLAs). These
   include, but are not limited to: connectivity, bandwidth,
   geographical transit, technology selection, security, resilience,
   and economic cost.

   Depending on the ACTN deployment architecture, some policies may
   have local or global significance. That is, certain policies may be
   ACTN component specific in scope, while others may have broader
   scope and interact with multiple ACTN components. Two examples are
   provided below:

   . A local policy might limit the number, type, size, and scheduling
     of virtual network services a customer may request via its CNC.
     This type of policy would be implemented locally on the MDSC.

   . A global policy might constrain certain customer types (or
     specific customer applications) to only use certain MDSCs, and be
     restricted to physical network types managed by the PNCs. A
     global policy agent would govern these types of policies.

   The objective of this section is to discuss the applicability of
   ACTN policy: requirements, components, interfaces, and examples.
   This section provides an analysis and does not mandate a specific
   method for enforcing policy, or the type of policy agent that would
   be responsible for propagating policies across the ACTN components.
   It does highlight examples of how policy may be applied in the
   context of ACTN, but it is expected that further discussion in an
   applicability or solution-specific document will be required.

7.2. Policy applied to the Customer Network Controller

   A virtual network service for a customer application will be
   requested by the CNC. The request will reflect the application
   requirements and specific service policy needs, including
   bandwidth, traffic type and survivability.
   Furthermore, application access and the type of virtual network
   service requested by the CNC will need to adhere to specific access
   control policies.

7.3. Policy applied to the Multi Domain Service Coordinator

   A key objective of the MDSC is to help the customer express the
   application connectivity request via its CNC as a set of desired
   business needs; therefore, policy will play an important role.

   Once authorized, the virtual network service will be instantiated
   via the CNC-MDSC Interface (CMI). It will reflect the customer
   application and connectivity requirements, and specific service
   transport needs. The CNC and the MDSC components will have agreed
   on connectivity end-points; use of these end-points should be
   defined as a policy expression when setting up or augmenting
   virtual network services. Ensuring that permissible end-points are
   defined for CNCs and applications will require the MDSC to maintain
   a registry of permissible connection points for CNCs and
   application types.

   It may also be necessary for the MDSC to resolve policy conflicts,
   or at least flag any issues to the administrator of the MDSC
   itself. Conflicts may occur when virtual network service
   optimization criteria are in competition. For example, to meet
   objectives for service reachability, a request may require an
   interconnection point between multiple physical networks; however,
   this might break a confidentiality policy requirement of a specific
   type of end-to-end service. This type of situation may be resolved
   using hard and soft policy constraints.

7.4. Policy applied to the Physical Network Controller

   The PNC is responsible for configuring the network elements,
   monitoring physical network resources, and exposing connectivity
   (direct or abstracted) to the MDSC. It is therefore expected that
   policy will dictate what connectivity information will be exported
   between the PNC and the MDSC via the MDSC-PNC Interface (MPI).

   Policy interactions may arise when a PNC determines that it cannot
   compute a requested path from the MDSC, or notices that (per a
   locally configured policy) the network is low on resources (for
   example, the capacity on key links has become exhausted). In either
   case, the PNC will be required to notify the MDSC, which may (again
   per policy) act to construct a virtual network service across
   another physical network topology.

   Furthermore, additional forms of policy-based resource management
   will be required to provide virtual network service performance,
   security and resilience guarantees. This will likely be implemented
   via a local policy agent and subsequent protocol methods.

8. Security Considerations

   The ACTN framework described in this document defines key
   components and interfaces for managed traffic engineered networks.
   Securing the request and control of resources, confidentiality of
   the information, and availability of function should all be
   critical security considerations when deploying and operating ACTN
   platforms.

   Several distributed ACTN functional components are required, and as
   a rule implementations should consider encrypting data that flows
   between components, especially when they are implemented at remote
   nodes, regardless of whether these are external or internal network
   interfaces.
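   As a non-normative example of protecting such a component-to-
   component flow, and anticipating the PKI-based suggestion made in
   Section 8.2, the sketch below uses Python's standard ssl module to
   build a mutually authenticated TLS connection from an MDSC towards
   a PNC. The certificate file names, host name, and port are
   placeholders, not values defined by ACTN.

      # Hypothetical illustration: mutually authenticated TLS from an
      # MDSC towards a PNC over the MPI. File names, host name, and
      # port are placeholders only.
      import socket
      import ssl

      context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      context.load_verify_locations("ca.pem")       # PKI trust anchor
      context.load_cert_chain("mdsc-cert.pem",      # MDSC identity,
                              "mdsc-key.pem")       # for mutual auth
      context.verify_mode = ssl.CERT_REQUIRED       # authenticate PNC

      with socket.create_connection(("pnc.example.net", 8443)) as sock:
          with context.wrap_socket(
                  sock, server_hostname="pnc.example.net") as tls:
              # All MPI data on this channel is now encrypted and the
              # peers are mutually authenticated.
              tls.sendall(b"example MPI message")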
8. Security Considerations

The ACTN framework described in this document defines key components and interfaces for managed traffic engineered networks. Securing the request and control of resources, the confidentiality of information, and the availability of function should all be critical security considerations when deploying and operating ACTN platforms.

Several distributed ACTN functional components are required, and as a rule implementations should consider encrypting the data that flows between components, especially when the components are implemented at remote nodes, regardless of whether these flows traverse external or internal network interfaces.

The ACTN security discussion is further split into two specific categories described in the following sub-sections:

. Interface between the Customer Network Controller and Multi Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

. Interface between the Multi Domain Service Coordinator and Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

From a security and reliability perspective, ACTN may encounter many risks, such as malicious attacks and rogue elements attempting to connect to various ACTN components. Furthermore, some ACTN components represent a single point of failure and threat vector, and must also manage policy conflicts and guard against eavesdropping on communication between different ACTN components.

The conclusion is that all protocols used to realize the ACTN framework should have rich security features, and that customer, application, and network data should be kept in encrypted data stores. Additional security risks may still exist. Therefore, the discussion and applicability of specific security functions and protocols will be better described in documents that are use case and environment specific.

8.1. Interface between the Customer Network Controller and Multi Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

The role of the MDSC is to detach the network and service control from the underlying technology to help the customer express the network as desired by business needs. It should be noted that data stored by the MDSC will reveal details of the virtual network services, and of which CNC and application is consuming the resource. The data stored must therefore be considered as a candidate for encryption.

CNC access rights to an MDSC must be managed. MDSC resources must be properly allocated, and methods to prevent policy conflicts, resource wastage, and denial of service attacks on the MDSC by rogue CNCs should also be considered.

A CNC-MDSC protocol interface will likely be an external protocol interface. Again, suitable authentication and authorization of each CNC connecting to the MDSC will be required, especially as these are likely to be implemented by different organizations and on separate functional nodes. Use of AAA-based mechanisms would also provide role-based authorization methods, so that only authorized CNCs may access the different functions of the MDSC.

8.2. Interface between the Multi Domain Service Coordinator and Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

The function of the Physical Network Controller (PNC) is to configure network elements, provide performance and monitoring functions of the physical elements, and export the physical topology (full, partial, or abstracted) to the MDSC.

Where the MDSC must interact with multiple (distributed) PNCs, a PKI-based mechanism is suggested, such as building a TLS or HTTPS connection between the MDSC and PNCs, to ensure trust between the physical network layer control components and the MDSC. A sketch of such a connection is shown below.

Which MDSCs the PNC exports topology information to, and the level of detail (full or abstracted), should also be authenticated, and the specific access restrictions and topology views should be configurable and/or policy-based.
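As a purely illustrative sketch, the following Python fragment shows the kind of PKI-based, mutually authenticated TLS session suggested above, with the MDSC acting as the TLS client. The certificate file names, host name, and port number are placeholders, and no specific protocol stack for the MPI is implied.

   # Illustrative only: file names, host, and port are placeholders.
   # This uses Python's standard ssl module to sketch a mutually
   # authenticated TLS session between an MDSC and a PNC.
   import socket
   import ssl

   # The MDSC trusts only the operator's CA and presents its own
   # certificate so that the PNC can verify it in turn.
   context = ssl.create_default_context(
       ssl.Purpose.SERVER_AUTH, cafile="operator-ca.pem")
   context.load_cert_chain(certfile="mdsc-cert.pem",
                           keyfile="mdsc-key.pem")

   with socket.create_connection(("pnc.example.net", 8443)) as sock:
       with context.wrap_socket(
               sock, server_hostname="pnc.example.net") as tls:
           # Topology export and provisioning messages would be
           # exchanged here, protected by the TLS session.
           print(tls.version())

Mutual authentication, in which the MDSC presents its own certificate and validates the PNC's certificate against the operator's CA, provides trust in both directions across the MPI, which matters when the MDSC and PNCs are operated on separate nodes or by separate administrative entities.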
al., "Requirements for Traffic 1323 Engineering Over MPLS", RFC 2702, September 1999. 1325 [RFC4026] L. Andersson, T. Madsen, "Provider Provisioned Virtual 1326 Private Network (VPN) Terminology", RFC 4026, March 2005. 1328 [RFC4208] G. Swallow, J. Drake, H.Ishimatsu, Y. Rekhter, 1329 "Generalized Multiprotocol Label Switching (GMPLS) User- 1330 Network Interface (UNI): Resource ReserVation Protocol- 1331 Traffic Engineering (RSVP-TE) Support for the Overlay 1332 Model", RFC 4208, October 2005. 1334 [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path 1335 Computation Element (PCE)-Based Architecture", IETF RFC 1336 4655, August 2006. 1338 [RFC5654] Niven-Jenkins, B. (Ed.), D. Brungard (Ed.), and M. Betts 1339 (Ed.), "Requirements of an MPLS Transport Profile", RFC 1340 5654, September 2009. 1342 [RFC7149] Boucadair, M. and Jacquenet, C., "Software-Defined 1343 Networking: A Perspective from within a Service Provider 1344 Environment", RFC 7149, March 2014. 1346 [RFC7926] A. Farrel (Ed.), "Problem Statement and Architecture for 1347 Information Exchange between Interconnected Traffic- 1348 Engineered Networks", RFC 7926, July 2016. 1350 [GMPLS] Manning, E., et al., "Generalized Multi-Protocol Label 1351 Switching (GMPLS) Architecture", RFC 3945, October 2004. 1353 [ONF-ARCH] Open Networking Foundation, "SDN architecture" Issue 1 - 1354 TR-502, June 2014. 1356 [RFC7491] King, D., and Farrel, A., "A PCE-based Architecture for 1357 Application-based Network Operations", RFC 7491, March 1358 2015. 1360 [Transport NBI] Busi, I., et al., "Transport North Bound Interface 1361 Use Cases", draft-tnbidt-ccamp-transport-nbi-use-cases, 1362 work in progress. 1364 10. Contributors 1366 Adrian Farrel 1367 Old Dog Consulting 1368 Email: adrian@olddog.co.uk 1370 Italo Busi 1371 Huawei 1372 Email: Italo.Busi@huawei.com 1374 Khuzema Pithewan 1375 Infinera 1376 Email: kpithewan@infinera.com 1378 Authors' Addresses 1380 Daniele Ceccarelli (Editor) 1381 Ericsson 1382 Torshamnsgatan,48 1383 Stockholm, Sweden 1384 Email: daniele.ceccarelli@ericsson.com 1386 Young Lee (Editor) 1387 Huawei Technologies 1388 5340 Legacy Drive 1389 Plano, TX 75023, USA 1390 Phone: (469)277-5838 1391 Email: leeyoung@huawei.com 1393 Luyuan Fang 1394 Microsoft 1395 Email: luyuanf@gmail.com 1397 Diego Lopez 1398 Telefonica I+D 1399 Don Ramon de la Cruz, 82 1400 28006 Madrid, Spain 1401 Email: diego@tid.es 1403 Sergio Belotti 1404 Alcatel Lucent 1405 Via Trento, 30 1406 Vimercate, Italy 1407 Email: sergio.belotti@nokia.com 1408 Daniel King 1409 Lancaster University 1410 Email: d.king@lancaster.ac.uk 1412 Dhruv Dhoddy 1413 Huawei Technologies 1414 dhruv.ietf@gmail.com 1416 Gert Grammel 1417 Juniper Networks 1418 ggrammel@juniper.net