TEAS Working Group                                Daniele Ceccarelli (Ed)
Internet Draft                                                   Ericsson
Intended status: Informational                             Young Lee (Ed)
Expires: August 2017                                               Huawei

                                                        February 16, 2017

 Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-04

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane. They
   also have a range of management and provisioning protocols to
   configure and activate network resources. These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 16, 2017.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology
   2. Business Model of ACTN
      2.1. Customers
      2.2. Service Providers
      2.3. Network Providers
   3. ACTN Architecture
      3.1. Customer Network Controller
      3.2. Multi Domain Service Coordinator
      3.3. Physical Network Controller
      3.4. ACTN Interfaces
   4. VN Creation Process
      4.1. VN Creation Example
   5. Access Points and Virtual Network Access Points
      5.1. Dual homing scenario
   6. End Point Selection Based On Network Status
      6.1. Pre-Planned End Point Migration
      6.2. On the Fly End Point Migration
   7. Manageability Considerations
      7.1. Policy
      7.2. Policy applied to the Customer Network Controller
      7.3. Policy applied to the Multi Domain Service Coordinator
      7.4. Policy applied to the Physical Network Controller
   8. Security Considerations
      8.1. Interface between the Customer Network Controller and Multi
           Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)
      8.2. Interface between the Multi Domain Service Coordinator and
           Physical Network Controller (PNC), MDSC-PNC Interface (MPI)
   9. References
      9.1. Informative References
   10. Contributors
   Authors' Addresses

1. Introduction

   Traffic Engineered networks have a variety of mechanisms to
   facilitate separation of data plane and control plane including
   distributed signaling for path setup and protection, centralized
   path computation for planning and traffic engineering, and a range
   of management and provisioning protocols to configure and activate
   network resources. These mechanisms represent key technologies for
   enabling flexible and dynamic networking.
   The term Traffic Engineered network is used in this document to
   refer to a network that uses any connection-oriented technology
   under the control of a distributed or centralized control plane to
   support dynamic provisioning of end-to-end connectivity. Some
   examples of networks that are in scope of this definition are
   optical networks, MPLS Transport Profile (MPLS-TP) networks
   [RFC5654], and MPLS Traffic Engineering (MPLS-TE) networks
   [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane. This separation of the control plane from the data
   plane has already been achieved with the development of MPLS/GMPLS
   [GMPLS] and the Path Computation Element (PCE) [RFC4655] for
   TE-based networks. One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks. Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control. For TE-based networks, PCE is essentially equivalent to a
   logically centralized path computation function.

   Three key aspects that need to be solved by SDN are:

   . Separation of service requests from service delivery so that the
     orchestration of a network is transparent from the point of view
     of the customer but remains responsive to the customer's services
     and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a TE
     network to produce selective information that represents the
     potential ability to connect across the domain. The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies, capabilities,
     and topology so that it can be used to plan and deliver network
     services in a uniform way.

   . Coordination of resources across multiple domains and multiple
     layers to provide end-to-end services regardless of whether the
     domains use SDN or not.

   As networks evolve, the need to provide separated service
   request/orchestration and resource abstraction has emerged as a key
   requirement for operators. In order to support multiple clients,
   each with its own view and control of the server network, a network
   operator needs to partition (or "slice") the network resources. The
   resulting slices can be assigned to each client for guaranteed
   usage, which is a step further than shared use of common network
   resources.

   Furthermore, each network represented to a client can be built from
   abstractions of the underlying networks so that, for example, a
   link in the client's network is constructed from a path or
   collection of paths in the underlying network.

   We call the set of management and control functions used to provide
   these features Abstraction and Control of Traffic Engineered
   Networks (ACTN).

   Particular attention needs to be paid to the multi-domain case:
   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service. This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) as a single virtualized network.
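   As an informal illustration of this slicing idea (not part of the
   ACTN framework itself), the following Python sketch partitions the
   bandwidth of a shared link into per-client slices with guaranteed
   usage. All names and figures are hypothetical.

      # Minimal sketch, assuming bandwidth is the only sliced
      # resource: each client receives a guaranteed allocation
      # rather than shared use of the common link.

      class LinkSlicer:
          def __init__(self, capacity_gbps):
              self.capacity = capacity_gbps   # total link capacity
              self.slices = {}                # client -> reserved Gbps

          def available(self):
              return self.capacity - sum(self.slices.values())

          def allocate(self, client, gbps):
              # Reject over-commitment so allocations stay guaranteed.
              if gbps > self.available():
                  raise ValueError("insufficient capacity for slice")
              self.slices[client] = self.slices.get(client, 0) + gbps

      link = LinkSlicer(100)
      link.allocate("client-A", 40)   # guaranteed 40 Gbps for A
      link.allocate("client-B", 25)   # guaranteed 25 Gbps for B
      print(link.available())         # 35 Gbps left to assign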
   Network virtualization refers to allowing the customers of network
   operators (see Section 2.1) to utilize a certain amount of network
   resources as if they own them and thus control their allocated
   resources with higher layer or application processes that enable
   the resources to be used in the most efficient way. More flexible,
   dynamic customer control capabilities are added to the traditional
   VPN along with a customer-specific virtual network view. Customers
   control a view of virtual network resources specifically allocated
   to each one of them. This view is called a virtual network
   topology. Such a view may be specific to a service, the set of
   consumed resources, or to a particular customer.

   Network abstraction refers to presenting a customer with a view of
   the operator's network in such a way that the links and nodes in
   that view constitute an aggregation or abstraction of the real
   resources in the operator's network in a way that is independent of
   the underlying network technologies, capabilities, and topology.
   The customer operates an abstract network as if it were their own
   network, but the operational commands are mapped onto the
   underlying network through coordination across domains.

   The customer controller for a virtual or abstract network is
   envisioned to support many distinct applications. This means that
   there may be a further level of virtualization that provides a view
   of resources in the customer's virtual network for use by an
   individual application.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application or service [ONF-ARCH].

   . Slicing of infrastructure to meet specific customers' service
     requirements.

   . Creation of a virtualized environment allowing operators to view
     and control multi-domain networks as a single virtualized
     network.

   . The possibility of providing a customer with a virtualized
     network.

   . A virtualization/mapping network function that adapts the
     customer's requests for control of the virtual resources that
     have been allocated to the customer to control commands applied
     to the underlying network resources. Such a function performs the
     necessary mapping, translation, isolation and security/policy
     enforcement, etc.

   . The presentation to customers of networks as a virtualized
     topology via open and programmable interfaces. This allows for
     the recursion of controllers in a customer-provider relationship.

1.1. Terminology

   The following terms are used in this document. Some of them are
   newly defined, while others reference existing definitions:

   . Node: A node is a vertex on the graph representation of a TE
     topology. In a physical network a node corresponds to a network
     element (NE). In a sliced network, a node is some subset of the
     capabilities of a physical network element. In an abstract
     network, a node (sometimes called an abstract node) is a
     representation as a single vertex in the topology of the abstract
     network of one or more nodes and their connecting links from the
     physical network.
     The concept of a node represents the ability to connect from any
     access to the node (a link end) to any other access to that node,
     although "limited cross-connect capabilities" may also be defined
     to restrict this functionality. Just as network slicing and
     network abstraction may be applied recursively, so a node in a
     topology may be created by applying slicing or abstraction on the
     nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology. Two nodes connected by a link are said to be "adjacent"
     in the TE topology. In a physical network, a link corresponds to
     a physical connection. In a sliced topology, a link is some
     subset of the capabilities of a physical connection. In an
     abstract network, a link (sometimes called an abstract link) is a
     representation as an edge in the topology of the abstract network
     of one or more links and the nodes they connect from the physical
     network. Abstract links may be realized by Label Switched Paths
     (LSPs) across the physical network that may be pre-established or
     could be only potentially achievable. Just as network slicing and
     network abstraction may be applied recursively, so a link in a
     topology may be created by applying slicing or abstraction on the
     links in the underlying topology. While most links are point-to-
     point, connecting just two nodes, the concept of a multi-access
     link exists where more than two nodes are collectively adjacent
     and data sent on the link by one node will be equally delivered
     to all other nodes connected by the link.

   . PNC: A Physical Network Controller is a domain controller that is
     responsible for controlling devices or NEs under its direct
     control. This can be an SDN controller, a Network Management
     System (NMS), an Element Management System (EMS), an active PCE,
     or any other means of dynamically controlling a set of nodes that
     implements an NBI compliant with the ACTN specification.

   . PNC domain: A PNC domain includes all the resources under the
     control of a single PNC. It can be composed of different routing
     domains and administrative domains, and the resources may come
     from different layers. The interconnection between PNC domains
     can be a link or a node.

               _______    Border Link    _______
             _(       )================(       )_
           _(           )_          _(           )_
          (               )  ----  (               )
         (      PNC        )|    |(      PNC        )
         (    Domain X     )|    |(    Domain Y     )
          (               )|    |(                 )
           (_           _)  ----  (_             _)
             (_       _)   Border   (_         _)
               (_______)    Node      (_______)

                  Figure 1: PNC Domain Borders

   . A Virtual Network (VN) is a customer view of the TE network. It
     is presented by the provider as a set of physical and/or
     abstracted resources. Depending on the agreement between client
     and provider, various VN operations and VN views are possible as
     follows:

      o VN Creation - A VN could be pre-configured and created via
        offline negotiation between customer and provider. In other
        cases, the VN could also be created dynamically based on a
        request from the customer with given SLA attributes which
        satisfy the customer's objectives.

      o Dynamic Operations - The VN could be further modified or
        deleted based on a customer request. The customer can further
        act upon the virtual network resources to perform end-to-end
        tunnel management (set-up/release/modify). These changes will
        result in subsequent LSP management at the operator's level.
      o VN View:

        a. The VN can be seen as a set of end-to-end tunnels from a
           customer point of view, where each tunnel is referred to as
           a VN member. Each VN member can then be formed by recursive
           slicing or abstraction of paths in underlying networks.
           Such end-to-end tunnels may comprise customer end points,
           access links, intra-domain paths, and inter-domain links.
           In this view the VN is thus a set of VN members.

        b. The VN can also be seen as a topology comprising physical,
           sliced, and abstract nodes and links. The nodes in this
           case include physical customer end points, border nodes,
           and internal nodes as well as abstracted nodes. Similarly,
           the links include physical access links, inter-domain
           links, and intra-domain links as well as abstract links.
           The abstract nodes and links in this view can be
           pre-negotiated or created dynamically.

   . Abstraction. This process is defined in [RFC7926].

   . Abstract Link: The term "abstract link" is defined in [RFC7926].

   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . Access link: A link between a customer node and a provider node.

   . Inter-domain link: A link between domains managed by different
     PNCs. The MDSC is in charge of managing inter-domain links.

   . Access Point (AP): An access point is used to keep
     confidentiality between the customer and the provider. It is a
     logical identifier shared between the customer and the provider,
     used to map the end points of the border node in both the
     customer and the provider network. The AP can be used by the
     customer when requesting a VN service from the provider.

   . VN Access Point (VNAP): A VNAP is defined as the binding between
     an AP and a given VN and is used to identify the portion of the
     access and/or inter-domain link dedicated to a given VN.

2. Business Model of ACTN

   The Virtual Private Network (VPN) [RFC4026] and Overlay Network
   (ON) models [RFC4208] are built on the premise that the network
   provider provides all virtual private or overlay networks to its
   customers. These models are simple to operate but have some
   disadvantages in accommodating the increasing need for flexible and
   dynamic network virtualization capabilities.

   There are three key entities in the ACTN model:

   - Customers
   - Service Providers
   - Network Providers

   These are described in the following sections.

2.1. Customers

   Within the ACTN framework, different types of customers may be
   taken into account depending on the type of their resource needs,
   and on their number and type of access. For example, it is possible
   to group them into two main categories:

   Basic Customer: Basic customers include fixed residential users,
   mobile users and small enterprises. Usually, the number of basic
   customers for a service provider is high: they require small
   amounts of resources and are characterized by steady requests
   (relatively time invariant). A typical request for a basic customer
   is for a bundle of voice services and internet access.
   Moreover, basic customers do not modify their services themselves:
   if a service change is needed, it is performed by the provider as a
   proxy, and the services generally have very few dedicated resources
   (such as for subscriber drop), with everything else shared on the
   basis of some Service Level Agreement (SLA), which is usually
   best-effort.

   Advanced Customer: Advanced customers typically include
   enterprises, governments and utilities. Such customers can ask for
   both point-to-point and multipoint connectivity with high resource
   demands varying significantly in time and from customer to
   customer. This is one of the reasons why a bundled service offering
   is not enough and it is desirable to provide each advanced customer
   with a customized virtual network service.

   Advanced customers may own dedicated virtual resources, or share
   resources. They may also have the ability to modify their service
   parameters within the scope of their virtualized environments. The
   primary focus of ACTN is Advanced Customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with
   different underlying objectives set by the network providers. To
   enable these customers to support flexible and dynamic applications
   they need to control their allocated virtual network resources in a
   dynamic fashion, and that means that they need a view of the
   topology that spans all of the network providers. Customers of a
   given service provider can in turn offer a service to other
   customers in a recursive way.

2.2. Service Providers

   Service providers are the providers of virtual network services to
   their customers. Service providers may or may not own physical
   network resources (i.e., may or may not be network providers as
   described in Section 2.3). When a service provider is the same as
   the network provider, this is similar to existing VPN models
   applied to a single provider. This approach works well when the
   customer maintains a single interface with a single provider. When
   a customer spans multiple independent network provider domains,
   then it becomes hard to facilitate the creation of end-to-end
   virtual network services with this model.

   A more interesting case arises when network providers only provide
   infrastructure, while distinct service providers interface to the
   customers. In this case, service providers are themselves customers
   of the network infrastructure providers. One service provider may
   need to work with multiple independent network providers as its
   end-users span geographically across multiple network provider
   domains.

   The ACTN network model is predicated upon this three tier model and
   is summarized in Figure 2:

                      +----------------------+
                      |       customer       |
                      +----------------------+
                                 |
                                 |  /\  Service/Customer specific
                                 |  ||  Abstract Topology
                                 |  ||
                      +----------------------+  E2E abstract
                      |   Service Provider   |  topology creation
                      +----------------------+
                           /     |      \
                          /      |       \   Network Topology
                         /       |        \  (raw or abstract)
                        /        |         \
    +------------------+ +------------------+ +------------------+
    |Network Provider 1| |Network Provider 2| |Network Provider 3|
    +------------------+ +------------------+ +------------------+

                     Figure 2: Three tier model
   There can be multiple service providers to which a customer may
   interface.

   There are multiple types of service providers, for example:

   . Data Center providers can be viewed as a service provider type as
     they own and operate data center resources for various WAN
     customers, and they can lease physical network resources from
     network providers.
   . Internet Service Providers (ISP) are service providers of
     internet services to their customers while leasing physical
     network resources from network providers.
   . Mobile Virtual Network Operators (MVNO) provide mobile services
     to their end-users without owning the physical network
     infrastructure.

2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers. The layered model described in this architecture
   separates the concerns of network providers and customers, with
   service providers acting as aggregators of customer requests.

3. ACTN Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is aligned with the ONF SDN architecture
   [ONF-ARCH] and presents a 3-tier reference model. It allows for
   hierarchy and recursiveness not only of SDN controllers but also of
   traditionally controlled domains that use a control plane. It
   defines three types of controllers depending on the functionalities
   they implement. The main functionalities that are identified are:

   . Multi-domain coordination function: This function oversees the
     specific aspects of the different domains and builds a single
     abstracted end-to-end network topology in order to coordinate
     end-to-end path computation and path/service provisioning. Domain
     sequence path calculation/determination is also a part of this
     function.

   . Virtualization/Abstraction function: This function provides an
     abstracted view of the underlying network resources for use by
     the customer - a customer may be the client or a higher level
     controller entity. This function includes network path
     computation based on customer service connectivity request
     constraints, path computation based on the global network-wide
     abstracted topology, and the creation of an abstracted view of
     network slices allocated to each customer. These operations
     depend on customer-specific network objective functions and
     customer traffic profiles.

   . Customer mapping/translation function: This function maps
     customer requests/commands into network provisioning requests
     that can be sent to the Physical Network Controller (PNC)
     according to business policies provisioned statically or
     dynamically at the OSS/NMS. Specifically, it provides mapping and
     translation of a customer's service request into a set of
     parameters that are specific to a network type and technology
     such that the network configuration process is made possible.

   . Virtual service coordination function: This function translates
     customer service-related information into virtual network
     service operations in order to seamlessly operate virtual
     networks while meeting a customer's service requirements.
     In the context of ACTN, service/virtual service coordination
     includes a number of service orchestration functions such as
     multi-destination load balancing, guarantees of service quality,
     bandwidth and throughput. It also includes notifications for
     service fault and performance degradation and so forth.

   The types of controller defined in the ACTN architecture are shown
   in Figure 3 below and are as follows:

   . CNC - Customer Network Controller
   . MDSC - Multi Domain Service Coordinator
   . PNC - Physical Network Controller

   Figure 3 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface

     VPN customer NW    Mobile Customer    ISP NW service Customer
            |                  |                   |
        +-------+          +-------+           +-------+
        | CNC-A |          | CNC-B |           | CNC-C |
        +-------+          +-------+           +-------+
              \                |                  /
       -----------             |CMI I/F          --------------
                  \            |           /
                 +-----------------------+
                 |         MDSC          |
                 +-----------------------+
                  /            |           \
       -------------           |MPI I/F          -------------
              /                |                  \
        +-------+          +-------+           +-------+
        |  PNC  |          |  PNC  |           |  PNC  |
        +-------+          +-------+           +-------+
         | GMPLS            /   |              /      \
         | trigger         /    |             /        \
      --------       ----      |            /          \
     (        )     (    )     |           /            \
    -          -   ( Phys )    |          /            -----
   (   GMPLS    )  ( Netw )    |         /            (     )
   (  Physical  )   ----       |        /            ( Phys. )
   (  Network   )            -----    -----          (  Net  )
    -          -            (     )  (     )          -----
     (        )             ( Phys.) ( Phys )
      --------              (  Net ) (  Net )
                             -----    -----

                  Figure 3: ACTN Control Hierarchy

3.1. Customer Network Controller

   A Virtual Network Service is instantiated by the Customer Network
   Controller via the CNC-MDSC Interface (CMI). As the Customer
   Network Controller directly interfaces to the applications, it
   understands multiple application requirements and their service
   needs. It is assumed that the Customer Network Controller and the
   MDSC have a common knowledge of the end-point interfaces based on
   their business negotiations prior to service instantiation.
   End-point interfaces refer to customer-network physical interfaces
   that connect customer premise equipment to network provider
   equipment.

3.2. Multi Domain Service Coordinator

   The Multi Domain Service Coordinator (MDSC) sits between the CNC
   that issues connectivity requests and the Physical Network
   Controllers (PNCs) that manage the physical network resources. The
   MDSC can be collocated with the PNC, especially in those cases
   where the service provider and the network provider are the same
   entity.

   The internal system architecture and building blocks of the MDSC
   are out of the scope of ACTN. Some examples can be found in the
   Application Based Network Operations (ABNO) architecture [RFC7491]
   and the ONF SDN architecture [ONF-ARCH].

   The MDSC is the only building block of the architecture that can
   implement all four ACTN main functions, i.e., multi-domain
   coordination, virtualization/abstraction, customer
   mapping/translation, and virtual service coordination. The first
   two functions of the MDSC, namely multi-domain coordination and
   virtualization/abstraction, are referred to as network
   control/coordination functions, while the last two functions,
   namely customer mapping/translation and virtual service
   coordination, are referred to as service control/coordination
   functions.
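   The following Python sketch (purely illustrative; the method names
   are not taken from any ACTN specification) shows how the four main
   functions of the MDSC might be grouped into network
   control/coordination and service control/coordination as described
   above.

      # Hypothetical sketch of the MDSC role split; each method body
      # is a placeholder for the behavior named in its docstring.

      class MDSC:
          # -- network control/coordination functions --
          def coordinate_domains(self, vn_request, pncs):
              """Build an end-to-end view; split work across PNCs."""
              return {pnc: vn_request for pnc in pncs}

          def abstract_topology(self, raw_topology):
              """Produce an abstracted view of underlying resources."""
              return {"nodes": len(raw_topology["nodes"]),
                      "links": "abstract"}

          # -- service control/coordination functions --
          def map_customer_request(self, service_request):
              """Translate a customer request into provisioning terms."""
              return {"provisioning": service_request}

          def coordinate_virtual_service(self, vn, sla):
              """Operate the VN so it keeps meeting the customer SLA."""
              return {"vn": vn, "sla": sla, "status": "monitored"}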
   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs. The MDSC envelops the instantiation of the right
   technology and network control to meet business criteria. In
   essence it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   A hierarchy of MDSCs can be foreseen for scalability and
   administrative choices. In this case another interface needs to be
   defined, the MMI (MDSC-MDSC interface), as shown in Figure 4.

        +-------+         +-------+         +-------+
        | CNC-A |         | CNC-B |         | CNC-C |
        +-------+         +-------+         +-------+
             \                |                /
      ----------              |-CMI I/F       -----------
                 \            |           /
              +-----------------------+
              |         MDSC          |
              +-----------------------+
                 /            |           \
      ----------              |-MMI I/F       -----------
             /                |                \
     +----------+       +----------+        +--------+
     |   MDSC   |       |   MDSC   |        |  MDSC  |
     +----------+       +----------+        +--------+
        |     /             |-MPI I/F  /        \
        |    /              |         /          \
     +-----+ +-----+     +-----+  +-----+      +-----+
     | PNC | | PNC |     | PNC |  | PNC |      | PNC |
     +-----+ +-----+     +-----+  +-----+      +-----+

                 Figure 4: Controller recursiveness

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and between MDSCs and PNCs (i.e., 1
   parent MDSC and N child MDSCs, or 1 MDSC and N PNCs).

   In the case where there is a hierarchy of MDSCs, the interface
   above the top MDSC (i.e., the CMI) and the interface below the
   bottom MDSCs (i.e., the MPI) remain the same. The recursion of
   MDSCs in the middle layers within this hierarchy of MDSCs may take
   place via the MMI. Please see Section 3.4 for details of the ACTN
   interfaces.

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers).

3.3. Physical Network Controller

   The Physical Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and passing information about the topology (either raw
   or abstracted) to the MDSC.

   The internal architecture of the PNC, its building blocks, and the
   way it controls its domain are out of the scope of ACTN. Some
   examples can be found in the Application Based Network Operations
   (ABNO) architecture [RFC7491] and the ONF SDN architecture
   [ONF-ARCH].

   The PNC, in addition to being in charge of controlling the physical
   network, is able to implement two of the four main ACTN functions:
   the multi-domain coordination and virtualization/abstraction
   functions.

3.4. ACTN Interfaces

   To allow virtualization and multi-domain coordination, the network
   has to provide open, programmable interfaces, through which
   customer applications can create, replace and modify virtual
   network resources and services in an interactive, flexible and
   dynamic fashion while having no impact on other customers. Direct
   customer control of transport network elements and virtualized
   services is not perceived as a viable proposition for transport
   network providers due to security and policy concerns among other
   reasons.
   In addition, as discussed in Section 3.3, the network control plane
   for transport networks has been separated from the data plane and
   as such it is not viable for the customer to directly interface
   with transport network elements.

   Figure 5 depicts a high-level control and interface architecture
   for ACTN. A number of key ACTN interfaces exist for deployment and
   operation of ACTN-based networks. These are highlighted in Figure 5
   (ACTN Interfaces).

                                .--------------
                   -------------              |
                   | Application |--
                   -------------
                        ^
                        | I/F A             --------
                        v                  (        )
                   --------------         -          -
                   |  Customer   |       (  Customer  )
                   |  Network    |------>(  Network   )
                   |  Controller |       (            )
                   --------------         -          -
                        ^                  (        )
                        | I/F B             --------
                        v
                   --------------
                   | MultiDomain |
                   | Service     |
                   | Coordinator |          --------
                   --------------          (        )
                        ^                 -          -
                        | I/F C          (  Physical  )
                        v                (  Network   )
                  ---------------        (            )      --------
                  |             |<---->   -          -      (        )
                  -------------- |         (        )      -          -
                  | Physical    |--         --------      (  Physical  )
                  | Network     |<----------------------->(  Network   )
                  | Controller  |        I/F D            (            )
                  --------------                            -          -
                                                             (        )
                                                              --------

                        Figure 5: ACTN Interfaces

   The interfaces and functions are described below:

   . Interface A: A north-bound interface (NBI) that communicates the
     service request or application demand. A request includes
     specific service properties, including service type, topology,
     bandwidth, and constraint information.

   . Interface B: The CNC-MDSC Interface (CMI) is an interface between
     a CNC and an MDSC. It is used to request the creation of network
     resources, topology or services for the applications. Note that
     all service related information conveyed via Interface A (i.e.,
     specific service properties, including service type, topology,
     bandwidth, and constraint information) needs to be transparently
     carried over this interface. The MDSC may also report potential
     network topology availability if queried for current capability
     by the CNC. The CMI is the interface with the highest level of
     abstraction, where the Virtual Networks are modelled and
     presented to the customer/CNC. Most of the information over this
     interface is technology agnostic, even if in some cases it should
     be possible to explicitly request a VN to be created at a given
     layer in the network (e.g., ODU VN or MPLS VN).

   . Interface C: The MDSC-PNC Interface (MPI) is an interface between
     an MDSC and a PNC. It communicates the creation requests for new
     connectivity or for bandwidth changes in the physical network. In
     multi-domain environments, the MDSC needs to establish multiple
     MPIs, one for each PNC, as there is one PNC responsible for
     control of each domain. The MPI could have different degrees of
     abstraction and present an abstracted topology hiding technology
     specific aspects of the network or convey technology specific
     parameters to allow for path computation at the MDSC level.
     Please refer to the CCAMP Transport NBI work for the latter case
     [Transport NBI].

   . Interface D: The provisioning interface for creating forwarding
     state in the physical network, requested via the Physical Network
     Controller.

   The interfaces within the ACTN scope are B and C, while interfaces
   A and D are out of the scope of ACTN and are only shown in Figure 5
   to give a complete context of ACTN.
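   As a rough illustration of the service-related information listed
   above for Interfaces A and B (service type, topology, bandwidth,
   and constraint information), the following Python sketch models a
   VN request as it might be carried over the CMI. The field names are
   hypothetical; no ACTN protocol encoding is implied.

      # Minimal sketch of a CMI VN request; all fields illustrative.
      from dataclasses import dataclass, field

      @dataclass
      class VNRequest:
          service_type: str        # e.g., "p2p-connectivity"
          endpoints: list          # APs between which the VN runs
          bandwidth_gbps: float    # requested guaranteed bandwidth
          constraints: dict = field(default_factory=dict)
          layer: str = ""          # optional, e.g., "ODU" or "MPLS"

      request = VNRequest(
          service_type="p2p-connectivity",
          endpoints=["AP1", "AP2"],
          bandwidth_gbps=1.0,
          constraints={"max-latency-ms": 20},   # example constraint
      )
      print(request)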
   As previously stated in Section 3.2, there might be a third
   interface in the ACTN scope, the MMI. The MMI is a special case of
   the MPI and behaves similarly to an MPI to support general
   functions performed by the MDSCs such as the abstraction function
   and the provisioning function. From an abstraction point of view,
   the top level MDSC, which interfaces with the CNC, operates on a
   higher level of abstraction (i.e., a less granular level) than the
   lower level MDSCs. As such, the MMI carries more abstract TE
   information than the MPI.

   Note that for all three interfaces, when technology specific
   information needs to be included, this information is carried as an
   add-on on top of the general abstract topology. From the standpoint
   of general topology abstraction, all interfaces are still recursive
   in nature.

4. VN Creation Process

   The provider can present different levels of network abstraction to
   the customer, spanning from one extreme (say "black") where nothing
   except the Access Points (APs) is shown, to the other extreme (say
   "white") where an actual network topology is shown to the customer.
   There are shades of "grey" in between, where a number of abstract
   links and nodes can be shown.

   VN creation is composed of two phases: Negotiation and
   Implementation.

   Negotiation: In the case of grey/white topology abstraction, there
   is an initial phase in which the customer agrees with the provider
   on the type of topology to be shown (e.g., 10 virtual links and 5
   virtual nodes) with a given interconnectivity. This is something
   that is assumed to be preconfigured by the operator off-line. What
   is on-line is the capability to modify/delete something (e.g., a
   virtual link). In the case of "black" abstraction this negotiation
   phase does not happen because there is nothing to negotiate: the
   customer can only see the APs of the network.

   Implementation: In the case of black topology abstraction, the
   customer can ask for connectivity with given constraints/SLA
   between the APs, and LSPs/tunnels are created by the provider to
   satisfy the request. What the customer sees is only that its CEs
   are connected with a given SLA. In the case of grey/white topology
   the customer creates its own LSPs according to the topology that
   was presented to it.
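   The following Python sketch (hypothetical, not normative)
   summarizes how the agreed level of abstraction determines what the
   customer can do in the Implementation phase.

      # Illustrative only: capabilities per abstraction level.
      ABSTRACTION_LEVELS = ("black", "grey", "white")

      def customer_capabilities(level):
          if level == "black":
              # Only APs visible: the customer asks for AP-to-AP
              # connectivity with an SLA; the provider builds the LSPs.
              return ["request-ap-to-ap-connectivity"]
          # Grey/white: a topology was negotiated off-line; the
          # customer creates its own LSPs and can modify/delete
          # elements of the presented topology on-line.
          return ["create-own-lsps", "modify-virtual-link",
                  "delete-virtual-link"]

      print(customer_capabilities("black"))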
4.1. VN Creation Example

   This section illustrates how a VN creation process is conducted
   over a hierarchy of MDSCs via MMIs and MPIs, as shown in Figure 6.

                            +-----+
                            | CNC |    CNC wants to create a VN
                            +-----+    between CE A and CE B
                               |
                               |
                   +-----------------------+
                   |        MDSC 1         |
                   +-----------------------+
                          /         \
                         /           \
                  +--------+       +--------+
                  | MDSC 2 |       | MDSC 3 |
                  +--------+       +--------+
                   /      \         /      \
                  /        \       /        \
            +-----+    +-----+   +-----+    +-----+
   CE A o---|PNC 1|    |PNC 2|   |PNC 3|    |PNC 4|---o CE B
            +-----+    +-----+   +-----+    +-----+

   Topology Seen at MDSC 1

      --o-o--o-o-

   Topology Seen at MDSC 2          Topology Seen at MDSC 3
         _       _                        _       _
        ( )     ( )                      ( )     ( )
       (   )   (   )                    (   )   (   )
    --(o---o)==(o---o)==              ==(o---o)==(o---o)--
       (   )   (   )                    (   )   (   )
        (_)     (_)                      (_)     (_)

                        Actual Topology
         ___           ___           ___           ___
        (   )         (   )         (   )         (   )
       (  o  )       (  o  )       ( o--o )      (  o  )
       ( / \ )       ( |\  )       ( |  | )      ( / \ )
   ----(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)----
       ( \ / )       ( | |/ )      ( |  | )      ( \ / )
       (  o  )       (o-o   )      ( o--o )      (  o  )
        (___)         (___)         (___)         (___)

       Domain 1      Domain 2      Domain 3      Domain 4

    Where o is a node, -- is a link, and === is a border link

     Figure 6: Illustration of topology abstraction granularity
                    levels in the MDSC hierarchy

   In the example depicted in Figure 6, there are four domains under
   the control of the respective PNCs, namely PNC 1, PNC 2, PNC 3 and
   PNC 4. Assume that MDSC 2 is controlling PNC 1 and PNC 2 while
   MDSC 3 is controlling PNC 3 and PNC 4. Let us assume that each of
   the PNCs provides a grey topology abstraction in which only border
   nodes and border links are presented. The abstract topology on
   which MDSC 2 operates is shown on the left side of MDSC 2 in
   Figure 6. It is basically a combination of the two topologies that
   the PNCs (PNC 1 and PNC 2) provide. Likewise, the abstract topology
   on which MDSC 3 operates is shown on the right side of MDSC 3 in
   Figure 6. Both MDSC 2 and MDSC 3 provide a grey topology
   abstraction in which each PNC domain is presented as one virtual
   node to the top level MDSC 1. The MDSC 1 then combines these two
   topologies updated by MDSC 2 and MDSC 3 to create the abstract
   topology on which it operates. MDSC 1 sees the whole four-domain
   network as four virtual nodes connected via virtual links. This
   illustrates the point discussed in Section 3.4: the top level MDSC
   operates on a higher level of abstraction (i.e., a less granular
   level) than the lower level MDSCs. As such, the MMI carries more
   abstract TE information than the MPI.

   In the process of creating a VN, the same principle applies. Let us
   assume that a customer wants to create a virtual network that
   connects its CE A and CE B, as depicted in Figure 6. Upon receipt
   of this request generated by the CNC, MDSC 1, based on the abstract
   topology at hand, determines that CE A is connected to a virtual
   node in domain 1 and CE B is connected to a virtual node in
   domain 4. MDSC 1 further determines that domain 2 and domain 3 are
   interconnected to domain 1 and domain 4, respectively. MDSC 1 then
   partitions the original VN request from the CNC into two separate
   VN requests and makes a VN creation request to each of MDSC 2 and
   MDSC 3. For instance, MDSC 1 makes a VN request to MDSC 2 to
   connect two virtual nodes. When MDSC 2 receives this VN request
   from MDSC 1, it further partitions it into two separate requests,
   to PNC 1 and PNC 2 respectively. This illustration shows that the
   VN creation request process takes place recursively over the MMI
   and MPI.
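   A minimal Python sketch of this recursive partitioning is shown
   below; the controller hierarchy mirrors Figure 6 and the request
   splitting is deliberately simplified (a real MDSC would compute the
   domain sequence and per-domain segments). All names are
   illustrative.

      # Each controller splits a VN request along its abstract
      # topology and forwards the pieces to its children; a PNC
      # (no children) provisions its own domain, ending the recursion.
      HIERARCHY = {
          "MDSC 1": ["MDSC 2", "MDSC 3"],
          "MDSC 2": ["PNC 1", "PNC 2"],
          "MDSC 3": ["PNC 3", "PNC 4"],
      }

      def create_vn(controller, request, depth=0):
          print("  " * depth + f"{controller} handles {request}")
          for child in HIERARCHY.get(controller, []):
              create_vn(child, f"segment of ({request})", depth + 1)

      create_vn("MDSC 1", "VN: CE A <-> CE B")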
5. Access Points and Virtual Network Access Points

   In order not to share unwanted topological information between the
   customer domain and provider domain, a new entity is defined which
   is referred to as the Access Point (AP). See the definition of AP
   in Section 1.1.

   A customer node will use APs as the end points for the request of
   VNs as shown in Figure 7.

                        -------------
                       (             )
                      -               -
       +---+ X       (                 )       Z +---+
       |CE1|---+----(                   )----+---|CE2|
       +---+   |     (                 )     |   +---+
             AP1      -               -     AP2
                       (             )
                        -------------

             Figure 7: APs definition customer view

   Let us take as an example the scenario shown in Figure 7. CE1 is
   connected to the network via a 10Gbps link and CE2 via a 40Gbps
   link. Before the creation of any VN between AP1 and AP2 the
   customer view can be summarized as shown in Table 1:

                +----------+------------------------+
                |End Point | Access Link Bandwidth  |
          +-----+----------+----------+-------------+
          |AP id| CE,port  | MaxResBw | AvailableBw |
          +-----+----------+----------+-------------+
          | AP1 |CE1,portX |  10Gbps  |   10Gbps    |
          +-----+----------+----------+-------------+
          | AP2 |CE2,portZ |  40Gbps  |   40Gbps    |
          +-----+----------+----------+-------------+

                   Table 1: AP - customer view

   On the other hand, what the provider sees is shown in Figure 8.

             -------                -------
            (       )              (       )
           -         -            -         -
       W  (+---+      )          (      +---+)  Y
      -+--( |PE1| Dom.X )------( Dom.Y |PE2| )--+-
       |  (+---+      )          (      +---+)  |
      AP1  -         -            -         -  AP2
            (       )              (       )
             -------                -------

            Figure 8: Provider view of the AP

   This results in the summarization shown in Table 2.

                +----------+------------------------+
                |End Point | Access Link Bandwidth  |
          +-----+----------+----------+-------------+
          |AP id| PE,port  | MaxResBw | AvailableBw |
          +-----+----------+----------+-------------+
          | AP1 |PE1,portW |  10Gbps  |   10Gbps    |
          +-----+----------+----------+-------------+
          | AP2 |PE2,portY |  40Gbps  |   40Gbps    |
          +-----+----------+----------+-------------+

                   Table 2: AP - provider view

   A Virtual Network Access Point (VNAP) needs to be defined as the
   binding between an AP and a given VN; it is used to allow different
   VNs to start from the same AP. It also allows for traffic
   engineering on the access and/or inter-domain links (e.g., keeping
   track of bandwidth allocation). A different VNAP is created on an
   AP for each VN.

   In the simple scenario depicted above, suppose we want to create
   two virtual networks: the first with VN identifier 9 between AP1
   and AP2 with a bandwidth of 1Gbps, and the second with VN
   identifier 5, again between AP1 and AP2, with a bandwidth of 2Gbps.

   The provider view would evolve as shown in Table 3.

                +----------+------------------------+
                |End Point | Access Link/VNAP Bw    |
      +---------+----------+----------+-------------+
      |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
      +---------+----------+----------+-------------+
      |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
      | -VNAP1.9|          |   1Gbps  |    N.A.     |
      | -VNAP1.5|          |   2Gbps  |    N.A.     |
      +---------+----------+----------+-------------+
      |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
      | -VNAP2.9|          |   1Gbps  |    N.A.     |
      | -VNAP2.5|          |   2Gbps  |    N.A.     |
      +---------+----------+----------+-------------+

        Table 3: AP and VNAP - provider view after VN creation
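   The bookkeeping behind Table 3 can be sketched as follows (an
   informal Python illustration with hypothetical names): creating a
   VNAP on an AP consumes part of the access link bandwidth, so the
   available bandwidth is MaxResBw minus the sum of the VNAP
   allocations.

      # Sketch: per-AP VNAP accounting matching Table 3.
      class AccessPoint:
          def __init__(self, ap_id, pe_port, max_res_bw_gbps):
              self.ap_id = ap_id
              self.pe_port = pe_port
              self.max_res_bw = max_res_bw_gbps
              self.vnaps = {}    # VN id -> allocated Gbps (the VNAPs)

          def available_bw(self):
              return self.max_res_bw - sum(self.vnaps.values())

          def create_vnap(self, vn_id, bw_gbps):
              # The access link must be able to carry the new VNAP.
              if bw_gbps > self.available_bw():
                  raise ValueError("access link cannot support VNAP")
              self.vnaps[vn_id] = bw_gbps

      ap1 = AccessPoint("AP1", "PE1,portW", 10)
      ap1.create_vnap(9, 1)       # VNAP1.9: VN 9 takes 1 Gbps
      ap1.create_vnap(5, 2)       # VNAP1.5: VN 5 takes 2 Gbps
      print(ap1.available_bw())   # 7, matching the AP1 row of Table 3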
5.1. Dual homing scenario

   Often there is a dual homing relationship between a CE and a pair
   of PEs. This case needs to be supported by the definition of VN,
   APs and VNAPs. Suppose CE1 is connected to two different PEs in the
   operator domain via AP1 and AP2, and that the customer needs 5Gbps
   of bandwidth between CE1 and CE2. This is shown in Figure 9.

                        ____________
             AP1       (            )       AP3
            -------(PE1)            (PE3)-------
         W /           (            )           \ X
     +---+/            (            )            \+---+
     |CE1|             (            )             |CE2|
     +---+\            (            )            /+---+
         Y \           (            )           / Z
            -------(PE2)            (PE4)-------
             AP2       (____________)

                 Figure 9: Dual homing scenario

   In this case, the customer will request a VN between AP1, AP2 and
   AP3, specifying a dual homing relationship between AP1 and AP2. As
   a consequence no traffic will flow between AP1 and AP2. The dual
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).

   The customer view would then be as shown in Table 4.

                +----------+------------------------+
                |End Point | Access Link/VNAP Bw    |
      +---------+----------+----------+-------------+-----------+
      |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
      +---------+----------+----------+-------------+-----------+
      |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
      | -VNAP1.9|          |   5Gbps  |    N.A.     |  VNAP2.9  |
      +---------+----------+----------+-------------+-----------+
      |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
      | -VNAP2.9|          |   5Gbps  |    N.A.     |  VNAP1.9  |
      +---------+----------+----------+-------------+-----------+
      |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
      | -VNAP3.9|          |   5Gbps  |    N.A.     |   NONE    |
      +---------+----------+----------+-------------+-----------+

       Table 4: Dual homing - customer view after VN creation

6. End Point Selection Based On Network Status

   A further advanced application of ACTN is in the case of Data
   Center selection, where the customer requires the Data Center
   selection to be based on the network status; this is referred to as
   Multi-Destination in [ACTN-REQ]. In terms of ACTN, a CNC could
   request a connectivity service (virtual network) between a set of
   source APs and destination APs and leave it up to the network
   (MDSC) to decide which source and destination access points should
   be used to set up the connectivity service (virtual network). The
   candidate list of source and destination APs is decided by a CNC
   (or an entity outside of ACTN) based on certain factors which are
   outside the scope of ACTN.

   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions such as orchestration or service
   setup requirements. These further actions are outside the scope of
   ACTN.
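   As an informal sketch of this multi-destination selection (the cost
   function below stands in for real path computation across the PNC
   domains; all values are hypothetical):

      # The MDSC picks, from the candidate destination APs supplied
      # by the CNC, the one with the cheapest feasible path under the
      # current network status.
      def select_destination(source_ap, candidate_aps, path_cost):
          feasible = [(path_cost(source_ap, ap), ap)
                      for ap in candidate_aps]
          return min(feasible)[1]

      # Illustrative network state: lower cost = better conditions.
      costs = {("AP1", "AP2"): 10, ("AP1", "AP3"): 4, ("AP1", "AP4"): 7}

      best = select_destination("AP1", ["AP2", "AP3", "AP4"],
                                lambda s, d: costs[(s, d)])
      print(best)   # AP3 is selected under this network status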
   Consider a case as shown in Figure 10, where three data centers are
   available, but the customer requires the data center selection to
   be based on the network status and the connectivity service setup
   between AP1 (CE1) and one of the destination APs (AP2 (DC-A), AP3
   (DC-B), and AP4 (DC-C)). The MDSC (in coordination with PNCs) would
   select the best destination AP based on the constraints,
   optimization criteria, policies, etc., and set up the connectivity
   service (virtual network).

                  -------            -------
                 (       )          (       )
                -         -        -         -
     +---+     (           )      (           )     +----+
     |CE1|---+-(  Domain X  )----(  Domain Y   )-+--|DC-A|
     +---+   | (           )      (           ) |   +----+
           AP1  -         -        -         - AP2
                 (       )          (       )
                  ---+---            ---+---
                 AP3 |              AP4 |
                  +----+             +----+
                  |DC-B|             |DC-C|
                  +----+             +----+

      Figure 10: End point selection based on network status

6.1. Pre-Planned End Point Migration

   Further, in the case of Data Center selection, the customer could
   request a backup DC to be selected, such that in case of failure
   another DC site could provide hot stand-by protection. As shown in
   Figure 11, DC-C is selected as a backup for DC-A. Thus, the VN
   should be set up by the MDSC to include primary connectivity
   between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
   between AP1 (CE1) and AP4 (DC-C).

                  -------            -------
                 (       )          (       )
                -         -        -         -
     +---+     (           )      (           )     +----+
     |CE1|---+-(  Domain X  )----(  Domain Y   )-+--|DC-A|
     +---+   | (           )      (           ) |   +----+
           AP1  -         -        -         - AP2    |
                 (       )          (       )         |
                  ---+---            ---+---          |
                 AP3 |              AP4 |        HOT STANDBY
                                     +----+           |
                                     |DC-C|<-----------
                                     +----+

          Figure 11: Pre-planned end point migration

6.2. On the Fly End Point Migration

   Compared to pre-planned end point migration, on the fly end point
   selection is dynamic in that the migration is not pre-planned but
   decided based on network conditions. Under this scenario, the MDSC
   would monitor the network (based on the VN SLA) and notify the CNC
   in cases where some other destination AP would be a better choice
   based on the network parameters. The CNC should instruct the MDSC
   when it is suitable to update the VN with the new AP if it is
   required.
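   The following Python sketch (illustrative only; thresholds and
   names are hypothetical) outlines the monitor-and-notify behavior
   described above, using latency as an example SLA parameter.

      # The MDSC evaluates the VN against its SLA and notifies the
      # CNC when another destination AP would be a better choice; the
      # CNC decides whether and when to instruct the update.
      def monitor_vn(vn_sla, measure_latency_ms, current_ap,
                     candidate_aps, notify_cnc):
          measured = {ap: measure_latency_ms(ap) for ap in candidate_aps}
          if measured[current_ap] > vn_sla["max-latency-ms"]:
              better = min(measured, key=measured.get)
              if better != current_ap:
                  notify_cnc(f"consider migrating to {better}")

      monitor_vn({"max-latency-ms": 20},
                 {"AP2": 35, "AP4": 12}.get,   # stand-in measurements
                 current_ap="AP2",
                 candidate_aps=["AP2", "AP4"],
                 notify_cnc=print)             # prints the suggestion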
7. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow clients to request virtual
   connectivity across server network resources. As ACTN will support
   multiple clients, each with its own view and control of the server
   network, the network operator will need to partition (or "slice")
   the network resources and manage those resources accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservations of client and network layer
   connectivity. It will also need to provide performance monitoring
   and control of traffic engineered resources. The management
   requirements may be categorized as follows:

   . Management of external ACTN protocols
   . Management of internal ACTN protocols
   . Management and monitoring of ACTN components
   . Configuration of policy to be applied across the ACTN system

7.1. Policy

   It is expected that policy will be an important aspect of ACTN
   control and management. Typically, policies are used via the
   components and interfaces, during deployment of the service, to
   ensure that the service is compliant with agreed policy factors
   (often described in Service Level Agreements - SLAs). These
   include, but are not limited to: connectivity, bandwidth,
   geographical transit, technology selection, security, resilience,
   and economic cost.

   Depending on the ACTN deployment architecture, some policies may
   have local or global significance. That is, certain policies may be
   ACTN component specific in scope, while others may have broader
   scope and interact with multiple ACTN components. Two examples are
   provided below:

   . A local policy might limit the number, type, size, and scheduling
     of virtual network services a customer may request via its CNC.
     This type of policy would be implemented locally on the MDSC.

   . A global policy might constrain certain customer types (or
     specific customer applications) to only use certain MDSCs, and be
     restricted to physical network types managed by the PNCs. A
     global policy agent would govern these types of policies.

   The objective of this section is to discuss the applicability of
   ACTN policy: requirements, components, interfaces, and examples.
   This section provides an analysis and does not mandate a specific
   method for enforcing policy, or the type of policy agent that would
   be responsible for propagating policies across the ACTN components.
   It does highlight examples of how policy may be applied in the
   context of ACTN, but it is expected that further discussion in an
   applicability or solution-specific document will be required.

7.2. Policy applied to the Customer Network Controller

   A virtual network service for a customer application will be
   requested from the CNC. It will reflect the application
   requirements and specific service policy needs, including
   bandwidth, traffic type and survivability. Furthermore, application
   access and the type of virtual network service requested by the CNC
   will need to adhere to specific access control policies.

7.3. Policy applied to the Multi Domain Service Coordinator

   A key objective of the MDSC is to help the customer express the
   application connectivity request via its CNC as a set of desired
   business needs; therefore policy will play an important role.

   Once authorised, the virtual network service will be instantiated
   via the CNC-MDSC Interface (CMI). It will reflect the customer
   application and connectivity requirements, and specific service
   transport needs. The CNC and the MDSC components will have agreed
   connectivity end-points; use of these end-points should be defined
   as a policy expression when setting up or augmenting virtual
   network services. Ensuring that permissible end-points are defined
   for CNCs and applications will require the MDSC to maintain a
   registry of permissible connection points for CNCs and application
   types.

   It may also be necessary for the MDSC to resolve policy conflicts,
   or at least flag any issues to the administrator of the MDSC
   itself. Conflicts may occur when virtual network service
   optimisation criteria are in competition. For example, to meet
   objectives for service reachability a request may require an
   interconnection point between multiple physical networks; however,
   this might break a confidentiality policy requirement of a specific
   type of end-to-end service. This type of situation may be resolved
   using hard and soft policy constraints.

7.4. Policy applied to the Physical Network Controller

   The PNC is responsible for configuring the network elements,
   monitoring physical network resources, and exposing connectivity
   (direct or abstracted) to the MDSC. It is therefore expected that
   policy will dictate what connectivity information will be exported
   between the PNC and the MDSC, via the MDSC-PNC Interface (MPI).
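   A minimal sketch of such policy-driven export on the MPI is shown
   below (Python, with hypothetical policy actions "full", "abstract"
   and "hide"; no specific policy language is implied).

      # The PNC filters what it reveals to the MDSC according to a
      # local policy: export fully, export abstracted, or hide.
      def export_topology(links, policy):
          exported = []
          for link in links:
              action = policy.get(link["id"], "hide")
              if action == "full":
                  exported.append(link)
              elif action == "abstract":
                  # Strip technology detail, keep connectivity and bw.
                  exported.append({"id": link["id"], "bw": link["bw"]})
              # "hide": the link is not exported at all
          return exported

      links = [{"id": "L1", "bw": 100, "tech": "ODU4"},
               {"id": "L2", "bw": 40,  "tech": "ODU2"}]
      print(export_topology(links, {"L1": "abstract", "L2": "hide"}))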
   Policy interactions may arise when a PNC determines that it
   cannot compute a requested path from the MDSC, or notices that
   (per a locally configured policy) the network is low on resources
   (for example, the capacity on key links becomes exhausted). In
   either case, the PNC will be required to notify the MDSC, which
   may (again per policy) act to construct a virtual network service
   across another physical network topology.

   Furthermore, additional forms of policy-based resource management
   will be required to provide virtual network service performance,
   security, and resilience guarantees. This will likely be
   implemented via a local policy agent and subsequent protocol
   methods.

8. Security Considerations

   The ACTN framework described in this document defines key
   components and interfaces for managed traffic engineered
   networks. Securing the request and control of resources,
   confidentiality of the information, and availability of function
   should all be critical security considerations when deploying and
   operating ACTN platforms.

   Several distributed ACTN functional components are required, and,
   as a rule, implementations should consider encrypting data that
   flows between components, especially when they are implemented at
   remote nodes, regardless of whether these are external or
   internal network interfaces.

   The ACTN security discussion is further split into two specific
   categories described in the following sub-sections:

   . Interface between the Customer Network Controller and Multi
     Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   . Interface between the Multi Domain Service Coordinator and
     Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   From a security and reliability perspective, ACTN may encounter
   many risks such as malicious attacks and rogue elements
   attempting to connect to various ACTN components. Furthermore,
   some ACTN components represent a single point of failure and
   threat vector; they must also manage policy conflicts and guard
   against eavesdropping on communication between different ACTN
   components.

   The conclusion is that all protocols used to realize the ACTN
   framework should have rich security features, and customer,
   application, and network data should be stored in encrypted data
   stores. Additional security risks may still exist. Therefore,
   discussion and applicability of specific security functions and
   protocols will be better described in documents that are use case
   and environment specific.

8.1. Interface between the Customer Network Controller and Multi
     Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   The role of the MDSC is to detach the network and service control
   from the underlying technology to help the customer express the
   network as desired by business needs. It should be noted that
   data stored by the MDSC will reveal details of the virtual
   network services, and which CNC and application are consuming the
   resource. The stored data must therefore be considered a
   candidate for encryption.

   CNC access rights to an MDSC must be managed. MDSC resources must
   be properly allocated, and methods to prevent policy conflicts,
   resource wastage, and denial of service attacks on the MDSC by
   rogue CNCs should also be considered.

   A CNC-MDSC protocol interface will likely be an external protocol
   interface. Again, suitable authentication and authorization of
   each CNC connecting to the MDSC will be required, especially as
   these are likely to be implemented by different organisations and
   on separate functional nodes. Use of AAA-based mechanisms would
   also provide role-based authorization methods, so that only
   authorized CNCs may access the different functions of the MDSC.
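
   As an illustration of such role-based authorization, the Python
   sketch below maps authenticated CNC identities to roles and
   permitted MDSC operations. The identities, role names, and
   operations are hypothetical; this document defines no AAA schema.

      # Hypothetical role-based authorization of CNCs at the
      # MDSC (roles and operations are illustrative only).

      ROLES = {
          "vn-admin":  {"create-vn", "modify-vn", "delete-vn",
                        "query-vn"},
          "vn-viewer": {"query-vn"},
      }

      CNC_ROLE = {"cnc-blue": "vn-admin",
                  "cnc-green": "vn-viewer"}

      def authorize(cnc_id, operation):
          """True if an authenticated CNC may invoke operation."""
          role = CNC_ROLE.get(cnc_id)
          return role is not None and operation in ROLES[role]

      assert authorize("cnc-blue", "create-vn")
      assert not authorize("cnc-green", "delete-vn")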
8.2. Interface between the Multi Domain Service Coordinator and
     Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   The function of the Physical Network Controller (PNC) is to
   configure the network elements, provide performance and
   monitoring functions for the physical elements, and export the
   physical topology (full, partial, or abstracted) to the MDSC.

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the
   physical network layer control components and the MDSC (a minimal
   sketch of such a connection appears at the end of this section).

   Which MDSC the PNC exports topology information to, and the level
   of detail (full or abstracted), should also be authenticated;
   specific access restrictions and topology views should be
   configurable and/or policy-based.
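
   As an illustration only, the following Python sketch builds a
   mutually authenticated TLS session from an MDSC to a PNC using
   the standard "ssl" module. The host name, port number, and
   certificate file names are placeholders; this framework mandates
   no particular transport protocol for the MPI.

      # Sketch of a PKI-based (mutual TLS) MDSC-to-PNC session.
      # All endpoint and credential names are placeholders.

      import socket
      import ssl

      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      ctx.load_verify_locations("pnc-ca.pem")   # trust anchor
      ctx.load_cert_chain("mdsc-cert.pem",      # MDSC identity
                          "mdsc-key.pem")
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2

      addr = ("pnc.example.net", 12345)         # placeholder
      with socket.create_connection(addr) as sock:
          with ctx.wrap_socket(
                  sock, server_hostname=addr[0]) as tls:
              # Both peers are now authenticated; MPI messages
              # can be exchanged over this channel.
              print(tls.version())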
9. References

9.1. Informative References

   [RFC2702] Awduche, D., et al., "Requirements for Traffic
             Engineering Over MPLS", RFC 2702, September 1999.

   [RFC4026] Andersson, L. and T. Madsen, "Provider Provisioned
             Virtual Private Network (VPN) Terminology", RFC 4026,
             March 2005.

   [RFC4208] Swallow, G., Drake, J., Ishimatsu, H., and Y. Rekhter,
             "Generalized Multiprotocol Label Switching (GMPLS)
             User-Network Interface (UNI): Resource ReserVation
             Protocol-Traffic Engineering (RSVP-TE) Support for the
             Overlay Model", RFC 4208, October 2005.

   [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
             Computation Element (PCE)-Based Architecture", RFC
             4655, August 2006.

   [RFC5654] Niven-Jenkins, B. (Ed.), Brungard, D. (Ed.), and M.
             Betts (Ed.), "Requirements of an MPLS Transport
             Profile", RFC 5654, September 2009.

   [RFC7149] Boucadair, M. and C. Jacquenet, "Software-Defined
             Networking: A Perspective from within a Service
             Provider Environment", RFC 7149, March 2014.

   [RFC7926] Farrel, A. (Ed.), "Problem Statement and Architecture
             for Information Exchange between Interconnected
             Traffic-Engineered Networks", RFC 7926, July 2016.

   [GMPLS]   Mannie, E. (Ed.), "Generalized Multi-Protocol Label
             Switching (GMPLS) Architecture", RFC 3945, October
             2004.

   [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
             1.1, ONF TR-521, June 2016.

   [RFC7491] King, D. and A. Farrel, "A PCE-based Architecture for
             Application-based Network Operations", RFC 7491, March
             2015.

   [Transport NBI] Busi, I., et al., "Transport North Bound
             Interface Use Cases", draft-tnbidt-ccamp-transport-nbi-
             use-cases, work in progress.

10. Contributors

   Adrian Farrel
   Old Dog Consulting
   Email: adrian@olddog.co.uk

   Italo Busi
   Huawei
   Email: Italo.Busi@huawei.com

   Khuzema Pithewan
   Infinera
   Email: kpithewan@infinera.com

   Michael Scharf
   Nokia
   Email: michael.scharf@nokia.com

Authors' Addresses

   Daniele Ceccarelli (Editor)
   Ericsson
   Torshamnsgatan 48
   Stockholm, Sweden
   Email: daniele.ceccarelli@ericsson.com

   Young Lee (Editor)
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023, USA
   Phone: (469)277-5838
   Email: leeyoung@huawei.com

   Luyuan Fang
   Microsoft
   Email: luyuanf@gmail.com

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid, Spain
   Email: diego@tid.es

   Sergio Belotti
   Alcatel-Lucent
   Via Trento, 30
   Vimercate, Italy
   Email: sergio.belotti@nokia.com

   Daniel King
   Lancaster University
   Email: d.king@lancaster.ac.uk

   Dhruv Dhody
   Huawei Technologies
   Email: dhruv.ietf@gmail.com

   Gert Grammel
   Juniper Networks
   Email: ggrammel@juniper.net