TEAS Working Group                              Daniele Ceccarelli (Ed)
Internet Draft                                                 Ericsson
Intended status: Informational                           Young Lee (Ed)
Expires: January 19, 2018                                        Huawei

                                                           July 20, 2017

 Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-07

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane. They
   also have a range of management and provisioning protocols to
   configure and activate network resources. These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns the
   network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 19, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
      1.1. Terminology...............................................5
   2. Business Model of ACTN.........................................9
      2.1. Customers................................................10
      2.2. Service Providers........................................10
      2.3. Network Providers........................................12
   3. Virtual Network Service.......................................12
   4. ACTN Base Architecture........................................13
      4.1. Customer Network Controller..............................15
      4.2. Multi Domain Service Coordinator.........................16
      4.3. Physical Network Controller..............................17
      4.4. ACTN Interfaces..........................................17
   5. Advanced ACTN architectures...................................18
      5.1. MDSC Hierarchy for scalability...........................18
      5.2. Functional Split of MDSC Functions in Orchestrators......19
   6. Topology Abstraction Method...................................21
      6.1. Abstraction Factors......................................22
      6.2. Abstraction Types........................................23
         6.2.1. Native/White Topology...............................23
         6.2.2. Black Topology......................................24
         6.2.3. Grey Topology.......................................25
      6.3. Building Methods of Grey Topology........................27
         6.3.1. Automatic generation of abstract topology by
         configuration..............................................27
         6.3.2. On-demand generation of supplementary topology via
         path compute request/reply.................................28
      6.4. Abstraction Configuration Consideration..................29
         6.4.1. Packet Networks.....................................29
         6.4.2. OTN Networks........................................29
         6.4.3. WSON Networks.......................................30
      6.5. Topology Abstraction Granularity Level example...........30
   7. Access Points and Virtual Network Access Points...............32
      7.1. Dual homing scenario.....................................34
   8. Advanced ACTN Application: Multi-Destination Service..........35
      8.1. Pre-Planned End Point Migration..........................36
      8.2. On the Fly End Point Migration...........................37
   9. Advanced Topic................................................37
   10. Manageability Considerations.................................37
      10.1. Policy..................................................38
      10.2. Policy applied to the Customer Network Controller.......39
      10.3. Policy applied to the Multi Domain Service Coordinator..39
      10.4. Policy applied to the Physical Network Controller.......39
   11. Security Considerations......................................40
      11.1. Interface between the Customer Network Controller and
      Multi Domain Service Coordinator (MDSC), CNC-MDSC Interface
      (CMI)........................................................41
      11.2. Interface between the Multi Domain Service Coordinator
      and Physical Network Controller (PNC), MDSC-PNC Interface
      (MPI)........................................................41
   12. References...................................................42
      12.1. Informative References..................................42
   13. Contributors.................................................43
   Authors' Addresses...............................................44
   APPENDIX A - Example of MDSC and PNC functions integrated in
   Service/Network Orchestrator.....................................45
   APPENDIX B - Example of IP + Optical network with L3VPN service..45

1. Introduction

   Traffic Engineered networks have a variety of mechanisms to
   facilitate separation of data plane and control plane including
   distributed signaling for path setup and protection, centralized
   path computation for planning and traffic engineering, and a range
   of management and provisioning protocols to configure and activate
   network resources. These mechanisms represent key technologies for
   enabling flexible and dynamic networking.

   The term Traffic Engineered network is used in this document to
   refer to a network that uses any connection-oriented technology
   under the control of a distributed or centralized control plane to
   support dynamic provisioning of end-to-end connectivity. Some
   examples of networks that are in scope of this definition are
   optical networks, MPLS Transport Profile (MPLS-TP) networks
   [RFC5654], and MPLS Traffic Engineering (MPLS-TE) networks
   [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane. This separation of the control plane from the data
   plane has already been achieved with the development of MPLS/GMPLS
   [GMPLS] and the Path Computation Element (PCE) [RFC4655] for
   TE-based networks. One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks. Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control. For TE-based networks, PCE is essentially equivalent to a
   logically centralized path computation function.

   Three key aspects that need to be solved by SDN are:

   . Separation of service requests from service delivery so that the
     orchestration of a network is transparent from the point of view
     of the customer but remains responsive to the customer's services
     and business needs.

   . Network abstraction: As described in [RFC7926], abstraction is
     the process of applying policy to a set of information about a TE
     network to produce selective information that represents the
     potential ability to connect across the domain. The process of
     abstraction presents the connectivity graph in a way that is
     independent of the underlying network technologies, capabilities,
     and topology so that it can be used to plan and deliver network
     services in a uniform way.

   . Coordination of resources across multiple domains and multiple
     layers to provide end-to-end services regardless of whether the
     domains use SDN or not.

   As networks evolve, the need to provide separated service
   request/orchestration and resource abstraction has emerged as a key
   requirement for operators. In order to support multiple clients
   each with its own view of and control of the server network, a
   network operator needs to partition (or "slice") the network
   resources. The resulting slices can be assigned to each client for
   guaranteed usage, which is a step further than shared use of common
   network resources.

   Furthermore, each network represented to a client can be built from
   abstractions of the underlying networks so that, for example, a
   link in the client's network is constructed from a path or
   collection of paths in the underlying network.

   We call the set of management and control functions used to provide
   these features Abstraction and Control of Traffic Engineered
   Networks (ACTN).

   Particular attention needs to be paid to the multi-domain case:
   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service. This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or
   vendor-specific technology islands) as a single virtualized
   network.

   The ACTN framework described in this document facilitates:

   . Abstraction of the underlying network resources to higher-layer
     applications and customers [RFC7926].

   . Virtualization of particular underlying resources, whose
     selection criterion is the allocation of those resources to a
     particular customer, application or service [ONF-ARCH].

   . Network slicing of infrastructure to meet specific customers'
     service requirements.

   . Creation of a virtualized environment allowing operators to view
     and control multi-domain networks as a single virtualized
     network.

   . The presentation to customers of networks as a virtual network
     via open and programmable interfaces.

1.1. Terminology

   The following terms are used in this document. Some of them are
   newly defined, while others reference existing definitions:

   . Network Slicing: In the context of ACTN, network slicing is a
     collection of resources that are used to establish logically
     dedicated virtual networks over TE networks. It allows a network
     provider to provide dedicated virtual networks for
     applications/customers over a common network infrastructure. The
     logically dedicated resources are a part of the larger common
     network infrastructures that are shared among various network
     slice instances, which are the end-to-end realization of network
     slicing consisting of the combination of physically or logically
     dedicated resources.

   . Node: A node is a vertex on the graph representation of a TE
     topology.
     In a physical network topology, a node corresponds to a physical
     network element (NE). In an abstract network topology, a node
     (sometimes called an abstract node) is a representation as a
     single vertex of one or more physical NEs and their connecting
     physical connections. The concept of a node represents the
     ability to connect from any access to the node (a link end) to
     any other access to that node, although "limited cross-connect
     capabilities" may also be defined to restrict this functionality.
     Just as network slicing and network abstraction may be applied
     recursively, so a node in a topology may be created by applying
     slicing or abstraction on the nodes in the underlying topology.

   . Link: A link is an edge on the graph representation of a TE
     topology. Two nodes connected by a link are said to be "adjacent"
     in the TE topology. In a physical network topology, a link
     corresponds to a physical connection. In an abstract network
     topology, a link (sometimes called an abstract link) is a
     representation of the potential to connect a pair of points with
     certain TE parameters (see RFC 7926 for details). Network
     slicing/virtualization and network abstraction may be applied
     recursively, so a link in a topology may be created by applying
     slicing and/or abstraction on the links in the underlying
     topology.

   . CNC: A Customer Network Controller is responsible for
     communicating a customer's virtual network service requirements
     to the network provider. It has knowledge of the end-points
     associated with the virtual network service, the service policy,
     and other QoS information related to the service it is
     responsible for instantiating.

   . PNC: A Physical Network Controller is responsible for controlling
     devices or NEs under its direct control. The PNC functions can be
     implemented as part of an SDN domain controller, a Network
     Management System (NMS), an Element Management System (EMS), an
     active PCE-based controller, or any other means of dynamically
     controlling a set of nodes that implements a north-bound
     interface (NBI) compliant with the ACTN specification.

   . PNC domain: A PNC domain includes all the resources under the
     control of a single PNC. It can be composed of different routing
     domains and administrative domains, and the resources may come
     from different layers. The interconnection between PNC domains
     can be a link or a node.

              _______    Border Link     _______
            _(       )================(         )_
          _(           )_          _(             )_
         (               )  ----  (                 )
        (      PNC       )|    | (       PNC        )
        (    Domain X    )|    | (     Domain Y     )
        (                )|    | (                  )
         (_             _)  ----  (_               _)
           (_         _)  Border    (_           _)
             (_______)     Node       (_______)

                   Figure 1: PNC Domain Borders

   . MDSC: A Multi Domain Service Coordinator is a functional block
     that implements all four ACTN main functions, i.e., multi-domain
     coordination, virtualization/abstraction, customer
     mapping/translation, and virtual service coordination. The first
     two functions of the MDSC, namely multi-domain coordination and
     virtualization/abstraction, are referred to as network-related
     functions, while the last two functions, namely customer
     mapping/translation and virtual service coordination, are
     referred to as service-related functions. See details on these
     functions in Section 4.2. In some implementations, the PNC and
     MDSC functions can be co-located and implemented in the same box.

   . A Virtual Network (VN) is a customer view of the TE network.
     Depending on the agreement between client and provider, various
     VN operations and VN views are possible, as follows:

     o VN Creation - A VN could be pre-configured and created via
       offline negotiation between customer and provider. In other
       cases, the VN could also be created dynamically based on a
       request from the customer with given SLA attributes which
       satisfy the customer's objectives.

     o Dynamic Operations - The VN could be further modified or
       deleted based on a customer request. The customer can further
       act upon the virtual network resources to perform end-to-end
       tunnel management (set-up/release/modify). These changes will
       result in subsequent LSP management at the operator's level.

     o VN Type:

       a. The VN can be seen as a set of end-to-end tunnels from a
          customer point of view, where each tunnel is referred to as
          a VN member. Each VN member can then be formed by recursive
          slicing or abstraction of paths in underlying networks. Such
          end-to-end tunnels may comprise customer end points, access
          links, intra-domain paths, and inter-domain links. In this
          view, a VN is thus a set of VN members (which is referred to
          as a Type 1 VN).

       b. The VN can also be seen as a topology comprising physical,
          sliced, and abstract nodes and links. This VN is referred to
          as a Type 2 VN. The nodes in this case include physical
          customer end points, border nodes, and internal nodes as
          well as abstracted nodes. Similarly, the links include
          physical access links, inter-domain links, and intra-domain
          links as well as abstract links. With a Type 2 VN, it is
          still possible to view the VN at the VN member level.

   . Virtual Network Service (VNS) is requested by the customer and
     negotiated with the provider. There are three types of VNS
     defined in this document. Type 1 VNS refers to a VNS in which the
     customer is allowed to create and operate a Type 1 VN. Type 2a
     and 2b VNS refer to a VNS in which the customer is allowed to
     create and operate a Type 2 VN. With Type 2a VNS, once the VN is
     statically created at service configuration time, the customer
     is not allowed to change the topology (i.e., adding or deleting
     abstract nodes/links). Type 2b VNS is the same as Type 2a VNS
     except that the customer is allowed to change the topology
     dynamically from the initial topology created at service
     configuration time. See Section 3 for details.

   . Abstraction. This process is defined in [RFC7926].

   . Abstract Link: The term "abstract link" is defined in [RFC7926].

   . Abstract Topology: The topology of abstract nodes and abstract
     links presented through the process of abstraction by a lower
     layer network for use by a higher layer network.

   . Access link: A link between a customer node and a provider node.

   . Inter-domain link: A link between domains managed by different
     PNCs. The MDSC is in charge of managing inter-domain links.

   . Access Point (AP): An access point is used to keep
     confidentiality between the customer and the provider. It is a
     logical identifier shared between the customer and the provider,
     used to map the end points of the border node in both the
     customer and the provider network. The AP can be used by the
     customer when requesting a VN service from the provider.

   . VN Access Point (VNAP): A VNAP is defined as the binding between
     an AP and a given VN and is used to identify the portion of the
     access and/or inter-domain link dedicated to a given VN.
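
   To make the relationships between these terms concrete, the sketch
   below (in Python) models a VN, its members, and the VNS types as
   simple data structures. It is purely illustrative: all type and
   field names are hypothetical and do not define an encoding for any
   ACTN interface.

      # Illustrative sketch only: hypothetical data structures that
      # summarize the VN/VNS terminology above; not a normative model.
      from dataclasses import dataclass, field
      from enum import Enum
      from typing import List, Optional, Tuple

      @dataclass
      class VNMember:
          """An end-to-end tunnel between two APs (the Type 1 view)."""
          source_ap: str          # AP identifier shared with the provider
          destination_ap: str
          bandwidth_gbps: float
          max_latency_ms: float

      @dataclass
      class AbstractTopology:
          """Abstract nodes and links seen by the customer (Type 2)."""
          nodes: List[str] = field(default_factory=list)
          links: List[Tuple[str, str]] = field(default_factory=list)

      class VNSType(Enum):
          TYPE_1 = "set of VN members only"
          TYPE_2A = "abstract topology, fixed at configuration time"
          TYPE_2B = "abstract topology, modifiable by the customer"

      @dataclass
      class VirtualNetworkService:
          vns_type: VNSType
          members: List[VNMember]                      # VN member view
          topology: Optional[AbstractTopology] = None  # Type 2 only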

2. Business Model of ACTN

   The Virtual Private Network (VPN) [RFC4026] and Overlay Network
   (ON) models [RFC4208] are built on the premise that the network
   provider provides all virtual private or overlay networks to its
   customers. These models are simple to operate but have some
   disadvantages in accommodating the increasing need for flexible and
   dynamic network virtualization capabilities.

   There are three key entities in the ACTN model:

   - Customers
   - Service Providers
   - Network Providers

   These are described in the following sections.

2.1. Customers

   Within the ACTN framework, different types of customers may be
   taken into account depending on the type of their resource needs,
   and on their number and type of access. For example, it is possible
   to group them into two main categories:

   Basic Customer: Basic customers include fixed residential users,
   mobile users and small enterprises. Usually, the number of basic
   customers for a service provider is high: they require small
   amounts of resources and are characterized by steady requests
   (relatively time invariant). A typical request for a basic customer
   is for a bundle of voice services and internet access. Moreover,
   basic customers do not modify their services themselves: if a
   service change is needed, it is performed by the provider as a
   proxy, and the services generally have very few dedicated resources
   (such as for subscriber drop), with everything else shared on the
   basis of some Service Level Agreement (SLA), which is usually
   best-effort.

   Advanced Customer: Advanced customers typically include
   enterprises, governments and utilities. Such customers can ask for
   both point-to-point and multipoint connectivity with high resource
   demands varying significantly in time and from customer to
   customer. This is one of the reasons why a bundled service offering
   is not enough and it is desirable to provide each advanced customer
   with a customized virtual network service.

   Advanced customers may own dedicated virtual resources, or share
   resources. They may also have the ability to modify their service
   parameters within the scope of their virtualized environments. The
   primary focus of ACTN is Advanced Customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with
   different underlying objectives set by the network providers. To
   enable these customers to support flexible and dynamic applications
   they need to control their allocated virtual network resources in a
   dynamic fashion, and that means that they need a view of the
   topology that spans all of the network providers. Customers of a
   given service provider can in turn offer a service to other
   customers in a recursive way.

2.2. Service Providers

   Service providers are the providers of virtual network services
   (see Section 3 for details) to their customers. Service providers
   may or may not own physical network resources (i.e., may or may not
   be network providers as described in Section 2.3).
   When a service provider is the same as the network provider, this
   is similar to existing VPN models applied to a single provider.
   This approach works well when the customer maintains a single
   interface with a single provider. When the customer spans multiple
   independent network provider domains, it becomes hard to facilitate
   the creation of end-to-end virtual network services with this
   model.

   A more interesting case arises when network providers only provide
   infrastructure, while distinct service providers interface to the
   customers. In this case, service providers are themselves customers
   of the network infrastructure providers. One service provider may
   need relationships with multiple independent network providers
   since its end-users span geographically across multiple network
   provider domains.

   The ACTN network model is predicated upon this three-tier model and
   is summarized in Figure 2:

                      +----------------------+
                      |       customer       |
                      +----------------------+
                                 |
                  VNS    ||      |      /\  VNS
                  Request||      |      ||  Reply
                         \/      |      ||
                      +----------------------+
                      |   Service Provider   |
                      +----------------------+
                         /       |        \
                        /        |         \
                       /         |          \
                      /          |           \
   +------------------+ +------------------+ +------------------+
   |Network Provider 1| |Network Provider 2| |Network Provider 3|
   +------------------+ +------------------+ +------------------+

                      Figure 2: Three-tier model.

   There can be multiple service providers to which a customer may
   interface.

   There are multiple types of service providers, for example:

   . Data Center providers can be viewed as a service provider type as
     they own and operate data center resources for various WAN
     customers, and they can lease physical network resources from
     network providers.
   . Internet Service Providers (ISPs) are service providers of
     Internet services to their customers while leasing physical
     network resources from network providers.
   . Mobile Virtual Network Operators (MVNOs) provide mobile services
     to their end-users without owning the physical network
     infrastructure.

2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers. The layered model described in this architecture
   separates the concerns of network providers and customers, with
   service providers acting as aggregators of customer requests.

3. Virtual Network Service

   A Virtual Network Service (VNS) is requested by the customer and
   negotiated with the provider. There are three types of VNS defined
   in this document.

   Type 1 VNS refers to a VNS in which the customer is allowed to
   create and operate a Type 1 VN. A Type 1 VN is a VN that comprises
   a set of end-to-end tunnels from a customer point of view, where
   each tunnel is referred to as a VN member. With Type 1 VNS, the
   network operator does not need to provide additional abstract VN
   topology associated with the Type 1 VN.

   Type 2a VNS refers to a VNS in which the customer is allowed to
   create and operate a Type 2 VN, but is not allowed to change the
   topology once it is configured at service configuration time. A
   Type 2 VN is an abstract VN topology that may comprise
   virtual/abstract nodes and links. The nodes in this case may
   include physical customer end points, border nodes, and internal
   nodes as well as abstracted nodes. Similarly, the links may include
   physical access links, inter-domain links, and intra-domain links
   as well as abstract links.

   Type 2b VNS refers to a VNS in which the customer is allowed to
   create and operate a Type 2 VN and is allowed to dynamically change
   the abstract VN topology from the abstract VN topology initially
   created at service configuration time.

   From an implementation standpoint, the differentiation between
   Type 2a VNS and Type 2b VNS might be fulfilled via local policy.

   In all types of VNS, the customer can specify a set of
   service-related parameters such as connectivity type, VN traffic
   matrix (e.g., bandwidth, latency, diversity, etc.), VN
   survivability, VN service policy and other characteristics.

4. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is aligned with the ONF SDN architecture
   [ONF-ARCH] and presents a three-tier reference model. It allows for
   hierarchy and recursiveness not only of SDN controllers but also of
   traditionally controlled domains that use a control plane. It
   defines three types of controllers depending on the functionalities
   they implement. The main functionalities that are identified are
   listed below; an informal sketch of this functional grouping
   follows the list:

   . Multi-domain coordination function: This function oversees the
     specific aspects of the different domains and builds a single
     abstracted end-to-end network topology in order to coordinate
     end-to-end path computation and path/service provisioning.
     Domain sequence path calculation/determination is also a part of
     this function.

   . Virtualization/Abstraction function: This function provides an
     abstracted view of the underlying network resources for use by
     the customer - a customer may be the client or a higher level
     controller entity. This function includes network path
     computation based on customer service connectivity request
     constraints, path computation based on the global network-wide
     abstracted topology, and the creation of an abstracted view of
     network resources allocated to each customer. These operations
     depend on customer-specific network objective functions and
     customer traffic profiles.

   . Customer mapping/translation function: This function maps
     customer requests/commands into network provisioning requests
     that can be sent to the Physical Network Controller (PNC)
     according to business policies provisioned statically or
     dynamically at the OSS/NMS. Specifically, it provides mapping and
     translation of a customer's service request into a set of
     parameters that are specific to a network type and technology
     such that the network configuration process is made possible.

   . Virtual service coordination function: This function translates
     customer service-related information into virtual network
     service operations in order to seamlessly operate virtual
     networks while meeting a customer's service requirements. In the
     context of ACTN, service/virtual service coordination includes a
     number of service orchestration functions such as
     multi-destination load balancing, guarantees of service quality,
     bandwidth and throughput. It also includes notifications for
     service fault and performance degradation and so forth.
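
   The following skeleton (in Python) is the informal sketch referred
   to above. It groups the four functions as methods on a single
   component, mirroring the split between network-related and
   service-related functions; all names are hypothetical and carry no
   normative meaning.

      # Informal sketch of the four main ACTN functions; names are
      # hypothetical and illustrate only the functional grouping.
      from abc import ABC, abstractmethod

      class MultiDomainServiceCoordinator(ABC):
          # --- Network-related functions ---
          @abstractmethod
          def coordinate_multi_domain(self, per_domain_topologies):
              """Build a single abstract end-to-end topology and
              determine domain sequences for end-to-end paths."""

          @abstractmethod
          def virtualize_abstract(self, native_topology, policy):
              """Create the abstracted view of network resources
              allocated to each customer, according to policy."""

          # --- Service-related functions ---
          @abstractmethod
          def map_customer_request(self, vns_request):
              """Translate a customer request into technology-specific
              provisioning requests toward the PNCs."""

          @abstractmethod
          def coordinate_virtual_service(self, service_state):
              """Operate the virtual network against the customer's
              service requirements (quality, faults, and so forth)."""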

   Figure 3 depicts the base ACTN architecture with three controller
   types and the corresponding interfaces between these controllers.
   The types of controller defined in the ACTN architecture are shown
   in Figure 3 below and are as follows:

   . CNC - Customer Network Controller
   . MDSC - Multi Domain Service Coordinator
   . PNC - Physical Network Controller

   Figure 3 also shows the following interfaces:

   . CMI - CNC-MDSC Interface
   . MPI - MDSC-PNC Interface
   . SBI - South Bound Interface

      +--------------+    +---------------+    +--------------+
      |     CNC-A    |    |     CNC-B     |    |     CNC-C    |
      |(DC provider) |    |     (ISP)     |    |    (MVNO)    |
      +--------------+    +---------------+    +--------------+
               \                  |                  /
   Business     \                 |                 /
   Boundary ======\===============|================/=======
   Between         \              | CMI           /
   Customer &       -----------   |   ------------
   Network Provider            \  |  /
                        +-----------------------+
                        |         MDSC          |
                        +-----------------------+
                           /        |        \
                ----------          |MPI      ----------------
               /                    |                         \
          +-------+             +-------+                 +-------+
          |  PNC  |             |  PNC  |                 |  PNC  |
          +-------+             +-------+                 +-------+
          | GMPLS /              |     /                     \
          | trigger              |SBI /                       \
         --------    -----       |   /                         \
        (        )  (     )      |  /                           \
       -          - ( Phys.)     | /                          -----
      (   GMPLS    )(  Net )     |/                          (     )
      (  Physical  ) -----       |                           (Phys.)
      (  Network  )    -----    -----                        ( Net )
       -          -   (     )  (     )                        -----
        (        )    (Phys.)  (Phys.)
         --------     ( Net )  ( Net )
                       -----    -----

                   Figure 3: ACTN Base Architecture

4.1. Customer Network Controller

   A Virtual Network Service is instantiated by the Customer Network
   Controller via the CNC-MDSC Interface (CMI). As the Customer
   Network Controller directly interfaces to the applications, it
   understands multiple application requirements and their service
   needs. It is assumed that the Customer Network Controller and the
   MDSC have a common knowledge of the end-point interfaces based on
   their business negotiations prior to service instantiation.
   End-point interfaces refer to customer-network physical interfaces
   that connect customer premise equipment to network provider
   equipment.

4.2. Multi Domain Service Coordinator

   The Multi Domain Service Coordinator (MDSC) sits between the CNC
   that issues connectivity requests and the Physical Network
   Controllers (PNCs) that manage the physical network resources. The
   MDSC can be collocated with the PNC.

   The internal system architecture and building blocks of the MDSC
   are out of the scope of ACTN. Some examples can be found in the
   Application Based Network Operations (ABNO) architecture [RFC7491]
   and the ONF SDN architecture [ONF-ARCH].

   The MDSC is the only building block of the architecture that can
   implement all four ACTN main functions, i.e., multi-domain
   coordination, virtualization/abstraction, customer
   mapping/translation, and virtual service coordination. The first
   two functions of the MDSC, namely multi-domain coordination and
   virtualization/abstraction, are referred to as network-related
   functions, while the last two functions, namely customer
   mapping/translation and virtual service coordination, are referred
   to as service-related functions.

   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology to help the customer express the network as desired by
   business needs. The MDSC envelopes the instantiation of the right
   technology and network control to meet business criteria. In
   essence it controls and manages the primitives to achieve
   functionalities as desired by the CNC.

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and between MDSCs and PNCs (i.e., 1
   parent MDSC and N child MDSCs, or 1 MDSC and N PNCs).

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers).

4.3. Physical Network Controller

   The Physical Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and passing information about the topology (either raw
   or abstracted) to the MDSC.

   The internal architecture of the PNC, its building blocks, and the
   way it controls its domain are out of the scope of ACTN. Some
   examples can be found in the Application Based Network Operations
   (ABNO) architecture [RFC7491] and the ONF SDN architecture
   [ONF-ARCH].

   The PNC, in addition to being in charge of controlling the physical
   network, is able to implement two of the four main ACTN functions:
   the multi-domain coordination and virtualization/abstraction
   functions.

   Note that from an implementation point of view it is possible to
   integrate one or more MDSC functions and one or more PNC functions
   within the same controller.

4.4. ACTN Interfaces

   The network has to provide open, programmable interfaces, through
   which customer applications can create, replace and modify virtual
   network resources and services in an interactive, flexible and
   dynamic fashion while having no impact on other customers. Direct
   customer control of transport network elements and virtualized
   services is not perceived as a viable proposition for transport
   network providers due to security and policy concerns among other
   reasons. In addition, the network control plane for transport
   networks has been separated from the data plane and as such it is
   not viable for the customer to directly interface with transport
   network elements.

   . CMI Interface: The CNC-MDSC Interface (CMI) is an interface
     between a CNC and an MDSC. As depicted in Figure 3, the CMI is a
     business boundary between customer and network provider. It is
     used to request the virtual network services required for the
     applications. Note that all service-related information, such as
     specific service properties including virtual network service
     type, topology, bandwidth, and constraint information, is
     conveyed over this interface. Most of the information over this
     interface is technology agnostic; however, there are some cases,
     e.g., access link configuration, where it should be possible to
     explicitly request a VN to be created at a given layer in the
     network (e.g., an ODU VN or an MPLS VN).

   . MPI Interface: The MDSC-PNC Interface (MPI) is an interface
     between an MDSC and a PNC. It communicates the creation requests
     for new connectivity or for bandwidth changes in the physical
     network. In multi-domain environments, the MDSC needs to
     establish multiple MPIs, one for each PNC, as there is one PNC
     responsible for control of each domain. The MPI could have
     different degrees of abstraction and present an abstracted
     topology hiding technology specific aspects of the network or
     convey technology specific parameters to allow for path
     computation at the MDSC level.
     Please refer to the CCAMP Transport NBI work for the latter case
     [Transport NBI].

   . SBI Interface: This interface is out of the scope of ACTN. It is
     shown in Figure 3 for reference only.

   Please note that, for all three interfaces, when technology
   specific information needs to be included, this information is
   added on top of the general abstract topology. From the standpoint
   of general topology abstraction, all interfaces remain recursive in
   nature.

5. Advanced ACTN architectures

   This section describes advanced forms of ACTN architectures as
   possible implementation choices.

5.1. MDSC Hierarchy for scalability

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices or putting together
   different layers and technologies in the network. In the case where
   there is a hierarchy of MDSCs, we introduce the higher-level MDSC
   (MDSC-H) and the lower-level MDSC (MDSC-L); the interface between
   them is a recursive instance of the MPI. An implementation choice
   could foresee the usage of an MDSC-L for all the PNCs related to a
   given network layer or technology (e.g., IP/MPLS), a different
   MDSC-L for the PNCs related to another layer/technology (e.g.,
   OTN/WDM), and an MDSC-H to coordinate them.

   Figure 4 shows this case.

                             +--------+
                             |  CNC   |
                             +--------+
                                  |
                                  |
                             +----------+
                     --------|  MDSC-H  |--------
                     |       +----------+       |
                     |                          |
                +---------+                +---------+
                | MDSC-L  |                | MDSC-L  |
                +---------+                +---------+

                        Figure 4: MDSC Hierarchy

   Note that both the MDSC-H and the MDSC-L in general cases implement
   all four functions of the MDSC discussed in Section 4.2.

5.2. Functional Split of MDSC Functions in Orchestrators

   Another implementation choice could foresee the separation of MDSC
   functions into two groups (i.e., one group for service-related
   functions and another group for network-related functions), which
   will result in a service orchestrator providing the service-related
   functions of the MDSC and other non-ACTN functions, and a network
   orchestrator providing the network-related functions of the MDSC
   and other non-ACTN functions. Figure 5 shows this case and also
   depicts the mapping between the ACTN architecture and the YANG
   service model architecture described in [Service-YANG]. This
   mapping is helpful for readers who are not familiar with some TEAS
   specific terminology used in this document. A number of key ACTN
   interfaces exist for deployment and operation of ACTN-based
   networks. These are highlighted in Figure 5 (ACTN Interfaces).

       +------------------------------+
       |           Customer           |
       |  +-----+      +----------+   |
       |  | CNC |      |Other fns.|   |
       |  +-----+      +----------+   |
       +------------------------------+
           | Customer Service Model
           |
       +-----------------------------------------------+
   ********|**********************  Service Orchestrator|
   * MDSC  |  +------+  +------+ *      +-----------+  |
   *       |  | MDSC |  | MDSC | *      | Other fns.|  |
   *       |  |  F1  |  |  F2  | *      | (non-ACTN)|  |
   *       |  +------+  +------+ *      +-----------+  |
   *   +-------------------------*---------------------+
   *   *                   | Service Delivery Model
   *   *                   |
   *   +-------------------------*---------------------+
   *   |                         *  Network Orchestrator|
   *   |  +------+  +------+     *      +-----------+  |
   *   |  | MDSC |  | MDSC |     *      | Other fns.|  |
   *   |  |  F3  |  |  F4  |     *      | (non-ACTN)|  |
   *   |  +------+  +------+     *      +-----------+  |
   ********|**********************                     |
       +-----------------------------------------------+
           | Network Configuration Model
           |
       +-------------------------------------------+
       |             Domain Controller             |
       |  +------+            +-----------+        |
       |  | PNC  |            | Other fns.|        |
       |  +------+            | (non-ACTN)|        |
       |                      +-----------+        |
       +-------------------------------------------+
           | Device Configuration Model
           |
        --------
       | Device |
        --------

    Figure 5: ACTN Architecture in the context of YANG Service Models

   In Figure 5, MDSC F1 and F2 correspond to customer
   mapping/translation and virtual service coordination, respectively,
   which are the MDSC service-related functions as defined in Section
   4. MDSC F3 and F4 correspond to multi-domain coordination and
   virtualization/abstraction, respectively, which are the MDSC
   network-related functions as defined in Section 4. In some
   implementations, MDSC F1 and F2 can be implemented as part of a
   Service Orchestrator which may support other non-ACTN functions.
   Likewise, MDSC F3 and F4 can be implemented as part of a Network
   Orchestrator which may support other non-ACTN MDSC functions.

   Also note that the PNC is not the same as a domain controller. A
   domain controller in general has a larger set of functions than the
   PNC. The main functions of the PNC are explained in Section 4.3.
   Likewise, the customer has a larger set of functions than the CNC.

   The customer service model describes a service as offered or
   delivered to a customer by a network operator, as defined in
   [Service-YANG]. The CMI is a subset of a customer service model to
   support VNS. This model encompasses other non-TE/non-ACTN models to
   control non-ACTN services (e.g., L3SM).

   The service delivery model is used by a network operator to define
   and configure how a service is provided by the network, as defined
   in [Service-YANG]. This model is similar to the MPI model as the
   network-related functions of the MDSC, i.e., F3 and F4, provide an
   abstract topology view of the E2E network to the service-related
   functions of the MDSC, i.e., F1 and F2, which translate the
   customer's request at the CMI into the network configuration at the
   MPI.

   The network configuration model is used by a network orchestrator
   to provide a network-level configuration model to a controller, as
   defined in [Service-YANG]. The MPI is a subset of the network
   configuration model to support TE configuration. This model
   encompasses the MPI model plus other non-TE/non-ACTN models to
   control non-ACTN functions of the domain controller (e.g., L3VPN).

   The device configuration model is used by a controller to configure
   physical network elements.
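
   As an informal illustration of the model cascade in Figure 5, the
   sketch below (in Python) shows each layer translating the model it
   receives into the model it passes down. The models are shown as
   plain dictionaries, and every function and field name is
   hypothetical.

      # Hypothetical sketch of the Figure 5 model cascade; each layer
      # translates the model above it into the model below it.
      def translate_customer_request(customer_service_model):
          # Service orchestrator (MDSC F1/F2): customer service model
          # -> service delivery model.
          return {"vns_type": customer_service_model["vns_type"],
                  "endpoints": customer_service_model["aps"]}

      def compute_network_config(service_delivery_model):
          # Network orchestrator (MDSC F3/F4): service delivery model
          # -> network configuration model (the MPI is a subset).
          return {"te_tunnels": [service_delivery_model["endpoints"]]}

      def render_device_config(network_configuration_model):
          # Domain controller (PNC): network configuration model
          # -> device configuration model for its own domain.
          return {"cross_connects":
                  network_configuration_model["te_tunnels"]}

      device_config = render_device_config(
          compute_network_config(
              translate_customer_request(
                  {"vns_type": "Type 1", "aps": ("AP1", "AP2")})))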

6. Topology Abstraction Method

   This section discusses topology abstraction factors, types, and
   their context in the ACTN architecture. Topology abstraction is
   useful in the ACTN architecture as a way to scale multi-domain
   network operation. Note that this is the abstraction performed by
   the PNC to the MDSC or by the MDSC-L to the MDSC-H, and that this
   is different from the VN Type 2 topology (which is created and
   negotiated between the CNC and the MDSC as part of the VNS). The
   purpose of the topology abstraction discussed in this section is
   efficient internal network operation based on the abstraction
   principle.

6.1. Abstraction Factors

   This section provides abstraction factors in the ACTN architecture.

   The MDSC oversees the specific aspects of the different domains and
   builds a single abstracted end-to-end network topology in order to
   coordinate end-to-end path computation and path/service
   provisioning. In order for the MDSC to perform its coordination
   function, it depends on the coordination with the PNCs, which are
   the domain-level controllers, especially as to what level of domain
   network resource abstraction is agreed upon between the MDSC and
   the PNCs.

   As discussed in [RFC7926], abstraction is tied to the policy of the
   networks. For instance, per an operational policy, the PNC would
   not be allowed to provide any technology specific details (e.g.,
   optical parameters for WSON) in its update. In such a case, the
   abstraction level of the update will be generic in nature. In order
   for the MDSC to get technology specific topology information from
   the PNC, a request/reply mechanism may be employed.

   In some cases, abstraction is also tied to the controller's
   capability for abstraction, as it involves some rules and
   algorithms to be applied to the actual network resource information
   (which is also known as the network topology).

   [TE-Topology] describes YANG models for TE-network abstraction.
   [PCEP-LS] describes a PCEP Link-State mechanism that also allows
   for transport of abstract topology in the context of Hierarchical
   PCE.

   There are factors that may impact the choice of abstraction. Here
   are the most relevant:

   - The nature of the underlying domain networks: Abstraction depends
     on the nature of the underlying domain networks. For instance,
     packet networks may have a different level of abstraction
     requirements from that of optical networks. Within optical
     networks, WSON may have a different level of abstraction
     requirements than OTN networks.

   - The capability of the PNC: Abstraction depends on the capability
     of the PNCs. As abstraction requires hiding details of the
     underlying network resource information, the PNC's capability to
     run some internal optimization algorithm impacts the feasibility
     of abstraction. Some PNCs may not have the ability to abstract
     the native topology, while other PNCs may have the ability to
     abstract the actual topology by using sophisticated algorithms.

   - Scalability factor: Abstraction is a function of scalability. If
     the actual network resource information is of small size, then
     the need for abstraction is less than in the case where the
     native network resource information is of large size. In some
     cases, abstraction may not be needed at all.

   - The frequency of topology updates: The proper abstraction level
     may depend on the frequency of topology updates and vice versa.

   - The capability/nature of the MDSC: The nature of the MDSC impacts
     the degree/level of abstraction. If the MDSC is not capable of
     handling optical parameters such as those specific to OTN/WSON,
     then white topology abstraction may not work well.

   - Confidentiality: In some cases where the PNC would like to hide
     key internal topological data from the MDSC, the abstraction
     method should consider this aspect.

   - The scope of abstraction: All of the aforementioned factors are
     equally applicable to both the MPI (MDSC-PNC Interface) and the
     CMI (CNC-MDSC Interface).

6.2. Abstraction Types

   This section defines the following three types of topology
   abstraction:

   . Native/White Topology (Section 6.2.1)
   . Black Topology (Section 6.2.2)
   . Grey Topology (Section 6.2.3)

6.2.1. Native/White Topology

   This is a case where the PNC provides the actual network topology
   to the MDSC without any hiding or filtering of information, as
   shown in Figure 6a. In this case, the MDSC has the full knowledge
   of the underlying network topology, and as such there is no need
   for the MDSC to send a path computation request to the PNC. The
   computation burden will fall on the MDSC to find an optimal
   end-to-end path and optimal per-domain paths.

          +--+     +--+     +--+     +--+
        +-+  +-----+  +-----+  +-----+  +-+
        ++-+      ++-+     +-++      +-++
         |         |         |         |
         |         |         |         |
         |         |         |         |
         |         |         |         |
        ++-+      ++-+     +-++      +-++
        +-+  +-----+  +-----+  +-----+  +-+
          +--+     +--+     +--+     +--+

                Figure 6a: The native/white topology

6.2.2. Black Topology

   The entire domain network is abstracted as a single virtual node
   (see the definition of virtual node in [RFC7926]) with the
   access/egress links, without disclosing any node internal
   connectivity information.

   Figure 6b depicts a native topology with the corresponding black
   topology with one virtual node and inter-domain links. In this
   case, the MDSC has to make path computation requests to the PNCs
   before it can determine an end-to-end path. If there are a large
   number of inter-connected domains, this abstraction method may
   impose a heavy coordination load at the MDSC level in order to find
   an optimal end-to-end path.

   The black topology does not give the MDSC any critical network
   resource information other than the border nodes/links information,
   and as such there is likely to be a need for complementary
   communication between the MDSC and the PNCs (e.g., Path Computation
   Request/Reply).

          +--+     +--+     +--+     +--+
        +-+  +-----+  +-----+  +-----+  +-+
        ++-+      ++-+     +-++      +-++
         |         |         |         |
         |         |         |         |
         |         |         |         |
         |         |         |         |
        ++-+      ++-+     +-++      +-++
        +-+  +-----+  +-----+  +-----+  +-+
          +--+     +--+     +--+     +--+

                       +--------+
                   +--+          +--+
                      |          |
                      |          |
                      |          |
                      |          |
                      |          |
                      |          |
                   +--+          +--+
                       +--------+

     Figure 6b: The native topology and the corresponding black
        topology with one virtual node and inter-domain links
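
   A minimal sketch (in Python) of the black-topology collapse
   described above is shown below. It assumes the domain topology is
   given as a simple list of links; the function name and the
   representation are illustrative only.

      # Minimal sketch of black-topology abstraction: the whole domain
      # collapses into one virtual node and only inter-domain links
      # survive; internal connectivity is not disclosed to the MDSC.
      def black_topology(domain_nodes, links, virtual_node="VN1"):
          """domain_nodes: set of node ids inside the domain.
          links: iterable of (a, b) pairs; a node not in domain_nodes
          is external (e.g., a border node of a neighboring domain)."""
          abstract_links = []
          for a, b in links:
              if (a in domain_nodes) != (b in domain_nodes):
                  external = b if a in domain_nodes else a
                  abstract_links.append((virtual_node, external))
          # Internal links (both ends inside the domain) are dropped.
          return {virtual_node}, abstract_links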

6.2.3. Grey Topology

   This abstraction level, referred to as a grey topology, represents
   a compromise between the black and white topologies from a
   granularity point of view. As shown in Figures 7a and 7b, we may
   further differentiate from a perspective of how to abstract
   internal TE resources between the pairs of border nodes:

   . Grey topology type A: border nodes with TE links between them in
     a full mesh fashion (see Figure 7a).

          +--+     +--+     +--+     +--+
        +-+  +-----+  +-----+  +-----+  +-+
        ++-+      ++-+     +-++      +-++
         |         |         |         |
         |         |         |         |
         |         |         |         |
         |         |         |         |
        ++-+      ++-+     +-++      +-++
        +-+  +-----+  +-----+  +-----+  +-+
          +--+     +--+     +--+     +--+

               +--+            +--+
             +-+  +------------+  +-+
             ++-+              +-++
              | \              / |
              |  \            /  |
              |   \          /   |
              |    \        /    |
              |     \      /     |
              |      \    /      |
              |       \  /       |
              |        \/        |
              |        /\        |
              |       /  \       |
              |      /    \      |
              |     /      \     |
              |    /        \    |
              |   /          \   |
              |  /            \  |
              | /              \ |
             ++-+              +-++
             +-+  +------------+  +-+
               +--+            +--+

    Figure 7a: The native topology and the corresponding grey topology
               type A with TE links between border nodes

   For each pair of ingress and egress nodes (i.e., border nodes
   to/from the domain), a TE link metric is provided with TE
   attributes such as maximum bandwidth available, link delay, etc.
   This abstraction depends on the underlying TE networks.

   Note that this grey topology can also be represented as a single
   abstract node with the connectivity matrix defined in
   [TE-Topology], abstracting the internal connectivity information.
   The only difference may be some additional information about the
   end points of the border nodes' links (i.e., outward,
   customer-facing links), as they cannot be included in the
   connectivity matrix's termination points.

   . Grey topology type B: border nodes with some internal abstracted
     nodes and abstracted links (see Figure 7b).

          +--+     +--+     +--+
        +-+  +-----+  +-----+  +-+
        ++-+      ++-+      +-++
         |         |          |
         |         |          |
         |         |          |
         |         |          |
        ++-+      ++-+      +-++
        +-+  +-----+  +-----+  +-+
          +--+     +--+     +--+

     Figure 7b: The grey topology type B with abstract nodes/links
                        between border nodes

   The grey abstraction type B allows the MDSC to have more
   information about the internals of the domain networks provided by
   the PNCs, so that the MDSC can flexibly determine optimal paths.
   The MDSC may configure some of the internal virtual nodes (e.g.,
   cross-connects) to redirect its traffic as it sees changes from the
   domain networks.

6.3. Building Methods of Grey Topology

   This section discusses two different methods of building a grey
   topology:

   . Automatic generation of abstract topology by configuration
     (Section 6.3.1)
   . On-demand generation of supplementary topology via path
     computation request/reply (Section 6.3.2)

6.3.1. Automatic generation of abstract topology by configuration

   The "automatic generation" method is based on the
   abstraction/summarization of the whole domain by the PNC and its
   advertisement on the MPI once the abstraction level is configured.
   The level of the abstraction advertisement can be decided based on
   some PNC configuration parameters (e.g., provide the potential
   connectivity between any PE and any ASBR in an MPLS-TE network).

   Note that the configuration parameters for this potential topology
   can include available B/W, latency, or any combination of defined
   parameters. How to generate such tunnel information is beyond the
   scope of this document.

   Such potential topology needs to be updated, periodically or
   incrementally/asynchronously, every time that a failure, a
   recovery, or the setup of new VNs causes a change in the
   characteristics of the advertised grey topology (e.g., in our
   previous case, if due to changes in the network it is now only
   possible to provide connectivity between a given PE and a given
   ASBR with a higher delay, this is reflected in the update). A
   sketch of this method is given below.
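
   The sketch below (in Python) illustrates the automatic generation
   of a type A grey topology: for every pair of border nodes, the PNC
   derives a TE metric over its native topology and advertises the
   result as a full mesh of abstract TE links. Dijkstra's algorithm on
   a delay metric stands in here for whatever internal computation a
   real PNC would run; all names are illustrative.

      # Sketch: PNC-side generation of a grey topology, type A.
      import heapq

      def min_delay(adj, src, dst):
          """adj: {node: [(neighbor, delay), ...]}; returns the
          smallest total delay from src to dst, or None if
          unreachable."""
          dist = {src: 0.0}
          heap = [(0.0, src)]
          visited = set()
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  return d            # first pop of dst is optimal
              if u in visited:
                  continue
              visited.add(u)
              for v, w in adj.get(u, []):
                  if v not in visited and d + w < dist.get(v,
                                                     float("inf")):
                      dist[v] = d + w
                      heapq.heappush(heap, (dist[v], v))
          return None

      def grey_topology_type_a(adj, border_nodes):
          """Advertise one abstract TE link per border-node pair."""
          mesh = {}
          for i, s in enumerate(border_nodes):
              for d in border_nodes[i + 1:]:
                  delay = min_delay(adj, s, d)
                  if delay is not None:
                      mesh[(s, d)] = {"min_delay": delay}
          return mesh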

6.3.2. On-demand generation of supplementary topology via path compute
       request/reply

   The "on-demand generation" of supplementary topology is to be
   distinguished from the automatic generation of abstract topology.
   While abstract topology is generated and updated automatically by
   configuration as explained in Section 6.3.1, additional
   supplementary topology may be obtained by the MDSC via a path
   compute request/reply mechanism. Starting with a black topology
   advertisement from the PNCs, the MDSC may need additional
   information beyond the level of the black topology from the PNCs.

   It is assumed that the black topology advertisement from the PNCs
   gives the MDSC each domain's border node/link information. Under
   this scenario, when the MDSC needs to allocate a new VN, the MDSC
   can issue a number of path computation requests, as described in
   [ACTN-YANG], to different PNCs with constraints matching the VN
   request. An example is provided in Figure 7c, where the MDSC is
   requesting to set up a P2P VN between AP1 and AP2. The MDSC can use
   two different inter-domain links to get from Domain X to Domain Y,
   namely the one between ASBRX.1 and ASBRY.1 and the one between
   ASBRX.2 and ASBRY.2, but in order to choose the best end-to-end
   path it needs to know what Domains X and Y can offer in terms of
   connectivity and constraints between the PE nodes and the ASBR
   nodes.

             -------                 -------
            (       )               (       )
           - ASBRX.1 ------------- ASBRY.1 -
          (+---+     )           (     +---+)
      -+---(|PE1| Dom.X )       ( Dom.Y |PE2|)---+-
       |  (+---+     )           (     +---+)    |
      AP1  - ASBRX.2 ------------- ASBRY.2 -    AP2
            (       )               (       )
             -------                 -------

             Figure 7c: A multi-domain network example

   A path computation request will be issued to PNC.X asking for
   potential connectivity between PE1 and ASBRX.1 and between PE1 and
   ASBRX.2, with related objective functions and TE metric
   constraints. A similar request will be issued to PNC.Y, and the
   results will be merged together at the MDSC to be able to compute
   the optimal end-to-end path, including the inter-domain links; a
   sketch of this merging step is given at the end of this section.

   The information related to the potential connectivity may be cached
   by the MDSC for subsequent path computation processes or discarded;
   in the latter case, the PNCs are not requested to keep the grey
   topology updated.
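
   The following sketch (in Python) illustrates the merging step for
   the example of Figure 7c. The path_compute callable stands in for
   the path computation request/reply exchange with each PNC and
   returns a path cost (or None when no connectivity can be offered);
   the identifiers come from the figure, and all function names are
   hypothetical.

      # Sketch: MDSC merges per-domain path computation replies with
      # the inter-domain links to select the optimal end-to-end path.
      INTER_DOMAIN_LINKS = [("ASBRX.1", "ASBRY.1", 1.0),
                            ("ASBRX.2", "ASBRY.2", 1.0)]

      def best_e2e_path(path_compute, src="PE1", dst="PE2"):
          best = None
          for asbr_x, asbr_y, link_cost in INTER_DOMAIN_LINKS:
              cost_x = path_compute("PNC.X", src, asbr_x)  # ask PNC.X
              cost_y = path_compute("PNC.Y", asbr_y, dst)  # ask PNC.Y
              if cost_x is None or cost_y is None:
                  continue        # this inter-domain link is unusable
              total = cost_x + link_cost + cost_y
              if best is None or total < best[0]:
                  best = (total, [src, asbr_x, asbr_y, dst])
          return best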
1212  6.4. Abstraction Configuration Consideration

1214  This section provides a set of abstraction configuration
1215  considerations.

1217  It is expected that the abstraction level be configured between
1218  the CNC and the MDSC (i.e., on the CMI) depending on the
1219  capability of the CNC. This negotiated level of abstraction on the
1220  CMI may also impact the way the MDSC and the PNCs configure and
1221  encode the abstracted topology. For example, if the CNC is capable
1222  of sophisticated technology-specific operation, then this will
1223  impact the level of abstraction between the MDSC and the PNCs. On
1224  the other hand, if the CNC asks for a generic topology
1225  abstraction, then the level of abstraction between the MDSC and
1226  the PNCs can be less technology specific than in the former case.

1228  The subsequent sections provide a list of possible abstraction
1229  levels for various technology domain networks.

1231  6.4.1. Packet Networks

1233  For grey abstraction, the type of abstraction and its parameters
1234  can be defined and configured as follows:

1235   o Abstraction Level 1: TE-tunnel abstraction for all (S-D)
1236     border pairs with:
1237      . Maximum B/W available per Priority Level
1238      . Minimum Latency

1240  6.4.2. OTN Networks

1242  For OTN networks, the maximum bandwidth available may be per ODU
1243  0/1/2/3 switching level or aggregated across all ODU switching
1244  levels (i.e., ODUj/k). Clearly, there is a trade-off between these
1245  two abstraction methods. Some OTN switches can switch any level of
1246  ODU, and in such a case there is no need for per-ODU-level
1247  abstraction.

1248  For grey abstraction, the type of abstraction and its parameters
1249  can be defined and configured as follows:

1251   o Abstraction Level 1: Per-ODU-switching-level (i.e., ODU type
1252     and number) TE-tunnel abstraction for all (S-D) border pairs
1253     with:
1254      . Maximum B/W available per Priority Level
1255      . Minimum Latency

1257   o Abstraction Level 2: Aggregated TE-tunnel abstraction for all
1258     (S-D) border pairs with:
1259      . Maximum B/W available per Priority Level
1260      . Minimum Latency

1262  6.4.3. WSON Networks

1264  For WSON networks, the maximum bandwidth available may be per
1265  lambda/frequency level (OCh) or aggregated across all
1266  lambda/frequency levels. Per-OCh-level abstraction gives more
1267  detailed data to the MDSC at the expense of more information
1268  processing. Either OCh-level or aggregated abstraction should
1269  factor in the RWA constraint (i.e., wavelength continuity) at the
1270  PNC level. This means the PNC should have this capability and
1271  advertise it as such.

1273  For grey abstraction, the type of abstraction and its parameters
1274  can be defined and configured as follows (a configuration sketch
1275  is given after the list):

1276   o Abstraction Level 1: Per-lambda/frequency-level TE-tunnel
1277     abstraction for all (S-D) border pairs with:
1278      . Maximum B/W available per Priority Level
1279      . Minimum Latency

1281   o Abstraction Level 2: Aggregated TE-tunnel abstraction for all
1282     (S-D) border pairs with:
1283      . Maximum B/W available per Priority Level
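The following non-normative Python sketch shows one way the
negotiated abstraction level per technology domain might be expressed
as configuration handed to a PNC. The field names are illustrative
assumptions, not a defined data model (a standard model would be
expressed in YANG).

   # Non-normative sketch: per-domain abstraction configuration.
   ABSTRACTION_CONFIG = {
       "packet-domain-1": {
           "type": "grey",
           "level": 1,              # TE tunnels for all S-D border pairs
           "advertise": ["max-bw-per-priority", "min-latency"],
       },
       "otn-domain-2": {
           "type": "grey",
           "level": 2,              # aggregated across ODU switching levels
           "advertise": ["max-bw-per-priority", "min-latency"],
       },
       "wson-domain-3": {
           "type": "grey",
           "level": 1,              # per lambda/frequency (OCh)
           "advertise": ["max-bw-per-priority", "min-latency"],
           "rwa-constraint": True,  # PNC accounts for wavelength continuity
       },
   }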
1285  6.5. Topology Abstraction Granularity Level example

1287  This section illustrates how topology abstraction operates at
1288  different levels of granularity over a hierarchy of MDSCs, as
1289  shown in Figure 8 below.

1291                           +-----+
1292                           | CNC |  CNC wants to create a VN
1293                           +-----+  between CE A and CE B
1294                              |
1295                              |
1296                  +-----------------------+
1297                  |         MDSC-H        |
1298                  +-----------------------+
1299                       /             \
1300                      /               \
1301                 +--------+       +--------+
1302                 | MDSC-L1|       | MDSC-L2|
1303                 +--------+       +--------+
1304                  /      \         /      \
1305                 /        \       /        \
1306            +-----+   +-----+   +-----+   +-----+
1307  CE A o----|PNC 1|   |PNC 2|   |PNC 3|   |PNC 4|----o CE B
1308            +-----+   +-----+   +-----+   +-----+

1310             Topology operated by MDSC-H

1312                       --o=o=o=o--

1314   Topology operated by MDSC-L1   Topology operated by MDSC-L2
1315          _       _                      _       _
1316         ( )     ( )                    ( )     ( )
1317         ( )     ( )                    ( )     ( )
1318     --(o---o)==(o---o)==            ==(o---o)==(o---o)--
1319         ( )     ( )                    ( )     ( )
1320         (_)     (_)                    (_)     (_)

1322                        Actual Topology
1323        ___          ___          ___          ___
1324      (     )      (     )      (     )      (     )
1325      (  o  )      (  o  )      ( o--o)      (  o  )
1326      ( / \ )      ( |\  )      ( |  | )     ( / \ )
1327  ----(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)----
1328      ( \ / )      ( | |/ )     ( |  | )     ( \ / )
1329      (  o  )      (o-o   )     ( o--o)      (  o  )
1330      (___)        (___)        (___)        (___)

1332      Domain 1     Domain 2     Domain 3     Domain 4

1334  Where o is a node, -- is a link, and === is a border link

1336  Figure 8: Illustration of topology abstraction granularity levels

1337  In the example depicted in Figure 8, there are four domains under
1338  the control of their respective PNCs, namely PNC 1, PNC 2, PNC 3,
1339  and PNC 4. Assume that MDSC-L1 is controlling PNC 1 and PNC 2
1340  while MDSC-L2 is controlling PNC 3 and PNC 4. Let us assume that
1341  each of the PNCs provides a grey topology abstraction that
1342  presents only the border nodes and the links within and outside
1343  the domain. The abstract topology on which MDSC-L1 operates is
1344  basically a combination of the two topologies provided by PNC 1
1345  and PNC 2. Likewise, the abstract topology on which MDSC-L2
1346  operates is shown in Figure 8. Both MDSC-L1 and MDSC-L2 provide a
1347  black topology abstraction in which each PNC domain is presented
1348  as one virtual node to the top-level MDSC-H. The MDSC-H then
1349  combines these two topologies, as updated by MDSC-L1 and MDSC-L2,
1350  to create the abstract topology on which it operates. MDSC-H sees
1351  the whole four-domain network as four virtual nodes connected via
1352  virtual links. The top-level MDSC may operate at a higher level
1353  of abstraction (i.e., a less granular level) than the lower-level
      MDSCs.
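A non-normative sketch of the collapsing step performed at each level
of the hierarchy follows: an upper-level MDSC turns each topology
reported by a lower-level controller into a virtual node and keeps
only the surviving inter-domain links. The data layout is an
illustrative assumption.

   # Non-normative sketch: collapse each child domain into one
   # virtual node, as MDSC-H does with the MDSC-L1/L2 topologies.
   def collapse_to_virtual_nodes(child_view):
       """child_view: {'domains': {name: [nodes]}, 'links': [(a, b)]}.
       Returns links rewritten so each domain appears as one node."""
       owner = {n: dom for dom, nodes in child_view["domains"].items()
                for n in nodes}
       abstract_links = set()
       for a, b in child_view["links"]:
           da, db = owner[a], owner[b]
           if da != db:                   # keep only inter-domain links
               abstract_links.add(tuple(sorted((da, db))))
       return sorted(abstract_links)

   mdsc_l1_view = {"domains": {"D1": ["n1", "n2"], "D2": ["n3", "n4"]},
                   "links": [("n1", "n2"), ("n2", "n3"), ("n3", "n4")]}
   print(collapse_to_virtual_nodes(mdsc_l1_view))   # -> [('D1', 'D2')]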
1355  7. Access Points and Virtual Network Access Points

1357  In order not to share unwanted topological information between
1358  the customer domain and the provider domain, a new entity is
1359  defined, referred to as the Access Point (AP). See the definition
1360  of AP in Section 1.1.

1362  A customer node will use APs as the end points for a VNS request,
1363  as shown in Figure 9.

1365                   -------------
1366                 (               )
1367                -                 -
1368      +---+ X  (                   ) Z  +---+
1369      |CE1|---+----(             )---+---|CE2|
1370      +---+   |   (               )  |   +---+
1371             AP1   -             -  AP2
1372                    (           )
1373                     -------------

1375  Figure 9: APs definition, customer view

1377  Let us take as an example the scenario shown in Figure 9. CE1 is
1378  connected to the network via a 10Gb link and CE2 via a 40Gb link.
1379  Before the creation of any VN between AP1 and AP2, the customer
1380  view can be summarized as shown in Table 1:

1382        +----------+------------------------+
1383        |End Point | Access Link Bandwidth  |
1384  +-----+----------+----------+-------------+
1385  |AP id| CE,port  | MaxResBw | AvailableBw |
1386  +-----+----------+----------+-------------+
1387  | AP1 |CE1,portX |   10Gb   |    10Gb     |
1388  +-----+----------+----------+-------------+
1389  | AP2 |CE2,portZ |   40Gb   |    40Gb     |
1390  +-----+----------+----------+-------------+

1392  Table 1: AP - customer view

1394  On the other hand, what the provider sees is shown in Figure 10.

1396           -------                -------
1397         (         )            (         )
1398        -           -          -           -
1399    W  (+---+        )        (        +---+)  Y
1400   -+---( |PE1| Dom.X )----( Dom.Y |PE2| )---+-
1401    |  (+---+        )        (        +---+)  |
1402   AP1  -           -          -           -  AP2
1403         (         )            (         )
1404           -------                -------

1406  Figure 10: Provider view of the AP

1408  This results in the summarization shown in Table 2.

1410        +----------+------------------------+
1411        |End Point | Access Link Bandwidth  |
1412  +-----+----------+----------+-------------+
1413  |AP id| PE,port  | MaxResBw | AvailableBw |
1414  +-----+----------+----------+-------------+
1415  | AP1 |PE1,portW |   10Gb   |    10Gb     |
1416  +-----+----------+----------+-------------+
1417  | AP2 |PE2,portY |   40Gb   |    40Gb     |
1418  +-----+----------+----------+-------------+

1420  Table 2: AP - provider view

1422  A Virtual Network Access Point (VNAP) needs to be defined as the
1423  binding between an AP and the VN it is linked to. It is used to
1424  allow different VNs to start from the same AP, and it also allows
1425  for traffic engineering on the access and/or inter-domain links
1426  (e.g., keeping track of bandwidth allocation). A different VNAP
1427  is created on an AP for each VN.

1429  In the simple scenario depicted above, suppose we want to create
1430  two virtual networks: the first with VN identifier 9 between AP1
1431  and AP2 with a bandwidth of 1 Gbps, and the second with VN
1432  identifier 5, again between AP1 and AP2, with a bandwidth of
1433  2 Gbps.

1435  The provider view would evolve as shown in Table 3.

1436            +----------+------------------------+
1437            |End Point | Access Link/VNAP Bw    |
1438  +---------+----------+----------+-------------+
1439  |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
1440  +---------+----------+----------+-------------+
1441  |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
1442  | -VNAP1.9|          |  1Gbps   |    N.A.     |
1443  | -VNAP1.5|          |  2Gbps   |    N.A.     |
1444  +---------+----------+----------+-------------+
1445  |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
1446  | -VNAP2.9|          |  1Gbps   |    N.A.     |
1447  | -VNAP2.5|          |  2Gbps   |    N.A.     |
1448  +---------+----------+----------+-------------+

1450  Table 3: AP and VNAP - provider view after VNS creation
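The bookkeeping behind Table 3 can be illustrated with the following
non-normative Python sketch, which tracks how VN creation consumes
access-link bandwidth on an AP. The class shape is an illustrative
assumption.

   # Non-normative sketch: VNAP bandwidth accounting on an AP.
   class AccessPoint:
       def __init__(self, ap_id, max_bw_gbps):
           self.ap_id = ap_id
           self.max_bw = max_bw_gbps
           self.vnaps = {}              # vn_id -> reserved bandwidth

       @property
       def available_bw(self):
           return self.max_bw - sum(self.vnaps.values())

       def add_vnap(self, vn_id, bw_gbps):
           if bw_gbps > self.available_bw:
               raise ValueError("insufficient bandwidth on " + self.ap_id)
           self.vnaps[vn_id] = bw_gbps  # e.g., key 9 models VNAP1.9

   ap1 = AccessPoint("AP1", 10)
   ap1.add_vnap(9, 1)                   # VN 9: 1 Gbps
   ap1.add_vnap(5, 2)                   # VN 5: 2 Gbps
   print(ap1.available_bw)              # -> 7, matching Table 3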
1452  7.1. Dual homing scenario

1454  Often there is a dual-homing relationship between a CE and a pair
1455  of PEs. This case needs to be supported by the definition of VNs,
1456  APs, and VNAPs. Suppose that CE1 is connected to two different
1457  PEs in the operator domain via AP1 and AP2 and that the customer
1458  needs 5 Gbps of bandwidth between CE1 and CE2. This is shown in
1459  Figure 11.

1460                  ____________
1461        AP1      (            )      AP3
1462    -------(PE1)                (PE3)-------
1463     W    /     (              )     \X
1464  +---+/        (              )      \+---+
1465  |CE1|         (              )       |CE2|
1466  +---+\        (              )      /+---+
1467     Y    \     (              )     /Z
1468    -------(PE2)                (PE4)-------
1469        AP2      (____________)

1471  Figure 11: Dual homing scenario

1473  In this case, the customer will request a VN between AP1, AP2,
1474  and AP3, specifying a dual-homing relationship between AP1 and
1475  AP2. As a consequence, no traffic will flow between AP1 and AP2.
1476  The dual-homing relationship would then be mapped against the
1477  VNAPs (since other independent VNs might have AP1 and AP2 as end
1478  points).

1480  The customer view would be as shown in Table 4.

1481            +----------+------------------------+
1482            |End Point | Access Link/VNAP Bw    |
1483  +---------+----------+----------+-------------+-----------+
1484  |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
1485  +---------+----------+----------+-------------+-----------+
1486  |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
1487  | -VNAP1.9|          |  5Gbps   |    N.A.     |  VNAP2.9  |
1488  +---------+----------+----------+-------------+-----------+
1489  |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
1490  | -VNAP2.9|          |  5Gbps   |    N.A.     |  VNAP1.9  |
1491  +---------+----------+----------+-------------+-----------+
1492  |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
1493  | -VNAP3.9|          |  5Gbps   |    N.A.     |   NONE    |
1494  +---------+----------+----------+-------------+-----------+

1496  Table 4: Dual homing - customer view after VN creation

1498  8. Advanced ACTN Application: Multi-Destination Service

1500  A further advanced application of ACTN is Data Center selection,
1501  where the customer requires the Data Center selection to be based
1502  on the network status; this is referred to as Multi-Destination
1503  in [ACTN-REQ]. In terms of ACTN, a CNC could request a
1504  connectivity service (virtual network) between a set of source
1505  APs and destination APs and leave it up to the network (MDSC) to
1506  decide which source and destination access points are to be used
1507  to set up the connectivity service (virtual network). The
1508  candidate list of source and destination APs is decided by the
1509  CNC (or an entity outside of ACTN) based on factors which are
1510  outside the scope of ACTN.

1511  Based on the AP selection as determined and returned by the
1512  network (MDSC), the CNC (or an entity outside of ACTN) should
1513  further take care of any subsequent actions such as orchestration
1514  or service setup requirements. These further actions are outside
1515  the scope of ACTN.

1517  Consider the case shown in Figure 12, where three data centers
1518  are available, but the customer requires the data center
1519  selection to be based on the network status and the connectivity
1520  service to be set up between AP1 (CE1) and one of the destination
1521  APs (AP2 (DC-A), AP3 (DC-B), or AP4 (DC-C)). The MDSC (in
1522  coordination with the PNCs) would select the best destination AP
1523  based on the constraints, optimization criteria, policies, etc.,
1524  and set up the connectivity service (virtual network).

1526            -------              -------
1527          (         )          (         )
1528         -           -        -           -
1529  +---+ (             )      (             ) +----+
1530  |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A|
1531  +---+   |  (             )  (             )  |  +----+
1532        AP1  -           -      -           -  AP2
1533           (         )          (         )
1534            ---+---              ---+---
1535          AP3  |               AP4  |
1536             +----+               +----+
1537             |DC-B|               |DC-C|
1538             +----+               +----+

1540  Figure 12: End point selection based on network status
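The following non-normative Python sketch illustrates the selection
step: the MDSC picks the best destination AP for the VN from the
candidate list based on computed path metrics. The metric values and
the scoring rule (minimum delay) are illustrative assumptions.

   # Non-normative sketch: multi-destination AP selection.
   def select_destination(source_ap, candidate_aps, compute_path_metric):
       """compute_path_metric is assumed to query the PNCs for an
       end-to-end metric (e.g., delay) toward each candidate AP."""
       scored = [(compute_path_metric(source_ap, ap), ap)
                 for ap in candidate_aps]
       metric, best = min(scored)
       return best, metric

   # Hypothetical metrics from path computation toward each DC.
   metrics = {"AP2": 12.0, "AP3": 8.5, "AP4": 10.1}
   best, m = select_destination(
       "AP1", ["AP2", "AP3", "AP4"], lambda s, d: metrics[d])
   print(best, m)                       # -> AP3 8.5 (DC-B chosen)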
1542  8.1. Pre-Planned End Point Migration

1544  Further, in the case of Data Center selection, the customer could
1545  request that a backup DC be selected, such that in case of
1546  failure another DC site can provide hot stand-by protection. As
1547  shown in Figure 13, DC-C is selected as the backup for DC-A.
1548  Thus, the VN should be set up by the MDSC to include primary
1549  connectivity between AP1 (CE1) and AP2 (DC-A) as well as
1550  protection connectivity between AP1 (CE1) and AP4 (DC-C).

1552            -------              -------
1553          (         )          (         )
1554         -           -        -           -
1555  +---+ (             )      (             ) +----+
1556  |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A|
1557  +---+   |  (             )  (             )  |  +----+
1558        AP1  -           -      -           -  AP2     |
1559           (         )          (         )            |
1560            ---+---              ---+---               |
1561          AP3  |               AP4  |             HOT STANDBY
1562             +----+               +----+               |
1563             |DC-B|               |DC-C|<--------------
1564             +----+               +----+

1566  Figure 13: Pre-planned end point migration

1568  8.2. On the Fly End Point Migration

1570  Compared to pre-planned end point migration, on-the-fly end point
1571  selection is dynamic in that the migration is not pre-planned but
1572  is decided based on network conditions. Under this scenario, the
1573  MDSC would monitor the network (based on the VN SLA) and notify
1574  the CNC in cases where some other destination AP would be a
1575  better choice based on the network parameters. The CNC should
1576  then instruct the MDSC as to whether and when to update the VN
1577  with the new AP.

1579  9. Advanced Topic

1581  This section describes how the ACTN architecture supports some
1582  deployment scenarios. See Appendix A for details on MDSC and PNC
1583  functions integrated in a Service/Network Orchestrator and
1584  Appendix B for IP + optical with L3VPN service.

1586  10. Manageability Considerations

1588  The objective of ACTN is to manage traffic engineered resources
1589  and to provide a set of mechanisms to allow clients to request
1590  virtual connectivity across server network resources. Since ACTN
1591  supports multiple clients, each with its own view of and control
1592  over the server network, the network operator will need to
1593  partition (or "slice") their network resources and manage those
1594  resources accordingly.

1595  The ACTN platform will, itself, need to support the request,
1596  response, and reservation of client and network layer
1597  connectivity. It will also need to provide performance monitoring
1598  and control of traffic engineered resources. The management
1599  requirements may be categorized as follows:

1601   . Management of external ACTN protocols
1602   . Management of internal ACTN protocols
1603   . Management and monitoring of ACTN components
1604   . Configuration of policy to be applied across the ACTN system

1606  10.1. Policy

1608  It is expected that policy will be an important aspect of ACTN
1609  control and management. Typically, policies are used via the
1610  components and interfaces, during deployment of the service, to
1611  ensure that the service is compliant with agreed policy factors
1612  (often described in Service Level Agreements - SLAs). These
1613  include, but are not limited to: connectivity, bandwidth,
1614  geographical transit, technology selection, security, resilience,
1615  and economic cost.

1617  Depending on the ACTN deployment architecture, some policies may
1618  have local or global significance. That is, certain policies may
1619  be ACTN component specific in scope, while others may have
1620  broader scope and interact with multiple ACTN components. Two
1621  examples are provided below, followed by a sketch of how such
1622  policies might be expressed:

1623   . A local policy might limit the number, type, size, and
1624     scheduling of virtual network services a customer may request
1625     via its CNC. This type of policy would be implemented locally
1626     on the MDSC.

1628   . A global policy might constrain certain customer types (or
1629     specific customer applications) to only use certain MDSCs and
1630     be restricted to physical network types managed by the PNCs. A
1631     global policy agent would govern these types of policies.
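The following non-normative Python sketch shows one way the local and
global policies above might be expressed as data and applied. The
schema, limits, and admission check are illustrative assumptions;
this document does not define a policy language.

   # Non-normative sketch: local and global ACTN policies as data.
   LOCAL_MDSC_POLICY = {
       "customer": "cnc-blue",
       "max-active-vns": 20,            # limit number of VNS per CNC
       "max-vn-bandwidth-gbps": 10,
       "allowed-vn-types": ["p2p", "p2mp"],
       "scheduling-window": "00:00-06:00",
   }

   GLOBAL_POLICY = {
       "customer-type": "enterprise",
       "permitted-mdscs": ["mdsc-east", "mdsc-west"],
       "permitted-network-types": ["packet", "otn"],  # PNC-managed
   }

   def admit_vn_request(request, policy):
       """Crude admission check a policy agent might apply."""
       return (request["type"] in policy["allowed-vn-types"]
               and request["bandwidth-gbps"]
                   <= policy["max-vn-bandwidth-gbps"])

   print(admit_vn_request({"type": "p2p", "bandwidth-gbps": 5},
                          LOCAL_MDSC_POLICY))          # -> True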
1633  The objective of this section is to discuss the applicability of
1634  ACTN policy: requirements, components, interfaces, and examples.
1635  This section provides an analysis and does not mandate a specific
1636  method for enforcing policy or the type of policy agent that
1637  would be responsible for propagating policies across the ACTN
1638  components. It does highlight examples of how policy may be
1639  applied in the context of ACTN, but it is expected that further
1640  discussion in an applicability or solution-specific document will
1641  be required.

1642  10.2. Policy applied to the Customer Network Controller

1644  A virtual network service for a customer application will be
1645  requested from the CNC. The request will reflect the application
1646  requirements and specific service policy needs, including
1647  bandwidth, traffic type, and survivability. Furthermore,
1648  application access and the type of virtual network service
1649  requested by the CNC will need to adhere to specific access
1650  control policies.

1651  10.3. Policy applied to the Multi Domain Service Coordinator

1653  A key objective of the MDSC is to help the customer express the
1654  application connectivity request via its CNC as a set of desired
1655  business needs; therefore, policy will play an important role.

1657  Once authorized, the virtual network service will be instantiated
1658  via the CNC-MDSC Interface (CMI). It will reflect the customer
1659  application and connectivity requirements and specific service
1660  transport needs. The CNC and the MDSC components will have agreed
1661  connectivity end points; use of these end points should be
1662  defined as a policy expression when setting up or augmenting
1663  virtual network services. Ensuring that permissible end points
1664  are defined for CNCs and applications will require the MDSC to
1665  maintain a registry of permissible connection points for CNCs
1666  and application types.

1667  It may also be necessary for the MDSC to resolve policy
1668  conflicts, or at least flag any issues to the administrator of
1669  the MDSC itself. Conflicts may occur when virtual network service
1670  optimization criteria are in competition. For example, to meet
1671  objectives for service reachability, a request may require an
1672  interconnection point between multiple physical networks;
1673  however, this might break a confidentiality policy requirement of
1674  a specific type of end-to-end service. This type of situation may
1675  be resolved using hard and soft policy constraints.

1677  10.4. Policy applied to the Physical Network Controller

1679  The PNC is responsible for configuring the network elements,
1680  monitoring physical network resources, and exposing connectivity
1681  (direct or abstracted) to the MDSC. It is therefore expected that
1682  policy will dictate what connectivity information will be
1683  exported between the PNC and the MDSC via the MDSC-PNC Interface
1684  (MPI).

1685  Policy interactions may arise when a PNC determines that it
1686  cannot compute a requested path from the MDSC or notices that
1687  (per a locally configured policy) the network is low on resources
1688  (for example, the capacity on key links becomes exhausted). In
1689  either case, the PNC will be required to notify the MDSC, which
1690  may (again, per policy) act to construct a virtual network
1691  service across another physical network topology.
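A non-normative sketch of the PNC-to-MDSC notification flow just
described follows. The threshold, names, and notification payload
are illustrative assumptions.

   # Non-normative sketch: locally configured low-resource policy on
   # a PNC triggering a notification the MDSC may act on per policy.
   LOW_RESOURCE_THRESHOLD = 0.10        # local PNC policy: 10% headroom

   def check_link_capacity(pnc_id, links, notify_mdsc):
       """links: {link_id: (used_gbps, capacity_gbps)}."""
       for link_id, (used, capacity) in links.items():
           headroom = (capacity - used) / capacity
           if headroom < LOW_RESOURCE_THRESHOLD:
               notify_mdsc({"pnc": pnc_id,
                            "event": "low-resources",
                            "link": link_id,
                            "headroom": round(headroom, 3)})

   def mdsc_handler(event):
       # Per policy, the MDSC might recompute affected VNs over
       # another physical topology; here we just log the event.
       print("MDSC received:", event)

   check_link_capacity("pnc-x",
                       {"L1": (9.5, 10.0), "L2": (4.0, 10.0)},
                       mdsc_handler)
   # -> MDSC received: {'pnc': 'pnc-x', 'event': 'low-resources', ...}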
1693  Furthermore, additional forms of policy-based resource management
1694  will be required to provide virtual network service performance,
1695  security, and resilience guarantees. This will likely be
1696  implemented via a local policy agent and subsequent protocol
1697  methods.

1698  11. Security Considerations

1700  The ACTN framework described in this document defines key
1701  components and interfaces for managed traffic engineered
1702  networks. Securing the request and control of resources,
1703  confidentiality of the information, and availability of function
1704  should all be critical security considerations when deploying
1705  and operating ACTN platforms.

1706  Several distributed ACTN functional components are required, and,
1707  as a rule, implementations should consider encrypting data that
1708  flows between components, especially when they are implemented
1709  at remote nodes, regardless of whether these data flows are on
1710  external or internal network interfaces.

1712  The ACTN security discussion is further split into two specific
1713  categories described in the following sub-sections:

1715   . Interface between the Customer Network Controller and Multi
1716     Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

1718   . Interface between the Multi Domain Service Coordinator and
1719     Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

1721  From a security and reliability perspective, ACTN may encounter
1722  many risks such as malicious attacks and rogue elements
1723  attempting to connect to various ACTN components. Furthermore,
1724  some ACTN components represent a single point of failure and
1725  threat vector and must also manage policy conflicts and
1726  eavesdropping of communication between different ACTN
1727  components.

1728  The conclusion is that all protocols used to realize the ACTN
1729  framework should have rich security features, and customer,
1730  application, and network data should be stored in encrypted data
1731  stores. Additional security risks may still exist. Therefore,
1732  discussion and applicability of specific security functions and
1733  protocols will be better described in documents that are use-case
1734  and environment specific.

1736  11.1. Interface between the Customer Network Controller and Multi
1737        Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

1739  The role of the MDSC is to detach the network and service control
1740  from the underlying technology to help the customer express the
1741  network as desired by business needs. It should be noted that
1742  data stored by the MDSC will reveal details of the virtual
1743  network services and of which CNC and application is consuming
1744  the resource. The data stored must therefore be considered a
1745  candidate for encryption.

1746  CNC access rights to an MDSC must be managed. MDSC resources must
1747  be properly allocated, and methods to prevent policy conflicts,
1748  resource wastage, and denial-of-service attacks on the MDSC by
1749  rogue CNCs should also be considered.

1751  A CNC-MDSC protocol interface will likely be an external protocol
1752  interface. Again, suitable authentication and authorization of
1753  each CNC connecting to the MDSC will be required, especially as
1754  these are likely to be implemented by different organizations and
1755  on separate functional nodes. Use of AAA-based mechanisms would
1756  also provide role-based authorization methods so that only
1757  authorized CNCs may access the different functions of the MDSC.
1759  11.2. Interface between the Multi Domain Service Coordinator and
1760        Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

1762  The function of the Physical Network Controller (PNC) is to
1763  configure the network elements, provide performance and
1764  monitoring functions of the physical elements, and export the
1765  physical topology (full, partial, or abstracted) to the MDSC.

1767  Where the MDSC must interact with multiple (distributed) PNCs, a
1768  PKI-based mechanism is suggested, such as building a TLS or HTTPS
1769  connection between the MDSC and PNCs, to ensure trust between the
1770  physical network layer control components and the MDSC.

1772  Which MDSC a PNC exports topology information to, and the level
1773  of detail (full or abstracted), should also be authenticated, and
1774  specific access restrictions and topology views should be
1775  configurable and/or policy-based.

1777  12. References

1779  12.1. Informative References

1781  [RFC2702]  Awduche, D., et al., "Requirements for Traffic
1782             Engineering Over MPLS", RFC 2702, September 1999.

1784  [RFC4026]  Andersson, L. and T. Madsen, "Provider Provisioned
1785             Virtual Private Network (VPN) Terminology", RFC 4026,
1786             March 2005.

1787  [RFC4208]  Swallow, G., Drake, J., Ishimatsu, H., and Y. Rekhter,
1788             "Generalized Multiprotocol Label Switching (GMPLS)
1789             User-Network Interface (UNI): Resource ReserVation
1790             Protocol-Traffic Engineering (RSVP-TE) Support for the
1791             Overlay Model", RFC 4208, October 2005.

1793  [RFC4655]  Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
1794             Computation Element (PCE)-Based Architecture", RFC
1795             4655, August 2006.

1797  [RFC5654]  Niven-Jenkins, B., Ed., Brungard, D., Ed., and M.
1798             Betts, Ed., "Requirements of an MPLS Transport
1799             Profile", RFC 5654, September 2009.

1801  [RFC7149]  Boucadair, M. and C. Jacquenet, "Software-Defined
1802             Networking: A Perspective from within a Service
1803             Provider Environment", RFC 7149, March 2014.

1805  [RFC7926]  Farrel, A., Ed., "Problem Statement and Architecture
1806             for Information Exchange between Interconnected
1807             Traffic-Engineered Networks", RFC 7926, July 2016.

1809  [GMPLS]    Mannie, E., Ed., "Generalized Multi-Protocol Label
1810             Switching (GMPLS) Architecture", RFC 3945, October
1811             2004.

1812  [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
1813             1.1, ONF TR-521, June 2016.

1815  [RFC7491]  King, D. and A. Farrel, "A PCE-Based Architecture for
1816             Application-Based Network Operations", RFC 7491, March
1817             2015.

1819  [Transport NBI] Busi, I., et al., "Transport North Bound
1820             Interface Use Cases", draft-tnbidt-ccamp-transport-
1821             nbi-use-cases, work in progress.

1823  [ACTN-Abstraction] Lee, Y., et al., "Abstraction and Control of
1824             TE Networks (ACTN) Abstraction Methods", draft-lee-
1825             teas-actn-abstraction, work in progress.
1827  13. Contributors

1829  Adrian Farrel
1830  Old Dog Consulting
1831  Email: adrian@olddog.co.uk

1833  Italo Busi
1834  Huawei
1835  Email: Italo.Busi@huawei.com

1837  Khuzema Pithewan
1838  Infinera
1839  Email: kpithewan@infinera.com

1841  Michael Scharf
1842  Nokia
1843  Email: michael.scharf@nokia.com

1845  Authors' Addresses

1847  Daniele Ceccarelli (Editor)
1848  Ericsson
1849  Torshamnsgatan, 48
1850  Stockholm, Sweden
1851  Email: daniele.ceccarelli@ericsson.com

1853  Young Lee (Editor)
1854  Huawei Technologies
1855  5340 Legacy Drive
1856  Plano, TX 75023, USA
1857  Phone: (469)277-5838
1858  Email: leeyoung@huawei.com

1860  Luyuan Fang
1861  Microsoft
1862  Email: luyuanf@gmail.com

1864  Diego Lopez
1865  Telefonica I+D
1866  Don Ramon de la Cruz, 82
1867  28006 Madrid, Spain
1868  Email: diego@tid.es

1870  Sergio Belotti
1871  Alcatel Lucent
1872  Via Trento, 30
1873  Vimercate, Italy
1874  Email: sergio.belotti@nokia.com

1875  Daniel King
1876  Lancaster University
1877  Email: d.king@lancaster.ac.uk

1879  Dhruv Dhody
1880  Huawei Technologies
1881  Divyashree Techno Park, Whitefield
1882  Bangalore, Karnataka 560066
1883  India
1884  Email: dhruv.ietf@gmail.com

1886  Gert Grammel
1887  Juniper Networks
1888  Email: ggrammel@juniper.net

1890  APPENDIX A - Example of MDSC and PNC functions integrated in
1891               Service/Network Orchestrator

1893  This section provides an example of a possible deployment
1894  scenario in which a Service/Network Orchestrator includes a
1895  number of functions. In the example below, it includes the PNC
1896  functions for Domain 2 and the MDSC functions that coordinate the
1897  PNC1 functions (hosted in a separate domain controller) and the
1898  PNC2 functions (co-hosted in the network orchestrator).

1900  Customer
1901       +-------------------------------+
1902       |            +-----+           |
1903       |            | CNC |           |
1904       |            +-----+           |
1905       +-------|-----------------------+
1906               |-CMI
1907  Service/Network |
1908  Orchestrator    |
1909       +-------|------------------------+
1910       |   +------+    MPI    +------+  |
1911       |   | MDSC |----|-->   | PNC2 |  |
1912       |   +------+           +------+  |
1913       +-------|------------------|-----+
1914               |-MPI              |
1915  Domain Controller               |
1916       +-------|-----+            |
1917       |    +-----+  |            |
1918       |    |PNC1 |  |            |
1919       |    +-----+  |            |
1920       +-------|-----+            |
1921               v                  v
1922            -------            -------
1923          (       )          (       )
1924         -         -        -         -
1925        (           )      (           )
1926        (  Domain 1  )----(  Domain 2  )
1927        (           )      (           )
1928         -         -        -         -
1929          (       )          (       )
1930            -------            -------

1932  APPENDIX B - Example of IP + Optical network with L3VPN service

1934  This section provides a more complex deployment scenario in which
1935  an ACTN hierarchy is deployed to control a multi-layer network
1936  via an IP/MPLS PNC and an Optical PNC. The scenario is further
1937  enhanced by the introduction of an upper-layer service
1938  configuration (e.g., L3VPN). The provisioning of the L3VPN
1939  service is outside the scope of ACTN, but it is worth showing how
1940  the two parts are integrated for end-to-end service fulfilment.
1941  An example of the service configuration function in the
1942  Service/Network Orchestrator is discussed in
1943  [I-D.dhjain-bess-bgp-l3vpn-yang].
1945  Customer
1946       +-------------------------------+
1947       |            +-----+           |
1948       |            | CNC |           |
1949       |            +-----+           |
1950       +-------|--------+--------------+
1951               |-CMI    | Customer Service Model
1952               |        | (non-ACTN interface)
1953  Service/Network |     |
1954  Orchestrator    |     |
1955       +-------|--------|--------------------------+
1956       |       |  +-------------------------+      |
1957       |       |  |Service Mapping Function |      |
1958       |       |  +-------------------------+      |
1959       |       |    |                 |            |
1960       |   +------+ |  +---------------+           |
1961       |   | MDSC |--- |Service Config.|           |
1962       |   +------+    +---------------+           |
1963       +------|------------------|-----------------+
1964         MPI-|    +------------+  (non-ACTN Interf.)
1965              |  /
1966       +-----------/------------+
1967  IP/MPLS |      /              |
1968  Domain  |     /               |     Optical Domain
1969  Controller   /                |     Controller
1970       +--------|-------/----+       +---|--------------+
1971       |  +-----+  +-----+   |       |  +-----+         |
1972       |  |PNC1 |  |Serv.|   |       |  |PNC2 |         |
1973       |  +-----+  +-----+   |       |  +-----+         |
1974       +---------------------+       +------------------+
1975               |                         |
1976               v                         |
1977       +---------------------------------+       |
1978      /        IP/MPLS Network            \      |
1979     +-------------------------------------+     |
1980                                                 V
1981       +--------------------------------------+
1982      /           Optical Network              \
1983     +------------------------------------------+
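A non-normative Python sketch of the Service Mapping Function shown
above follows: it translates an L3VPN service request (received over
the non-ACTN interface) into a VN request handed to the MDSC over
the CMI. All field names are illustrative assumptions; an actual
mapping would use the relevant YANG models.

   # Non-normative sketch: map an L3VPN service request to a VN
   # request for the MDSC.
   def map_l3vpn_to_vn(l3vpn_request):
       """Derive an ACTN VN request from L3VPN site/bandwidth needs."""
       endpoints = [site["access-point"]
                    for site in l3vpn_request["sites"]]
       return {
           "vn-id": "vn-" + l3vpn_request["vpn-id"],
           "endpoints": endpoints,              # APs for the VN
           "bandwidth-gbps": l3vpn_request["bandwidth-gbps"],
           "protection": l3vpn_request.get("protection", "none"),
       }

   l3vpn = {"vpn-id": "blue", "bandwidth-gbps": 2,
            "sites": [{"access-point": "AP1"},
                      {"access-point": "AP2"}]}
   print(map_l3vpn_to_vn(l3vpn))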