Network Working Group                               Daniele Ceccarelli
Internet Draft                                                Ericsson
Intended status: Informational                             Luyuan Fang
Expires: June 2015                                           Microsoft

                                                             Young Lee
                                                                Huawei

                                                           Diego Lopez
                                                            Telefonica

                                                        Sergio Belotti
                                                        Alcatel-Lucent

                                                           Daniel King
                                                  Lancaster University

                                                     December 23, 2014

      Framework for Abstraction and Control of Transport Networks

                draft-ceccarelli-actn-framework-06.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on June 23, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   This draft provides a framework for abstraction and control of
   transport networks.

Table of Contents

   1. Introduction
   2. Business Model of ACTN
      2.1. Customers
      2.2. Service Providers
      2.3. Network Providers
   3. ACTN architecture
      3.1. Customer Network Controller
      3.2. Multi Domain Service Coordinator
      3.3. Physical Network Controller
      3.4. ACTN interfaces
   4. ACTN Applicability
      4.1. ACTN Use cases Summary
      4.2. Work in Scope of ACTN
         4.2.1. Coordination of Multi-destination Service
                Requirement/Policy
         4.2.2. Application Service Policy-aware Network Operation
         4.2.3. Network Function Virtualization Services
         4.2.4. Dynamic Service Control Policy Enforcement for
                Performance and Fault Management
         4.2.5. E2E VN Survivability and Multi-Layer (Packet-Optical)
                Coordination for Protection/Restoration
   5. ACTN interfaces requirements
      5.1. CMI Interface Requirements
      5.2. MPI (MDSC-PNC Interface)
   6. References
      6.1. Informative References
   Appendix A
   Contributors' Addresses
   Authors' Addresses
   7. Appendix I: Abstracted Topology Illustration

1. Introduction

   Transport networks have a variety of mechanisms to facilitate
   separation of data plane and control plane, including distributed
   signaling for path setup and protection, centralized path
   computation for planning and traffic engineering, and a range of
   management and provisioning protocols to configure and activate
   network resources. These mechanisms represent key technologies for
   enabling flexible and dynamic networking.

   Transport networks in this draft refer to a set of different types
   of connection-oriented networks, primarily Connection-Oriented
   Circuit Switched (CO-CS) networks and Connection-Oriented Packet
   Switched (CO-PS) networks.
   This implies that at least the following transport networks are in
   scope of the discussion of this draft: Layer 1 (L1) and Layer 0
   (L0) optical networks (e.g., Optical Transport Network (OTN),
   Optical Channel Data Unit (ODU), Optical Channel (OCh)/Wavelength
   Switched Optical Network (WSON)), Multi-Protocol Label Switching -
   Transport Profile (MPLS-TP), and Multi-Protocol Label Switching -
   Traffic Engineering (MPLS-TE), as well as other emerging
   technologies with connection-oriented behavior. One of the
   characteristics of these network types is the ability to support
   dynamic provisioning and traffic engineering such that resource
   guarantees can be provided to their clients.

   One of the main drivers for Software Defined Networking (SDN) is
   the decoupling of the network control plane from the data plane.
   This separation of the control plane from the data plane has
   already been achieved with the development of MPLS/GMPLS [GMPLS]
   and PCE [PCE] for TE-based transport networks. One of the
   advantages of SDN is its logically centralized control regime that
   allows a global view of the underlying network under its control.
   Centralized control in SDN helps improve the utilization of network
   resources compared with distributed network control. For TE-based
   transport network control, PCE is essentially equivalent to a
   logically centralized path computation function.

   Two key aspects that need to be solved by SDN are:

   - Network and service abstraction

   - End-to-end coordination of multiple SDN and pre-SDN domains,
     e.g., NMS, MPLS-TE or GMPLS.
   As transport networks evolve, the need to provide network and
   service abstraction has emerged as a key requirement for operators;
   in effect this implies the virtualization of network resources, so
   that the network is "sliced" for different tenants, each shown a
   dedicated portion of the network resources.

   Particular attention needs to be paid to the multi-domain case,
   where Abstraction and Control of Transport Networks (ACTN) can
   facilitate virtual network operation via the creation of a single
   virtualized network or a seamless service. This supports operators
   in viewing and controlling different domains (in any dimension:
   applied technology, administrative zones, or vendor-specific
   technology islands) as a single virtualized network.

   Network virtualization, in general, refers to allowing customers to
   utilize a certain amount of network resources as if they owned
   them, and thus to control their allocated resources in the way that
   best suits their higher-layer or application processes. This
   empowerment of customer control facilitates the introduction of new
   services and applications, as customers are permitted to create,
   modify, and delete their virtual network services. More flexible,
   dynamic customer control capabilities are added to the traditional
   VPN, along with a customer-specific virtual network view. Customers
   control a view of the virtual network resources specifically
   allocated to each of them. This view is called an abstracted
   network topology. Such a view may be specific to the set of
   consumed services as well as to a particular customer. As the
   Customer Network Controller is envisioned to support a plethora of
   distinct applications, there would be another level of
   virtualization from the customer to individual applications.
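   The abstracted-topology idea above can be illustrated with a
   minimal sketch. The following is not part of this framework; all
   names, data structures and numbers are hypothetical, and a real
   implementation would use an agreed information model rather than
   ad-hoc dictionaries.

```python
# Hypothetical sketch: deriving a customer-specific abstracted
# topology from the slice of physical resources allocated to that
# customer. Internal nodes are hidden; only end-points and an
# abstract link with the bottleneck capacity are exposed.

PHYSICAL_LINKS = {          # available bandwidth per physical link, Gb/s
    ("n1", "n2"): 100,
    ("n2", "n3"): 40,
}

ALLOCATIONS = {             # per-customer slice of those resources
    "customer-A": {
        "endpoints": ["n1", "n3"],
        "links": {("n1", "n2"): 40, ("n2", "n3"): 40},
    },
}

def abstracted_view(customer):
    """Return the customer's abstracted network topology."""
    alloc = ALLOCATIONS[customer]
    # A slice cannot exceed what the physical link actually offers.
    for link, bw in alloc["links"].items():
        assert bw <= PHYSICAL_LINKS[link]
    capacity = min(alloc["links"].values())  # bottleneck of the slice
    src, dst = alloc["endpoints"]
    return {"nodes": alloc["endpoints"],
            "links": [{"a": src, "z": dst, "bandwidth": capacity}]}

print(abstracted_view("customer-A"))
# {'nodes': ['n1', 'n3'], 'links': [{'a': 'n1', 'z': 'n3', 'bandwidth': 40}]}
```

   The point of the sketch is only that the same physical network can
   yield a different abstracted view per customer, each restricted to
   that customer's allocated resources.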
   The framework described in this draft is named Abstraction and
   Control of Transport Networks (ACTN) and facilitates:

   - Abstraction of the underlying network resources towards
     higher-layer applications and users (customers); abstraction for
     a specific application or customer is referred to as
     virtualization in the ONF SDN architecture [ONF-ARCH];

   - Slicing of infrastructure to connect multiple customers while
     meeting each customer's specific service requirements;

   - Creation of a virtualized environment allowing operators to view
     and control multi-subnet, multi-technology networks as a single
     virtualized network;

   - The possibility of providing a customer with an abstracted
     network or abstracted services (totally hiding the network);

   - A virtualization/mapping network function that adapts customer
     requests on the virtual resources (allocated to them) to the
     supporting physical network control and performs the necessary
     mapping, translation, isolation and security/policy enforcement;
     this function is often referred to as orchestration;

   - Multi-domain coordination of the underlying transport domains,
     presenting them as an abstracted topology to the customers via
     open and programmable interfaces. This allows for the recursion
     of controllers in a customer-provider relationship.

   The organization of this draft is as follows: Section 2 discusses
   the business model, Section 3 the ACTN architecture, Section 4 the
   applicability of ACTN, and Section 5 the ACTN interface
   requirements.

2. Business Model of ACTN

   The traditional Virtual Private Network (VPN) and Overlay Network
   (ON) models are built on the premise that a single network provider
   provides all virtual private or overlay networks to its customers.
   This model is simple to operate but has some disadvantages in
   accommodating the increasing need for flexible and dynamic network
   virtualization capabilities.

   The ACTN model is built upon entities that reflect the current
   landscape of network virtualization environments. There are three
   key entities in the ACTN model [ACTN-PS]:

   - Customers
   - Service Providers
   - Network Providers

2.1. Customers

   Within the ACTN framework, different types of customers may be
   taken into account depending on their resource needs and on the
   number and type of their access points. As an example, it is
   possible to group them into two main categories:

   Basic Customer: Basic customers include fixed residential users,
   mobile users and small enterprises. Usually the number of basic
   customers is high; they require small amounts of resources and are
   characterized by steady requests (relatively time invariant). A
   typical request for a basic customer is a bundle of voice services
   and internet access. Moreover, basic customers do not modify their
   services themselves; if a service change is needed, it is performed
   by the provider as a proxy. They generally have very few dedicated
   resources (subscriber drop), with everything else shared on the
   basis of some SLA, which is usually best-effort.

   Advanced Customer: Advanced customers typically include
   enterprises, governments and utilities. Such customers can ask for
   both point-to-point and multipoint connectivity with high resource
   demand varying significantly in time and from customer to customer.
   This is one of the reasons why a bundled service offering is not
   enough; it is desirable to provide each of them with customized
   virtual network services.
   Advanced customers may own dedicated virtual resources, or share
   resources, but shared resources are likely to be governed by more
   complex SLAs; moreover, they may have the ability to modify their
   service parameters directly (within the scope of their virtualized
   environments). As customers are geographically spread over multiple
   network provider domains, the necessary control and data interfaces
   to support such customer needs are no longer a single interface
   between the customer and one single network provider. With this
   premise, customers have to interface with multiple providers to get
   their end-to-end network connectivity service and the associated
   topology information. Customers may have to support multiple
   virtual network services with different service objectives and QoS
   requirements. For flexible and dynamic applications, customers may
   want to control their allocated virtual network resources in a
   dynamic fashion. To allow that, customers should be given an
   abstracted view of the topology on which they can perform the
   necessary control decisions and take the corresponding actions.
   ACTN's primary focus is Advanced Customers.

   Customers of a given service provider can in turn offer a service
   to other customers in a recursive way. An example of recursion with
   two service providers is shown below:

   - Customer (of service B)
   - Customer (of service A) & Service Provider (of service B)
   - Service Provider (of service A)
   - Network Provider

   +---------------------------------------------------------------+
   | Customer (of service B)                                       |
   | +-----------------------------------------------------------+ |
   | | Customer (of service A) & Service Provider (of service B) | |
   | | +-------------------------------------------------------+ | |
   | | | Service Provider (of service A)                       | | |
   | | | +---------------------------------------------------+ | | |
   | | | | Network provider                                  | | | |
   | | | +---------------------------------------------------+ | | |
   | | +-------------------------------------------------------+ | |
   | +-----------------------------------------------------------+ |
   +---------------------------------------------------------------+

                    Figure 1: Network Recursiveness

2.2. Service Providers

   Service providers are the providers of virtual network services to
   their customers. Service providers may or may not own physical
   network resources. When the service provider is the same as the
   network provider, this is similar to the traditional VPN model.
   This model works well when the customer maintains a single
   interface with a single provider. When customer locations span
   multiple independent network provider domains, it becomes hard to
   facilitate the creation of end-to-end virtual network services with
   this model.

   A more interesting case arises when network providers only provide
   infrastructure, while service providers directly interface with
   their customers. In this case, service providers are themselves
   customers of the network infrastructure providers. One service
   provider may need to engage multiple independent network providers,
   as its end-users span geographically across multiple network
   provider domains.
   Customer            X-----------------------------------X

   Service Provider A  X-----------------------------------X

   Network Provider B                    X-----------------X

   Network Provider A  X------------------X

   The ACTN network model is predicated upon this three-tier model and
   is summarized in the figure below:

                      +----------------------+
                      |       Customer       |
                      +----------------------+
                                 |
                                 |  /\  Service/Customer specific
                                 |  ||  Abstract Topology
                                 |  ||
                      +----------------------+   E2E abstract
                      |   Service Provider   |   topology creation
                      +----------------------+
                        /        |         \
                       /         |          \   Network Topology
                      /          |           \  (raw or abstract)
                     /           |            \
   +------------------+ +------------------+ +------------------+
   |Network Provider 1| |Network Provider 2| |Network Provider 3|
   +------------------+ +------------------+ +------------------+

                      Figure 2: Three tier model

   There can be multiple types of service providers:

   - Data Center providers can be viewed as a type of service
     provider, as they own and operate data center resources for
     various WAN clients; they can lease physical network resources
     from network providers.

   - Internet Service Providers (ISPs) can be service providers of
     internet services to their customers, while leasing physical
     network resources from network providers.

   - Mobile Virtual Network Operators (MVNOs) provide mobile services
     to their end-users without owning the physical network
     infrastructure.

   The network provider space is the one where recursion occurs. A
   customer-provider relationship between multiple service providers
   can be established, leading to a hierarchical architecture of
   controllers within the service provider network.

2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide those resources to their
   customers.
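   The recursive customer-provider relationship of Figures 1 and 2 can
   be modeled minimally as follows. This is a hypothetical sketch, not
   part of the framework; the class and entity names are illustrative
   only and mirror the labels used in Figure 1.

```python
# Hypothetical model of the recursive customer-provider relationship:
# every entity may buy service from the tier below it, so a service
# provider is itself a customer of another provider.

class Entity:
    def __init__(self, name, provider=None):
        self.name = name
        self.provider = provider  # the entity this one buys service from

    def provider_chain(self):
        """Walk down the tiers until the network provider is reached."""
        chain, cur = [], self
        while cur is not None:
            chain.append(cur.name)
            cur = cur.provider
        return chain

network_provider = Entity("Network Provider")
service_provider_a = Entity("Service Provider (service A)",
                            network_provider)
# Recursion: the customer of service A also acts as provider of B.
service_provider_b = Entity("Customer of A / Provider of B",
                            service_provider_a)
customer_b = Entity("Customer (service B)", service_provider_b)

print(customer_b.provider_chain())
```

   Walking the chain from the end customer always terminates at a
   network provider, which is what allows the hierarchy of controllers
   described later to be stacked to arbitrary depth.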
   The layered model proposed by this draft separates the concerns of
   network providers and customers, with service providers acting as
   aggregators of customer requests.

3. ACTN architecture

   This section provides a high-level control and interface model of
   ACTN.

   The ACTN architecture, while being aligned with the ONF SDN
   architecture [ONF-ARCH], presents a 3-tier reference model. It
   allows for hierarchy and recursion not only of SDN controllers but
   also of traditionally controlled domains. It defines three types of
   controllers depending on the functionalities they implement. The
   main functionalities that are identified are:

   - Multi-domain coordination function: Taking the definition of a
     domain as "everything that is under the control of the same
     controller", a control entity is needed that oversees the
     specific aspects of the different domains and builds a single
     abstracted end-to-end network topology in order to coordinate
     end-to-end path computation and path/service provisioning.

   - Virtualization/Abstraction function: Provides an abstracted view
     of the underlying network resources towards the customer, be it
     the client or a higher-level controller entity. It includes the
     computation of customer resource requests into virtual network
     paths based on the global network-wide abstracted topology, and
     the creation of an abstracted view of the network slices
     allocated to each customer, according to customer-specific
     virtual network objective functions and to the customer traffic
     profile.

   - Customer mapping function: In charge of mapping customer VN setup
     commands into network provisioning requests to the Physical
     Network Controller (PNC) according to static or dynamic policy
     provisioned by the business OSS/NMS. Moreover, it provides
     mapping and translation of customer virtual network slices into
     physical network resources.

   - Virtual service coordination function: Incorporates customer
     service-related knowledge into the virtual network operations in
     order to seamlessly operate virtual networks while meeting the
     customer's service requirements.

   This functionality covers two types of services:

   - Service-aware Connectivity Services: This category includes all
     the network service operations used to provide connectivity
     between customer end-points while meeting policies and
     service-related constraints. The data model for this category
     would include topology entities such as virtual nodes, virtual
     links, adaptation and termination points, and service-related
     entities such as policies and service-related constraints. (See
     Section 4.2.2.)

   - Network Function Virtualization Services: These kinds of services
     are usually set up between customer premises and service provider
     premises, and are provided mostly by cloud providers or content
     delivery providers. The context may include, but is not limited
     to, a security function such as a firewall, a traffic optimizer,
     or the provisioning of storage or computation capacity, where the
     customer does not care whether the service is implemented in one
     data center or another. These services may be hosted virtually by
     the provider or be physically part of the network. This allows
     the service provider to hide its own resources (both network and
     data centers) and direct customer requests wherever most
     suitable. This is also known as the "end-point mobility" case and
     introduces new concepts of traffic and service provisioning and
     resiliency (e.g., Virtual Machine mobility).
   (See Section 4.2.3.)

   The customer service-related knowledge includes:

   - VN Service Requirements: The end customer has specific service
     requirements for the VN, including the customer end-point access
     profile as well as the E2E customer service objectives. The ACTN
     framework architectural "entities" would monitor the E2E service
     during the lifetime of the VN, focusing both on the connectivity
     provided by the network and on the customer service objectives.
     These E2E service requirements go beyond the VN service
     requirements and include the customer infrastructure as well.

   - Application Service Policy: Apart from network connectivity, the
     customer may also require policies for application-specific
     features or services. The ACTN framework would take these
     application service policies and requirements into consideration
     while coordinating the virtual network operations, which require
     end customer connectivity for these advanced services.

   The types of controller defined, shown in Figure 3 below, are the
   following:

   - CNC - Customer Network Controller
   - MDSC - Multi Domain Service Coordinator
   - PNC - Physical Network Controller

   VPN customer NW      Mobile Customer      ISP NW service Customer
         |                     |                     |
     +-------+             +-------+             +-------+
     | CNC-A |             | CNC-B |             | CNC-C |
     +-------+             +-------+             +-------+
          \                    |                    /
           -----------         |         -----------
                      \        |        /
                  +-----------------------+
                  |         MDSC          |
                  +-----------------------+
                      /        |        \
           -----------         |         -----------
          /                    |                    \
     +-------+             +-------+             +-------+
     |  PNC  |             |  PNC  |             |  PNC  |
     +-------+             +-------+             +-------+
        |                   /     \                  \
        | GMPLS            /       \                  \
        | trigger     +-----+    +-----+             -----
     ----------       | PNC |    | PCE |            (     )
    (          )      +-----+    +-----+           ( Phys. )
   (   GMPLS    )        |          \               ( Net )
   (  Physical  )      -----         -----           -----
   (  Network   )     (     )       (     )
    (          )     ( Phys. )     ( Phys. )
     ----------       ( Net )       ( Net )
                       -----         -----

                   Figure 3: ACTN Control Hierarchy

3.1. Customer Network Controller

   A Virtual Network Service is instantiated by the Customer Network
   Controller via the CMI (CNC-MDSC Interface). As the Customer
   Network Controller directly interfaces with the application
   stratum, it understands multiple application requirements and their
   service needs. It is assumed that the Customer Network Controller
   and the MDSC have common knowledge of the end-point interfaces,
   based on their business negotiation prior to service instantiation.
   End-point interfaces refer to the customer-network physical
   interfaces that connect customer premise equipment to network
   provider equipment. Figure 10 in the Appendix shows an example
   physical network topology that supports multiple customers. In this
   example, customer A has three end-points: A.1, A.2 and A.3. The
   interfaces between customers and transport networks are assumed to
   be 40G OTU links.

   In addition to abstract networks, ACTN allows services to be
   provided to the CNC. Examples of services include connectivity
   between one of the customer's end-points and a given set of
   resources in a data center of the service provider.

3.2. Multi Domain Service Coordinator

   The MDSC (Multi Domain Service Coordinator) sits between the CNC
   (the one issuing connectivity requests) and the PNCs (Physical
   Network Controllers - the ones managing the physical network
   resources). The MDSC can be collocated with the PNC, especially in
   those cases where the service provider and the network provider are
   the same entity.

   The internal system architecture and building blocks of the MDSC
   are out of the scope of ACTN. Some examples can be found in the
   Application Based Network Operations (ABNO) architecture [ABNO] and
   the ONF SDN architecture [ONF-ARCH].
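   The MDSC's mediation between one CNC request and several per-domain
   PNCs can be sketched as follows. This is an illustrative sketch
   under stated assumptions, not the draft's specification: the class
   names, the `setup_segment` call, and the precomputed inter-domain
   border sequence are all hypothetical simplifications.

```python
# Hypothetical sketch of multi-domain coordination: the MDSC splits an
# end-to-end request into one segment per domain and asks each PNC to
# provision its piece via its own MPI.

class PNC:
    def __init__(self, domain, nodes):
        self.domain, self.nodes = domain, set(nodes)

    def setup_segment(self, src, dst):
        # A real PNC would compute and provision a path in its domain;
        # here we only check the end-points belong to the domain.
        assert {src, dst} <= self.nodes
        return f"{self.domain}:{src}->{dst}"

class MDSC:
    def __init__(self, pncs, borders):
        self.pncs = pncs        # one MPI per PNC, in path order
        self.borders = borders  # precomputed inter-domain border nodes

    def setup_e2e(self, src, dst):
        """Stitch one segment per domain into an end-to-end path."""
        segments, a = [], src
        for pnc, b in zip(self.pncs, self.borders + [dst]):
            segments.append(pnc.setup_segment(a, b))
            a = b
        return segments

pnc1 = PNC("domain1", ["A", "X1"])
pnc2 = PNC("domain2", ["X1", "Z"])
mdsc = MDSC([pnc1, pnc2], ["X1"])
print(mdsc.setup_e2e("A", "Z"))  # ['domain1:A->X1', 'domain2:X1->Z']
```

   The sketch shows why the MDSC needs one MPI per PNC: each PNC only
   sees, and can only provision, the segment inside its own domain.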
   The MDSC is the only building block of the architecture that is
   able to implement all four of the main ACTN functionalities, i.e.,
   the multi-domain coordination function, the
   virtualization/abstraction function, the customer mapping function
   and the virtual service coordination function. A hierarchy of MDSCs
   can be foreseen for scalability and administrative reasons.

3.3. Physical Network Controller

   The Physical Network Controller is in charge of configuring the
   network elements, monitoring the physical topology of the network,
   and passing the topology, either raw or abstracted, to the MDSC.

   The internal architecture of the PNC, its building blocks and the
   way it controls its domain are out of the scope of ACTN. Some
   examples can be found in the Application Based Network Operations
   (ABNO) architecture [ABNO] and the ONF SDN architecture [ONF-ARCH].

   The PNC, in addition to being in charge of controlling the physical
   network, is able to implement two of the four main ACTN
   functionalities: the multi-domain coordination function and the
   virtualization/abstraction function. A hierarchy of PNCs can be
   foreseen for scalability and administrative reasons.

3.4. ACTN interfaces

   To allow virtualization and multi-domain coordination, the network
   has to provide open, programmable interfaces through which customer
   applications can create, replace and modify virtual network
   resources and services in an interactive, flexible and dynamic
   fashion while having no impact on other customers. Direct customer
   control of transport network elements and virtualized services is
   not perceived as a viable proposition for transport network
   providers due to security and policy concerns, among other reasons.
   In addition, as discussed in the previous section, the network
   control plane for transport networks has been separated from the
   data plane, and as such it is not viable for the customer to
   interface directly with transport network elements.

   While the current network control plane is well suited for the
   control of physical network resources via dynamic provisioning,
   path computation, etc., a multi-domain service coordinator needs to
   be built on top of the physical network controller to support
   network virtualization. At a high level, virtual network control
   refers to a mediation layer that performs several functions.

   Figure 4 depicts a high-level control and interface architecture
   for ACTN. A number of key ACTN interfaces exist for the deployment
   and operation of ACTN-based networks. These are highlighted in
   Figure 4 (ACTN Interfaces) below:

        -------------
       | Application |
        -------------
             ^
             | I/F A
             v
        --------------             --------
       |   Customer   |           (        )
       |   Network    |--------->( Customer )
       |  Controller  |           ( Network )
        --------------             --------
             ^                         ^
             | I/F B                   :
             v                         :
        --------------                 :
       | MultiDomain  |                : I/F E
       |   Service    |                :
       |  Coordinator |                :
        --------------                 :
             ^                         :
             | I/F C                   :
             v                         v
        --------------             --------
       |   Physical   |           (        )
       |   Network    |<-------->( Physical )
       |  Controller  |   I/F D   ( Network )
        --------------             --------

                    Figure 4: ACTN Interfaces

   The interfaces and functions are described below:

   - Interface A: A north-bound interface (NBI) that communicates the
     service request or application demand. A request includes
     specific service properties, including: services, topology,
     bandwidth and constraint information.
Interface B: The CNC-MSDC Interface (CMI) is an interface 636 between a Customer Network Controller and a Multi Service 637 Domain Controller. It requests the creation of the network 638 resources, topology or services for the applications. The 639 Virtual Network Controller may also report potential network 640 topology availability if queried for current capability from 641 the Customer Network Controller. 643 . Interface C: The MDSC-PNC Interface (MPI) is an interface 644 between a Multi Domain Service Coordinator and a Physical 645 Network Controller. It communicates the creation request, if 646 required, of new connectivity of bandwidth changes in the 647 physical network, via the PNC. In multi-domain environments, 648 the MDSC needs to establish multiple MPIs, one for each PNC, as 649 there are multiple PNCs responsible for its domain control. 651 . Interface D: The provisioning interface for creating forwarding 652 state in the physical network, requested via the Physical 653 Network Controller. 655 . Interface E: A mapping of physical resources to overlay 656 resources. 658 The interfaces within the ACTN scope are B and C. 660 4. 
ACTN Applicability 662 This section discusses the high-level applicability of ACTN based on 663 a number of use-cases, listed in the following: 665 - draft-cheng-actn-ptn-requirements-00 (ACTN Use-cases for Packet 666 Transport Networks in Mobile Backhaul Networks) 668 - draft-dhody-actn-poi-use-case-03 (Packet Optical Integration (POI) 669 Use Cases for Abstraction and Control of Transport Networks 670 (ACTN)) 672 - draft-fang-actn-multidomain-dci-01 (ACTN Use Case for Multi-domain 673 Data Center Interconnect) 675 - draft-klee-actn-connectivity-multi-vendor-domains-03 (ACTN Use- 676 case for On-demand E2E Connectivity Services in Multiple Vendor 677 Domain Transport Networks) 679 - draft-kumaki-actn-multitenant-vno-00 (ACTN : Use case for Multi 680 Tenant VNO) 682 - draft-lopez-actn-vno-multidomains-01 (ACTN Use-case for Virtual 683 Network Operation for Multiple Domains in a Single Operator 684 Network) 686 - draft-shin-actn-mvno-multi-domain-00 (ACTN Use-case for Mobile 687 Virtual Network Operation for Multiple Domains in a Single 688 Operator Network) 690 - draft-xu-actn-perf-dynamic-service-control-02 (Use Cases and 691 Requirements of Dynamic Service Control based on Performance 692 Monitoring in ACTN Architecture) 694 4.1.
ACTN Use cases Summary 696 Listed below is a set of generalized requirements identified by each of 697 the aforementioned use-cases: 699 - draft-cheng-actn-ptn-requirements-00 701 o Faster End-to-End Enterprise Services Provisioning 702 o Multi-layer coordination in L2/L3 Packet Transport Networks 703 o Optimizing network resource utilization (supporting 704 various performance monitoring metrics, such as traffic flow 705 statistics, packet delay, delay variation, throughput and 706 packet-loss rate) 707 o Virtual Network Operations for multi-domain Packet Transport 708 Networks 710 - draft-dhody-actn-poi-use-case-03 712 o Packet Optical Integration to support Traffic Planning, 713 Performance Monitoring, Automated Congestion Management and 714 Automatic Network Adjustments 715 o Protection and Restoration Synergy in Packet Optical Multi- 716 layer networks. 717 o Service Awareness and Coordination between Multiple Network 718 Domains 720 - draft-fang-actn-multidomain-dci-01 722 o Multi-domain Data Center Interconnection to support VM Migration, 723 Global Load Balancing, Disaster Recovery, On-demand Virtual 724 Connection/Circuit Services 725 o The interfaces between the Data Center Operation and each 726 transport network domain SHOULD support standards-based 727 abstraction with a common information/data model to support the 728 following: 730 . Network Query (Pull Model) from the Data Center 731 Operation to each transport network domain to collect 732 potential resource availability (e.g., BW availability, 733 latency range, etc.) between a few data center 734 locations. 735 . Network Path Computation Request from the Data Center 736 Operation to each transport network domain to estimate 737 the path availability. 738 .
Network Virtual Connections/Circuits Request from the 739 Data Center Operation to each transport domain to 740 establish end-to-end virtual connections/circuits (with 741 type, concurrency, duration, SLA/QoS parameters, 742 protection/reroute policy options, policy constraints 743 such as peering preference, etc.). 744 . Network Virtual Connections/Circuits Modification 745 Request 747 - draft-klee-actn-connectivity-multi-vendor-domains-03 749 o Two-stage path computation capability in a hierarchical 750 control architecture (MDSC-PNC) and a hierarchical 751 composition of integrated network views 753 o Coordination of signal flow for E2E connections. 755 o Abstraction of: 757 . Inter-connection data between domains 759 . Customer Endpoint data 761 . The multiple levels/granularities of the abstraction of 762 network resources (which is subject to policy and service 763 need). 765 . Any physical network constraints (such as SRLG, link 766 distance, etc.), which should be reflected in the abstraction. 768 . Domain preference and local policy (such as preferred 769 peering point(s), preferred route, etc.), and domain network 770 capability (e.g., support of push/pull model). 772 - draft-kumaki-actn-multitenant-vno-00 774 o On-demand Virtual Network Service Creation 775 o Domain Control Plane/Routing Layer Separation 776 o Independent service operation for Virtual Services from the 777 control of other domains 778 o Multiple service level support for each VN (e.g., bandwidth 779 and latency for each VN service). 780 o VN diversity/survivability should be met in physical network 781 mapping. 782 o VN confidentiality and sharing constraint should be supported. 784 - draft-lopez-actn-vno-multidomains-01 786 o Creation of a global abstraction of network topology: The VNO 787 Coordinator assembles each domain level abstraction of 788 network topology into a global abstraction of the end-to-end 789 network.
790 o End-to-end connection lifecycle management 791 o Invocation of path provisioning requests to each domain 792 (including optimization requests) 793 o Invocation of path protection/reroute to the affected 794 domain(s) 795 o End-to-end network monitoring and fault management. This could 796 imply potential KPIs and alarm correlation capabilities. 797 o End-to-end accounting and generation of detailed records for 798 resource usage 799 o End-to-end policy enforcement 801 - draft-shin-actn-mvno-multi-domain-00 803 o Resource abstraction: operational mechanisms in the mobile 804 backhaul network to expose current network usage 805 information so that dynamic and elastic applications can be 806 provisioned dynamically with QoS guarantees. 808 o For load balancing or recovery, the selection of a core DC 809 location from the edge constitutes a data center selection 810 problem. 812 o Multi-layer routing and optimization, and coordination between 813 these two layers. 815 - draft-xu-actn-perf-dynamic-service-control-02 817 o Dynamic Service Control Policy enforcement and Traffic/SLA 818 Monitoring: 819 . Customer service performance monitoring strategy, 820 including the traffic monitoring object (the service 821 that needs to be monitored), 823 . monitoring parameters (e.g., transmitted and received 824 bytes per unit time), 825 . traffic monitoring cycle (e.g., 15 minutes, 24 hours), 826 . threshold of traffic monitoring (e.g., high and low 827 threshold), etc. 829 4.2. Work in Scope of ACTN 831 This section provides a summary of the use-cases in terms of two 832 categories: (i) service-specific requirements; (ii) network-related 833 requirements. 835 The service-specific requirements listed below apply uniquely to 836 the work scope of ACTN. Service-specific requirements are related to 837 the virtual service coordination function defined in Section 3.
These 838 requirements concern the customer's VNs in terms of the service 839 policy associated with them, such as service performance objectives, 840 VN endpoint location information for certain required service- 841 specific functions (e.g., security and others), VN survivability 842 requirements, or dynamic service control policy. 844 Network-related requirements are related to the virtual network 845 operation function defined in Section 3. These requirements are 846 related to multi-domain and multi-layer signaling, routing, 847 protection/restoration and synergy, re-optimization/re-grooming, 848 etc. These requirements are not inherently unique to the scope of 849 ACTN, but some of them are in the scope of ACTN, especially 850 for the coherent/seamless operation of the multiple-controller 851 hierarchy. 853 The following table gives an overview of the service-specific and 854 network-related requirements for each 855 ACTN use-case and identifies the work in scope of ACTN.

857 Use-    Service-          Network-related    ACTN Work
858 case    specific          Requirements       Scope
859         Requirements
861 ------- ---------------   ----------------   --------------
862 Cheng   - E2E service     - Multi-layer      - Dynamic
863           provisioning      (L2/L2.5)          multi-layer
864         - Performance       coordination       coordination
865           monitoring      - VNO for multi-     based on
866         - Resource          domain transport   utilization is
867           utilization       networks           in scope of
868           abstraction                          ACTN
869                                              - YANG for
870                                                utilization
871                                                abstraction
873 ------- ---------------   ----------------   --------------
874 Dhody   - Service         - POI              - Performance
875           awareness/        Performance        related data
876           coordination      monitoring         model may be
877           between P/O.    - Protection/        in scope of
878                             Restoration        ACTN
879                             synergy          - Customer's
880                                                VN
881                                                survivability
882                                                policy
883                                                enforcement
884                                                for
885                                                protection/
886                                                restoration
887                                                is unique to
888                                                ACTN
890 ------- ---------------   ----------------   --------------
891 Fang    - Dynamic VM      - On-demand        - Multi-
892           migration         virtual circuit    destination
893           (service),        request            service
894           Global load     - Network Path       selection
895           balancing         Connection         policy
896           (utilization      request            enforcement
897           efficiency),                         and its
898           Disaster                             related
899           recovery                             primitives/
900         - Service-                             information are
901           aware network                        unique to
902           query                                ACTN.
903         - Service                            - Service-
904           Policy                               aware network
905           Enforcement                          query and its
906                                                data model can
907                                                be extended by
908                                                ACTN.
910 ------- ---------------   ----------------   --------------
911 Klee                      - Two stage path   - Multi-domain
912                             computation        service policy
913                             E2E signaling      coordination
914                             coordination       to network
915                                                primitives is
916                           - Abstraction of     in scope of
917                             inter-domain       ACTN.
918                             info
919                           - Enforcement of
920                             network policy
921                             (peering, domain
922                             preference)
923                           - Network
924                             capability
925                             exchange
926                             (pull/push,
927                             abstraction
928                             level, etc.)
930 ------- ---------------   ----------------   --------------
931 Kumaki  - On-demand VN                       - All of the
932           creation                             service-
933         - Multi-                               specific items
934           service level                        in the left
935           for VN                               column are
936         - VN                                   unique to
937           survivability                        ACTN.
938           /diversity/
939           confidentiality
941 ------- ---------------   ----------------   --------------
942 Lopez   - E2E             - E2E connection   - Escalation
943           accounting and    management, path   of performance
944           resource usage    provisioning       and fault
945           data            - E2E network        management
946         - E2E service       monitoring and     data to CNC
947           policy            fault management   and the policy
948           enforcement                          enforcement
949                                                for this area
950                                                is unique to
951                                                ACTN.
953 ------- ---------------   ----------------   --------------
954 Shin    - Current         - LB for           - Multi-layer
955           network           recovery           routing and
956           resource        - Multi-layer        optimization
957           abstraction       routing and        are related to
958         - Endpoint/DC       optimization       VN's dynamic
959           dynamic           coordination       endpoint
960           selection (for                       selection
961           VM migration)                        policy.
963 ------- ---------------   ----------------   --------------
964 Xu      - Dynamic         - Traffic          - Dynamic
965           service           monitoring         service
966           control policy  - SLA monitoring     control policy
967           enforcement                          enforcement
968         - Dynamic                              and its
969           service                              control
970           control                              primitives are
971                                                in scope of
972                                                ACTN
973                                              - Data model
974                                                to support
975                                                traffic
976                                                monitoring
977                                                data is an
978                                                extension of
979                                                the YANG model
980                                                ACTN can
981                                                extend.

983 The subsequent sections illustrate the ACTN-specific 984 work scope identified by the above analysis: 986 - Coordination of Multi-destination Service Requirement/Policy 987 (Section 4.2.1) 988 - Application Service Policy-aware Network Operation (Section 4.2.2) 989 - Network Function Virtualization Services (Section 4.2.3) 990 - Dynamic Service Control Policy Enforcement for Performance/Fault 991 Management (Section 4.2.4) 992 - E2E VN Survivability and Multi-Layer (Packet-Optical) Coordination 993 for Protection/Restoration (Section 4.2.5) 995 4.2.1.
Coordination of Multi-destination Service Requirement/Policy 997 +----------------+ 998 | CNC | 999 | (Global DC | 1000 | Operation | 1001 | Control) | 1002 +--------+-------+ 1003 | | Service Requirement/Policy: 1004 | | - Endpoint/DC location info 1005 | | - Endpoint/DC dynamic 1006 | | selection policy 1007 | | (for VM migration, DR, LB) 1008 | v 1009 +---------+--------+ 1010 | Multi-domain | Service policy-driven 1011 |Service Controller| dynamic DC selection 1012 +-----+---+---+----+ 1013 | | | 1014 | | | 1015 +----------------+ | +----------------+ 1016 | | | 1017 +-----+-----+ +-----+------+ +------+-----+ 1018 | PNC for | | PNC for | | PNC for | 1019 | Transport | | Transport | | Transport | 1020 | Network A | | Network B | | network C | 1021 +-----------+ +------------+ +------------+ 1022 | | | 1023 +---+ ------ ------ ------ +---+ 1024 |DC1|--//// \\\\ //// \\\\ //// \\\\---+DC4| 1025 +---+ | | | | | | +---+ 1026 | TN A +-----+ TN B +----+ TN C | 1027 / | | | | | 1028 / \\\\ //// / \\\\ //// \\\\ //// 1029 +---+ ------ / ------ \ ------ \ 1030 |DC2| / \ \+---+ 1031 +---+ / \ |DC6| 1032 +---+ \ +---+ +---+ 1033 |DC3| \|DC4| 1034 +---+ +---+ 1036 DR: Disaster Recovery 1037 LB: Load Balancing 1039 Figure 5: Service Policy-driven Data Center Selection 1041 Figure 5 shows how VN service policies from the CNC are incorporated 1042 by the MDSC to support multi-destination applications. Multi- 1043 destination applications refer to applications in which the 1044 selection of the destination of a network path for a given source 1045 needs to be decided dynamically to support such applications. 1047 Data Center selection problems arise for VM mobility, disaster 1048 recovery and load balancing cases. VN's service policy plays an 1049 important role for virtual network operation. Service policy can be 1050 static or dynamic. Dynamic service policy for data center selection 1051 may be placed as a result of utilization of data center resources 1052 supporting VNs. 
The MDSC would then incorporate this information to 1053 meet the service objective of this application. 1055 4.2.2. Application Service Policy-aware Network Operation 1057 +----------------+ 1058 | CNC | 1059 | (Global DC | 1060 | Operation | 1061 | Control) | 1062 +--------+-------+ 1063 | | Application Service Policy 1064 | | - VNF requirement (e.g. 1065 | | security function, etc.) 1066 | | - Location profile for each VNF 1067 | v 1068 +---------+--------+ 1069 | Multi-domain | Dynamically select the 1070 |Service Controller| network destination to 1071 +-----+---+---+----+ meet VNF requirement. 1072 | | | 1073 | | | 1074 +---------------+ | +----------------+ 1075 | | | 1076 +------+-----+ +-----+------+ +------+-----+ 1077 | PNC for | | PNC for | | PNC for | 1078 | Transport | | Transport | | Transport | 1079 | Network A | | Network B | | network C | 1080 | | | | | | 1081 +------------+ +------------+ +------------+ 1082 | | | 1083 {VNF b} | | | {VNF b,c} 1084 +---+ ------ ------ ------ +---+ 1085 |DC1|--//// \\\\ //// \\\\ //// \\\\-|DC4| 1086 +---+ | | | | | |+---+ 1087 | TN A +---+ TN B +--+ TN C | 1088 / | | | | | 1089 / \\\\ //// / \\\\ //// \\\\ //// 1090 +---+ ------ / ------ \ ------ \ 1091 |DC2| / \ \\+---+ 1092 +---+ / \ |DC6| 1093 {VNF a} +---+ +---+ +---+ 1094 |DC3| |DC4| {VNF a,b,c} 1095 +---+ +---+ 1096 {VNF a, b} {VNF a, c} 1098 Figure 6: Application Service Policy-aware Network Operation 1100 This scenario is similar to the previous case in that the VN service 1101 policy for the application can be met by a set of multiple 1102 destinations that provide the required virtual network functions 1103 (VNF). Virtual network functions can be, for example, security 1104 functions required by the VN application. The VN service policy supplied by 1105 the CNC would indicate the locations where a certain VNF can be 1106 fulfilled. This policy information is critical in finding the 1107 optimal network path subject to this constraint.
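As a purely illustrative aside (not part of the ACTN framework), the destination-selection constraint just described can be sketched in a few lines. The function name, DC names, VNF identifiers and path costs below are all invented for illustration.

```python
# Hypothetical sketch: pick a destination DC subject to a VNF-location
# constraint, i.e., the cheapest-to-reach DC that hosts every required
# VNF. All names and costs are invented for illustration.

def select_destination(required_vnfs, dc_vnf_map, path_cost):
    """Return the cheapest-to-reach DC hosting all required VNFs, or None."""
    feasible = [dc for dc, vnfs in dc_vnf_map.items()
                if set(required_vnfs) <= set(vnfs)]
    if not feasible:
        return None  # no DC can fulfil the VNF requirement
    return min(feasible, key=lambda dc: path_cost[dc])

# Example: DC3 and DC6 both host VNFs {a, b}; DC6 is cheaper to reach.
dcs = {"DC1": {"b"}, "DC3": {"a", "b"}, "DC4": {"b", "c"}, "DC6": {"a", "b", "c"}}
costs = {"DC1": 1, "DC3": 4, "DC4": 2, "DC6": 3}
best = select_destination({"a", "b"}, dcs, costs)  # "DC6"
```

In practice an MDSC would combine such a policy check with per-candidate path computation toward each feasible destination rather than a static cost table.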
As VNFs can be 1108 dynamically moved across different DCs, this policy should be 1109 dynamically enforced from the CNC to the MDSC and the PNCs. 1111 4.2.3. Network Function Virtualization Services 1113 +----------------+ 1114 | CNC | 1115 | (Global DC | 1116 | Operation | 1117 | Control) | 1118 +--------+-------+ 1119 | | Service Policy 1120 | | (e.g., firewall, traffic 1121 | | optimizer) 1122 | | 1123 | v 1124 +---------+--------+ 1125 | Multi-domain | Select network 1126 |Service Controller| connectivity subject to 1127 +-----+---+---+----+ meeting service policy 1128 | | | 1129 | | | 1130 +---------------+ | +----------------+ 1131 | | | 1132 +------+-----+ +-----+------+ +------+-----+ 1133 | PNC for | | PNC for | | PNC for | 1134 | Transport | | Transport | | Transport | 1135 | Network A | | Network B | | network C | 1136 | | | | | | 1137 +------------+ +------------+ +------------+ 1138 | | | 1139 | | | 1140 +---+ ------ ------ ------ +---+ 1141 |DC1|--//// \\\\ //// \\\\ //// \\\\-|DC4| 1142 +---+ | | | | | |+---+ 1143 | TN A +---+ TN B +--+ TN C | 1144 / | | | | | 1145 / \\\\ //// / \\\\ //// \\\\ //// 1146 +---+ ------ / ------ \ ------ \ 1147 |DC2| / \ \\+---+ 1148 +---+ / \ |DC6| 1149 +---+ +---+ +---+ 1150 |DC3| |DC4| 1151 +---+ +---+ 1153 Figure 7: Network Function Virtualization Services 1155 Network Function Virtualization Services are usually set up between 1156 the customers' premises and the service provider's premises and are provided 1157 mostly by cloud providers or content delivery providers. The context 1158 may include, but is not limited to, a security function such as a firewall, a 1159 traffic optimizer, or the provisioning of storage or computation 1160 capacity, where the customer does not care whether the service is 1161 implemented in a given data center or another. 1163 These services may be hosted virtually by the provider or physically 1164 as part of the network.
This allows the service provider to hide its 1165 own resources (both network and data centers) and divert customer 1166 requests where most suitable. This is also known as the "end point 1167 mobility" case and introduces new concepts of traffic and service 1168 provisioning and resiliency (e.g., Virtual Machine mobility). 1170 4.2.4. Dynamic Service Control Policy Enforcement for Performance and 1171 Fault Management 1173 +------------------------------------------------+ 1174 | Customer Network Controller | 1175 +------------------------------------------------+ 1176 1.Traffic| /|\4.Traffic | /|\ 1177 Monitor& | | Monitor | | 8.Traffic 1178 Optimize | | Result 5.Service | | modify & 1179 Policy | | modify& | | optimize 1180 \|/ | optimize Req.\|/ | result 1181 +------------------------------------------------+ 1182 | Multi-domain Service Controller | 1183 +------------------------------------------------+ 1184 2. Path | /|\3.Traffic | | 1185 Monitor | | Monitor | |7.Path 1186 Request | | Result 6.Path | | modify & 1187 | | modify& | | optimize 1188 \|/ | optimize Req.\|/ | result 1189 +------------------------------------------------+ 1190 | Physical Network Controller | 1191 +------------------------------------------------+ 1193 Figure 8: Dynamic Service Control for Performance and Fault 1194 Management 1196 Figure 8 shows the flow of dynamic service control policy 1197 enforcement for performance and fault management initiated by the 1198 customer per their VN. The feedback loop and filtering mechanism 1199 tailored for VNs performed by the MDSC differentiates this ACTN 1200 scope from the traditional network management paradigm. A VN-level dynamic 1201 OAM data model is a building block to support this capability. 1203 4.2.5. E2E VN Survivability and Multi-Layer (Packet-Optical) 1204 Coordination for Protection/Restoration 1206 +----------------+ 1207 | Customer | 1208 | Network | 1209 | Controller | 1210 +--------*-------+ 1211 * | E2E VN Survivability Req.
1212 * | - VN Protection/Restoration 1213 * v - 1+1, Restoration, etc. 1214 +------*-----+ - End Point (EP) info. 1215 | | 1216 | MDSC | MDSC enforces VN survivability 1217 | | requirement, determining the 1218 | | optimal combination of Packet/ 1219 +------*-----+ Optical protection/restoration, 1220 * Optical bypass, etc. 1221 * 1222 * 1223 ********************************************** 1224 * * * * 1225 +----*-----+ +----*----+ +----*-----+ +----*----+ 1226 |PNC for | |PNC for | |PNC for | |PNC for | 1227 |Access N. | |Packet C.| |Optical C.| |Access N.| 1228 +----*-----+ +----*----+ +----*-----+ +---*-----+ 1229 * --*--- * * 1230 * /// \\\ * * 1231 --*--- | Packet | * ----*- 1232 /// \\\ | Core +------+------/// \\\ 1233 | Access +----\\ /// * | Access | 1234 | Network | ---+-- * | Network | +---+ 1235 |\\\ /// | * \\\ ///---+EP6| 1236 | +---+- | | -----* -+---+ +---+ 1237 +-+-+ | | +----/// \\\ | | 1238 |EP1| | +--------------+ Optical | | | +---+ 1239 +---+ | | Core +------+ +--+EP5| 1240 +-+-+ \\\ /// +---+ 1241 |EP2| ------ | 1242 +---+ | | 1243 +--++ ++--+ 1244 |EP3| |EP4| 1245 +---+ +---+ 1247 Figure 9: E2E VN Survivability and Multi-layer Coordination for 1248 Protection and Restoration 1250 Figure 9 shows the need for E2E protection/restoration control 1251 coordination that involves the CNC, MDSC and PNCs to meet the VN 1252 survivability requirement. The VN survivability requirement and its 1253 policy need to be translated into multi-domain and multi-layer 1254 network protection and restoration scenarios across different 1255 controller types. After an E2E path is set up successfully, the MDSC 1256 has a unique role to enforce a policy-based flexible VN survivability 1257 requirement by coordinating all PNC domains. 1259 As seen in Figure 9, multi-layer (i.e., packet/optical) coordination 1260 is a subset of this E2E protection/restoration control operation.
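To make this coordination concrete, the following toy sketch (illustrative only; not an ACTN-defined API, with all node names and link costs invented) shows an "MDSC" that merges the abstract links reported by each PNC into one E2E graph, computes an E2E path, and recomputes it after a PNC withdraws a failed link.

```python
# Illustrative toy only: an "MDSC" that stitches abstract links reported
# by PNCs into one E2E graph and (re)computes a shortest E2E path.
# Node names (EP1, P1, O1, EP5) and costs are invented.

import heapq
from collections import defaultdict

class Mdsc:
    def __init__(self):
        self.graph = defaultdict(dict)  # node -> {neighbor: cost}

    def update_abstract_topology(self, pnc_links):
        # Each PNC pushes (a, b, cost) abstract links; cost None withdraws.
        for a, b, cost in pnc_links:
            if cost is None:
                self.graph[a].pop(b, None)
                self.graph[b].pop(a, None)
            else:
                self.graph[a][b] = self.graph[b][a] = cost

    def e2e_path(self, src, dst):
        # Dijkstra over the stitched abstract graph.
        dist, prev = {src: 0}, {}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, c in self.graph[u].items():
                nd = d + c
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        if dst not in dist:
            return None  # no feasible E2E (re)route
        path, n = [dst], dst
        while n != src:
            n = prev[n]
            path.append(n)
        return list(reversed(path))
```

In this toy, a packet-core failure reported by a PNC becomes a link withdrawal, and a subsequent e2e_path() call may then route over the optical domain, mirroring the optical-bypass case in the text.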
1261 The MDSC has a role to play in determining an optimal 1262 protection/restoration level based on the customer's VN 1263 survivability requirement. For instance, the MDSC needs to interface 1264 with the PNC for the packet core as well as the PNC for the optical core and 1265 enforce protection/restoration policy as part of the E2E 1266 protection/restoration. Neither the PNC for the packet core nor the PNC 1267 for the optical core is in a position to be aware of the E2E path and 1268 its protection/restoration situation. This role of the MDSC is 1269 unique for this reason. In some cases, the MDSC will have to 1270 determine and enforce optical bypass to find a feasible reroute path 1271 upon a packet core network failure that cannot be resolved by the packet 1272 core network itself. 1274 To coordinate this operation, the PNCs will need to update their 1275 domain-level abstract topology upon resource changes due to a 1276 network failure or other factors. The MDSC will incorporate all 1277 these updates to determine whether an alternate E2E reroute path is 1278 necessary based on the changes reported from the PNCs. It 1279 will need to update the E2E abstract topology and the affected CN's 1280 VN topology in real time. This refers to the dynamic synchronization of 1281 topology from physical topology to abstract topology to VN topology. 1283 The MDSC will also need to perform the path restoration signaling to the 1284 affected PNCs whenever necessary. 1286 5. ACTN interface requirements 1288 This section provides the ACTN interface requirements for the two 1289 interfaces that are within the ACTN scope. 1291 . CMI: CNC-MDSC Interface (Section 5.1) 1292 . MPI: MDSC-PNC Interface (Section 5.2) 1294 For each requirement, it also identifies the following categories 1295 where possible: 1297 1. Applicable [App]: Existing components are applicable to the ACTN 1298 architecture 1299 2. Extensible [Ext]: Existing components can be extended to the ACTN 1300 architecture 1301 3.
New [New]: The components are new work to ACTN architecture 1303 5.1. CMI Interface Requirements 1305 Requirement Notes 1306 ------------------------------- ---------------------------- 1307 1. Security/Policy Negotiation - Some new element for 1308 (Who are you?) (Between CNC controller-controller 1309 and MDSC) (CNC-MDSC) 1310 - Configured vs. Discovered security/policy 1311 [new] negotiation aspect. 1312 - Trust domain verification - It is not entirely 1313 (External Entity vs. Internal clear if there is 1314 Service Department) [ext] existing work that can 1315 - Push/Pull support (for be extended to support 1316 policy) [ext/new?] all requirements 1318 2. VN Topology Query (Can you - New for some primitives 1319 give me VN?) (From CNC to and IEs (e.g., VN 1320 MDSC) Topology Query, VN 1321 - VN end-points (CE end) [new] Topo. Negotiation, VN 1322 - VN Topology Service-specific end-points) 1323 Multi-Cost Objective Function 1324 [ext] - Extensible for some 1325 o Latency Map IE/Objects from PCEP 1326 o Available B/W Map (e.g., Objective 1327 o Latency Map and function, etc.) 1328 Available B/W Map 1329 together 1330 o Other types 1331 - VN Topology diversity [new] 1332 o Node/Link disjoint from 1333 other VNs 1334 o VN Topology level 1335 diversity (e.g., VN1 and 1336 VN2 must be disjoint) 1337 - VN Topology type [ext] 1338 o Path vector (tunnel) 1339 o Node/Links (graph) 1341 3. VN Topology Query Response - Similar comment to #2. 1342 (From MDSC to CNC: Here's the 1343 VN Topology that can be given 1344 to you if you accept) 1345 - For VN Topology, [ext] 1346 o This is what can be 1347 reserved for you 1348 o This is what is 1349 available beyond what is 1350 given to you (potential) 1352 4. VN Topology Abstraction Model - Applicable (Generic TE 1353 (generic network model) [App] YANG model) 1355 5. 
VN Topology Abstraction Model - Extensible from generic 1356 (Service-specific model that TE Abstraction Model 1357 include customer endpoints) (TEAS WG) to include 1358 [Ext] service-related 1359 parameters and end- 1360 point abstraction 1362 6. Basic VN Instantiation - It is not completely 1363 Request/Confirmation clear if existing 1364 (Between CNC and MDSC: I need components can be 1365 VN for my service, please extended or if these 1366 instantiate my VN) require new 1367 - VN instance ID [ext] protocol/primitives/IEs 1368 - VN end-points [ext/new?] . 1369 - VN service requirement [ext] - It appears that there 1370 o Latency only is no existing proper 1371 o B/W guarantee protocol that supports 1372 o Latency and B/W all required 1373 guarantee together primitives/IEs, but 1374 - VN diversity [ext] this is subject to 1375 o Node/Link disjoint from further analysis. 1376 other VNs 1377 - VN level diversity (e.g., VN1 1378 and VN2 must be disjoint) 1379 [ext] 1380 - VN type [ext] 1381 o Path vector (tunnel) 1382 o Node/Links (graph) 1383 - VN instance ID per service 1384 (unique id to identify VNs) 1385 [ext/new?] 1386 - If failed to instantiate the 1387 requested VN, say why [ext] 1389 7. Dynamic/On-demand VN - New: dynamic policy 1390 Instantiation/Modification enforcement seems to be 1391 and Confirmation with new while abstraction 1392 feedback loop (This is to be of service-aware 1393 differentiated from Basic VN abstraction model can 1394 Instantiation) be extended from basic 1395 - Performance/Fault Monitoring TE YANG model. 1396 [ext/new?] - Note: Feedback loop 1397 - Utilization Monitoring requires very frequent 1398 (Frequency of report) [new] updates of abstracted 1399 - Abstraction of Resource topology real-time. 1400 Topology reflecting these - Current management 1401 service-related parameters interface may not be 1402 [ext/new?] 
appropriate to support 1404 - Dynamic Policy enforcement this feedback loop and 1405 [new] the real-time 1406 operation. 1407 This is related to Section 1408 4.2.4. 1410 8. VN lifecycle - This is extensible from 1411 management/operation [ext] existing LSP lifecycle 1412 - Create (same as VN management/operation. 1413 instantiate Request) 1414 - Delete 1415 - Modify 1416 - Update (VN level OAM 1417 Monitoring) under policy 1418 agreement 1420 9. Coordination of multi- - This is from Section 1421 destination service 4.2.1 and Requirement 7 1422 requirement/policy to support (above) but there are 1423 dynamic applications such as unique requirements. 1424 VM migration, disaster - New: Primitives that 1425 recovery, load balancing, allow integrated 1426 etc. network operation and 1427 - Service-policy primitives and service operation 1428 its parameters [new] - See also the 1429 corresponding MPI 1430 requirement. 1432 5.2. MPI (MDSC-PNC Interface) 1434 Requirement Notes 1435 ------------------------------ ------------------------------- 1436 1. Security/Policy negotiation - Extensible from 1437 (who are you?) PCEP/YANG 1438 - Exchange of key, etc. [ext] - End-point mobility for 1439 - Domain preference + local multi-destination 1440 policy exchange [ext] policy is new element 1441 - Push/Pull support [ext] in primitives and Data 1442 - Preferred peering points Model 1443 [ext] 1444 - Preferred route [ext] 1445 - Reroute policy [ext] 1446 - End-point mobility (for 1447 multi-destination) [new] 1449 2. Topology Query /Response - Pull Model with 1450 (Pull Model from MDSC to PNC: Customer's VN 1451 Please give me your domain requirement can be 1452 topology) extended from existing 1453 - TED Abstraction level components. 1454 negotiation [new] - Abstraction negotiation 1455 - Abstract topology (per primitive seems to be 1456 policy) [ext] new ACTN work. 1457 o Node/Link metrics 1458 o Node/Link Type 1459 (Border/Gateway, etc.) 1460 o All TE metrics (SRLG, 1461 etc.) 
1462 o Topology Metrics 1463 (latency, B/W available, 1464 etc.) 1466 3. Topology Update (Push Model - Push/Subscription can 1467 from PNC to MDSC) be extended from 1468 - Under policy agreement, existing components 1469 topology changes to be pushed (YANG) 1470 to MDSC from PNC [ext] 1472 4. VN Path Computation Request - Extensible from PCEP 1473 (From MDSC to PNC: Please 1474 give me a path in your 1475 domain) 1476 - VN Instance ID [ext] 1477 - End-point information [ext] 1478 - CE ends [ext] 1479 - Border points (if applicable) 1480 [ext] 1481 - All other PCE request info 1482 (PCEP) [ext] 1484 5. VN Path Computation Reply - Extensible from PCEP 1485 (here's the path info per 1486 your request) 1487 - Path level abstraction [ext] 1488 - LSP DB [ext] 1489 - LSP ID ?? [ext] 1490 - VN ID [ext] 1492 6. Coordination of multi-domain - New element on 1493 Centralized Signaling (MDSC centralized signaling 1494 operation) Path Setup operation for MDSC as 1495 Operation well as control-control 1496 - MDSC computes E2E path across primitives (different 1497 multi-domain (based on from NE-NE signaling 1498 abstract topology from each primitives) although 1499 PNC) [new] RSVP-TE can be extended 1500 - MDSC determines the domain to support some 1501 sequence [new/ext?] functions defined here 1502 - MDSC requests path signaling if not all. 1503 to each PNC (domain) [ext] 1504 - MDSC finds alternative path 1505 if any of the PNCs cannot 1506 find its domain path [ext] 1507 o PNC will crankback to 1508 MDSC if it cannot find 1509 its domain path 1510 o PNC will confirm to MDSC 1511 if it finds its domain 1512 path 1514 7. Path Restoration Operation - New for MDSC's central 1515 (after an E2E path is set up path restoration 1516 successfully, some domain had primitives and 1517 a failure that cannot be interaction with each 1518 restored by the PNC domain) PNC to coordinate this 1519 - The problem PNC will send real-time operation.
1520 this notification with 1521 changed abstract topology - Related to Section 4.2.5. 1522 (computed after resource 1523 changes due to failure/other 1524 factors) [ext] 1525 - MDSC will find an alternate 1526 E2E path based on the changes 1527 reported from the PNC. It will 1528 need to update the E2E 1529 abstract topology and the 1530 affected CN's VN topology in 1531 real-time (This refers to 1532 dynamic synchronization of 1533 topology from Physical 1534 topology to abstract topology 1535 to VN topology) [new/ext?] 1536 - MDSC will perform the path 1537 restoration signaling to the 1538 affected PNCs. [ext] 1540 8. Coordination of Multi- - Related to Section 1541 destination service 4.2.1. 1542 restoration operation (CNC - New for ACTN in 1543 has, for example, multiple determining the optimal 1544 endpoints where the source destination on the fly 1545 endpoint can send its data to given customer policy 1546 either one of the endpoints) and network condition 1547 - PNC reports domain problem and its related real- 1548 that cannot be resolved at time network operation 1549 MDSC level because there procedures. 1550 is no network restoration - Other operations are 1551 path to a given destination. extensible from 1552 [ext] existing mechanisms. 1553 - The MDSC then consults the 1554 customer's profile, in which it 1555 finds that the customer has a "multi- 1556 destination" application. 1557 [new] 1558 - Under policy A, MDSC will be 1559 allowed to reroute the 1560 customer traffic to one of 1561 the pre-negotiated 1562 destinations and proceed with 1563 restoration of this 1564 particular customer's 1565 traffic. [ext] 1566 - Under policy B, CNC may 1567 reroute on its VN topology 1568 level and push this to MDSC 1569 and MDSC maps this into its 1570 abstract topology and proceeds 1571 with restoration of this 1572 customer's traffic. [new] 1573 - In either case, the MDSC will 1574 proceed with its restoration 1575 operation (as explained in 1576 Req. 6) to the corresponding 1577 PNCs.
[ext] 1579 9. MDSC-PNC policy negotiation - This seems to be new to 1580 is also needed as to how ACTN. 1581 restoration is done across 1582 MDSC and PNCs. [new] 1584 10. Generic Abstract Topology - Current Generic TE YANG 1585 Update per changes due to new model applicable. 1586 path setup/connection However, the real-time 1587 failure/degradation/restorati nature of these models 1588 on [ext] with frequent update 1589 and synchronization 1590 check is new for ACTN. 1592 11. Service-specific Abstract - Extensible from generic 1593 Topology Update per changes TE Abstraction Model 1594 due to new path (TEAS WG) to include 1595 setup/connection service-related 1596 failure/degradation/restorati parameters and end- 1597 on [ext] point abstraction 1599 12. Abstraction model of - Extensible from generic 1600 technology-specific topology TE Abstraction Model 1601 element [ext] (TEAS WG) to include 1602 abstraction of 1603 technology-specific 1604 element. 1606 6. References 1608 6.1. Informative References 1610 [PCE] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path 1611 Computation Element (PCE)-Based Architecture", IETF RFC 1612 4655, August 2006. 1614 [PCE-S] Crabbe, E, et. al., "PCEP extension for stateful 1615 PCE",draft-ietf-pce-stateful-pce, work in progress. 1617 [GMPLS] Manning, E., et al., "Generalized Multi-Protocol Label 1618 Switching (GMPLS) Architecture", RFC 3945, October 2004. 1620 [NFV-AF] "Network Functions Virtualization (NFV); Architectural 1621 Framework", ETSI GS NFV 002 v1.1.1, October 2013. 1623 [ACTN-PS] Y. Lee, D. King, M. Boucadair, R. Jing, L. Contreras 1624 Murillo, "Problem Statement for Abstraction and Control of 1625 Transport Networks", draft-leeking-actn-problem-statement, 1626 work in progress. 1628 [ONF] Open Networking Foundation, "OpenFlow Switch Specification 1629 Version 1.4.0 (Wire Protocol 0x05)", October 2013. 
   [ABNO]    King, D., and A. Farrel, "A PCE-based Architecture for
             Application-based Network Operations", draft-farrkingel-
             pce-abno-architecture, work in progress.

   [VNM-OP]  Melo, M., et al., "Virtual Network Mapping - An
             Optimization Problem", Springer Berlin Heidelberg,
             January 2012.

Appendix A

   Contributors' Addresses

   Dhruv Dhody
   Huawei Technologies
   dhruv.ietf@gmail.com

   Authors' Addresses

   Daniele Ceccarelli
   Ericsson
   Torshamnsgatan 48
   Stockholm, Sweden
   Email: daniele.ceccarelli@ericsson.com

   Luyuan Fang
   Email: luyuanf@gmail.com

   Young Lee
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023, USA
   Phone: (469) 277-5838
   Email: leeyoung@huawei.com

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid, Spain
   Email: diego@tid.es

   Sergio Belotti
   Alcatel-Lucent
   Via Trento, 30
   Vimercate, Italy
   Email: sergio.belotti@alcatel-lucent.com

   Daniel King
   Lancaster University
   Email: d.king@lancaster.ac.uk

7. Appendix I: Abstracted Topology Illustration

   There are two levels of abstracted topology that need to be
   maintained and supported for ACTN. Customer-specific abstracted
   topology refers to the abstracted view of the network resources
   allocated (shared or dedicated) to the customer. The granularity
   of this abstraction varies depending on the nature of the
   customer's applications; Figure 11 illustrates this.

   Figure 10 shows how three independent customers A, B and C provide
   their respective traffic demand matrices to the MDSC. The physical
   network topology shown in Figure 10 is the provider's network
   topology, generated by the PNC topology creation engine from
   sources such as the link state database (LSDB) and the Traffic
   Engineering Database (TEDB), based on the control plane discovery
   function. This topology is internal to the PNC and not available
   to customers.
   What is available to them is an abstracted network topology (a
   virtual network topology) based on the negotiated level of
   abstraction. This is part of the VNS instantiation between a
   client control and the MDSC.

           +------+           +------+          +------+
   A.1 ----o      o-----------o      o----------o      o---- A.2
   B.1 ----o  1   |           |  2   |          |  3   |
   C.1 ----o      o-----------o      o----------o      o---- B.2
           +-o--o-+           +-o--o-+          +-o--o-+
             |  |               |  |              |  |
             |  |               |  |              |  |
             |  |               |  |              |  |
             |  |             +-o--o-+          +-o--o-+
             |  `-------------o      o----------o      o---- B.3
             |                |  4   |          |  5   |
             `----------------o      o----------o      o---- C.3
                              +-o--o-+          +------+
                                |  |
                                |  |
                               C.2 A.3

     Traffic Matrix       Traffic Matrix       Traffic Matrix
     for Customer A       for Customer B       for Customer C

         A.1  A.2  A.3        B.1  B.2  B.3        C.1  C.2  C.3
     ------------------   ------------------   ------------------
     A.1   -  20G  20G    B.1   -  40G  40G    C.1   -  20G  20G
     A.2  20G   -  10G    B.2  40G   -  20G    C.2  20G   -  10G
     A.3  20G  10G   -    B.3  40G  20G   -    C.3  20G  10G   -

   Figure 10: Physical network topology shared with multiple customers

   Figure 11 depicts illustrative examples of the different levels of
   topology abstraction that can be provided by the MDSC topology
   abstraction engine based on the physical topology base maintained
   by the PNC. The level of topology abstraction is expressed in
   terms of the number of virtual network elements (VNEs) and virtual
   links (VLs). For example, the abstracted topology for customer A
   contains 5 VNEs and 10 VLs. This is by far the most detailed
   topology abstraction, with minimal link hiding compared to the
   other abstracted topologies.
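The abstraction step described above can be sketched as a simple graph contraction. The following Python fragment is purely illustrative (the node names `N1`..`N5` and the function are hypothetical, and no ACTN protocol element is implied): it collapses all physical nodes of a domain into a single virtual node, keeping only the customer's access links, which is roughly the transformation that yields the single-VNE view shown for customer C.

```python
# Illustrative sketch: collapse a physical domain into one virtual node,
# as in the customer C abstraction (1 VNE, 3 VLs). Names are hypothetical.

def abstract_single_node(phys_links, access_points, vne_id="VNE-C"):
    """Collapse every physical node into one virtual node and keep only
    the customer's access links as virtual links (VLs)."""
    virtual_links = []
    for (end_a, end_b) in phys_links:
        # Internal provider links disappear inside the single virtual
        # node; only links touching a customer access point survive.
        if end_a in access_points:
            virtual_links.append((end_a, vne_id))
        elif end_b in access_points:
            virtual_links.append((vne_id, end_b))
    return {"vnes": [vne_id], "vls": virtual_links}

# Physical topology of Figure 10 (nodes 1-5) with customer C's accesses.
phys = [("C.1", "N1"), ("C.2", "N4"), ("C.3", "N5"),
        ("N1", "N2"), ("N2", "N3"), ("N1", "N4"),
        ("N2", "N4"), ("N3", "N5"), ("N4", "N5")]
topo = abstract_single_node(phys, access_points={"C.1", "C.2", "C.3"})
# One virtual node and three virtual links remain, matching Figure 11 (c).
```

A finer abstraction level (such as customer A's) would instead contract only some subsets of nodes, leaving more of the interior link structure visible.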
   (a) Abstracted Topology for Customer A (5 VNEs and 10 VLs)

           +------+           +------+          +------+
   A.1 ----o      o-----------o      o----------o      o---- A.2
           |  1   |           |  2   |          |  3   |
           |      |           |      |          |      |
           +-o----+           +-o----+          +-o----+
             |                  |                 |
             |                  |                 |
             |                  |                 |
             |                +-o----+          +-o--o-+
             |                |      |          |      |
             |                |  4   |          |  5   |
             `----------------o      o----------o      |
                              +----o-+          +------+
                                   |
                                   |
                                  A.3

   (b) Abstracted Topology for Customer B (3 VNEs and 6 VLs)

           +------+                             +------+
   B.1 ----o      o-----------------------------o      o---- B.2
           |  1   |                             |  3   |
           |      |                             |      |
           +-o----+                             +-o----+
              \                                   |
               \                                  |
                \                                 |
                 `-------------------             |
                                     `          +-o----+
                                      \         |      o---- B.3
                                       \        |  5   |
                                        `-------o      |
                                                +------+

   (c) Abstracted Topology for Customer C (1 VNE and 3 VLs)

           +-------------------------------------------+
           |                                           |
           |                                           |
   C.1 ----o                                           |
           |                                           |
           |                                           |
           |                                           |
           |                                           o---- C.3
           |                                           |
           +--------------------o----------------------+
                                |
                                |
                                |
                                |
                               C.2

         Figure 11: Topology Abstraction Examples for Customers

   As different customers have different control and application
   needs, the abstracted topologies for customers B and C show a much
   higher degree of abstraction. The level of abstraction is
   determined by the policy (e.g., the granularity level) in place
   for the customer and/or the path computation results from the PCE
   operated by the PNC. The more granular the abstracted topology is,
   the more control is given to the Customer Network Controller. If
   the Customer Network Controller has applications that require more
   granular control of virtual network resources, then the abstracted
   topology shown for customer A may be the right abstraction level
   for such a controller.
   For instance, if the customer is a third-party virtual service
   broker/provider, it would want much more sophisticated control of
   virtual network resources to support different application needs.
   On the other hand, if the customer only needs to provide simple
   tunnel services to its applications, then the abstracted topology
   shown for customer C (one VNE and three VLs) would suffice.
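The policy-driven choice of abstraction level discussed above can be sketched in a few lines. This Python fragment is a hypothetical illustration only: the profile keys, policy names, and thresholds are invented for the example and map a customer profile onto the three granularities of Figure 11.

```python
# Hypothetical sketch: map a customer profile to an abstraction level,
# mirroring the three granularities of Figure 11 (a), (b) and (c).

ABSTRACTION_LEVELS = {
    "fine":   "per-node topology (customer A: 5 VNEs, 10 VLs)",
    "medium": "partially collapsed (customer B: 3 VNEs, 6 VLs)",
    "coarse": "single virtual node (customer C: 1 VNE, 3 VLs)",
}

def select_abstraction(profile):
    """Return an abstraction level keyed on the customer's service needs."""
    if profile.get("role") == "virtual-service-broker":
        return "fine"     # a broker/provider wants granular VN control
    if profile.get("service") == "simple-tunnel":
        return "coarse"   # point-to-point tunnels need no interior detail
    return "medium"       # default: some path diversity visible

level = select_abstraction({"service": "simple-tunnel"})
# For a tunnel-only customer the coarse, single-VNE view of
# Figure 11 (c) is selected.
```

In a real MDSC this decision would of course also weigh the PNC's path computation results, as noted above, rather than the customer profile alone.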