Network Working Group                                           L. Geng
Internet-Draft                                              China Mobile
Intended status: Informational                                   J. Dong
Expires: January 4, 2018                                       S. Bryant
                                                            K. Makhijani
                                                     Huawei Technologies
                                                                A. Galis
                                               University College London
                                                               X. de Foy
                                                       InterDigital Inc.
                                                            S. Kuklinski
                                                                  Orange
                                                            July 3, 2017


                      Network Slicing Architecture
                 draft-geng-netslices-architecture-02

Abstract

This document defines the overall architecture of network slicing.  Based on this general architecture, the basic concepts of network slicing and examples of network slice instances are introduced for clarification purposes.  Architectural considerations for the data plane, control plane, and management and orchestration of network slicing are then described to give a general view of the implementation principles of network slicing.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 4, 2018.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
     1.2.  Terminology
   2.  Demand for Network Slicing
     2.1.  Guaranteed Service Performance
     2.2.  End-to-end Customization
     2.3.  Network Slicing as a Service
   3.  Network Slicing Architecture
     3.1.  Requirements
     3.2.  High-Level Functional Components
       3.2.1.  Service Component
       3.2.2.  Network Slicing Management and Orchestration
       3.2.3.  Resource Component
     3.3.  Network Slicing Capabilities
       3.3.1.  Recursiveness
       3.3.2.  Protection
       3.3.3.  Elasticity
       3.3.4.  Extensibility
       3.3.5.  Safety
       3.3.6.  Isolation
     3.4.  Network Slices Capability Exposure
   4.  Data Plane of Network Slicing
     4.1.  Propagation of Guarantees
     4.2.  The Underlying Physical Layer
     4.3.  Hard vs Soft Slicing in the Data Plane
     4.4.  The Role of Deterministic Networking
     4.5.  The Role of VPNs
     4.6.  Dynamic Reprovisioning
     4.7.  Non-IP Data Plane
   5.  Control Plane of Network Slicing
     5.1.  NS Infrastructure Control Plane
     5.2.  NS Infrastructure Control Operations and Protocols
     5.3.  Programmability of the NS Infrastructure Control Plane
     5.4.  Intra-Slice Control Plane
   6.  Management Plane of Network Slicing
     6.1.  Network Slice Creation - Reservation / Release Messages Flow
     6.2.  Self-Management Operations
     6.3.  Programmability of the Management Plane
     6.4.  Management Plane Slicing Protocols
   7.  Service Functions and Mappings
   8.  OAM and Telemetry
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgements
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1.  Introduction

The Internet has always been designed to support a variety of services.  The emerging 5G market is expected to bring this diversity of services to a new level [NS_WP].  Typical examples of new bandwidth-hungry services enabled by 5G include high definition (HD) video, virtual reality (VR) and augmented reality (AR).  The high bandwidth requirement of these services is not particularly challenging thanks to continuing advances in technology.  However, guaranteeing high-bandwidth performance for these services on a spontaneous, on-demand basis is fairly challenging.  Moreover, providing high bandwidth together with strict packet loss tolerances and high mobility is also difficult for current networks, which are commonly designed for best-effort delivery.

Given that most Internet protocols are designed to comply with a best-effort, or enhanced best-effort, paradigm, it is inevitable that the network will suffer performance degradation in case of congestion.  Recent work on deterministic networking (DetNet) [I-D.finn-detnet-architecture] aims to improve this situation by providing a ceiling on latency for a particular traffic flow, which significantly improves the packet error rate for specific DetNet services.  This pioneering work is a good example of new approaches being investigated to make the Internet aware of performance requirements other than bandwidth.

Looking at the network infrastructure, service providers used to build dedicated networks and resources for services requiring guaranteed performance.  This is neither cost-effective nor flexible.  The emergence of virtualization and VPN technologies makes it possible to set up logically isolated computing and network instances on shared infrastructure, which can then be dedicated to specific services for improved performance.  However, many questions remain to be answered, as different technologies in various domains need to be combined to build network slices, which may require the separation of different resources and various types of performance guarantees.

1.1.  Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

1.2.  Terminology

I.  Network Service Terms

Service - A piece of software that performs one or more functions, provides one or more APIs to applications or other services of the same or different layers to make use of those functions, and returns one or more results.  Services can be combined with other services, or called in a certain serialized manner, to create a new service.

Service Instance - An instance of an end-user service or a business service that is realized within or by a network slice.  Each service is represented by a service instance.  Services and service instances may be provided by the network operator or by third parties.

Administrative Domain - A collection of systems and networks operated by a single organization or administrative authority.
Infrastructure Domain - An administrative domain that provides virtualized infrastructure resources such as compute, network and storage, or a composition of those resources via a service abstraction to another administrative domain, and that is responsible for the management and orchestration of those resources.

II.  Network Resource Terms

Resource - A physical or virtual (network, compute, storage) component available within a system.  Resources can be very simple or fine-grained (e.g. a port or a queue) or complex, comprising multiple resources (e.g. a network device).

Logical Resource - An independently manageable partition of a physical resource, which inherits the same characteristics as the physical resource and whose capability is bound to the capability of the physical resource.

Virtual Resource - An abstraction of a physical or logical resource, which may have different characteristics from that resource, and whose capability may not be bound to the capability of that resource.

Network Function (NF) - A processing function in a network.  It includes, but is not limited to, network node functionality, e.g. session management, mobility management, switching and routing functions, and has defined functional behaviour and interfaces.  Network functions can be implemented as a network node on dedicated hardware or as virtualized software functions.  Data, control, management and orchestration plane functions are network functions.

Virtual Network Function (VNF) - A network function whose functional software is decoupled from hardware: one or more virtual machines running different software and processes on top of industry-standard high-volume servers, switches and storage, or cloud computing infrastructure, capable of implementing network functions traditionally implemented via custom hardware appliances and middleboxes (e.g. router, NAT, firewall, load balancer, etc.).

Network Element - A manageable logical entity uniting one or more network devices.  This allows distributed devices to be managed in a unified way using one management system.  It also means a facility or equipment used in the provision of a communication service.  The term also includes features, functions and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signalling systems, and information sufficient for billing and collection or used in the transmission, routing or other provision of a telecommunications service.

III.  Network Slicing Terms Used in This Draft

Resource Slice - A grouping of physical or virtual (network, compute, storage) resources.  It inherits the characteristics of those resources, and its capability is bound to the capability of those resources.  A resource slice could be one of the components of a network slice, but on its own does not fully represent a network slice.

Network Slice - A managed group of subsets of resources and network functions / network virtual functions at the data, control and management/orchestration planes, together with services, at a given time.  A network slice is programmable and has the ability to expose its capabilities.  The behaviour of the network slice is realized via network slice instance(s).

End-to-end Network Slice - A cross-domain network slice which may consist of the access network (fixed or cellular), transport network, (mobile) core network, etc.  An end-to-end network slice can be customized according to the requirements of network slice tenants.

Network Slice Instance - An activated network slice.  It is created based on a network slice template.  It is a set of managed run-time network functions, and the resources to run these network functions, forming a complete instantiated logical network that meets certain network characteristics required by the service instance(s).  It provides the network characteristics that are required by a service instance.  A network slice instance may also be shared across multiple service instances provided by the network operator.

Network Slice Provider - A network slice provider, typically a telecommunication service provider, is the owner or tenant of the network infrastructure from which network slices are created.  The network slice provider takes the responsibility of managing and orchestrating the corresponding resources of which the network slices consist.

Network Slice Terminal - A terminal that is network-slice-aware, typically subscribed to a service which is hosted within a network slice instance.  A network slice terminal may be capable of subscribing to multiple network slice instances simultaneously.

Network Slice Tenant - A network slice tenant is the user of specific network slice instances, with which specific services can be provided to end customers.  Network slice tenants can request the creation of new network slice instances.  A certain level of management capability should be exposed to the network slice tenant by the network slice provider.

Network Slice Repository - A repository in each domain that contains a list of active network slices with their identifiers and descriptions.  This description also defines the rules that have to be fulfilled in order to access a slice.  The network slice repository is updated by the slice orchestrator.  In the case of recursive slicing, the network slice repository keeps information about all slices that compose a higher-level slice; such a composed slice has its own identifier and descriptors.
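The relationships between the resource and slice terms defined above can be summarized in a small, informal data model.  The following sketch is for clarification only; the class and attribute names are hypothetical and are not defined by this document.

   # Illustrative only: hypothetical names mirroring the terminology above.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class Resource:
       """A physical or virtual (network, compute, storage) component."""
       name: str
       kind: str                  # e.g. "network", "compute", "storage"

   @dataclass
   class NetworkFunction:
       """A processing function in a network (physical or virtualized)."""
       name: str
       virtualized: bool = False

   @dataclass
   class ResourceSlice:
       """A grouping of resources; one possible component of a network slice."""
       resources: List[Resource] = field(default_factory=list)

   @dataclass
   class NetworkSlice:
       """A managed group of resource subsets and (virtual) network functions
       at the data, control and management/orchestration planes."""
       name: str
       resource_slices: List[ResourceSlice] = field(default_factory=list)
       network_functions: List[NetworkFunction] = field(default_factory=list)

   @dataclass
   class NetworkSliceInstance:
       """An activated network slice, created from a template, providing the
       network characteristics required by one or more service instances."""
       slice: NetworkSlice
       service_instances: List[str] = field(default_factory=list)

As noted in the definitions above, a single network slice instance may be shared by several service instances, which is why the instance carries a list of service instances rather than a single one.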
2.  Demand for Network Slicing

It is expected that a diversity of new services will emerge in both mobile/5G and fixed networks.  [I-D.qin-netslices-use-cases] describes many of these differentiated services (e.g. smart home, industrial control, remote healthcare, Vehicle-to-Everything (V2X), etc.) and their relevance to network slicing.  These use cases are typical examples of service verticals requiring features beyond connectivity, such as ultra-reliable low latency, high bandwidth and isolation.

2.1.  Guaranteed Service Performance

One of the most challenging requirements for future networks is to provide guaranteed performance for a variety of new services whilst maintaining the economies of scale that accrue through resource sharing.  It is foreseen that the requirements of different services will be diversified and complex.

Network slicing can deal with these challenges by mapping the performance requirements to physically or logically dedicated resources.

2.2.  End-to-end Customization

Customization is another significant feature of future services.
Many vertical industries are expected to offer customization capabilities as a service, to both internal manufacturing processes and specific end users.  Meanwhile, these customized services need to be deployed with a short time-to-market.  The network needs to adapt to this challenge, since customers may frequently adjust and refine their customization requirements.

There is ongoing work, such as network orchestration, software defined networking and network function virtualization, that aims to address this problem.  In principle, these new technologies share a common requirement: the network must be able to provide agile resource allocation.

2.3.  Network Slicing as a Service

It is anticipated that the operation of 5G and future networks will involve new business models.  Given that the network is more flexible, elastic, modularized and customizable, the shared network infrastructure can be sliced and offered as a service to the customer.  For instance, dedicated, isolated, end-to-end network resources with a customized topology can be provided as a network slice service to the tenant of this network slice.  The tenants are allowed a certain level of provisioning of their network slices.

3.  Network Slicing Architecture

This section introduces the general system architecture of network slicing.

3.1.  Requirements

To meet the diversified Quality of Experience (QoE) demands of different vertical industries, the gap analysis document has identified the following requirements:

   o  Req.1  Network Slicing Resource Specification

   o  Req.2  Cross-Network Segment; Cross-Domain Negotiation

   o  Req.3  Guaranteed Slice Performance and Isolation

   o  Req.4  Slice Discovery and Identification

   o  Req.5  NS Domain-Abstraction

   o  Req.6  OAM Operations with Customized Granularity

In the following sections, these requirements are addressed and associated with different aspects of the network slicing architecture.

3.2.  High-Level Functional Components

End-to-end network slicing is a broad area and comprises several functional components.  In the context of the distribution of roles and responsibilities, a network slicing system consists of the components shown in Figure 1.  In the figure, two network slice instances are created from the shared network infrastructure.  The network slicing subnets (NS Subnets) represent, for demonstration purposes, any general physical and logical network resources.  The two network slice instances share the computing, connectivity and storage resources, whether these are in physical or virtual form.

It is fundamental to network slicing that slices may be created, their topology and/or resources modified, and the slices decommissioned, in a timely manner with minimum work by the network slicing provider or the customer.  This is not, however, unique to network slicing; it is also a goal of modern classical networks.

The descriptions of the functional components are introduced in the following sections.
   +------------------------------------------------------------------+
   |                        Service Component                          |
   +------------------------------------------------------------------+
   +------------------------------------------------------------------+
   |            Network Slice Management and Orchestration             |
   |   +---------------+   +-----------+   +----------------+          |
   |   |   Template    |   |    NS     |   |Life cycle Mngt.|          |
   |   |  Management   |   |Repository |   |and monitoring  |          |
   |   +---------------+   +-----------+   +----------------+          |
   |   +-------------++---------------++---------++----------+         |
   |   |     E2E     ||    Domain     ||   NS    || Resource |         |
   |   |Orchestration|| Orchestration || Manager || Registrar|         |
   |   +-------------++---------------++---------++----------+         |
   +------------------------------------------------------------------+
   +------------------------------------------------------------------+
   |                        Resource Component                         |
   |   +---+      +---+      +---+      +---+                          |
   |   |NE1+----+ |NE3+----+ |NE5+------+NE6|                          |
   |   +---+    | +-+-+    | +-+-+      +---+                          |
   |   +-+-+    |   |      |   |                                       |
   |   |NE2+----+ +-+-+    |   |                                       |
   |   +-+-+      |NE4+----+   |                                       |
   |     |        +-+-+        |                                       |
   |     +---------------------+                                       |
   +------------------------------------------------------------------+
                                    |
                                    |
                                    \/
   +------------------------------------------------------------------+
   |                 Created Network Slice Instance                    |
   | +--------------------------------------------------------------+ |
   | |  +-----------+      +-----------+  +-----------+    +-----+  | |
   | |  |NS-Subnet 1+----+ |NS-Subnet 3|  |NS-Subnet 6+----+ SBC |  | |
   | |  +-----------+    | +-----------+  +-----------+    +-----+  | |
   | |                   |       |              |                   | |
   | |  +-----------+    | +-----------+        |                   | |
   | |  |NS-Subnet 2+----+ |NS-Subnet 4|        |                   | |
   | |  +-----------+      +-----------+    Network Slice           | |
   | |                           |           Instance 1             | |
   | |                           +--------------+                   | |
   | +--------------------------------------------------------------+ |
   | +--------------------------------------------------------------+ |
   | |                                            Network Slice      | |
   | |                                             Instance N        | |
   | +--------------------------------------------------------------+ |
   +------------------------------------------------------------------+

                  Figure 1: Network Slicing Architecture

3.2.1.  Service Component

A service represents an end-user's business logic.  It is realized within or by a network slice instance.  A service may demand a set of network resources and attributes in the form of a network slice.  A service is mapped either to a single network slice instance or to an ordered chain of network slice instances.

3.2.2.  Network Slicing Management and Orchestration

As seen in Figure 1, the management and orchestration layer of the network slicing system consists of the following functional components.

1.  Template Management

A network slice template consists of a complete description of the structure and configuration of a network slice, together with the plans/workflows for how to instantiate and control the network slice instance during its life cycle.

2.  NS Repository

To provide a mechanism that allows the end-user to select and attach to a slice instance, or, if required, to multiple slice instances at the same time, an NS repository (or repositories) is needed, in which slices are stored together with the description of their properties and access rules.

The service component should have access to such a repository in order to check whether the required slice exists.  If such a slice does not exist, a matching procedure should allow the attachment of the service to a slice whose properties are the most similar to those of the requested slice (under policies agreed between the network slice provider and the tenant).  Optionally, the service may trigger the deployment of a new slice.  During the attachment of the service component to a slice, the slice data forwarding mechanisms are configured in a way that redirects a selected part of the end-user traffic to the slice.
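The selection procedure described above can be sketched informally as follows.  This is an illustration rather than a normative algorithm; the repository representation, the similarity scoring and the policy threshold are all hypothetical.

   # Illustrative sketch of slice selection against an NS repository.
   # Names and scoring are hypothetical, not defined by this document.
   def select_slice(repository, required_properties, policy_threshold=0.8):
       """Return an existing slice that satisfies the request, the most
       similar slice allowed by policy, or None (the caller may then
       trigger the deployment of a new slice)."""
       # 1. Exact match: every required property is satisfied.
       for ns in repository:
           if all(ns["properties"].get(k) == v
                  for k, v in required_properties.items()):
               return ns

       # 2. Best match: the most similar slice, accepted only if it meets
       #    the threshold agreed between provider and tenant.
       def similarity(ns):
           matched = sum(1 for k, v in required_properties.items()
                         if ns["properties"].get(k) == v)
           return matched / len(required_properties)

       best = max(repository, key=similarity, default=None)
       if best is not None and similarity(best) >= policy_threshold:
           return best

       # 3. No acceptable slice: the service may trigger a new deployment.
       return None

   repository = [
       {"id": "slice-a", "properties": {"latency": "low", "isolation": "hard"}},
       {"id": "slice-b", "properties": {"latency": "low", "isolation": "soft"}},
   ]
   print(select_slice(repository, {"latency": "low", "isolation": "hard"})["id"])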
3.  Life cycle management and monitoring

Network slicing enables the operator to create logically partitioned networks, at a given time, customized to provide optimized services for different market scenarios.  These scenarios have diverse requirements in terms of service characteristics, required customized network and virtual network functionality (at the data, control and management planes), required network resources, performance, isolation, elasticity and QoS.  A network slice is created with only the necessary network functions and network resources at a given time.  They are gathered from the complete set of resources and network/virtual network functions and are orchestrated for the particular services and purposes.

A network slice is a dynamic entity, and therefore its lifecycle has to be managed.  The network slice lifecycle (creation, update, deletion) is managed by the network slice orchestrator.  The slice orchestrator creates a new slice instance according to requests that can be sent by the orchestrator operator, third parties or even end-users.  The new instance is based on a slice template stored in the slice template repository, while also taking into account the slice operator (owner) preferences (policies).

4.  E2E Orchestration

This section describes E2E slice orchestration and its functionality.  Orchestration refers to the system functions in a domain that automatically and autonomically coordinate the network functions in slices, the slice lifecycle and all the components that are part of the slice (i.e. service instances, network slice instances, resources, capability exposure) to ensure an optimized allocation of the necessary resources across the network.  The main functionality of E2E slice orchestration may include the following aspects:

   (1)  Coordinate a number of interrelated resources, often distributed across a number of subordinate domains, and assure transactional integrity as part of the process.

   (2)  Autonomically control slice lifecycle management, including the concatenation of slices in each segment of the infrastructure, i.e. the data plane, the control plane and the management plane.

   (3)  Autonomically coordinate and trigger slice elasticity and the placement of logical resources in slices.

   (4)  Coordinate and (re-)configure logical resources in the slice by taking over the control of all the virtualized network functions assigned to the slice.

E2E slice orchestration is the continuous process of allocating resources to satisfy contending demands in an optimal manner.  The notion of optimization includes at least prioritized SLA commitments, and factors such as customer endpoint location, geographic or topological proximity, delay, aggregate or fine-grained load, monetary cost, fate-sharing or affinity.
The word "continuous" incorporates the recognition that the environment and the service demands constantly change over time, so that orchestration is a continuous, multi-dimensional optimization feedback loop.  E2E slice orchestration should have the following characteristics:

   o  It protects the infrastructure from instabilities and side effects due to the presence of many slice components running in parallel.

   o  It ensures the proper triggering sequence of slice functionality and its stable operation.

   o  It defines the conditions/constraints under which service components will be activated, taking into account operator service and network requirements (including optimizing the use of the available network and compute resources, and avoiding situations that can lead to sub-par performance or even unstable and oscillatory behaviours).

   +---------------------------------------------------+
   |               E2E Slice Orchestration              |
   +---------------------------------------------------+
        |                |                     |
   +-----------+    +-----------+         +-----------+
   |  Network  |    |  Network  |         |  Network  |
   |  Slice 1  |    |  Slice 2  |         |  Slice N  |
   |  NS Mngr  |----|  NS Mngr  |-- ... --|  NS Mngr  |
   +-----------+    +-----------+         +-----------+
        |                |                     |
   +---------------------------------------------------+
   |           Resources / Network Functions            |
   +---------------------------------------------------+

              Figure 2: E2E Slice Orchestration

5.  Domain Orchestration

Another value that network slicing brings is the fast, automated and dynamic deployment of services in an end-to-end manner, even in a heterogeneous environment.  In order to achieve that goal, the problem of providing a slice that spans multiple domains has to be solved.  There are two possible solutions.  The first relies on the appropriate allocation of resources in each domain (i.e. the creation of resource slice instances), their aggregation, and the use of a single orchestrator to deploy the slice.  The other possibility is to use per-domain orchestrators with domain-specific templates and to chain the domain slices in order to obtain the end-to-end slice.  In that case the orchestration is hierarchical, i.e. domain orchestration is driven by a high-level orchestrator that interacts with the orchestrators of all domains involved in the creation of the end-to-end slice instance.  A slice that is composed of multiple domain-level slices requires specific mechanisms for inter-slice operations, such as topology information exchange and/or appropriate protocol conversion/adaptation.

This approach may lead to recursive slicing (or sub-slicing), in which higher-level slice instances are composed of lower-level ones.  The creation of an end-to-end slice composed of several slices may require a specific description of such a slice and changes to the functions of the domain slices.  For example, traffic redirection may be implemented only in the domain slice that is the ingress slice.
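The hierarchical, multi-domain option can be sketched as follows.  The orchestrator classes and the stitching step are hypothetical and are shown only to illustrate how domain slices are chained into an end-to-end slice by a high-level orchestrator.

   # Illustrative sketch of hierarchical (per-domain) slice orchestration.
   # Class and method names are hypothetical, not defined by this document.
   class DomainOrchestrator:
       """Hypothetical per-domain orchestrator using a domain-specific template."""
       def __init__(self, domain):
           self.domain = domain

       def create_domain_slice(self, template):
           # A real system would allocate domain resources and NFs here.
           return {"domain": self.domain, "slice": f"{self.domain}-{template}"}

   class E2EOrchestrator:
       """Hypothetical high-level orchestrator driving all involved domains."""
       def __init__(self, domain_orchestrators):
           self.domains = domain_orchestrators

       def create_e2e_slice(self, template):
           # Request one slice per domain, then chain them together.
           segments = [d.create_domain_slice(template) for d in self.domains]
           return self._stitch(segments)

       @staticmethod
       def _stitch(segments):
           # Inter-slice operations (topology exchange, protocol adaptation)
           # are abstracted away in this sketch.
           return {"e2e_slice": [s["slice"] for s in segments]}

   e2e = E2EOrchestrator([DomainOrchestrator(d)
                          for d in ("access", "transport", "core")])
   print(e2e.create_e2e_slice("urllc-template"))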
6.  NS Manager

The NS Manager is the management entity for a specific network slice instance.  It manages all access permissions and all interaction between a network slice and external functions (i.e. other network slices, orchestrators, etc.).  Each NS Manager maps requirements from the orchestrator onto network resources and manages the resources of its specific network slice instance.  The NS Manager:

   o  Allows third parties to access, via APIs, information regarding the services provided by the slice (e.g. connectivity information, QoS, mobility, autonomicity, etc.).

   o  Allows dynamic customization of the network characteristics for different and diverse use cases, within the limits of the set of functions exposed by the operator.  Network slicing enables the operator to create networks customized to provide flexible solutions for different market scenarios with diverse requirements with respect to functionality, performance and resource separation.

   o  Includes a description of the structure (and contained components) and configuration of the slice instance.

7.  Resource Registration

The resource registration component manages the exposed capabilities of the network infrastructure.  A detailed description is TBD.

3.2.3.  Resource Component

The resource component includes physical, logical and virtual resources (defined in Section 1.2).  An abstraction of resources is required in order to consistently map requirements such as latency, reliability and bandwidth.  The resource component may need interfaces to elements of the network slice management and orchestration component, as well as to the NS Manager, for the purpose of discovering capabilities.

3.3.  Network Slicing Capabilities

3.3.1.  Recursiveness

Recursion is a property of some functional blocks: a larger functional block can be created by aggregating a number of smaller functional blocks and interconnecting them with a specific topology.  As such, one can summarize the concept of recursive network slice definition as the ability to build a new network slice out of existing network slice(s).  A certain resource or network function / virtual network function could scale recursively, meaning that a certain pattern could replace part of itself.  This leads to a more elastic network slice definition, where a network slice template, describing the functionality, can be filled in by a specific pattern or implementation, depending on the required performance, the required QoS or the available infrastructure.  If a certain part of a network slice can be replaced by different patterns, this can offer several advantages (an illustrative sketch of recursive composition follows the list):

   o  Each pattern might have its own capabilities in terms of performance.  Depending on the required workload, a network function / virtual network function might be replaced by a pattern able to process at higher performance.  Similarly, a service or network function / virtual network function can be decomposed so that it can be deployed on the available infrastructure.

   o  From an orchestration point of view, this way of using recursive network slice templates can be beneficial for the placement algorithm used by the orchestrator.  The success rate, solution quality and/or runtime of such an embedding algorithm benefits from information on both the possible scaling or decomposition topologies and the available infrastructure.

   o  It enables methods for network slice template segmentation, allowing a slicing hierarchy with parent-child relationships.
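The parent-child relationship mentioned above can be illustrated with a minimal sketch in which a slice component is either an atomic network function or a sub-slice built from further components.  The representation is hypothetical and is not a data model defined by this document.

   # Illustrative sketch of recursive network slice composition.
   # Names are hypothetical; shown only to illustrate parent-child slices.
   class SliceComponent:
       """Either an atomic (virtual) network function or a sub-slice."""
       def __init__(self, name, children=None):
           self.name = name
           self.children = children or []    # empty => atomic NF/VNF

       def flatten(self):
           """Return all atomic functions reachable through sub-slices."""
           if not self.children:
               return [self.name]
           functions = []
           for child in self.children:
               functions.extend(child.flatten())
           return functions

   # A higher-level slice built out of an existing (sub-)slice.
   core_subslice = SliceComponent("core-subslice",
                                  [SliceComponent("session-mgmt"),
                                   SliceComponent("mobility-mgmt")])
   parent_slice = SliceComponent("e2e-slice",
                                 [SliceComponent("access-nf"), core_subslice])
   print(parent_slice.flatten())
   # ['access-nf', 'session-mgmt', 'mobility-mgmt']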
3.3.2.  Protection

Protection refers to the capabilities and mechanisms that ensure that events within one network slice, such as congestion, do not have a negative impact on another slice.

3.3.3.  Elasticity

Elasticity refers to the capabilities, mechanisms and triggers for the growth/shrinkage of network resources, and/or network and service functions, in a network slice as a function of service needs.

3.3.4.  Extensibility

Extensibility refers to the ability to expand a network slice with additional functionality and/or characteristics, or through the modification of existing network functions / virtual network functions, while minimizing the impact on existing functions.

3.3.5.  Safety

Safety refers to the condition, within one network slice, of being protected against the different types and consequences of failure, error, harm or any other event which could be considered undesirable in another network slice.

3.3.6.  Isolation

Efficient slice creation is expected to guarantee isolation and non-interference between network slices in the data/control/management planes, as well as safety and security for multi-tenancy in slices.

3.4.  Network Slices Capability Exposure

An important value of network slicing is the ability of a slice to be tightly coupled with services, i.e. a slice instance can be designed in such a way that it supports one specific service or a limited number of services only, rather than all services in the same slice.  This property means that not only are the slice data plane operations properly tuned, but the control plane can also be designed according to the requirements of the slice-specific services.  In general it is possible for a single slice instance to support a single service only; however, it is more scalable to provide more than a single service per slice.  Such an approach has important implications.  First of all, in order to add services to a slice, each slice should expose its functions to services/applications.  Moreover, service lifecycle management is different from slice lifecycle management.  This is similar to existing networks; in contrast to them, however, the deployment of a new service may lead to significant reconfiguration of the slice to which the service is attached (the slice is programmable, which means going beyond the API approach: the service templates are merged with the slice template).  The goal is to have services tightly coupled with networks, providing joint optimization of networks and services at a level that is impossible to achieve in present, hardware-based solutions.

4.  Data Plane of Network Slicing

In the network slicing architecture, the data plane in the edge and core of the network will likely be one or more of the standard IETF data planes: IPv4/IPv6, MPLS or pseudowires (PW).  This section assumes that the IETF protocol stack exists as-is, and describes the performance considerations in the different layers of the data plane.

4.1.  Propagation of Guarantees

Guarantees of delay start at the physical layer and propagate up the stack layer by layer.  Any layer can add delay, and can take various steps to minimize the impact of delay at its layer, but no layer can reduce the delay introduced by a lower layer.

Guarantees of loss and jitter can, by contrast, be upheld or improved at any layer of the protocol stack, but usually at the cost of increased delay.
Where delay is a constraint, as it is in some 5G applications, the option of trading delay for better loss or jitter characteristics is not available.  In these circumstances it is critical that the quality characteristics start at the physical layer and be maintained at each layer of the protocol stack.

4.2.  The Underlying Physical Layer

A point-to-point dedicated physical channel provides delay, jitter and loss characteristics limited only by the media itself.  However, it does not fulfil the need for rapid reconfiguration of the network to provision new services.

To address the need to provision a slice of the data plane, one approach that can be deployed is to time-slice access to the physical service.  Setting aside many of the classic TDM offerings as being too slow, a number of technologies are available that might be applied, including OTN and FlexE.  Whilst the provisioning of the channel provided by underlays such as FlexE, and the interconnection of FlexE channels, is within the scope of this architecture, the operation of the underlay itself is outside its scope.

The logical sub-division of a physical channel, whether it is a single channel with the full bandwidth available or a channel multiplexed at the physical layer such as is provided by FlexE, is considered in the following section.

4.3.  Hard vs Soft Slicing in the Data Plane

Hard slicing refers to the provision of resources in such a way that they are dedicated to a specific NSI.  Such data-plane resources are provided through the allocation of a lambda, through the allocation of a time-division multiplexed resource such as a FlexE channel, or through a service such as an MPLS hard pipe.  Note that although hard pipes can be used to allocate dedicated, non-shared resources to an NSI, the unit of allocation is bandwidth, which can result in more "lumpiness" in the physical channel than would be present with a true physical-layer multiplexing scheme.

Soft slicing refers to the provision of resources in such a way that, whilst the slices are separated such that they cannot statically interfere with each other (one cannot receive the other's packets, or observe or interfere with the other's storage), they can interact dynamically (one may find the other is sending a packet just when it wants to, or the other may be using CPU cycles just when it needs to process some information), which means they may compete for a particular resource at a specific time.  Soft slicing is achieved by logically multiplexing the data plane over a physical channel, including various types of tunnel (IP or MPLS) or various types of pseudowire (again, IP or MPLS).  Although the design of deterministic networking techniques helps, it is not possible to achieve the same degree of isolation with these techniques as with pure physical-layer multiplexing.  However, where such techniques provide sufficient isolation, their use leads to a network design that may be deployed on existing equipment and that can make unused bandwidth available to best-effort traffic.

4.4.  The Role of Deterministic Networking

Deterministic networking is a technology under development in the IETF that aims both to minimize congestion loss and to set an upper bound on per-hop latency.  It allows a packet layer to emulate the behaviour of a fully partitioned underlay, such as might be provided by a physical-layer multiplexing system like FlexE.

Deterministic networking works by policing the ingress rate of a flow to an agreed maximum and then scheduling the transmission time of each flow to reduce the "lumpiness", and hence the possible build-up of queues, and hence congestion loss.

Whilst deterministic networking is not as perfect as physical-layer multiplexing in terms of latency minimization, because the scheduling is hop-by-hop rather than end-to-end (meaning that at each hop a packet has to wait for the transmission slot allocated to its flow), it has the advantage that it is able to allocate slots not needed by the scheduled traffic to best-effort traffic.  This reallocation of unused transmission slots to background traffic significantly improves the efficiency of the network by amortizing the cost between the scheduled high-priority users and the best-effort users.
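As a much-simplified illustration of the ingress policing step described above (and not a description of the DetNet specification itself), a per-flow token-bucket policer can be sketched as follows; the rates and burst sizes are arbitrary.

   # Illustrative per-flow token-bucket ingress policer (simplified).
   # This is a sketch only; DetNet mechanisms are considerably richer.
   class FlowPolicer:
       def __init__(self, rate_bps, bucket_bytes):
           self.rate = rate_bps / 8.0      # refill rate in bytes per second
           self.bucket = bucket_bytes      # maximum burst, in bytes
           self.tokens = bucket_bytes
           self.last = 0.0

       def admit(self, packet_bytes, now):
           """Admit the packet if the flow stays within its agreed maximum."""
           # Refill tokens for the elapsed time, capped at the bucket size.
           self.tokens = min(self.bucket,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if packet_bytes <= self.tokens:
               self.tokens -= packet_bytes
               return True     # in profile: hand over to the per-flow scheduler
           return False        # out of profile: drop or mark

   policer = FlowPolicer(rate_bps=1_000_000, bucket_bytes=3_000)
   print([policer.admit(1_500, t) for t in (0.0, 0.001, 0.002, 0.014)])

Packets admitted by such a policer would then be placed into per-flow transmission slots by the scheduler, which is where the bound on per-hop latency comes from.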
4.5.  The Role of VPNs

VPNs are considered candidate technologies for network slicing.  The existing VPN technologies mainly focus on the isolation of forwarding tables between different tenants and provide a virtual topology for the connectivity between different sites of a tenant.  The VPN layer and the underlying network resources are usually loosely coupled, and statistical multiplexing is adopted to improve network utilization.

Although VPNs have been widely used to provide enterprise services in service provider networks, it is unclear whether VPNs, along with the existing underlying tunnel technologies, can meet the performance and isolation requirements of critical services in the vertical industries.

4.6.  Dynamic Reprovisioning

A requirement of the network slicing system is that it can be dynamically and non-disruptively reprovisioned.  That is not an unusual requirement of a modern network.  However, the frequency of reprovisioning with network slicing will be relatively high, such that in many cases it is not possible to hide any disruption within a "quiet" time.

Physical multiplexing methods such as FlexE have the ability to seamlessly reprovision multiplex slots.  At the network layer, techniques such as make-before-break, segment routing and loop-free convergence can be used to provide uninterrupted operation during a topology change.

4.7.  Non-IP Data Plane

A non-IP data plane in support of Information-Centric Networking (ICN), some of the IoT services and other similar requirements will be addressed in a future version of this document.

5.  Control Plane of Network Slicing

There are two control plane systems that need to be considered.  The first is the control plane of the slicing infrastructure itself (the NS infrastructure control plane); the second is the control plane of an individual slice (the intra-slice control plane).

5.1.  NS Infrastructure Control Plane

The NS infrastructure control plane receives instructions to create a network slice with particular requirements from the orchestration layer.  It then creates the network slice by allocating a set of network resources in the corresponding network infrastructure.  This set of network resources is associated with the network slice during this operation.
The NS infrastructure control plane is also responsible, with the support of the orchestration layer, for dynamically adjusting the network according to slice change requests (e.g. from slice tenants) and to changes in the network infrastructure.  As it is critical to meet the service requirements of a network slice independently of activity and changes occurring in other network slices or in the infrastructure, appropriate service assurance mechanisms should be deployed in the network.  The control plane, with the support of the orchestration layer, MUST be able to react within a pre-determined (possibly system-specific) time to network events, such as resource addition and failure.  The orchestration layer SHOULD be involved, directly or indirectly, in taking reactive decisions, e.g. re-routing a flow, to ensure that other network slices are not affected.  Indirect involvement includes, for example, reactive programming by the orchestration layer to address foreseeable events, or cases where the connection to the orchestration layer is lost.

The NS infrastructure control plane can be implemented as an extension of the Virtualized Infrastructure Manager (VIM), in cases where the NFV-MANO architecture is used for the management and control architecture of the system.  Note that the VNF Manager is considered part of the management plane, not the control plane.  From a technology standpoint, the NS infrastructure control plane can be an extension of cloud infrastructure technology (e.g. OpenStack), which itself can integrate SDN technology for network control.  This logically centralized control can be supplemented or replaced with distributed control protocols, which can provide benefits in scenarios that require fast reaction, robustness and efficient information distribution.  A hybrid architecture is anticipated, in which distributed protocols complement and simplify a centralized control system.

5.2.  NS Infrastructure Control Operations and Protocols

The following operations should be supported (an illustrative sketch of such a control interface follows the list).  Different control protocols can be used to control different types of resources, and multiple control protocols can be supported simultaneously.

   o  Setting up or tearing down network function instances within a slice; setting, increasing or decreasing the compute capacity of NFs.

      *  Control protocols can be based on OpenStack APIs and other cloud infrastructure control protocols.

   o  Setting up or tearing down, and increasing or decreasing the capacity of, connectivity between network function instances within a slice, e.g. as an L2/L3 virtual network or a software function chain.

      *  Control protocols can include NVO3 control protocols, SFC control protocols and NETCONF.

   o  Reservation/release of traffic flows within a slice, possibly with associated QoS and routing requirements.

      *  Control protocols can include DetNet, MPLS-TE, etc.

   o  Interconnecting slices or slice flows, including across domains.

      *  Control protocols are TBD.
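The operation groups listed above can be summarized as an abstract control interface.  The method names and parameters below are hypothetical and serve only to illustrate the grouping of operations; they do not correspond to any existing API.

   # Illustrative abstract interface for the NS infrastructure control plane.
   # All names are hypothetical; no existing protocol or API is implied.
   from abc import ABC, abstractmethod

   class NSInfrastructureControl(ABC):
       """Hypothetical northbound operations mirroring the groups above."""

       @abstractmethod
       def create_nf_instance(self, slice_id, nf_type, compute_capacity):
           """Set up a network function instance within a slice (e.g. via
           cloud infrastructure control protocols)."""

       @abstractmethod
       def scale_nf_instance(self, slice_id, nf_id, compute_capacity):
           """Increase or decrease the compute capacity of an NF."""

       @abstractmethod
       def connect_nf_instances(self, slice_id, nf_ids, capacity):
           """Set up or resize connectivity between NF instances, e.g. as an
           L2/L3 virtual network or a function chain (NVO3, SFC, NETCONF
           are candidate protocols)."""

       @abstractmethod
       def reserve_flow(self, slice_id, flow_spec, qos=None):
           """Reserve a traffic flow within a slice, possibly with QoS and
           routing requirements (DetNet, MPLS-TE are candidate protocols)."""

       @abstractmethod
       def interconnect_slices(self, slice_id_a, slice_id_b):
           """Interconnect slices or slice flows, possibly across domains."""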
5.3.  Programmability of the NS Infrastructure Control Plane

The NS infrastructure control plane exposes a northbound API, typically for use by the orchestration layer.  A higher-than-physical level of abstraction can be used, enabling the manipulation of a logical network that is translated down to physical resource manipulation by the NS infrastructure control plane.  The level of this abstraction, and of its associated logical network, is TBD.  Programmability should include programming reactions to events, which reduces the dynamic involvement of the orchestration layer, and therefore the reaction time to events.

5.4.  Intra-Slice Control Plane

The intra-slice control plane maintains proper connectivity and networking characteristics within the slice.  The full range of existing control plane technologies needs to be permissible.  Intra-slice control plane technologies can include existing IGP protocols (such as IS-IS or OSPF), BGP, and overlay control (such as NVO3 or SFC).  Some slices may be controlled by their own SDN controllers.  The intra-slice control plane can span multiple domains (while the NS infrastructure control plane deals with slice interconnection).

6.  Management Plane of Network Slicing

It is expected that the management and orchestration layer will use state-of-the-art management technologies to support a short time-to-market and to help operators build an open ecosystem for new services in vertical industries.  In a multi-tenant environment, slice tenants can trigger the creation of slice instances by interacting with the E2E orchestrator.  After the creation of the slice, the slice tenant is able to monitor slice KPIs (performance, faults) and send slice reconfiguration requests to the E2E orchestrator.

The basic functional architecture of the management and orchestration layer of the network slicing system has been discussed in Section 3.  This section introduces some further essential characteristics.

6.1.  Network Slice Creation - Reservation / Release Messages Flow

The establishment of network slices is both business-driven (i.e. slices are in support of different service types, service characteristics and business cases) and technology-driven, as a network slice is a grouping of physical or virtual resources (network, compute, storage) and a grouping of network functions and virtual network functions (at the data, control and management planes) which can act as a sub-network at a given time.  A network slice can accommodate service components and network functions (physical or virtual) in all network segments: access, core and edge / enterprise networks.

The management plane creates the grouping of network resources (physical, virtual or a combination thereof), connects them with the physical and virtual network and service functions, and instantiates all of the network and service functions assigned to the slice.

Once a network slice is created, the slice control plane takes over the control, slice operations and governance of all the network resources, network functions and service functions assigned to the slice.  It (re-)configures them as appropriate, and as per elasticity needs, in order to provide an end-to-end service.  In particular, ingress routers are configured so that the appropriate traffic is bound to the relevant slice.  The identification means for the traffic may be simple (relying on a subset of the transport coordinates, the DSCP/traffic class, or the flow label), or may be more sophisticated.  Also, the traffic capacity that is specified for a slice can be changed dynamically, based on certain events (e.g. triggered by a service request).  The slice control plane is responsible for instructing the involved elements to guarantee such needs.
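The simple traffic identification mentioned above can be sketched as an ingress classifier that maps a small set of packet fields to a slice.  The field names and the example rules are hypothetical and for illustration only.

   # Illustrative ingress classifier binding traffic to a network slice.
   # Field names, rule format and slice identifiers are hypothetical.
   def classify(packet, rules):
       """Return the slice bound to this packet, or None for default handling.

       'rules' maps a slice identifier to the subset of fields (e.g. DSCP,
       flow label, destination prefix) that must match."""
       for slice_id, match in rules.items():
           if all(packet.get(field) == value for field, value in match.items()):
               return slice_id
       return None

   rules = {
       "nsi-urllc": {"dscp": 46},              # match on DSCP only
       "nsi-video": {"flow_label": 0x12345},   # match on IPv6 flow label
   }
   print(classify({"dscp": 46, "src": "192.0.2.1"}, rules))   # -> nsi-urllc
   print(classify({"flow_label": 0x12345}, rules))            # -> nsi-video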
Figure 3 illustrates the resulting reservation / release message flow between the entities involved.

   Inter Network Slice      Slice            Element          Network
      Orchestrator         Manager           Manager          Function
          |                   |                  |                |
          |    Discovery -    |    Discovery -   |   Discovery    |
          |    Response       |    Response      |   Response     |
          |<----------------->|<---------------->|<-------------->|
          |                   |                  |                |
          |  Request          |                  |                |
          |  Net Slice        |                  |                |
          |------------------>|  Request         |                |
          |                   |  Net Slice       |                |
          |                   |----------------->|  Request       |
          |                   |                  |  Net Slice     |
          |                   |                  |--------------->|
          |                   | Confirm-Waiting  |                |
          |                   |<-----------------|                |
          |                   |                  |  Negotiation   |
          |                   |                  |(Single/Multiple|
          |                   |                  |    Rounds)     |
          |                   |                  |<-------------->|
          | Confirm-Waiting   |                  |                |
          |<------------------|                  |                |
          |                   |   Negotiation    |                |
          |                   | (Single/Multiple |                |
          |                   |     Rounds)      |                |
          |                   |<---------------->|                |
          |   Negotiation     |                  |                |
          | (Single/Multiple  |                  |                |
          |     Rounds)       |                  |                |
          |<----------------->|                  |                |

     Figure 3: Network Slice Reservation / Release Messages Flow

6.2.  Self-Management Operations

Self-management operations focus on the self-optimization and self-healing of network slice instances (including the management of intra-slice functions), of network slice instance services, and of the resources that are used by all slice instances.  All these operations are combined with efficient and economical monitoring and reconfiguration at the appropriate level.  In order to make the management scalable and environment-aware, the management architecture is composed of many functional entities that follow the feedback-loop management paradigm (also known as autonomic management).  The self-management functions may realize different goals and have to be coordinated according to the slice instance and infrastructure operator policies.  Self-management deals with the dynamic (1) allocation of resources to slice instances in an economical way that provides the required slice instance performance, (2) self-optimization and self-healing of slice instances during their deployment (lifecycle management) and operation, and (3) self-optimization and self-healing of the services of each slice instance.  The service lifecycle, which is typically different from the slice instance lifecycle, should also be managed in an autonomous way.  Although the self-managed functions may have different goals and involved entities, slice instance self-management should be coordinated with the self-management of the slice's services, and the self-management of resources (inter-slice operations) should be aligned with in-slice self-management operations.  In an implementation, the self-management functionality is split between the NS Manager (which is part of the slice template) and the slice orchestrator in the case of slice management, and between service-specific management and the NS Manager in the case of services that use a specific slice.
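The feedback-loop (autonomic) management paradigm referred to above can be sketched as a simple monitor-decide-act cycle per slice instance.  The KPI names, thresholds and actions below are hypothetical.

   # Illustrative autonomic (feedback-loop) self-management cycle for a slice.
   # KPI names, thresholds and actions are hypothetical.
   def self_management_cycle(slice_instance, monitor, policy, actuator):
       """One iteration of the loop: monitor KPIs, compare against the
       tenant/operator policy, and trigger self-optimization or self-healing."""
       kpis = monitor(slice_instance)          # e.g. {"latency_ms": 12, "nf_failures": 0}
       if kpis.get("nf_failures", 0) > 0:
           actuator(slice_instance, "self-heal", "restart failed NFs")
       elif kpis.get("latency_ms", 0) > policy["max_latency_ms"]:
           actuator(slice_instance, "self-optimize", "scale out / re-place NFs")
       else:
           actuator(slice_instance, "no-op", "KPIs within policy")

   def demo_monitor(_slice):
       return {"latency_ms": 25, "nf_failures": 0}

   def demo_actuator(slice_instance, action, detail):
       print(f"{slice_instance}: {action} ({detail})")

   self_management_cycle("nsi-1", demo_monitor,
                         {"max_latency_ms": 20}, demo_actuator)
   # nsi-1: self-optimize (scale out / re-place NFs)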
6.3.  Programmability of the Management Plane

The management plane is composed of multiple functional entities and is responsible for resource, slice instance and slice service management.  In the case of slice instances and services, their management comes as part of the corresponding slice or service template, respectively.  In that way, slice- or service-related management functions are instantiated for each slice and/or service.  The management plane may expose a set of APIs which can be used by additional management services that are added independently of the service or slice instance lifecycle.  Using these APIs, and allocating additional resources, the slice or service operator can add advanced and new management functions.  In this way the programmability of the management plane is provided.

6.4.  Management Plane Slicing Protocols

At this stage it is too early to define specific protocols.  The management architecture, with its functional entities and reference points/interfaces, has to be defined first; only then can the protocol(s) to be used at each reference point be selected.  Some candidate protocols may be mentioned, but in general they should be part of a separate specification.

7.  Service Functions and Mappings

8.  OAM and Telemetry

OAM and telemetry to instrument the system need to be provided for each NSI, so that the NSI provider can monitor the health of the NSI and so that the NSI owner can independently verify the health of their NSI.

Running OAM on the NSI from the perspective of its owner can be undertaken by the owner using the native tools for the NSI network type.  For example, if the NSI is IP, tools such as ICMP [RFC792], ICMPv6 [RFC4443] or IPFIX [RFC7011] can be used.  Similarly, the native OAM tools for MPLS and Ethernet can be used.  If the NSI provides a partial emulation of the network type that limits the ability to operate such native instrumentation tools, this needs to be made clear to the NSI owner.

Similarly, running OAM on the underlay will also use the native tools for the network type providing the underlay.  Care must be taken that any OAM run by the NS provider does not impinge on the operation of the NSI, and it SHOULD be undetectable within the NSI.

Telemetry will need to be provided to both the NS provider and the NSI owner.  Telemetry of the underlay will use the NS provider's pub-sub system of choice.

Telemetry of the NSI may be provided purely by the NSI owner installing a telemetry collection system.  However, significant efficiencies may be realized if the NS provider exports relevant telemetry to the NSI owner's pub-sub system.  Where this is done, consideration must be given to the security of the measurement and export system so that no information is leaked between NSIs.

9.  IANA Considerations

This document makes no request of IANA.

10.  Security Considerations

Each layer of the system has its own security requirements.

11.  Acknowledgements

12.  References

12.1.  Normative References

   [I-D.finn-detnet-architecture]
              Finn, N. and P. Thubert, "Deterministic Networking
              Architecture", draft-finn-detnet-architecture-08 (work in
              progress), August 2016.

   [I-D.qin-netslices-use-cases]
              Qin, J., Makhijani, K., Dong, J., Qiang, L., and S. Peng,
              "Network Slicing Use Cases: Network Customization for
              Different Services", draft-qin-netslices-use-cases-00
              (work in progress), March 2017.

12.2.  Informative References

   [NS_WP]    China Mobile Communication Corporation, Huawei
              Technologies Co., Deutsche Telekom AG, and Volkswagen,
              "5G Service-Guaranteed Network Slicing White Paper",
              2016.

   [RFC792]   Postel, J., "Internet Control Message Protocol", STD 5,
              RFC 792, September 1981.

   [RFC4443]  Conta, A., Deering, S., and M. Gupta, Ed., "Internet
              Control Message Protocol (ICMPv6) for the Internet
              Protocol Version 6 (IPv6) Specification", STD 89,
              RFC 4443, March 2006.

   [RFC7011]  Claise, B., Ed., Trammell, B., Ed., and P. Aitken,
              "Specification of the IP Flow Information Export (IPFIX)
              Protocol for the Exchange of Flow Information", STD 77,
              RFC 7011, September 2013.

Authors' Addresses

   Liang Geng
   China Mobile
   Beijing
   China

   Email: gengliang@chinamobile.com

   Jie Dong
   Huawei Technologies
   Huawei Campus, No. 156 Beiqing Rd.
   Beijing  100095

   Email: jie.dong@huawei.com

   Stewart Bryant
   Huawei Technologies
   U.K.
   Email: stewart.bryant@gmail.com

   Kiran Makhijani
   Huawei Technologies
   2890 Central Expressway
   Santa Clara, CA 95050

   Email: kiran.makhijani@huawei.com

   Alex Galis
   University College London
   London
   U.K.

   Email: a.galis@ucl.ac.uk

   Xavier de Foy
   InterDigital Inc.
   1000 Sherbrooke West
   Montreal
   Canada

   Email: Xavier.Defoy@InterDigital.com

   Slawomir Kuklinski
   Orange

   Email: slawomir.kuklinski@gmail.com