2 TEAS Working Group T. Saad 3 Internet-Draft V. Beeram 4 Intended status: Standards Track Juniper Networks 5 Expires: January 12, 2022 B. Wen 6 Comcast 7 D. Ceccarelli 8 J. Halpern 9 Ericsson 10 S. Peng 11 R. Chen 12 ZTE Corporation 13 X. Liu 14 Volta Networks 15 L. Contreras 16 Telefonica 17 R. Rokui 18 Nokia 19 July 11, 2021 21 Realizing Network Slices in IP/MPLS Networks 22 draft-bestbar-teas-ns-packet-03 24 Abstract 26 Network slicing provides the ability to partition a physical network 27 into multiple logical networks of varying sizes, structures, and 28 functions so that each slice can be dedicated to specific services or 29 customers. Network slices need to operate in parallel while 30 providing slice elasticity in terms of network resource allocation. 31 The Differentiated Services (Diffserv) model allows for carrying 32 multiple services on top of a single physical network by relying on 33 compliant nodes to apply specific forwarding treatment (scheduling 34 and drop policy) to packets that carry the respective Diffserv 35 code point. This document proposes a solution based on the Diffserv 36 model to realize network slicing in IP/MPLS networks. 38 Status of This Memo 40 This Internet-Draft is submitted in full conformance with the 41 provisions of BCP 78 and BCP 79. 43 Internet-Drafts are working documents of the Internet Engineering 44 Task Force (IETF). Note that other groups may also distribute 45 working documents as Internet-Drafts. The list of current 46 Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. 48 Internet-Drafts are draft documents valid for a maximum of six months 49 and may be updated, replaced, or obsoleted by other documents at any 50 time.
It is inappropriate to use Internet-Drafts as reference 51 material or to cite them other than as "work in progress." 53 This Internet-Draft will expire on January 12, 2022. 55 Copyright Notice 57 Copyright (c) 2021 IETF Trust and the persons identified as the 58 document authors. All rights reserved. 60 This document is subject to BCP 78 and the IETF Trust's Legal 61 Provisions Relating to IETF Documents 62 (https://trustee.ietf.org/license-info) in effect on the date of 63 publication of this document. Please review these documents 64 carefully, as they describe your rights and restrictions with respect 65 to this document. Code Components extracted from this document must 66 include Simplified BSD License text as described in Section 4.e of 67 the Trust Legal Provisions and are provided without warranty as 68 described in the Simplified BSD License. 70 Table of Contents 72 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 73 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 4 74 1.2. Acronyms and Abbreviations . . . . . . . . . . . . . . . 6 75 2. Network Resource Slicing Membership . . . . . . . . . . . . . 6 76 2.1. Dedicated Network Resources . . . . . . . . . . . . . . . 6 77 2.2. Shared Network Resources . . . . . . . . . . . . . . . . 7 78 3. Path Selection . . . . . . . . . . . . . . . . . . . . . . . 7 79 4. Slice Policy Modes . . . . . . . . . . . . . . . . . . . . . 8 80 4.1. Data plane Slice Policy Mode . . . . . . . . . . . . . . 8 81 4.2. Control Plane Slice Policy Mode . . . . . . . . . . . . . 9 82 4.3. Data and Control Plane Slice Policy Mode . . . . . . . . 11 83 5. Slice Policy Instantiation . . . . . . . . . . . . . . . . . 12 84 5.1. Slice Policy Definition . . . . . . . . . . . . . . . . . 13 85 5.1.1. Slice Policy Data Plane Selector . . . . . . . . . . 14 86 5.1.2. Slice Policy Resource Reservation . . . . . . . . . . 17 87 5.1.3. Slice Policy Per Hop Behavior . . . . . . . . . . . . 18 88 5.1.4. 
Slice Policy Topology . . . . . . . . . . . . . . . . 19 89 5.2. Slice Policy Boundary . . . . . . . . . . . . . . . . . . 19 90 5.2.1. Slice Policy Edge Nodes . . . . . . . . . . . . . . . 19 91 5.2.2. Slice Policy Interior Nodes . . . . . . . . . . . . . 20 92 5.2.3. Slice Policy Incapable Nodes . . . . . . . . . . . . 20 93 5.2.4. Combining Slice Policy Modes . . . . . . . . . . . . 21 94 5.3. Mapping Traffic on Slice Aggregates . . . . . . . . . . . 22 95 6. Control Plane Extensions . . . . . . . . . . . . . . . . . . 22 96 7. Applicability to Path Control Technologies . . . . . . . . . 23 97 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 98 9. Security Considerations . . . . . . . . . . . . . . . . . . . 23 99 10. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 24 100 11. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 24 101 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 24 102 12.1. Normative References . . . . . . . . . . . . . . . . . . 24 103 12.2. Informative References . . . . . . . . . . . . . . . . . 26 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 106 1. Introduction 108 Network slicing allows a Service Provider to create independent and 109 logical networks on top of a common or shared physical network 110 infrastructure. Such network slices can be offered to customers or 111 used internally by the Service Provider to facilitate or enhance 112 their service offerings. A Service Provider can also use network 113 slicing to structure and organize the elements of its infrastructure. 114 This document provides a path control technology agnostic solution 115 that a Service Provider can deploy to realize network slicing in IP/ 116 MPLS networks. 118 The definition of network slice for use within the IETF and the 119 characteristics of IETF network slice are specified in 120 [I-D.ietf-teas-ietf-network-slice-definition]. 
A framework for 121 reusing IETF VPN and traffic-engineering technologies to realize IETF 122 network slices is discussed in [I-D.nsdt-teas-ns-framework]. These 123 documents also discuss the function of an IETF Network Slice 124 Controller and the requirements on its northbound and southbound 125 interfaces. 127 This document introduces the notion of a slice aggregate, which 128 comprises one or more IETF network slice traffic streams. It 129 describes how a slice policy can be used to realize a slice aggregate 130 by instantiating specific control and data plane behaviors on select 131 topological elements in IP/MPLS networks. The onus is on the IETF 132 Network Slice Controller to maintain the mapping between one or more 133 IETF network slices and a slice aggregate. The mechanisms used by 134 the controller to determine the mapping are outside the scope of this 135 document. The focus of this document is on the mechanisms required 136 at the device level to address the requirements of network slicing in 137 packet networks. 139 In a Differentiated Services (Diffserv) domain [RFC2475], packets 140 requiring the same forwarding treatment (scheduling and drop policy) 141 are classified and marked with a Class Selector (CS) at domain 142 ingress nodes. At transit nodes, the CS field inside the packet is 143 inspected to determine the specific forwarding treatment to be 144 applied before the packet is forwarded further. Similar principles 145 are adopted by this document to realize network slicing. 147 When logical networks representing slice aggregates are realized on 148 top of a shared physical network infrastructure, it is important to 149 steer traffic on the specific network resources allocated for the 150 slice aggregate. In packet networks, the packets that traverse a 151 specific slice aggregate MAY be identified by one or more specific 152 fields carried within the packet.
A slice policy ingress boundary 153 node populates the respective field(s) in packets that enter a slice 154 aggregate to allow interior slice policy nodes to identify those 155 packets and apply the specific Per Hop Behavior (PHB) that is 156 associated with the slice aggregate. The PHB defines the scheduling 157 treatment and, in some cases, the packet drop probability. 159 The slice aggregate traffic may further carry a Diffserv CS to allow 160 differentiation of forwarding treatments for packets within a slice 161 aggregate. For example, when using MPLS as a dataplane, it is 162 possible to identify packets belonging to the same slice aggregate by 163 carrying a global MPLS label in the label stack that identifies the 164 slice aggregate in each packet. Additional Diffserv classification 165 may be indicated in the Traffic Class (TC) bits of the global MPLS 166 label to allow further differentiation of forwarding treatments for 167 traffic traversing the same slice aggregate network resources. 169 This document covers different modes of slice policy and discusses 170 how each slice policy mode can ensure proper placement of slice 171 aggregate paths and respective treatment of slice aggregate traffic. 173 1.1. Terminology 175 The reader is expected to be familiar with the terminology specified 176 in [I-D.ietf-teas-ietf-network-slice-definition] and 177 [I-D.nsdt-teas-ns-framework]. 179 The following terminology is used in the document: 181 IETF network slice: 182 a well-defined composite of a set of endpoints, the connectivity 183 requirements between subsets of these endpoints, and associated 184 requirements; the term 'network slice' in this document refers to 185 'IETF network slice' as defined in 186 [I-D.ietf-teas-ietf-network-slice-definition]. 188 IETF Network Slice Controller (NSC): 189 controller that is used to realize an IETF network slice 190 [I-D.ietf-teas-ietf-network-slice-definition].
192 Slice policy: 193 a policy construct that enables instantiation of mechanisms in 194 support of IETF network slice specific control and data plane 195 behaviors on select topological elements; the enforcement of a 196 slice policy results in the creation of a slice aggregate. 198 Slice aggregate: 199 a collection of packets that match a slice policy's selection 200 criteria and are given the same forwarding treatment; a slice 201 aggregate comprises one or more IETF network slice traffic 202 streams; the mapping of one or more IETF network slices to a slice 203 aggregate is maintained by the IETF Network Slice Controller. 205 Slice policy capable node: 206 a node that supports one of the slice policy modes described in 207 this document. 209 Slice policy incapable node: 210 a node that does not support any of the slice policy modes 211 described in this document. 213 Slice aggregate traffic: 214 traffic that is forwarded over network resources associated with a 215 specific slice aggregate. 217 Slice aggregate path: 218 a path that is set up over network resources associated with a 219 specific slice aggregate. 221 Slice aggregate packet: 222 a packet that traverses network resources associated with a 223 specific slice aggregate. 225 Slice policy topology: 226 a set of topological elements associated with a slice policy. 228 Slice aggregate aware TE: 229 a mechanism for TE path selection that takes into account the 230 available network resources associated with a specific slice 231 aggregate. 233 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 234 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 235 "OPTIONAL" in this document are to be interpreted as described in 236 BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all 237 capitals, as shown here. 239 1.2.
Acronyms and Abbreviations 241 BA: Behavior Aggregate 243 CS: Class Selector 245 SS: Slice Selector 247 S-PHB: Slice policy Per Hop Behavior as described in Section 5.1.3 249 SSL: Slice Selector Label as described in Section 5.1.1 251 SSLI: Slice Selector Label Indicator 253 SLA: Service Level Agreement 255 SLO: Service Level Objective 257 Diffserv: Differentiated Services 259 MPLS: Multiprotocol Label Switching 261 LSP: Label Switched Path 263 RSVP: Resource Reservation Protocol 265 TE: Traffic Engineering 267 SR: Segment Routing 269 VRF: VPN Routing and Forwarding 271 2. Network Resource Slicing Membership 273 A slice aggregate can be instantiated over parts of an IP/MPLS 274 network (e.g., all or specific network resources in the access, 275 aggregation, or core network), and can stretch across multiple 276 domains administered by a provider. A slice policy topology may 277 include all or a subset of the physical nodes and links of an 278 IP/MPLS network; it may comprise dedicated and/or shared network 279 resources (e.g., in terms of processing power, storage, and 280 bandwidth). 282 2.1. Dedicated Network Resources 284 Physical network resources may be fully dedicated to a specific slice 285 aggregate. For example, traffic belonging to a slice aggregate can 286 traverse dedicated network resources without being subjected to 287 contention from traffic of other slice aggregates. Dedicated network 288 resource slicing allows for simple partitioning of the physical 289 network resources amongst slice aggregates without the need to 290 distinguish packets traversing the dedicated network resources since 291 only one slice aggregate traffic stream can traverse the dedicated 292 resource at any time. 294 2.2. Shared Network Resources 296 To optimize network utilization, sharing of the physical network 297 resources may be desirable. In such a case, the same physical network 298 resource capacity is divided among multiple slice aggregates.
Shared 299 network resources can be partitioned in the data plane (for example 300 by applying hardware policers and shapers) and/or partitioned in the 301 control plane by providing a logical representation of the physical 302 link that has a subset of the network resources available to it. 304 3. Path Selection 306 Path selection in a network can be network state dependent, or 307 network state independent as described in Section 5.1 of 308 [I-D.ietf-teas-rfc3272bis]. The latter is the choice commonly used 309 by IGPs when selecting a best path to a destination prefix, while the 310 former is used by ingress TE routers, or Path Computation Elements 311 (PCEs), when optimizing the placement of a flow based on the current 312 network resource utilization. 314 For example, when steering traffic on a delay optimized path, the IGP 315 can use its link state database's view of the network topology to 316 compute a path optimizing for the delay metric of each link in the 317 network, resulting in a cumulative lowest delay path. 319 When path selection is network state dependent, the path computation 320 can leverage Traffic Engineering mechanisms (e.g., as defined in 321 [RFC2702]) to compute feasible paths taking into account the incoming 322 traffic demand rate and the current state of the network. This makes 323 it possible to avoid overly utilized links, and reduces the chance of 324 congestion on traversed links. 326 To enable TE path placement, the link state is advertised with 327 current reservations, thereby reflecting the available bandwidth on 328 each link. Such link reservations may be maintained centrally on a 329 network-wide network resource manager, or distributed on devices (as 330 usually done with RSVP). TE extensions exist today to allow IGPs 331 (e.g., [RFC3630] and [RFC5305]), and BGP-LS [RFC7752] to advertise 332 such link state reservations.
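The network-state-dependent selection described above can be illustrated with a minimal sketch: links whose available bandwidth cannot accommodate a demand are pruned before a shortest-path computation runs on the remaining topology. The topology, metrics, and bandwidth figures below are hypothetical and not taken from this document.

```python
# Illustrative sketch of bandwidth-constrained path computation (CSPF-style):
# prune links with insufficient available bandwidth, then run Dijkstra on the
# pruned topology using the TE metric. All names and numbers are hypothetical.
import heapq

def cspf(links, src, dst, demand):
    """links: dict (u, v) -> {'metric': m, 'avail_bw': b} (directed edges)."""
    # Prune links that cannot accommodate the requested demand.
    adj = {}
    for (u, v), attrs in links.items():
        if attrs['avail_bw'] >= demand:
            adj.setdefault(u, []).append((v, attrs['metric']))
    # Dijkstra on the pruned topology.
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, m in adj.get(u, []):
            nd = d + m
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None  # no feasible path for this demand
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return list(reversed(path))

links = {
    ('A', 'B'): {'metric': 10, 'avail_bw': 100},
    ('B', 'D'): {'metric': 10, 'avail_bw': 40},   # too little for demand=50
    ('A', 'C'): {'metric': 15, 'avail_bw': 100},
    ('C', 'D'): {'metric': 15, 'avail_bw': 100},
}
print(cspf(links, 'A', 'D', demand=50))  # lower-metric A-B-D is pruned
```

With a demand of 50 units, the lower-metric path via B is infeasible and the computation falls back to A-C-D; with a smaller demand, A-B-D would be selected.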
334 When network resource reservations are also slice aggregate aware, 335 the link state can carry per slice aggregate state (e.g., reservable 336 bandwidth). This allows path computation to take into account the 337 specific network resources available for a slice aggregate when 338 determining the path for a specific flow. In this case, we refer to 339 the process of path placement and path provisioning as slice 340 aggregate aware TE. 342 4. Slice Policy Modes 344 A slice policy can be used to dictate whether the partitioning of the 345 shared network resources amongst multiple slice aggregates is 346 achieved by realizing slice aggregates in: 348 a) data plane only, or 350 b) control plane only, or 352 c) both control and data planes. 354 4.1. Data plane Slice Policy Mode 356 The physical network resources can be partitioned on network devices 357 by applying a Per Hop forwarding Behavior (PHB) onto packets that 358 traverse the network devices. In the Diffserv model, a Class 359 Selector (CS) is carried in the packet and is used by transit nodes 360 to apply the PHB that determines the scheduling treatment and drop 361 probability for packets. 363 When data plane slice policy mode is applied, packets need to be 364 forwarded on the specific slice aggregate network resources and need 365 to receive the specific forwarding treatment that is dictated in the 366 slice policy (refer to Section 5.1 below). A Slice Selector (SS) 367 MUST be carried in each packet to identify the slice aggregate that 368 it belongs to. 370 The ingress node of a slice policy domain, in addition to marking 371 packets with a Diffserv CS, MAY also add an SS to each slice 372 aggregate packet. The transit nodes within a slice policy domain MAY 373 use the SS to associate packets with a slice aggregate and to 374 determine the Slice policy Per Hop Behavior (S-PHB) that is applied 375 to the packet (refer to Section 5.1.3 for further details).
The CS 376 MAY be used to apply a Diffserv PHB to the packet to allow 377 differentiation of traffic treatment within the same slice aggregate. 379 When data plane only slice policy mode is used, routers may rely on a 380 network state independent view of the topology to determine the best 381 paths to reach destinations. In this case, the best path selection 382 dictates the forwarding path of packets to the destination. The SS 383 field carried in each packet determines the specific S-PHB treatment 384 along the selected path. 386 For example, the Segment-Routing Flexible Algorithm 387 [I-D.ietf-lsr-flex-algo] may be deployed in a network to steer 388 packets on the IGP computed lowest cumulative delay path. A slice 389 policy may be used to allow links along the least latency path to 390 share their data plane resources amongst multiple slice aggregates. In 391 this case, the packets that are steered on a specific slice policy 392 carry the SS field that, along with the Diffserv CS, enables routers 393 to determine the S-PHB and enforce slice aggregate traffic streams. 395 4.2. Control Plane Slice Policy Mode 397 The physical network resources in the network can be logically 398 partitioned by having a representation of network resources appear in 399 a virtual topology. The virtual topology can contain all or a subset 400 of the physical network resources by applying specific topology 401 filters on the native topology. The logical network resources that 402 appear in the virtual topology can reflect a part, the whole, or in 403 excess of the physical network resource capacity (when 404 oversubscription is desirable). For example, a physical link 405 bandwidth can be divided into fractions, each dedicated to a slice 406 aggregate. Each fraction of the physical link bandwidth MAY be 407 represented as a logical link in a virtual topology that is used when 408 determining paths associated with a specific slice aggregate.
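The fractional partitioning just described can be sketched as follows. This is an illustrative sketch only; the share values and the oversubscription factor are hypothetical, and a real implementation would advertise each fraction as a logical link in the virtual topology.

```python
# Illustrative sketch: dividing a physical link's bandwidth into fractions,
# one per slice aggregate. An oversubscription factor > 1.0 lets the sum of
# advertised logical capacities exceed the physical capacity, as the text
# allows. All names and numbers are hypothetical.
def partition_link(physical_bw, shares, oversub=1.0):
    """shares: dict slice_aggregate -> fraction of the link (sum <= 1.0)."""
    assert sum(shares.values()) <= 1.0, "fractions must not exceed the link"
    budget = physical_bw * oversub
    return {agg: budget * frac for agg, frac in shares.items()}

# A hypothetical 100 Gbps link split among three slice aggregates,
# oversubscribed by 20%.
logical = partition_link(100, {'S_AGG1': 0.5, 'S_AGG2': 0.3, 'S_AGG3': 0.2},
                         oversub=1.2)
print(logical)  # {'S_AGG1': 60.0, 'S_AGG2': 36.0, 'S_AGG3': 24.0}
```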
The 409 virtual topology associated with the slice policy can be used by 410 routing protocols, or by the ingress/PCE when computing slice 411 aggregate aware TE paths. 413 To perform network state dependent path computation in this mode 414 (slice aggregate aware TE), the resource reservation on each link 415 needs to be slice aggregate aware. Details of the required IGP 416 extensions to support SA-TE are described in 417 [I-D.bestbar-lsr-slice-aware-te]. 419 The same physical link may be a member of multiple slice policies that 420 instantiate different slice aggregates. The slice aggregate network 421 resource availability on such a link is updated (and may be 422 advertised) whenever new paths are placed in the network. The slice 423 aggregate resource reservation, in this case, MAY be maintained on 424 each device or off the device on a resource reservation manager that 425 holds reservation states for those links in the network. 427 Multiple slice aggregates can form a group and share the available 428 network resources allocated to each slice aggregate. In this case, a 429 node can update the reservable bandwidth for each slice aggregate to 430 take into consideration the available bandwidth from other slice 431 aggregates in the same group. 433 For illustration purposes, the diagram below represents bandwidth 434 isolation or sharing amongst a group of slice aggregates. In 435 Figure 1a, the slice aggregates S_AGG1, S_AGG2, S_AGG3 and S_AGG4 436 do not share any bandwidth with each other. In Figure 1b, the 437 slice aggregates S_AGG1 and S_AGG2 can share with each other the 438 available bandwidth allocated to each of them. Similarly, S_AGG3 and S_AGG4 439 can share amongst themselves any available bandwidth allocated to 440 them, but they cannot share available bandwidth allocated to S_AGG1 441 or S_AGG2. In both cases, the Max Reservable Bandwidth may exceed 442 the actual physical link resource capacity to allow for 443 oversubscription.
445 I-----------------------------I I-----------------------------I 446 <--S_AGG1-> I I-----------------I I 447 I---------I I I <-S_AGG1-> I I 448 I I I I I-------I I I 449 I---------I I I I I I I 450 I I I I-------I I I 451 <-----S_AGG2------> I I I I 452 I-----------------I I I <-S_AGG2-> I I 453 I I I I I---------I I I 454 I-----------------I I I I I I I 455 I I I I---------I I I 456 <---S_AGG3----> I I I I 457 I-------------I I I S_AGG1 + S_AGG2 I I 458 I I I I-----------------I I 459 I-------------I I I I 460 I I I I 461 <---S_AGG4----> I I-----------------I I 462 I-------------I I I <-S_AGG3-> I I 463 I I I I I-------I I I 464 I-------------I I I I I I I 465 I I I I-------I I I 466 I S_AGG1+S_AGG2+S_AGG3+S_AGG4 I I I I 467 I I I <-S_AGG4-> I I 468 I-----------------------------I I I---------I I I 469 <--Max Reservable Bandwidth--> I I I I I 470 I I---------I I I 471 I I I 472 I S_AGG3 + S_AGG4 I I 473 I-----------------I I 474 I S_AGG1+S_AGG2+S_AGG3+S_AGG4 I 475 I I 476 I-----------------------------I 477 <--Max Reservable Bandwidth--> 479 (a) No bandwidth sharing (b) Sharing bandwidth between 480 between slice aggregates. slice aggregates of the 481 same group 483 Figure 1: Bandwidth Isolation/Sharing. 485 4.3. Data and Control Plane Slice Policy Mode 487 In order to support strict guarantees for slice aggregates, the 488 network resources can be partitioned in both the control plane and 489 data plane. 491 The control plane partitioning allows the creation of customized 492 topologies per slice aggregate that routers or a Path Computation 493 Engine (PCE) can use to determine optimal path placement for specific 494 demand flows (Slice aggregate aware TE). 496 The data plane partitioning protects slice aggregate traffic from 497 network resource contention that could occur due to bursts in traffic 498 from other slice aggregates traversing the same shared network 499 resource. 501 5. 
Slice Policy Instantiation 503 A network slice can span multiple technologies and multiple 504 administrative domains. Depending on the network slice consumer's 505 requirements, a network slice can be differentiated from other 506 network slices in terms of data, control or management planes. 508 The consumer of a network slice expresses their intent by specifying 509 requirements rather than mechanisms to realize the slice. The 510 requirements for a network slice can vary and can be expressed in 511 terms of connectivity needs between end-points (point-to-point, 512 point-to-multipoint or multipoint-to-multipoint) with customizable 513 network capabilities that may include data speed, quality, latency, 514 reliability, security, and services (refer to 515 [I-D.ietf-teas-ietf-network-slice-definition] for more details). 516 These capabilities are always provided based on a Service Level 517 Agreement (SLA) between the network slice consumer and the provider. 519 The onus is on the network slice controller to consume the service 520 layer slice intent and realize it with an appropriate slice policy. 521 Multiple IETF network slices can be mapped to the same slice policy 522 resulting in a slice aggregate. The network wide consistent slice 523 policy definition is distributed to the devices in the network as 524 shown in Figure 2. The specification of the network slice intent on 525 the northbound interface of the controller and the mechanism used to 526 map the network slice to a slice policy are outside the scope of this 527 document. 
529 | 530 | IETF Network Slice 531 | (service) 532 +--------------------+ 533 | IETF Network | 534 | Slice Controller | 535 +--------------------+ 536 | 537 | Slice Policy 538 /|\ 539 / | \ 540 slice policy capable 541 nodes/controllers 542 / / | \ \ 543 v v v v v 544 xxxxxxxxxxxxxxxxxxxx 545 xxxx xxxx 546 xxxx Slice xxxx 547 xxxx Aggregate xxxx 548 xxxx xxxx 549 xxxxxxxxxxxxxxxxxxxx 551 <------ Path Control ------> 552 RSVP-TE/SR-Policy/SR-FlexAlgo 554 Figure 2: Slice Policy Instantiation. 556 5.1. Slice Policy Definition 558 The slice policy is a network-wide construct that is consumed by 559 network devices, and may include rules that control the following: 561 o Data plane specific policies: This includes the SS, any firewall 562 rules or flow-spec filters, and QoS profiles associated with the 563 slice policy and any classes within it. 565 o Control plane specific policies: This includes guaranteed 566 bandwidth, any network resource sharing amongst slice policies, 567 and reservation preference to prioritize any reservations of a 568 specific slice policy over others. 570 o Topology membership policies: This defines topology filter 571 policies that dictate node/link/function network resource topology 572 association for a specific slice policy. 574 There is a desire for flexibility in realizing network slices to 575 support the services across networks consisting of products from 576 multiple vendors. These networks may also be grouped into disparate 577 domains and deploy various path control technologies and tunnel 578 techniques to carry traffic across the network. It is expected that 579 a standardized data model for slice policy will facilitate the 580 instantiation and management of slice aggregates on slice policy 581 capable nodes. A YANG data model for the slice policy instantiation 582 on network devices is described in 583 [I-D.bestbar-teas-yang-slice-policy].
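As an illustration of the three rule groups listed above, a slice policy might be represented as the following structure. This is a hypothetical sketch: every field name and value here is invented for illustration and does not reflect the YANG model in [I-D.bestbar-teas-yang-slice-policy].

```python
# A hypothetical slice policy definition mirroring the three rule groups:
# data plane, control plane, and topology membership. Field names and values
# are invented for illustration only.
slice_policy = {
    'name': 'gold-slice-policy',
    'data_plane': {
        'slice_selector': {'type': 'mpls-label', 'value': 1001},
        'qos_profile': 'gold-profile',          # maps to the S-PHB
    },
    'control_plane': {
        'guaranteed_bandwidth_gbps': 10,
        'sharing_group': 'group-1',             # share unused bw within group
        'reservation_preference': 7,            # higher preempts lower
    },
    'topology': {
        'filter': 'include-links-with-affinity:low-delay',
    },
}

# A device consuming the policy would key its behavior off these groups.
assert set(slice_policy) >= {'data_plane', 'control_plane', 'topology'}
```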
585 It is also possible to distribute the slice policy to network devices 586 using several mechanisms, including protocols such as NETCONF or 587 RESTCONF, or exchanging it using a suitable routing protocol that 588 network devices participate in (such as IGP(s) or BGP). The 589 extensions to enable specific protocols to carry a slice policy 590 definition will be described in separate documents. 592 5.1.1. Slice Policy Data Plane Selector 594 A router MUST be able to identify a packet belonging to a slice 595 aggregate before it can apply the associated forwarding treatment or 596 S-PHB. One or more fields within the packet MAY be used as an SS to 597 do this. 599 Forwarding Address Based Slice Selector: 601 It is possible to assign a different forwarding address (or MPLS 602 forwarding label in the case of an MPLS network) for each slice aggregate 603 on a specific node in the network. [RFC3031] states in 604 Section 2.1 that: 'Some routers analyze a packet's network layer 605 header not merely to choose the packet's next hop, but also to 606 determine a packet's "precedence" or "class of service"'. 607 Assigning a unique forwarding address (or MPLS forwarding label) 608 to each slice aggregate allows slice aggregate packets destined to 609 a node to be distinguished by the destination address (or MPLS 610 forwarding label) that is carried in the packet. 612 This approach requires maintaining per slice aggregate state for 613 each destination in the network in both the control and data plane 614 and on each router in the network. For example, consider a 615 network slicing provider with a network composed of 'N' nodes, 616 each with 'K' adjacencies to its neighbors. Assuming a node can 617 be reached over 'M' different slice aggregates, the node assigns 618 and advertises reachability to 'M' unique forwarding addresses, or 619 MPLS forwarding labels.
Similarly, each node assigns a unique 620 forwarding address (or MPLS forwarding label) for each of its 'K' 621 adjacencies to enable strict steering over the adjacency for each 622 slice. The total number of control and data plane states that 623 need to be stored and programmed in a router's forwarding table is 624 (N+K)*M. Hence, as the 'N', 'K', and 'M' parameters increase, 625 this approach suffers from scalability challenges in both the 626 control and data planes. 628 Global Identifier Based Slice Selector: 630 A slice policy MAY include a Global Identifier Slice Selector 631 (GISS) field as defined in [I-D.kompella-mpls-mspl4fa] that is 632 carried in each packet in order to associate it with a specific 633 slice aggregate, independent of the forwarding address or MPLS 634 forwarding label that is bound to the destination. Routers within 635 the slice policy domain can use the forwarding address (or MPLS 636 forwarding label) to determine the forwarding next-hop(s), and use 637 the GISS field in the packet to infer the specific forwarding 638 treatment that needs to be applied to the packet. 640 The GISS can be carried in one of multiple fields within the 641 packet, depending on the dataplane used. For example, in MPLS 642 networks, the GISS can be encoded within an MPLS label that is 643 carried in the packet's MPLS label stack. All packets that belong 644 to the same slice aggregate MAY carry the same GISS in the MPLS 645 label stack. It is also possible to have multiple GISSs map to 646 the same slice aggregate. 648 The GISS can be encoded in an MPLS label and may appear in several 649 positions in the MPLS label stack. For example, the VPN service 650 label may act as a GISS to allow VPN packets to be associated with 651 a specific slice aggregate. In this case, a single VPN service 652 label acting as a GISS MAY be allocated by all Egress PEs of a 653 VPN.
Alternatively, multiple VPN service labels MAY act as GISS values 654 that map a single VPN to the same slice aggregate, allowing 655 multiple Egress PEs to allocate different VPN service labels for a 656 VPN. In other cases, a range of VPN service labels acting as 657 multiple GISS values MAY map the traffic of multiple VPNs to a single slice 658 aggregate. An example of such a deployment is shown in Figure 3.

660   SR Adj-SID:              GISS (VPN service label) on PE2: 1001
661    9012: P1-P2
662    9023: P2-PE2

664    /-----\       /-----\       /-----\       /-----\
665    | PE1 | ----- | P1  | ----- | P2  | ----- | PE2 |
666    \-----/       \-----/       \-----/       \-----/

668   In
669   packet:
670    +------+      +------+      +------+      +------+
671    | IP   |      | 9012 |      | 9023 |      | 1001 |
672    +------+      +------+      +------+      +------+
673    | Pay- |      | 9023 |      | 1001 |      | IP   |
674    | Load |      +------+      +------+      +------+
675    +------+      | 1001 |      | IP   |      | Pay- |
676                  +------+      +------+      | Load |
677                  | IP   |      | Pay- |      +------+
678                  +------+      | Load |
679                  | Pay- |      +------+
680                  | Load |
681                  +------+

683        Figure 3: GISS or VPN label at bottom of label stack.

685 In some cases, the GISS may not be at a fixed 686 position in the MPLS label stack. In this case, the GISS label 687 can show up at any position in the MPLS label stack. To enable a 688 transit router to identify the position of the GISS label, a 689 special purpose label (ideally a base special purpose label 690 (bSPL)) can be used as a GISS label indicator. 691 [I-D.kompella-mpls-mspl4fa] proposes a new bSPL called Forwarding 692 Actions Identifier (FAI) that is assigned to signal the presence 693 of multiple actions and action data (including the presence of the 694 GISS) that are carried within the MPLS label stack. The slice 695 policy ingress boundary node, in this case, imposes two labels: 696 the FAI label and a forwarding actions label that includes the 697 GISS to identify the slice aggregate that packets belong to, as 698 shown in Figure 4.
700 [I-D.decraene-mpls-slid-encoded-entropy-label-id] also proposes to 701 repurpose the ELI/EL [RFC6790] to carry the Slice Identifier in 702 order to minimize the size of the MPLS label stack and to ease incremental 703 deployment.

705   SR Adj-SID:                                GISS: 1001
706    9012: P1-P2
707    9023: P2-PE2

709    /-----\       /-----\       /-----\       /-----\
710    | PE1 | ----- | P1  | ----- | P2  | ----- | PE2 |
711    \-----/       \-----/       \-----/       \-----/

713   In
714   packet:
715    +------+      +------+      +------+      +------+
716    | IP   |      | 9012 |      | 9023 |      | FAI  |
717    +------+      +------+      +------+      +------+
718    | Pay- |      | 9023 |      | FAI  |      | 1001 |
719    | Load |      +------+      +------+      +------+
720    +------+      | FAI  |      | 1001 |      | IP   |
721                  +------+      +------+      +------+
722                  | 1001 |      | IP   |      | Pay- |
723                  +------+      +------+      | Load |
724                  | IP   |      | Pay- |      +------+
725                  +------+      | Load |
726                  | Pay- |      +------+
727                  | Load |
728                  +------+

730        Figure 4: FAI and GISS label in the label stack.

732 When the slice is realized over an IP dataplane, the GISS can be 733 encoded in the IP header. For example, the SSL can be encoded in a 734 portion of the IPv6 Flow Label field, as described in 735 [I-D.filsfils-spring-srv6-stateless-slice-id]. 737 5.1.2. Slice Policy Resource Reservation 739 Bandwidth and network resource allocation strategies for slice 740 policies are essential to achieve optimal placement of paths within 741 the network while still meeting the target SLOs. 743 Resource reservation allows for the management of available bandwidth 744 and for the prioritization of existing allocations to enable preference- 745 based preemption when contention on a specific network resource 746 arises. Sharing of a network resource's available bandwidth amongst 747 a group of slice policies may also be desirable. For example, a 748 slice aggregate may not always be using all of its reservable 749 bandwidth; this allows other slice policies in the same group to use 750 the available bandwidth resources.
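As a non-normative illustration of the sharing and preference-based preemption behavior described above (the class, names, and admission logic are hypothetical, since this document does not define a reservation API):

```python
# Illustrative model of a link's reservable bandwidth shared by a
# group of slice aggregates. Each reservation carries a preference;
# when contention arises, a higher-preference request may preempt
# lower-preference reservations (assumed behavior, not normative).

class LinkBandwidthPool:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        # slice aggregate name -> (reserved bandwidth, preference)
        self.reservations = {}

    def available(self):
        used = sum(bw for bw, _ in self.reservations.values())
        return self.capacity - used

    def reserve(self, slice_aggregate, bandwidth, preference):
        """Admit a reservation, preempting lower-preference ones if needed."""
        while self.available() < bandwidth:
            # Find the lowest-preference existing reservation.
            victim = min(self.reservations,
                         key=lambda s: self.reservations[s][1],
                         default=None)
            if victim is None or self.reservations[victim][1] >= preference:
                return False  # contention cannot be resolved by preemption
            del self.reservations[victim]  # preempt the victim
        self.reservations[slice_aggregate] = (bandwidth, preference)
        return True
```

For example, on a 10 Gb/s pool a preference-7 request can displace an earlier preference-3 reservation, while a request that cannot preempt any lower-preference reservation is rejected.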
752 Congestion on shared network resources may result from sub-optimal 753 placement of paths in different slice policies. When this occurs, 754 preemption of some slice aggregate specific paths may be desirable to 755 alleviate congestion. A preference-based allocation scheme enables 756 prioritization of the slice aggregate paths that can be preempted. 758 Since the network's characteristics and state can change over time, the 759 slice policy topology and its state also need to be propagated in the 760 network to enable ingress TE routers or Path Computation Engines 761 (PCEs) to perform accurate path placement based on the current state 762 of the slice policy network resources. 764 5.1.3. Slice Policy Per Hop Behavior 766 In Diffserv terminology, the forwarding behavior that is assigned to 767 a specific class is called a Per Hop Behavior (PHB). The PHB defines 768 the forwarding precedence that a marked packet with a specific CS 769 receives in relation to other traffic on the Diffserv-aware network. 771 A Slice policy Per Hop Behavior (S-PHB) is the externally observable 772 forwarding behavior applied to a specific packet belonging to a slice 773 aggregate. The goal of an S-PHB is to provide a specified amount of 774 network resources for traffic belonging to a specific slice 775 aggregate. A single slice policy may also support multiple 776 forwarding treatments or services that can be carried over the same 777 logical network. 779 The slice aggregate traffic may be identified at slice policy ingress 780 boundary nodes by carrying an SS that allows routers to apply a specific 781 forwarding treatment that guarantees the SLA(s). 783 With Differentiated Services (Diffserv) it is possible to carry 784 multiple services over a single converged network. Packets requiring 785 the same forwarding treatment are marked with a Class Selector (CS) 786 at domain ingress nodes.
Up to eight classes or Behavior Aggregates 787 (BAs) may be supported for a given Forwarding Equivalence Class (FEC) 788 [RFC2475]. To support multiple forwarding treatments over the same 789 slice aggregate, a slice aggregate packet MAY also carry a Diffserv 790 CS to identify the specific Diffserv forwarding treatment to be 791 applied to the traffic belonging to the same slice policy. 793 At transit nodes, the CS field carried inside the packets is used to 794 determine the specific PHB, which determines the forwarding and 795 scheduling treatment, and in some cases the drop probability, 796 applied to each packet before it is forwarded. 798 5.1.4. Slice Policy Topology 800 A key element of the slice policy is a customized topology that may 801 include the full physical network topology or a subset of it. The 802 slice policy topology could also span multiple administrative domains 803 and/or multiple dataplane technologies. 805 A slice policy topology can overlap or share a subset of links with 806 another slice policy topology. A number of topology filtering 807 policies can be defined as part of the slice policy to limit the 808 specific topology elements that belong to a slice policy. For 809 example, a topology filtering policy can leverage Resource Affinities 810 as defined in [RFC2702] to include or exclude certain links for a 811 specific slice aggregate. The slice policy may also include a 812 reference to a predefined topology (e.g., derived from a Flexible 813 Algorithm Definition (FAD) as defined in [I-D.ietf-lsr-flex-algo], or 814 a Multi-Topology ID as defined in [RFC4915]). 816 5.2. Slice Policy Boundary 818 A network slice originates at the edge nodes of a network slice 819 provider. Traffic that is steered over the corresponding slice 820 aggregate may traverse slice policy capable interior nodes as well as 821 slice policy incapable interior nodes. 823 The network slice may encompass one or more domains administered by a 824 provider.
For example, it may span an organization's intranet or an ISP's network. The 825 network provider is responsible for ensuring that adequate network 826 resources are provisioned and/or reserved to support the SLAs offered 827 by the network end-to-end. 829 5.2.1. Slice Policy Edge Nodes 831 Slice policy edge nodes sit at the boundary of a network slice 832 provider network and receive traffic that requires steering over 833 network resources specific to a slice aggregate. These edge nodes 834 are responsible for identifying slice aggregate specific traffic 835 flows, possibly by inspecting multiple fields of inbound packets 836 (e.g., implementations may inspect IP traffic's network 5-tuple in 837 the IP and transport protocol headers), to decide onto which slice 838 policy the traffic is steered. 840 Network slice ingress nodes may condition the inbound traffic at 841 network boundaries in accordance with the requirements or rules of 842 each service's SLAs. The requirements and rules for network slice 843 services are set using mechanisms which are outside the scope of this 844 document. 846 When data plane slice policy is applied, the slice policy ingress 847 boundary nodes are responsible for adding a suitable SS onto packets 848 that belong to a specific slice aggregate. In addition, edge nodes MAY 849 mark the corresponding Diffserv CS to differentiate between different 850 types of traffic carried over the same slice aggregate. 852 5.2.2. Slice Policy Interior Nodes 854 A slice policy interior node receives slice traffic and MAY be able 855 to identify the packets belonging to a specific slice aggregate by 856 inspecting the SS field carried inside each packet, or by inspecting 857 other fields within the packet that may identify the traffic streams 858 that belong to a specific slice aggregate. For example, when data 859 plane slice policy is applied, interior nodes can use the SS carried 860 within the packet to apply the corresponding S-PHB forwarding 861 behavior.
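A minimal, non-normative sketch of this interior node lookup (the table contents and names are hypothetical; the realization of an S-PHB is left to implementations):

```python
# Illustrative interior-node lookup: the SS carried in a packet
# selects the S-PHB (modeled here as a scheduler queue and a drop
# profile) programmed for the corresponding slice aggregate.
# Packets with an unknown SS fall back to a default treatment.

S_PHB_TABLE = {
    # SS value -> (scheduler queue, drop profile)
    1001: ("slice-agg-blue", "low-drop"),
    1002: ("slice-agg-red", "high-drop"),
}

DEFAULT_PHB = ("best-effort", "high-drop")

def select_s_phb(ss):
    """Return the forwarding treatment for a packet's Slice Selector."""
    return S_PHB_TABLE.get(ss, DEFAULT_PHB)
```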
Nodes within the network slice provider network may also 862 inspect the Diffserv CS within each packet to apply a per Diffserv 863 class PHB within the slice policy, and to allow differentiation of 864 forwarding treatments for packets forwarded over the same slice 865 aggregate network resources. 867 5.2.3. Slice Policy Incapable Nodes 869 Packets that belong to a slice aggregate may need to traverse nodes 870 that are slice policy incapable. In this case, several options are 871 possible that allow the slice traffic to continue to be forwarded over 872 such devices and to resume the slice policy forwarding 873 treatment once the traffic reaches devices that are slice policy 874 capable. 876 When data plane slice policy is applied, packets carry an SS that allows 877 slice interior nodes to identify them. To enable end-to-end network 878 slicing, the SS MUST be maintained in the packets as they traverse 879 devices within the network - including slice policy incapable 880 devices. 882 For example, when the SS is an MPLS label at the bottom of the MPLS 883 label stack, packets can traverse devices that are slice policy 884 incapable without any further considerations. On the other hand, 885 when the SSL is at the top of the MPLS label stack, packets can be 886 bypassed (or tunneled) over the slice policy incapable devices 887 towards the next device that supports slice policy, as shown in 888 Figure 5.

890   SR Node-SID:     SSL: 1001     @@@: slice policy enforced
891    1601: P1                     ...: slice policy not enforced
892    1602: P2
893    1603: P3
894    1604: P4
895    1605: P5

897          @@@@@@@@@@@@@@ ........................
898                                                .
899    /-----\       /-----\       /-----\         .
900    | P1  | ----- | P2  | ----- | P3  |         .
901    \-----/       \-----/       \-----/         .
902       |                             @@@@@@@@@@
903       |
904    /-----\       /-----\
905    | P4  | ----- | P5  |
906    \-----/       \-----/

908    +------+      +------+      +------+
909    | 1001 |      | 1604 |      | 1001 |
910    +------+      +------+      +------+
911    | 1605 |      | 1001 |      | IP   |
912    +------+      +------+      +------+
913    | IP   |      | 1605 |      | Pay- |
914    +------+      +------+      | Load |
915    | Pay- |      | IP   |      +------+
916    | Load |      +------+
917    +------+      | Pay- |
918                  | Load |
919                  +------+

921      Figure 5: Extending network slice over slice policy incapable
922                 device(s).

924 5.2.4. Combining Slice Policy Modes 926 It is possible to employ a combination of the slice policy modes that 927 were discussed in Section 4 to realize a network slice. For example, 928 the data and control plane slice policy mode can be employed in parts of 929 a network, while the control plane slice policy mode can be employed in 930 other parts of the network. The path selection, in such a case, 931 can take into account the slice aggregate specific available network 932 resources. The SS carried within packets allows transit nodes to 933 enforce the corresponding S-PHB on the parts of the network that 934 apply the data plane slice policy mode. The SS can be maintained 935 while traffic traverses nodes that do not enforce the data plane slice 936 policy mode, so that S-PHB enforcement can resume once traffic 937 reaches capable nodes. 939 5.3. Mapping Traffic on Slice Aggregates 941 The usual techniques to steer traffic onto paths are applicable 942 when steering traffic over paths established for a specific slice 943 aggregate. 945 For example, one or more (layer-2 or layer-3) VPN services can be 946 directly mapped to paths established for a slice aggregate. In this 947 case, the per Virtual Routing and Forwarding (VRF) instance traffic 948 that arrives at the Provider Edge (PE) router over external 949 interfaces can be directly mapped to a specific slice aggregate path.
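The per-VRF mapping described above can be sketched as follows (a non-normative illustration; the table and names are hypothetical, and the actual steering mechanism is implementation specific):

```python
# Illustrative PE steering table: traffic arriving in a given VRF
# over an external interface is mapped to the path established for
# a slice aggregate; unmapped VRFs use a default best-effort path.

VRF_TO_SLICE_PATH = {
    "vrf-enterprise-a": "slice-aggregate-path-1",
    "vrf-enterprise-b": "slice-aggregate-path-2",
}

def steer(vrf, default_path="best-effort-path"):
    """Return the slice aggregate path for traffic arriving in a VRF."""
    return VRF_TO_SLICE_PATH.get(vrf, default_path)
```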
950 External interfaces can be further partitioned (e.g., using VLANs) to 951 allow mapping one or more VLANs to specific slice aggregate paths. 953 Another option is to steer traffic to specific destinations directly 954 over multiple slice policies. This allows traffic arriving on any 955 external interface and targeted to such destinations to be directly 956 steered over the slice paths. 958 A third option is to utilize a data plane 959 firewall filter or classifier to match several fields in 960 the incoming packets and decide whether the packet is steered onto a 961 specific slice aggregate. This option allows for applying a rich set 962 of rules to identify specific packets to be mapped to a slice 963 aggregate. However, it requires data plane network resources to be 964 able to perform the additional checks in hardware. 966 6. Control Plane Extensions 968 Routing protocols may need to be extended to carry additional per 969 slice aggregate link state. For example, [RFC5305], [RFC3630], and 970 [RFC7752] define IS-IS, OSPF, and BGP protocol extensions to exchange 971 network link state information that allow ingress TE routers and PCE(s) 972 to perform proper path placement in the network. The extensions required 973 to support network slicing may be defined in other documents, and are 974 outside the scope of this document. 976 The instantiation of a slice policy may need to be automated. 977 Multiple options are possible to facilitate automated 978 distribution of a slice policy to capable devices. 980 For example, a YANG data model for the slice policy may be supported 981 on network devices and controllers. A suitable transport (e.g., 982 NETCONF [RFC6241], RESTCONF [RFC8040], or gRPC) may be used to enable 983 configuration and retrieval of state information for slice policies 984 on network devices.
The slice policy YANG data model is outside the 985 scope of this document, and is defined in 986 [I-D.bestbar-teas-yang-slice-policy]. 988 7. Applicability to Path Control Technologies 990 The slice policy modes described in this document are agnostic to the 991 technology used to set up paths that carry slice aggregate traffic. 992 One or more paths connecting the endpoints of the mapped IETF network 993 slices may be selected to steer the corresponding traffic streams 994 over the resources allocated for the slice aggregate. 996 For example, once the feasible paths within a slice policy topology 997 are selected, it is possible to use the RSVP-TE protocol [RFC3209] to 998 set up or signal the LSPs that would be used to carry the slice 999 aggregate traffic. Specific extensions to the RSVP-TE protocol to enable 1000 signaling of slice aggregate aware RSVP LSPs are outside the scope of 1001 this document. 1003 Alternatively, Segment Routing (SR) [RFC8402] may be used, and the 1004 feasible paths can be realized by steering over specific segments or 1005 segment lists using an SR policy. Further details on how the slice 1006 policy modes presented in this document can be realized over an SR 1007 network are discussed in [I-D.bestbar-spring-scalable-ns] and 1008 [I-D.bestbar-lsr-spring-sa]. 1010 8. IANA Considerations 1012 This document has no IANA actions. 1014 9. Security Considerations 1016 The main goal of network slicing is to allow for varying treatment of 1017 traffic from multiple different network slices that are utilizing a 1018 common network infrastructure and to allow for different levels of 1019 services to be provided for traffic traversing a given network 1020 resource. 1022 A variety of techniques may be used to achieve this, but the end 1023 result will be that some packets may be mapped to specific resources 1024 and may receive different (e.g., better) service treatment than 1025 others.
The mapping of network traffic to a specific slice policy is 1026 indicated primarily by the SS, and hence an adversary may be able to 1027 utilize resources allocated to a specific slice policy by injecting 1028 packets that carry the same SS field. 1030 Such theft-of-service may become a denial-of-service attack when the 1031 modified or injected traffic depletes the resources available to 1032 forward legitimate traffic belonging to a specific slice policy. 1034 The defense against this type of theft and denial-of-service attacks 1035 combines traffic conditioning at slice policy 1036 domain boundaries with mechanisms that ensure the security and integrity of the network 1037 infrastructure within a slice policy domain. 1039 10. Acknowledgement 1041 The authors would like to thank Krzysztof Szarkowicz, Swamy SRK, 1042 Navaneetha Krishnan, Prabhu Raj Villadathu Karunakaran, and Jie Dong 1043 for their review of this document, and for providing valuable 1044 feedback on it. 1046 11. Contributors 1048 The following individuals contributed to this document: 1050 Colby Barth 1051 Juniper Networks 1052 Email: cbarth@juniper.net 1054 Srihari R. Sangli 1055 Juniper Networks 1056 Email: ssangli@juniper.net 1058 Chandra Ramachandran 1059 Juniper Networks 1060 Email: csekar@juniper.net 1062 12. References 1064 12.1. Normative References 1066 [I-D.bestbar-lsr-slice-aware-te] 1067 Britto, W., Shetty, R., Barth, C., Wen, B., Peng, S., and 1068 R. Chen, "IGP Extensions for Support of Slice Aggregate 1069 Aware Traffic Engineering", draft-bestbar-lsr-slice-aware- 1070 te-00 (work in progress), February 2021. 1072 [I-D.bestbar-lsr-spring-sa] 1073 Saad, T., Beeram, V. P., Chen, R., Peng, S., Wen, B., and 1074 D. Ceccarelli, "IGP Extensions for SR Slice Aggregate 1075 SIDs", draft-bestbar-lsr-spring-sa-00 (work in progress), 1076 February 2021. 1078 [I-D.bestbar-spring-scalable-ns] 1079 Saad, T., Beeram, V. P., Chen, R., Peng, S., Wen, B., and 1080 D.
Ceccarelli, "Scalable Network Slicing over SR 1081 Networks", draft-bestbar-spring-scalable-ns-01 (work in 1082 progress), February 2021. 1084 [I-D.bestbar-teas-yang-slice-policy] 1085 Saad, T., Beeram, V. P., Wen, B., Ceccarelli, D., Peng, 1086 S., Chen, R., Contreras, L. M., and X. Liu, "YANG Data 1087 Model for Slice Policy", draft-bestbar-teas-yang-slice- 1088 policy-00 (work in progress), February 2021. 1090 [I-D.decraene-mpls-slid-encoded-entropy-label-id] 1091 Decraene, B., Filsfils, C., Henderickx, W., Saad, T., 1092 Beeram, V. P., and L. Jalil, "Using Entropy Label for 1093 Network Slice Identification in MPLS networks.", draft- 1094 decraene-mpls-slid-encoded-entropy-label-id-01 (work in 1095 progress), February 2021. 1097 [I-D.filsfils-spring-srv6-stateless-slice-id] 1098 Filsfils, C., Clad, F., Camarillo, P., and K. Raza, 1099 "Stateless and Scalable Network Slice Identification for 1100 SRv6", draft-filsfils-spring-srv6-stateless-slice-id-02 1101 (work in progress), January 2021. 1103 [I-D.ietf-lsr-flex-algo] 1104 Psenak, P., Hegde, S., Filsfils, C., Talaulikar, K., and 1105 A. Gulko, "IGP Flexible Algorithm", draft-ietf-lsr-flex- 1106 algo-15 (work in progress), April 2021. 1108 [I-D.kompella-mpls-mspl4fa] 1109 Kompella, K., Beeram, V. P., Saad, T., and I. Meilik, 1110 "Multi-purpose Special Purpose Label for Forwarding 1111 Actions", draft-kompella-mpls-mspl4fa-00 (work in 1112 progress), February 2021. 1114 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1115 Requirement Levels", BCP 14, RFC 2119, 1116 DOI 10.17487/RFC2119, March 1997, 1117 . 1119 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 1120 Label Switching Architecture", RFC 3031, 1121 DOI 10.17487/RFC3031, January 2001, 1122 . 1124 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 1125 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 1126 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 1127 . 
1129 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 1130 (TE) Extensions to OSPF Version 2", RFC 3630, 1131 DOI 10.17487/RFC3630, September 2003, 1132 . 1134 [RFC4915] Psenak, P., Mirtorabi, S., Roy, A., Nguyen, L., and P. 1135 Pillay-Esnault, "Multi-Topology (MT) Routing in OSPF", 1136 RFC 4915, DOI 10.17487/RFC4915, June 2007, 1137 . 1139 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 1140 Engineering", RFC 5305, DOI 10.17487/RFC5305, October 1141 2008, . 1143 [RFC6790] Kompella, K., Drake, J., Amante, S., Henderickx, W., and 1144 L. Yong, "The Use of Entropy Labels in MPLS Forwarding", 1145 RFC 6790, DOI 10.17487/RFC6790, November 2012, 1146 . 1148 [RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and 1149 S. Ray, "North-Bound Distribution of Link-State and 1150 Traffic Engineering (TE) Information Using BGP", RFC 7752, 1151 DOI 10.17487/RFC7752, March 2016, 1152 . 1154 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1155 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1156 May 2017, . 1158 [RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L., 1159 Decraene, B., Litkowski, S., and R. Shakir, "Segment 1160 Routing Architecture", RFC 8402, DOI 10.17487/RFC8402, 1161 July 2018, . 1163 12.2. Informative References 1165 [I-D.ietf-teas-ietf-network-slice-definition] 1166 Rokui, R., Homma, S., Makhijani, K., Contreras, L. M., and 1167 J. Tantsura, "Definition of IETF Network Slices", draft- 1168 ietf-teas-ietf-network-slice-definition-01 (work in 1169 progress), February 2021. 1171 [I-D.ietf-teas-rfc3272bis] 1172 Farrel, A., "Overview and Principles of Internet Traffic 1173 Engineering", draft-ietf-teas-rfc3272bis-11 (work in 1174 progress), April 2021. 1176 [I-D.nsdt-teas-ns-framework] 1177 Gray, E. and J. Drake, "Framework for IETF Network 1178 Slices", draft-nsdt-teas-ns-framework-05 (work in 1179 progress), February 2021. 
1181 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 1182 and W. Weiss, "An Architecture for Differentiated 1183 Services", RFC 2475, DOI 10.17487/RFC2475, December 1998, 1184 . 1186 [RFC2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. 1187 McManus, "Requirements for Traffic Engineering Over MPLS", 1188 RFC 2702, DOI 10.17487/RFC2702, September 1999, 1189 . 1191 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., 1192 and A. Bierman, Ed., "Network Configuration Protocol 1193 (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011, 1194 . 1196 [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF 1197 Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017, 1198 . 1200 Authors' Addresses 1202 Tarek Saad 1203 Juniper Networks 1205 Email: tsaad@juniper.net 1207 Vishnu Pavan Beeram 1208 Juniper Networks 1210 Email: vbeeram@juniper.net 1212 Bin Wen 1213 Comcast 1215 Email: Bin_Wen@cable.comcast.com 1216 Daniele Ceccarelli 1217 Ericsson 1219 Email: daniele.ceccarelli@ericsson.com 1221 Joel Halpern 1222 Ericsson 1224 Email: joel.halpern@ericsson.com 1226 Shaofu Peng 1227 ZTE Corporation 1229 Email: peng.shaofu@zte.com.cn 1231 Ran Chen 1232 ZTE Corporation 1234 Email: chen.ran@zte.com.cn 1236 Xufeng Liu 1237 Volta Networks 1239 Email: xufeng.liu.ietf@gmail.com 1241 Luis M. Contreras 1242 Telefonica 1244 Email: luismiguel.contrerasmurillo@telefonica.com 1246 Reza Rokui 1247 Nokia 1249 Email: reza.rokui@nokia.com