2 TEAS Working Group T. Saad 3 Internet-Draft V. Beeram 4 Intended status: Standards Track Juniper Networks 5 Expires: August 26, 2021 B. Wen 6 Comcast 7 D. Ceccarelli 8 J. Halpern 9 Ericsson 10 S. Peng 11 R. Chen 12 ZTE Corporation 13 X. Liu 14 Volta Networks 15 L. Contreras 16 Telefonica 17 February 22, 2021 19 Realizing Network Slices in IP/MPLS Networks 20 draft-bestbar-teas-ns-packet-02 22 Abstract 24 Network slicing provides the ability to partition a physical network 25 into multiple logical networks of varying sizes, structures, and 26 functions so that each slice can be dedicated to specific services or 27 customers. Network slices need to operate in parallel while 28 providing slice elasticity in terms of network resource allocation. 29 The Differentiated Service (Diffserv) model allows for carrying 30 multiple services on top of a single physical network by relying on 31 compliant nodes to apply specific forwarding treatment (scheduling 32 and drop policy) on to packets that carry the respective Diffserv 33 code point. This document proposes a solution based on the Diffserv 34 model to realize network slicing in IP/MPLS networks. 36 Status of This Memo 38 This Internet-Draft is submitted in full conformance with the 39 provisions of BCP 78 and BCP 79. 41 Internet-Drafts are working documents of the Internet Engineering 42 Task Force (IETF). Note that other groups may also distribute 43 working documents as Internet-Drafts. The list of current Internet- 44 Drafts is at https://datatracker.ietf.org/drafts/current/. 46 Internet-Drafts are draft documents valid for a maximum of six months 47 and may be updated, replaced, or obsoleted by other documents at any 48 time. It is inappropriate to use Internet-Drafts as reference 49 material or to cite them other than as "work in progress."
51 This Internet-Draft will expire on August 26, 2021. 53 Copyright Notice 55 Copyright (c) 2021 IETF Trust and the persons identified as the 56 document authors. All rights reserved. 58 This document is subject to BCP 78 and the IETF Trust's Legal 59 Provisions Relating to IETF Documents 60 (https://trustee.ietf.org/license-info) in effect on the date of 61 publication of this document. Please review these documents 62 carefully, as they describe your rights and restrictions with respect 63 to this document. Code Components extracted from this document must 64 include Simplified BSD License text as described in Section 4.e of 65 the Trust Legal Provisions and are provided without warranty as 66 described in the Simplified BSD License. 68 Table of Contents 70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 71 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 4 72 1.2. Acronyms and Abbreviations . . . . . . . . . . . . . . . 6 73 2. Network Resource Slicing Membership . . . . . . . . . . . . . 6 74 2.1. Dedicated Network Resources . . . . . . . . . . . . . . . 6 75 2.2. Shared Network Resources . . . . . . . . . . . . . . . . 7 76 3. Path Selection . . . . . . . . . . . . . . . . . . . . . . . 7 77 4. Slice Policy Modes . . . . . . . . . . . . . . . . . . . . . 8 78 4.1. Data plane Slice Policy Mode . . . . . . . . . . . . . . 8 79 4.2. Control Plane Slice Policy Mode . . . . . . . . . . . . . 9 80 4.3. Data and Control Plane Slice Policy Mode . . . . . . . . 11 81 5. Slice Policy Instantiation . . . . . . . . . . . . . . . . . 11 82 5.1. Slice Policy Definition . . . . . . . . . . . . . . . . . 12 83 5.1.1. Slice Policy Data Plane Selector . . . . . . . . . . 13 84 5.1.2. Slice Policy Resource Reservation . . . . . . . . . . 17 85 5.1.3. Slice Policy Per Hop Behavior . . . . . . . . . . . . 18 86 5.1.4. Slice Policy Topology . . . . . . . . . . . . . . . . 19 87 5.2. Slice Policy Boundary . . . . . . . . . . . . . . . . . . 
19 88 5.2.1. Slice Policy Edge Nodes . . . . . . . . . . . . . . . 19 89 5.2.2. Slice Policy Interior Nodes . . . . . . . . . . . . . 20 90 5.2.3. Slice Policy Incapable Nodes . . . . . . . . . . . . 20 91 5.2.4. Combining Slice Policy Modes . . . . . . . . . . . . 21 92 5.3. Mapping Traffic on Slice Aggregates . . . . . . . . . . . 22 93 6. Control Plane Extensions . . . . . . . . . . . . . . . . . . 22 94 7. Applicability to Path Control Technologies . . . . . . . . . 23 95 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 96 9. Security Considerations . . . . . . . . . . . . . . . . . . . 23 97 10. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 24 98 11. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 24 99 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 24 100 12.1. Normative References . . . . . . . . . . . . . . . . . . 24 101 12.2. Informative References . . . . . . . . . . . . . . . . . 26 102 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 104 1. Introduction 106 Network slicing allows a Service Provider to create independent and 107 logical networks on top of a common or shared physical network 108 infrastructure. Such network slices can be offered to customers or 109 used internally by the Service Provider to facilitate or enhance 110 their service offerings. A Service Provider can also use network 111 slicing to structure and organize the elements of its infrastructure. 112 This document provides a path control technology agnostic solution 113 that a Service Provider can deploy to realize network slicing in IP/ 114 MPLS networks. 116 The definition of network slice for use within the IETF and the 117 characteristics of IETF network slice are specified in 118 [I-D.ietf-teas-ietf-network-slice-definition]. A framework for 119 reusing IETF VPN and traffic-engineering technologies to realize IETF 120 network slices is discussed in [I-D.nsdt-teas-ns-framework]. 
These 121 documents also discuss the function of an IETF Network Slice 122 Controller and the requirements on its northbound and southbound 123 interfaces. 125 This document introduces the notion of a slice aggregate which 126 comprises one or more IETF network slice traffic streams. It 127 describes how a slice policy can be used to realize a slice aggregate 128 by instantiating specific control and data plane behaviors on select 129 topological elements in IP/MPLS networks. The onus is on the IETF 130 Network Slice Controller to maintain the mapping between one or more 131 IETF network slices and a slice aggregate. The mechanisms used by 132 the controller to determine the mapping are outside the scope of this 133 document. The focus of this document is on the mechanisms required 134 at the device level to address the requirements of network slicing in 135 packet networks. 137 In a Differentiated Service (Diffserv) domain [RFC2475], packets 138 requiring the same forwarding treatment (scheduling and drop policy) 139 are classified and marked with a Class Selector (CS) at domain 140 ingress nodes. At transit nodes, the CS field inside the packet is 141 inspected to determine the specific forwarding treatment to be 142 applied before the packet is forwarded further. Similar principles 143 are adopted by this document to realize network slicing. 145 When logical networks representing slice aggregates are realized on 146 top of a shared physical network infrastructure, it is important to 147 steer traffic on the specific network resources allocated for the 148 slice aggregate. In packet networks, the packets that traverse a 149 specific slice aggregate MAY be identified by one or more specific 150 fields carried within the packet.
A slice policy ingress boundary 151 node populates the respective field(s) in packets that enter a slice 152 aggregate to allow interior slice policy nodes to identify those 153 packets and apply the specific Per Hop Behavior (PHB) that is 154 associated with the slice aggregate. The PHB defines the scheduling 155 treatment and, in some cases, the packet drop probability. 157 The slice aggregate traffic may further carry a Diffserv CS to allow 158 differentiation of forwarding treatments for packets within a slice 159 aggregate. For example, when using MPLS as a dataplane, it is 160 possible to identify packets belonging to the same slice aggregate by 161 carrying a global MPLS label in the label stack that identifies the 162 slice aggregate in each packet. Additional Diffserv classification 163 may be indicated in the Traffic Class (TC) bits of the global MPLS 164 label to allow further differentiation of forwarding treatments for 165 traffic traversing the same slice aggregate network resources. 167 This document covers different modes of slice policy and discusses 168 how each slice policy mode can ensure proper placement of slice 169 aggregate paths and respective treatment of slice aggregate traffic. 171 1.1. Terminology 173 The reader is expected to be familiar with the terminology specified 174 in [I-D.ietf-teas-ietf-network-slice-definition] and 175 [I-D.nsdt-teas-ns-framework]. 177 The following terminology is used in the document: 179 IETF network slice: 180 a well-defined composite of a set of endpoints, the connectivity 181 requirements between subsets of these endpoints, and associated 182 requirements; the term 'network slice' in this document refers to 183 'IETF network slice' 184 [I-D.ietf-teas-ietf-network-slice-definition]. 186 IETF Network Slice Controller (NSC): 187 controller that is used to realize an IETF network slice 188 [I-D.ietf-teas-ietf-network-slice-definition].
190 Slice policy: 191 a policy construct that enables instantiation of mechanisms in 192 support of IETF network slice specific control and data plane 193 behaviors on select topological elements; the enforcement of a 194 slice policy results in the creation of a slice aggregate. 196 Slice aggregate: 197 a collection of packets that match a slice policy selection 198 criteria and are given the same forwarding treatment; a slice 199 aggregate comprises one or more IETF network slice traffic 200 streams; the mapping of one or more IETF network slices to a slice 201 aggregate is maintained by the IETF Network Slice Controller. 203 Slice policy capable node: 204 a node that supports one of the slice policy modes described in 205 this document. 207 Slice policy incapable node: 208 a node that does not support any of the slice policy modes 209 described in this document. 211 Slice aggregate traffic: 212 traffic that is forwarded over network resources associated with a 213 specific slice aggregate. 215 Slice aggregate path: 216 a path that is set up over network resources associated with a 217 specific slice aggregate. 219 Slice aggregate packet: 220 a packet that traverses network resources associated with a 221 specific slice aggregate. 223 Slice policy topology: 224 a set of topological elements associated with a slice policy. 226 Slice aggregate aware TE: 227 a mechanism for TE path selection that takes into account the 228 available network resources associated with a specific slice 229 aggregate. 231 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 232 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 233 "OPTIONAL" in this document are to be interpreted as described in BCP 234 14 [RFC2119] [RFC8174] when, and only when, they appear in all 235 capitals, as shown here. 237 1.2.
Acronyms and Abbreviations 239 BA: Behavior Aggregate 241 CS: Class Selector 243 SS: Slice Selector 245 S-PHB: Slice policy Per Hop Behavior as described in Section 5.1.3 247 SSL: Slice Selector Label as described in Section 5.1.1 249 SSLI: Slice Selector Label Indicator 251 SLA: Service Level Agreement 253 SLO: Service Level Objective 255 Diffserv: Differentiated Services 257 MPLS: Multiprotocol Label Switching 259 LSP: Label Switched Path 261 RSVP: Resource Reservation Protocol 263 TE: Traffic Engineering 265 SR: Segment Routing 267 VRF: VPN Routing and Forwarding 269 2. Network Resource Slicing Membership 271 A slice aggregate can span multiple parts of an IP/MPLS network 272 (e.g., all or specific network resources in the access, aggregation, 273 or core network), and can stretch across multiple domains 274 administered by a provider. A slice policy topology may include all 275 or a subset of the physical nodes and links of an IP/MPLS network; 276 it may be composed of dedicated and/or shared network resources 277 (e.g., in terms of processing power, storage, and bandwidth). 279 2.1. Dedicated Network Resources 281 Physical network resources may be fully dedicated to a specific slice 282 aggregate. For example, traffic belonging to a slice aggregate can 283 traverse dedicated network resources without being subjected to 284 contention from traffic of other slice aggregates. Dedicated network 285 resource slicing allows for simple partitioning of the physical 286 network resources amongst slice aggregates without the need to 287 distinguish packets traversing the dedicated network resources since 288 only one slice aggregate traffic stream can traverse the dedicated 289 resource at any time. 291 2.2. Shared Network Resources 293 To optimize network utilization, sharing of the physical network 294 resources may be desirable. In such a case, the same physical network 295 resource capacity is divided among multiple slice aggregates.
Shared 296 network resources can be partitioned in the data plane (for example 297 by applying hardware policers and shapers) and/or partitioned in the 298 control plane by providing a logical representation of the physical 299 link that has a subset of the network resources available to it. 301 3. Path Selection 303 Path selection in a network can be network state dependent, or 304 network state independent as described in Section 5.1 of 305 [I-D.ietf-teas-rfc3272bis]. The latter is the choice commonly used 306 by IGPs when selecting a best path to a destination prefix, while the 307 former is used by ingress TE routers, or Path Computation Elements 308 (PCEs) when optimizing the placement of a flow based on the current 309 network resource utilization. 311 For example, when steering traffic on a delay optimized path, the IGP 312 can use its link state database's view of the network topology to 313 compute a path optimizing for the delay metric of each link in the 314 network resulting in a cumulative lowest delay path. 316 When path selection is network state dependent, the path computation 317 can leverage Traffic Engineering mechanisms (e.g., as defined in 318 [RFC2702]) to compute feasible paths taking into account the incoming 319 traffic demand rate and the current state of the network. This helps 320 avoid overly utilized links and reduces the chance of congestion 321 on traversed links. 323 To enable TE path placement, the link state is advertised with 324 current reservations, thereby reflecting the available bandwidth on 325 each link. Such link reservations may be maintained centrally on a 326 network-wide resource manager, or distributed on devices (as 327 usually done with RSVP). TE extensions exist today to allow IGPs 328 (e.g., [RFC3630] and [RFC5305]), and BGP-LS [RFC7752] to advertise 329 such link state reservations.
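The network-state-dependent selection described above can be sketched as a simple constrained SPF: prune links whose advertised available bandwidth cannot fit the incoming demand, then compute the lowest cumulative-delay path over what remains. The following Python sketch is purely illustrative and not part of the specified mechanisms; the topology, node names, and attribute fields are hypothetical:

```python
import heapq

def cspf(links, src, dst, demand_bw):
    """Constrained SPF sketch: drop links that cannot carry the demand,
    then run Dijkstra minimizing cumulative delay on what remains."""
    # Keep only links with enough available (reservable) bandwidth.
    adj = {}
    for (a, b), attrs in links.items():
        if attrs['avail_bw'] >= demand_bw:
            adj.setdefault(a, []).append((b, attrs['delay']))
    # Dijkstra on the pruned topology, minimizing total delay.
    heap, seen = [(0, src, [src])], set()
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_delay in adj.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (delay + link_delay, nbr, path + [nbr]))
    return None  # no feasible path for this demand

# Invented four-node topology; delays and bandwidths are arbitrary.
links = {
    ('PE1', 'P1'): {'delay': 10, 'avail_bw': 100},
    ('P1', 'PE2'): {'delay': 10, 'avail_bw': 40},
    ('PE1', 'P2'): {'delay': 25, 'avail_bw': 100},
    ('P2', 'PE2'): {'delay': 25, 'avail_bw': 100},
}
print(cspf(links, 'PE1', 'PE2', 80))  # → (50, ['PE1', 'P2', 'PE2'])
```

Note how an 80-unit demand is pushed off the lowest-delay path because the P1-PE2 link lacks headroom, which is exactly the behavior that distinguishes state-dependent selection from plain IGP shortest-path routing.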
331 When network resource reservations are also slice aggregate aware, 332 the link state can carry per slice aggregate state (e.g., reservable 333 bandwidth). This allows path computation to take into account the 334 specific network resources available for a slice aggregate when 335 determining the path for a specific flow. In this case, we refer to 336 the process of path placement and path provisioning as slice 337 aggregate aware TE. 339 4. Slice Policy Modes 341 A slice policy can be used to dictate if the partitioning of the 342 shared network resources amongst multiple slice aggregates can be 343 achieved by realizing slice aggregates in: 345 a) data plane only, or 347 b) control plane only, or 349 c) both control and data planes. 351 4.1. Data plane Slice Policy Mode 353 The physical network resources can be partitioned on network devices 354 by applying a Per Hop Behavior (PHB) onto packets that 355 traverse the network devices. In the Diffserv model, a Class 356 Selector (CS) is carried in the packet and is used by transit nodes 357 to apply the PHB that determines the scheduling treatment and drop 358 probability for packets. 360 When the data plane slice policy mode is applied, packets need to be 361 forwarded on the specific slice aggregate network resources and need 362 to receive the specific forwarding treatment that is dictated in the 363 slice policy (refer to Section 5.1 below). A Slice Selector (SS) 364 MUST be carried in each packet to identify the slice aggregate that 365 it belongs to. 367 The ingress node of a slice policy domain, in addition to marking 368 packets with a Diffserv CS, MAY also add an SS to each slice 369 aggregate packet. The transit nodes within a slice policy domain MAY 370 use the SS to associate packets with a slice aggregate and to 371 determine the Slice policy Per Hop Behavior (S-PHB) that is applied 372 to the packet (refer to Section 5.1.3 for further details).
The CS 373 MAY be used to apply a Diffserv PHB onto the packet to allow 374 differentiation of traffic treatment within the same slice aggregate. 376 When data plane only slice policy mode is used, routers may rely on a 377 network state independent view of the topology to determine the best 378 paths to reach destinations. In this case, the best path selection 379 dictates the forwarding path of packets to the destination. The SS 380 field carried in each packet determines the specific S-PHB treatment 381 along the selected path. 383 For example, the Segment-Routing Flexible Algorithm 384 [I-D.ietf-lsr-flex-algo] may be deployed in a network to steer 385 packets on the IGP computed lowest cumulative delay path. A slice 386 policy may be used to allow links along the least latency path to 387 share their data plane resources amongst multiple slice aggregates. In 388 this case, the packets that are steered on a specific slice policy 389 carry the SS field that enables routers (along with the Diffserv CS) 390 to determine the S-PHB and enforce slice aggregate traffic streams. 392 4.2. Control Plane Slice Policy Mode 394 The physical network resources in the network can be logically 395 partitioned by having a representation of network resources appear in 396 a virtual topology. The virtual topology can contain all or a subset 397 of the physical network resources. The logical network resources 398 that appear in the virtual topology can reflect a part of, the whole 399 of, or an amount in excess of the physical network resource capacity 400 (when oversubscription is desirable). For example, a physical link 401 bandwidth can be divided into fractions, each dedicated to a slice 402 aggregate. Each fraction of the physical link bandwidth MAY be 403 represented as a logical link in a virtual topology that is used when 404 determining paths associated with a specific slice aggregate.
The 405 virtual topology associated with the slice policy can be used by 406 routing protocols, or by the ingress/PCE when computing slice 407 aggregate aware TE paths. 409 To perform network state dependent path computation in this mode 410 (slice aggregate aware TE), the resource reservation on each link 411 needs to be slice aggregate aware. Multiple slice policies may be 412 applied on the same physical link. The slice aggregate network 413 resource availability on links is updated (and may eventually be 414 advertised in the network) when new paths are placed in the network. 415 The slice aggregate resource reservation, in this case, can be 416 maintained on each device or be centralized on a resource reservation 417 manager that holds reservation states on links in the network. 419 Multiple slice aggregates can form a group and share the available 420 network resources allocated to each slice aggregate. In this case, a 421 node can update the reservable bandwidth for each slice aggregate to 422 take into consideration the available bandwidth from other slice 423 aggregates in the same group. 425 For illustration purposes, the diagram below represents bandwidth 426 isolation or sharing amongst a group of slice aggregates. In 427 Figure 1a, the slice aggregates S_AGG1, S_AGG2, S_AGG3, and S_AGG4 428 do not share any bandwidth with each other. In Figure 1b, the 429 slice aggregates S_AGG1 and S_AGG2 can share with each other the 430 available bandwidth portion allocated to each of them. Similarly, S_AGG3 and S_AGG4 431 can share amongst themselves any available bandwidth allocated to 432 them, but they cannot share available bandwidth allocated to S_AGG1 433 or S_AGG2. In both cases, the Max Reservable Bandwidth may exceed 434 the actual physical link resource capacity to allow for 435 oversubscription.
437 I-----------------------------I I-----------------------------I 438 <--S_AGG1-> I I-----------------I I 439 I---------I I I <-S_AGG1-> I I 440 I I I I I-------I I I 441 I---------I I I I I I I 442 I I I I-------I I I 443 <-----S_AGG2------> I I I I 444 I-----------------I I I <-S_AGG2-> I I 445 I I I I I---------I I I 446 I-----------------I I I I I I I 447 I I I I---------I I I 448 <---S_AGG3----> I I I I 449 I-------------I I I S_AGG1 + S_AGG2 I I 450 I I I I-----------------I I 451 I-------------I I I I 452 I I I I 453 <---S_AGG4----> I I-----------------I I 454 I-------------I I I <-S_AGG3-> I I 455 I I I I I-------I I I 456 I-------------I I I I I I I 457 I I I I-------I I I 458 I S_AGG1+S_AGG2+S_AGG3+S_AGG4 I I I I 459 I I I <-S_AGG4-> I I 460 I-----------------------------I I I---------I I I 461 <--Max Reservable Bandwidth--> I I I I I 462 I I---------I I I 463 I I I 464 I S_AGG3 + S_AGG4 I I 465 I-----------------I I 466 I S_AGG1+S_AGG2+S_AGG3+S_AGG4 I 467 I I 468 I-----------------------------I 469 <--Max Reservable Bandwidth--> 471 (a) No bandwidth sharing (b) Sharing bandwidth between 472 between slice aggregates. slice aggregates of the 473 same group 475 Figure 1: Bandwidth Isolation/Sharing. 477 4.3. Data and Control Plane Slice Policy Mode 479 In order to support strict guarantees for slice aggregates, the 480 network resources can be partitioned in both the control plane and 481 data plane. 483 The control plane partitioning allows the creation of customized 484 topologies per slice aggregate that routers or a Path Computation 485 Engine (PCE) can use to determine optimal path placement for specific 486 demand flows (Slice aggregate aware TE). 488 The data plane partitioning protects slice aggregate traffic from 489 network resource contention that could occur due to bursts in traffic 490 from other slice aggregates traversing the same shared network 491 resource. 493 5. 
Slice Policy Instantiation 495 A network slice can span multiple technologies and multiple 496 administrative domains. Depending on the network slice consumer's 497 requirements, a network slice can be differentiated from other 498 network slices in terms of data, control or management planes. 500 The consumer of a network slice expresses their intent by specifying 501 requirements rather than mechanisms to realize the slice. The 502 requirements for a network slice can vary and can be expressed in 503 terms of connectivity needs between end-points (point-to-point, 504 point-to-multipoint or multipoint-to-multipoint) with customizable 505 network capabilities that may include data speed, quality, latency, 506 reliability, security, and services (refer to 507 [I-D.ietf-teas-ietf-network-slice-definition] for more details). 508 These capabilities are always provided based on a Service Level 509 Agreement (SLA) between the network slice consumer and the provider. 511 The onus is on the network slice controller to consume the service 512 layer slice intent and realize it with an appropriate slice policy. 513 Multiple IETF network slices can be mapped to the same slice policy 514 resulting in a slice aggregate. The network wide consistent slice 515 policy definition is distributed to the devices in the network as 516 shown in Figure 2. The specification of the network slice intent on 517 the northbound interface of the controller and the mechanism used to 518 map the network slice to a slice policy are outside the scope of this 519 document. 
521 | 522 | IETF Network Slice 523 | (service) 524 +--------------------+ 525 | IETF Network | 526 | Slice Controller | 527 +--------------------+ 528 | 529 | Slice Policy 530 /|\ 531 / | \ 532 slice policy capable 533 nodes/controllers 534 / / | \ \ 535 v v v v v 536 xxxxxxxxxxxxxxxxxxxx 537 xxxx xxxx 538 xxxx Slice xxxx 539 xxxx Aggregate xxxx 540 xxxx xxxx 541 xxxxxxxxxxxxxxxxxxxx 543 <------ Path Control ------> 544 RSVP-TE/SR-Policy/SR-FlexAlgo 546 Figure 2: Slice Policy Instantiation. 548 5.1. Slice Policy Definition 550 The slice policy is a network-wide construct that is consumed by 551 network devices, and may include rules that control the following: 553 o Data plane specific policies: This includes the SS, any firewall 554 rules or flow-spec filters, and QoS profiles associated with the 555 slice policy and any classes within it. 557 o Control plane specific policies: This includes guaranteed 558 bandwidth, any network resource sharing amongst slice policies, 559 and reservation preference to prioritize any reservations of a 560 specific slice policy over others. 562 o Topology membership policies: This defines policies that dictate 563 node/link/function network resource topology association for a 564 specific slice policy. 566 There is a desire for flexibility in realizing network slices to 567 support the services across networks consisting of products from 568 multiple vendors. These networks may also be grouped into disparate 569 domains and deploy various path control technologies and tunnel 570 techniques to carry traffic across the network. It is expected that 571 a standardized data model for slice policy will facilitate the 572 instantiation and management of slice aggregates on slice policy 573 capable nodes.
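As an illustration of the three rule groups above, a slice policy might be conveyed to a device as structured data along the following lines. This is a hypothetical, non-normative sketch; the field names and values are invented for illustration and do not reflect the standardized data model the text anticipates:

```python
# Illustrative (non-normative) slice policy definition. Every field
# name here is hypothetical; it only mirrors the three rule groups
# described in Section 5.1 (data plane, control plane, topology).
slice_policy = {
    "name": "slice-policy-blue",
    "data_plane": {
        "slice_selector": {"type": "mpls-label", "value": 1001},
        "qos_profile": "blue-profile",   # S-PHB scheduling/drop profile
    },
    "control_plane": {
        "guaranteed_bandwidth": "10G",
        "sharing_group": "group-a",      # may share unused bw within group
        "preference": 100,               # reservation priority vs. others
    },
    "topology": {
        "include_links": ["core-*"],     # topology membership filter
        "exclude_nodes": ["P3"],
    },
}

def selector_for(policy):
    """Return the (type, value) slice selector a node would program."""
    ss = policy["data_plane"]["slice_selector"]
    return ss["type"], ss["value"]

print(selector_for(slice_policy))  # → ('mpls-label', 1001)
```

A definition in this spirit could be pushed to devices over NETCONF/RESTCONF or flooded via a routing protocol, as the following paragraph notes; the wire encoding is a separate concern from the policy content sketched here.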
575 It is also possible to distribute the slice policy to network devices 576 using several mechanisms, including protocols such as NETCONF or 577 RESTCONF, or exchanging it using a suitable routing protocol that 578 network devices participate in (such as IGP(s) or BGP). 580 5.1.1. Slice Policy Data Plane Selector 582 A router MUST be able to identify a packet belonging to a slice 583 aggregate before it can apply the proper forwarding treatment or 584 S-PHB associated with the slice policy. One or more fields within 585 the packet MAY be used as an SS to do this. 587 Forwarding Address Slice Selector: 589 One approach to distinguish packets targeted to a destination but 590 belonging to different slice aggregates is to assign multiple 591 forwarding addresses (or multiple MPLS label bindings in the case 592 of an MPLS network) for the same node - one for each slice aggregate 593 that traffic can be steered on towards the destination. For 594 example, when realizing a network slice over an IP dataplane, the 595 same destination can be assigned multiple IP addresses (or 596 multiple SRv6 locators in the case of an SRv6 network) to enable 597 steering of traffic to the same destination over multiple slice 598 policies. 600 Similarly, for an MPLS dataplane, [RFC3031] states in Section 2.1 601 that: 'Some routers analyze a packet's network layer header not 602 merely to choose the packet's next hop, but also to determine a 603 packet's "precedence" or "class of service"'. In such a case, the 604 same destination can be assigned multiple MPLS label bindings 605 corresponding to an LSP that traverses network resources of a 606 specific slice aggregate towards the destination. 608 The slice aggregate specific forwarding address (or MPLS 609 forwarding label) can be carried in the packet to allow (IP or 610 MPLS) routers along the path to identify the packets and apply the 611 respective S-PHB and forwarding treatment.
This approach requires 612 maintaining per slice aggregate state for each destination in the 613 network in both the control and data plane and on each router in 614 the network. 616 For example, consider a network slicing provider with a network 617 composed of 'N' nodes, each with 'K' adjacencies to its neighbors. 618 Assuming a node is reachable in as many as 'M' slice policies, the 619 node will have to assign and advertise reachability for 'M' unique 620 forwarding addresses, or MPLS forwarding labels. Similarly, each 621 node will have to assign a unique forwarding address (or MPLS 622 forwarding label) per slice policy for each of its 'K' adjacencies to 623 enable strict steering over each. Consequently, the control plane at any node 624 in the network will need to store as many as (N+K)*M states. In 625 addition, a node will have to store and program (N+K)*M forwarding 626 address or label entries in its Forwarding Information Base 627 (FIB) to realize this. Therefore, as the 'N', 'K', and 'M' parameters 628 increase, this approach will have scalability challenges both in 629 the control and data planes. 631 Global Identifier Slice Selector: 633 A slice policy can include a global Slice Selector (SS) field that can 634 be carried in each packet to identify the packet belonging to a 635 specific slice aggregate, independent of the forwarding address or 636 MPLS forwarding label that is bound to the destination. Routers 637 within the slice policy domain can use the forwarding address (or 638 MPLS forwarding label) to determine the forwarding path, and use 639 the SS field in the packet to determine the specific S-PHB that 640 gets applied on the packet. This approach allows better scale 641 since it relies on a single forwarding address or MPLS label 642 binding to be used independent of the number of slice policies 643 required along the path.
In this case, the additional SS field 644 will need to be carried and maintained in each packet while it 645 traverses the slice policy domain. 647 The SS can be carried in one of multiple fields within the packet, 648 depending on the dataplane type used. For example, in MPLS 649 networks, the SS can be represented as a global MPLS label that is 650 carried in the packet's MPLS label stack. All packets that belong 651 to the same slice aggregate MAY carry the same SS label in the 652 MPLS label stack. It is also possible to have multiple SS 653 labels that map to the same slice policy S-PHB. 655 The MPLS SS Label (SSL) may appear in several positions in the 656 MPLS label stack. For example, the MPLS SSL can be maintained at 657 the top of the label stack while the packet is forwarded along the 658 MPLS path. In this case, the forwarding at each hop is determined 659 by the forwarding label that resides below the SSL. Figure 3 660 shows an example where the SSL appears at the top of the MPLS 661 label stack in a packet. PE1 is a slice policy edge node that 662 receives a packet that needs to be steered over a slice specific 663 MPLS path. PE1 computes the SR path composed of the Label Segment- 664 List={9012, 9023}. It imposes the SSL 1001 corresponding to Slice- 665 ID 1001, followed by the SR path Segment-List. At P1, the top 666 label sets the context of the packet to Slice-ID=1001. The 667 forwarding of the packet is determined by inspecting the 668 forwarding label (below the SSL) within the context of the SSL.
670            SR Adj-SID:        SSL: 1001
671            9012: P1-P2
672            9023: P2-PE2

674    /-----\        /-----\        /-----\        /-----\
675    | PE1 | ------ | P1  | ------ | P2  | ------ | PE2 |
676    \-----/        \-----/        \-----/        \-----/

678  In
679  packet:
680    +------+       +------+       +------+       +------+
681    |  IP  |       | 1001 |       | 1001 |       | 1001 |
682    +------+       +------+       +------+       +------+
683    | Pay- |       | 9012 |       | 9023 |       |  IP  |
684    | Load |       +------+       +------+       +------+
685    +------+       | 9023 |       |  IP  |       | Pay- |
686                   +------+       +------+       | Load |
687                   |  IP  |       | Pay- |       +------+
688                   +------+       | Load |
689                   | Pay- |       +------+
690                   | Load |
691                   +------+

693             Figure 3: SSL at top of label stack.

695 The SSL can also reside at the bottom of the label stack. For 696 example, the VPN service label may also be used as an SSL which 697 allows steering of traffic towards one or more egress PEs over the 698 same slice aggregate. In such cases, one or more service labels 699 MAY be mapped to the same slice aggregate. The same VPN label may 700 also be allocated on all Egress PEs so it can serve as a single 701 SSL for a specific slice policy. Alternatively, a range of VPN 702 labels may be mapped to a single slice aggregate to allow carrying 703 multiple VPNs over the same slice aggregate as shown in Figure 4.

705            SR Adj-SID:        SSL (VPN) on PE2: 1001
706            9012: P1-P2
707            9023: P2-PE2

709    /-----\        /-----\        /-----\        /-----\
710    | PE1 | ------ | P1  | ------ | P2  | ------ | PE2 |
711    \-----/        \-----/        \-----/        \-----/

713  In
714  packet:
715    +------+       +------+       +------+       +------+
716    |  IP  |       | 9012 |       | 9023 |       | 1001 |
717    +------+       +------+       +------+       +------+
718    | Pay- |       | 9023 |       | 1001 |       |  IP  |
719    | Load |       +------+       +------+       +------+
720    +------+       | 1001 |       |  IP  |       | Pay- |
721                   +------+       +------+       | Load |
722                   |  IP  |       | Pay- |       +------+
723                   +------+       | Load |
724                   | Pay- |       +------+
725                   | Load |
726                   +------+

728      Figure 4: SSL or VPN label at bottom of label stack.

730 In some cases, the position of the SSL may not be at a fixed place 731 in the MPLS label header.
In this case, transit routers cannot 732 expect the SSL at a fixed place in the MPLS label stack. This can 733 be addressed by introducing a new Special Purpose Label, allocated 734 from the reserved label space, called a Slice Selector Label 735 Indicator (SSLI). The slice policy ingress boundary node, in this 736 case, will need to impose at least two additional MPLS labels (SSLI 737 + SSL) to identify the slice aggregate that the packets belong to, 738 as shown in Figure 5.

740            SR Adj-SID:        SSLI/SSL: SSLI/1001
741            9012: P1-P2
742            9023: P2-PE2

744    /-----\        /-----\        /-----\        /-----\
745    | PE1 | ------ | P1  | ------ | P2  | ------ | PE2 |
746    \-----/        \-----/        \-----/        \-----/

748  In
749  packet:
750    +------+       +------+       +------+       +------+
751    |  IP  |       | 9012 |       | 9023 |       | SSLI |
752    +------+       +------+       +------+       +------+
753    | Pay- |       | 9023 |       | SSLI |       | 1001 |
754    | Load |       +------+       +------+       +------+
755    +------+       | SSLI |       | 1001 |       |  IP  |
756                   +------+       +------+       +------+
757                   | 1001 |       |  IP  |       | Pay- |
758                   +------+       +------+       | Load |
759                   |  IP  |       | Pay- |       +------+
760                   +------+       | Load |
761                   | Pay- |       +------+
762                   | Load |
763                   +------+

765   Figure 5: SSLI and bottom SSL at bottom of label stack.

767 When the slice is realized over an IP dataplane, the SSL can be 768 encoded in the IP header. For example, the SSL can be encoded in 769 a portion of the IPv6 Flow Label field as described in 770 [I-D.filsfils-spring-srv6-stateless-slice-id]. 772 5.1.2. Slice Policy Resource Reservation 774 Bandwidth and network resource allocation strategies for slice 775 policies are essential to achieve optimal placement of paths within 776 the network while still meeting the target SLOs. 778 Resource reservation allows for managing the available bandwidth 779 and for prioritizing existing allocations to enable preference- 780 based preemption when contention on a specific network resource 781 arises. Sharing of a network resource's available bandwidth amongst 782 a group of slice policies may also be desirable.
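Such group-shared accounting can be sketched as follows. This is a hypothetical model, not from any protocol specification; the class, member names, and numbers are illustrative only.

```python
# Hypothetical sketch: slice aggregates in the same group may borrow
# link headroom that other members of the group are not currently
# using, capped by their own reservable amount.

class SharedBandwidthGroup:
    def __init__(self, link_capacity):
        self.link_capacity = link_capacity
        self.in_use = {}  # slice-aggregate id -> bandwidth in use

    def available_for(self, slice_id, reservable):
        # A member may use up to its own reservable amount, limited by
        # whatever the rest of the group has left unused on the link.
        used_by_others = sum(bw for s, bw in self.in_use.items()
                             if s != slice_id)
        return min(reservable, self.link_capacity - used_by_others)

group = SharedBandwidthGroup(link_capacity=100)
group.in_use = {"blue": 30}
# "red" may currently use up to 70 units, even if its nominal
# reservable amount is 90.
print(group.available_for("red", reservable=90))  # 70
```

The point of the sketch is only that a member's usable bandwidth at any instant depends on what the rest of its group is actually consuming, not on a static per-slice partition.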
For example, a 783 slice aggregate may not always be using all of its reservable 784 bandwidth; this allows other slice policies in the same group to use 785 the available bandwidth resources. 787 Congestion on shared network resources may result from sub-optimal 788 placement of paths in different slice policies. When this occurs, 789 preemption of some slice aggregate specific paths may be desirable to 790 alleviate congestion. A preference based allocation scheme enables 791 prioritization of the slice aggregate paths that can be preempted. 793 Since the network characteristics and state can change over time, the 794 slice policy topology and its state also need to be propagated in 795 the network to enable ingress TE routers or Path Computation 796 Elements (PCEs) to perform accurate path placement based on the 797 current state of the slice policy network resources. 799 5.1.3. Slice Policy Per Hop Behavior 801 In Diffserv terminology, the forwarding behavior that is assigned to 802 a specific class is called a Per Hop Behavior (PHB). The PHB defines 803 the forwarding precedence that a packet marked with a specific CS 804 receives in relation to other traffic on the Diffserv-aware network. 806 A Slice policy Per Hop Behavior (S-PHB) is the externally observable 807 forwarding behavior applied to a specific packet belonging to a slice 808 aggregate. The goal of an S-PHB is to provide a specified amount of 809 network resources for traffic belonging to a specific slice 810 aggregate. A single slice policy may also support multiple 811 forwarding treatments or services that can be carried over the same 812 logical network. 814 The slice aggregate traffic may be identified at slice policy ingress 815 boundary nodes by carrying an SS that allows routers to apply a 816 specific forwarding treatment that guarantees the SLA(s). 818 With Differentiated Services (Diffserv) it is possible to carry 819 multiple services over a single converged network.
Packets requiring 820 the same forwarding treatment are marked with a Class Selector (CS) 821 at domain ingress nodes. Up to eight classes or Behavior Aggregates 822 (BAs) may be supported for a given Forwarding Equivalence Class (FEC) 823 [RFC2475]. To support multiple forwarding treatments over the same 824 slice aggregate, a slice aggregate packet MAY also carry a Diffserv 825 CS to identify the specific Diffserv forwarding treatment to be 826 applied to the traffic belonging to the same slice policy. 828 At transit nodes, the CS field carried inside the packets is used to 829 determine the specific PHB, which determines the forwarding and 830 scheduling treatment and, in some cases, the drop probability for 831 each packet. 833 5.1.4. Slice Policy Topology 835 A key element of the slice policy is a customized topology that may 836 include the full physical network topology or a subset of it. The 837 slice policy topology could also span multiple administrative domains 838 and/or multiple dataplane technologies. 840 A slice policy topology can overlap or share a subset of links with 841 another slice policy topology. A number of topology filtering 842 policies can be defined as part of the slice policy to limit the 843 specific topology elements that belong to a slice policy. For 844 example, a topology filtering policy can leverage Resource Affinities 845 as defined in [RFC2702] to include or exclude certain links for a 846 specific slice aggregate. The slice policy may also include a 847 reference to a predefined topology (e.g. derived from a Flexible 848 Algorithm Definition (FAD) as defined in [I-D.ietf-lsr-flex-algo], or 849 a Multi-Topology ID as defined in [RFC4915]). 851 5.2. Slice Policy Boundary 853 A network slice originates at the edge nodes of a network slice 854 provider.
Traffic that is steered over the corresponding slice 855 policy may traverse slice policy capable interior nodes as well as 856 slice policy incapable interior nodes. 858 The network slice may encompass one or more domains administered by a 859 provider; for example, an organization's intranet or an ISP's 860 network. The network provider is responsible for ensuring that 861 adequate network resources are provisioned and/or reserved to support 862 the SLAs offered by the network end-to-end. 864 5.2.1. Slice Policy Edge Nodes 866 Slice policy edge nodes sit at the boundary of a network slice 867 provider network and receive traffic that requires steering over 868 network resources specific to a slice aggregate. These edge nodes 869 are responsible for identifying slice aggregate specific traffic 870 flows by possibly inspecting multiple fields from inbound packets 871 (e.g. implementations may inspect the IP traffic's network 5-tuple 872 in the IP and transport protocol headers) to decide onto which slice 873 policy the traffic can be steered. 875 Network slice ingress nodes may condition the inbound traffic at 876 network boundaries in accordance with the requirements or rules of 877 each service's SLAs. The requirements and rules for network slice 878 services are set using mechanisms which are outside the scope of this 879 document. 881 When data plane slice policy is applied, the slice policy ingress 882 boundary nodes are responsible for adding a suitable SS onto packets 883 that belong to a specific slice aggregate. In addition, edge nodes 884 MAY mark the corresponding Diffserv CS to differentiate between 885 different types of traffic carried over the same slice aggregate. 887 5.2.2.
Slice Policy Interior Nodes 889 A slice policy interior node receives slice traffic and MAY be able 890 to identify the packets belonging to a specific slice aggregate by 891 inspecting the SS field carried inside each packet, or by inspecting 892 other fields within the packet that may identify the traffic streams 893 that belong to a specific slice aggregate. For example, when data 894 plane slice policy is applied, interior nodes can use the SS carried 895 within the packet to apply the corresponding S-PHB forwarding 896 behavior. Nodes within the network slice provider network may also 897 inspect the Diffserv CS within each packet to apply a per Diffserv 898 class PHB within the slice policy, and allow differentiation of 899 forwarding treatments for packets forwarded over the same slice 900 aggregate network resources. 902 5.2.3. Slice Policy Incapable Nodes 904 Packets that belong to a slice aggregate may need to traverse nodes 905 that are slice policy incapable. In this case, several options are 906 possible to allow the slice traffic to continue to be forwarded over 907 such devices and to resume the slice policy forwarding 908 treatment once the traffic reaches devices that are slice policy 909 capable. 911 When data plane slice policy is applied, packets carry an SS to allow 912 slice interior nodes to identify them. To enable end-to-end network 913 slicing, the SS MUST be maintained in the packets as they traverse 914 devices within the network - including slice policy incapable 915 devices. 917 For example, when the SS is an MPLS label at the bottom of the MPLS 918 label stack, packets can traverse devices that are slice policy 919 incapable without any further considerations. On the other hand, 920 when the SSL is at the top of the MPLS label stack, packets can be 921 tunneled over (bypassing) the slice policy incapable devices 922 towards the next device that supports slice policy, as shown in 923 Figure 6.
925       SR Node-SID:     SSL: 1001     @@@: slice policy enforced
926       1601: P1                       ...: slice policy not enforced
927       1602: P2
928       1603: P3
929       1604: P4
930       1605: P5

932      @@@@@@@@@@@@@@ ........................
933                                            .
934    /-----\        /-----\        /-----\   .
935    | P1  | ------ | P2  | ------ | P3  |   .
936    \-----/        \-----/        \-----/   .
937       |                             @@@@@@@@@@
938       |
939    /-----\        /-----\
940    | P4  | ------ | P5  |
941    \-----/        \-----/

943    +------+       +------+       +------+
944    | 1001 |       | 1604 |       | 1001 |
945    +------+       +------+       +------+
946    | 1605 |       | 1001 |       |  IP  |
947    +------+       +------+       +------+
948    |  IP  |       | 1605 |       | Pay- |
949    +------+       +------+       | Load |
950    | Pay- |       |  IP  |       +------+
951    | Load |       +------+
952    +------+       | Pay- |
953                   | Load |
954                   +------+

956      Figure 6: Extending network slice over slice policy
957                     incapable device(s).

959 5.2.4. Combining Slice Policy Modes 961 It is possible to employ a combination of the slice policy modes that 962 were discussed in Section 4 to realize a network slice. For example, 963 the data and control plane slice policy mode can be employed in parts 964 of a network, while the control plane slice policy mode can be 965 employed in other parts of the network. The path selection, in such 966 a case, can take into account the slice aggregate specific available 967 network resources. The SS carried within packets allows transit 968 nodes to enforce the corresponding S-PHB in the parts of the network 969 that apply the data plane slice policy mode. The SS can be 970 maintained while traffic traverses nodes that do not enforce the data 971 plane slice policy mode, so that slice PHB enforcement can resume 972 once traffic traverses capable nodes. 974 5.3. Mapping Traffic on Slice Aggregates 976 The usual techniques to steer traffic onto paths are applicable 977 when steering traffic over paths established for a specific slice 978 aggregate. 980 For example, one or more (layer-2 or layer-3) VPN services can be 981 directly mapped to paths established for a slice aggregate.
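One way to picture such a direct mapping is as a lookup from a (VRF, optional VLAN) key to a slice aggregate path. This is a hypothetical sketch; the VRF names, VLAN IDs, slice names, and label values below are illustrative only and not taken from the draft.

```python
# Hypothetical mapping from a VPN service (VRF, optionally a VLAN
# sub-interface) to the path provisioned for a slice aggregate.

SLICE_PATHS = {
    "slice-blue": ["9012", "9023"],   # example label segment-list
    "slice-red":  ["9045", "9056"],
}

VRF_TO_SLICE = {
    ("vrf-cust-a", None): "slice-blue",  # the whole VRF maps to blue
    ("vrf-cust-b", 100):  "slice-red",   # only VLAN 100 maps to red
}

def select_path(vrf, vlan=None):
    # Prefer a (VRF, VLAN) specific entry; fall back to the VRF-wide
    # entry when no per-VLAN mapping exists.
    slice_id = (VRF_TO_SLICE.get((vrf, vlan))
                or VRF_TO_SLICE.get((vrf, None)))
    return SLICE_PATHS.get(slice_id)

print(select_path("vrf-cust-a"))       # ['9012', '9023']
print(select_path("vrf-cust-b", 100))  # ['9045', '9056']
```

The two-level lookup mirrors the text: a whole VRF can be pinned to one slice aggregate path, while finer-grained (per-VLAN) entries override it where external interfaces are further partitioned.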
In this 982 case, the per Virtual Routing and Forwarding (VRF) instance traffic 983 that arrives at the Provider Edge (PE) router over external 984 interfaces can be directly mapped to a specific slice aggregate path. 985 External interfaces can be further partitioned (e.g. using VLANs) to 986 allow mapping one or more VLANs to specific slice aggregate paths. 988 Another option is to steer traffic to specific destinations directly 989 over multiple slice policies. This allows traffic arriving on any 990 external interface and targeted to such destinations to be directly 991 steered over the slice paths. 993 A third option is to utilize a data plane firewall filter or 994 classifier to enable matching of several fields in the incoming 995 packets to decide whether the packet is steered onto a specific 996 slice aggregate. This option allows for applying a rich set 997 of rules to identify specific packets to be mapped to a slice 998 aggregate. However, it requires data plane network resources to be 999 able to perform the additional checks in hardware. 1001 6. Control Plane Extensions 1003 Routing protocols may need to be extended to carry additional per 1004 slice aggregate link state. For example, [RFC5305], [RFC3630], and 1005 [RFC7752] define IS-IS, OSPF, and BGP protocol extensions, 1006 respectively, to exchange network link state information that allows 1007 ingress TE routers and PCE(s) to do proper path placement in the 1008 network. The extensions required to support network slicing may be 1009 defined in other documents, and are outside the scope of this 1010 document. 1011 The instantiation of a slice policy may need to be automated. 1012 Multiple options are possible to facilitate automation of 1013 distribution of a slice policy to capable devices. 1015 For example, a YANG data model for the slice policy may be supported 1016 on network devices and controllers. A suitable transport (e.g.
1017 NETCONF [RFC6241], RESTCONF [RFC8040], or gRPC) may be used to enable 1018 configuration and retrieval of state information for slice policies 1019 on network devices. The slice policy YANG data model is outside the 1020 scope of this document, and is defined in [I-D.bestbar-teas-yang- 1021 slice-policy]. 1023 7. Applicability to Path Control Technologies 1025 The slice policy modes described in this document are agnostic to the 1026 technology used to set up paths that carry slice aggregate traffic. 1027 One or more paths connecting the endpoints of the mapped IETF network 1028 slices may be selected to steer the corresponding traffic streams 1029 over the resources allocated for the slice aggregate. 1031 For example, once the feasible paths within a slice policy topology 1032 are selected, it is possible to use the RSVP-TE protocol [RFC3209] to 1033 set up or signal the LSPs that would be used to carry slice aggregate 1034 traffic. Specific extensions to the RSVP-TE protocol to enable 1035 signaling of slice aggregate aware RSVP LSPs are outside the scope of 1036 this document. 1038 Alternatively, Segment Routing (SR) [RFC8402] may be used, and the 1039 feasible paths can be realized by steering over specific segments or 1040 segment-lists using an SR policy. Further details on how the slice 1041 policy modes presented in this document can be realized over an SR 1042 network are discussed in [I-D.bestbar-spring-scalable-ns] and 1043 [I-D.bestbar-lsr-spring-sa]. 1045 8. IANA Considerations 1047 This document has no IANA actions. 1049 9. Security Considerations 1051 The main goal of network slicing is to allow for varying treatment of 1052 traffic from multiple different network slices that are utilizing a 1053 common network infrastructure and to allow for different levels of 1054 services to be provided for traffic traversing a given network 1055 resource.
1057 A variety of techniques may be used to achieve this, but the end 1058 result will be that some packets may be mapped to specific resources 1059 and may receive different (e.g., better) service treatment than 1060 others. The mapping of network traffic to a specific slice policy is 1061 indicated primarily by the SS, and hence an adversary may be able to 1062 utilize resources allocated to a specific slice policy by injecting 1063 packets carrying the same SS field. 1065 Such theft-of-service may become a denial-of-service attack when the 1066 modified or injected traffic depletes the resources available to 1067 forward legitimate traffic belonging to a specific slice policy. 1069 The defense against this type of theft and denial-of-service attack 1070 consists of a combination of traffic conditioning at slice policy 1071 domain boundaries with security and integrity of the network 1072 infrastructure within a slice policy domain. 1074 10. Acknowledgement 1076 The authors would like to thank Krzysztof Szarkowicz, Swamy SRK, 1077 Navaneetha Krishnan, Prabhu Raj Villadathu Karunakaran and Jie Dong 1078 for their review of this document, and for providing valuable 1079 feedback on it. 1081 11. Contributors 1083 The following individuals contributed to this document: 1085 Colby Barth 1086 Juniper Networks 1087 Email: cbarth@juniper.net 1089 Srihari R. Sangli 1090 Juniper Networks 1091 Email: ssangli@juniper.net 1093 Chandra Ramachandran 1094 Juniper Networks 1095 Email: csekar@juniper.net 1097 12. References 1099 12.1. Normative References 1101 [I-D.bestbar-lsr-spring-sa] 1102 Saad, T., Beeram, V., Chen, R., Peng, S., Wen, B., and D. 1103 Ceccarelli, "IGP Extensions for SR Slice Aggregate SIDs", 1104 February 2021. 1106 [I-D.bestbar-spring-scalable-ns] 1107 Saad, T. and V. Beeram, "Scalable Network Slicing over SR 1108 Networks", draft-bestbar-spring-scalable-ns-00 (work in 1109 progress), December 2020.
1111 [I-D.bestbar-teas-yang-slice-policy] 1112 Saad, T. and V. Beeram, "YANG Data Model for Slice 1113 Policy", draft-bestbar-teas-yang-ns-phd-00 (work 1114 in progress), November 2020. 1116 [I-D.filsfils-spring-srv6-stateless-slice-id] 1117 Filsfils, C., Clad, F., Camarillo, P., and K. Raza, 1118 "Stateless and Scalable Network Slice Identification for 1119 SRv6", draft-filsfils-spring-srv6-stateless-slice-id-02 1120 (work in progress), January 2021. 1122 [I-D.ietf-lsr-flex-algo] 1123 Psenak, P., Hegde, S., Filsfils, C., Talaulikar, K., and 1124 A. Gulko, "IGP Flexible Algorithm", draft-ietf-lsr-flex- 1125 algo-13 (work in progress), October 2020. 1127 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1128 Requirement Levels", BCP 14, RFC 2119, 1129 DOI 10.17487/RFC2119, March 1997, 1130 . 1132 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 1133 Label Switching Architecture", RFC 3031, 1134 DOI 10.17487/RFC3031, January 2001, 1135 . 1137 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 1138 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 1139 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 1140 . 1142 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 1143 (TE) Extensions to OSPF Version 2", RFC 3630, 1144 DOI 10.17487/RFC3630, September 2003, 1145 . 1147 [RFC4915] Psenak, P., Mirtorabi, S., Roy, A., Nguyen, L., and P. 1148 Pillay-Esnault, "Multi-Topology (MT) Routing in OSPF", 1149 RFC 4915, DOI 10.17487/RFC4915, June 2007, 1150 . 1152 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 1153 Engineering", RFC 5305, DOI 10.17487/RFC5305, October 1154 2008, . 1156 [RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and 1157 S. Ray, "North-Bound Distribution of Link-State and 1158 Traffic Engineering (TE) Information Using BGP", RFC 7752, 1159 DOI 10.17487/RFC7752, March 2016, 1160 . 
1162 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1163 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1164 May 2017, . 1166 [RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L., 1167 Decraene, B., Litkowski, S., and R. Shakir, "Segment 1168 Routing Architecture", RFC 8402, DOI 10.17487/RFC8402, 1169 July 2018, . 1171 12.2. Informative References 1173 [I-D.ietf-teas-ietf-network-slice-definition] 1174 Rokui, R., Homma, S., Makhijani, K., Contreras, L., and J. 1175 Tantsura, "Definition of IETF Network Slices", draft-ietf- 1176 teas-ietf-network-slice-definition-00 (work in progress), 1177 January 2021. 1179 [I-D.ietf-teas-rfc3272bis] 1180 Farrel, A., "Overview and Principles of Internet Traffic 1181 Engineering", draft-ietf-teas-rfc3272bis-10 (work in 1182 progress), December 2020. 1184 [I-D.nsdt-teas-ns-framework] 1185 Gray, E. and J. Drake, "Framework for Transport Network 1186 Slices", draft-nsdt-teas-ns-framework-04 (work in 1187 progress), July 2020. 1189 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 1190 and W. Weiss, "An Architecture for Differentiated 1191 Services", RFC 2475, DOI 10.17487/RFC2475, December 1998, 1192 . 1194 [RFC2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. 1195 McManus, "Requirements for Traffic Engineering Over MPLS", 1196 RFC 2702, DOI 10.17487/RFC2702, September 1999, 1197 . 1199 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., 1200 and A. Bierman, Ed., "Network Configuration Protocol 1201 (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011, 1202 . 1204 [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF 1205 Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017, 1206 . 
1208 Authors' Addresses 1210 Tarek Saad 1211 Juniper Networks 1213 Email: tsaad@juniper.net 1215 Vishnu Pavan Beeram 1216 Juniper Networks 1218 Email: vbeeram@juniper.net 1220 Bin Wen 1221 Comcast 1223 Email: Bin_Wen@cable.comcast.com 1225 Daniele Ceccarelli 1226 Ericsson 1228 Email: daniele.ceccarelli@ericsson.com 1230 Joel Halpern 1231 Ericsson 1233 Email: joel.halpern@ericsson.com 1235 Shaofu Peng 1236 ZTE Corporation 1238 Email: peng.shaofu@zte.com.cn 1240 Ran Chen 1241 ZTE Corporation 1243 Email: chen.ran@zte.com.cn 1244 Xufeng Liu 1245 Volta Networks 1247 Email: xufeng.liu.ietf@gmail.com 1249 Luis M. Contreras 1250 Telefonica 1252 Email: luismiguel.contrerasmurillo@telefonica.com