2 TEAS Working Group T. Saad 3 Internet-Draft V. Beeram 4 Intended status: Standards Track Juniper Networks 5 Expires: 15 July 2022 B. Wen 6 Comcast 7 D. Ceccarelli 8 J. Halpern 9 Ericsson 10 S. Peng 11 R. Chen 12 ZTE Corporation 13 X. Liu 14 Volta Networks 15 L. Contreras 16 Telefonica 17 R. Rokui 18 Nokia 19 L. Jalil 20 Verizon 21 11 January 2022 23 Realizing Network Slices in IP/MPLS Networks 24 draft-bestbar-teas-ns-packet-07 26 Abstract 28 Network slicing provides the ability to partition a physical network 29 into multiple logical networks of varying sizes, structures, and 30 functions so that each slice can be dedicated to specific services or 31 customers. Network slices need to co-exist on the same network while 32 ensuring slice elasticity in terms of network resource allocation. 33 The Differentiated Service (Diffserv) model allows for carrying 34 multiple services on top of a single physical network by relying on 35 compliant domains and nodes to provide forwarding treatment 36 (scheduling and drop policy) on to packets that carry the respective 37 Diffserv code point. This document adopts a similar approach to 38 Diffserv and proposes a scalable approach to realize network slicing 39 in IP/MPLS networks. 
The solution does not mandate Diffserv to be 40 enabled in the network to provide a specific forwarding treatment, 41 but can co-exist with and complement it when enabled. 43 Status of This Memo 45 This Internet-Draft is submitted in full conformance with the 46 provisions of BCP 78 and BCP 79. 48 Internet-Drafts are working documents of the Internet Engineering 49 Task Force (IETF). Note that other groups may also distribute 50 working documents as Internet-Drafts. The list of current Internet- 51 Drafts is at https://datatracker.ietf.org/drafts/current/. 53 Internet-Drafts are draft documents valid for a maximum of six months 54 and may be updated, replaced, or obsoleted by other documents at any 55 time. It is inappropriate to use Internet-Drafts as reference 56 material or to cite them other than as "work in progress." 58 This Internet-Draft will expire on 15 July 2022. 60 Copyright Notice 62 Copyright (c) 2022 IETF Trust and the persons identified as the 63 document authors. All rights reserved. 65 This document is subject to BCP 78 and the IETF Trust's Legal 66 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 67 license-info) in effect on the date of publication of this document. 68 Please review these documents carefully, as they describe your rights 69 and restrictions with respect to this document. Code Components 70 extracted from this document must include Revised BSD License text as 71 described in Section 4.e of the Trust Legal Provisions and are 72 provided without warranty as described in the Revised BSD License. 74 Table of Contents 76 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 77 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 5 78 1.2. Acronyms and Abbreviations . . . . . . . . . . . . . . . 6 79 2. Network Resource Slicing Membership . . . . . . . . . . . . . 7 80 3. IETF Network Slice Realization . . . . . . . . . . . . . . . 8 81 3.1. Network Topology Filters . . . . . . . . . . . . . . . . 9 82 3.2. IETF Network Slice Service Request . . . . . . . . . . . 9 83 3.3. Slice-Flow Aggregation Mapping . . . . . . . . . . . . . 10 84 3.4. Path Placement over NRP Topology . . . . . . . . . . . . 10 85 3.5. NRP Policy Installation . . . . . . . . . . . . . . . . . 10 86 3.6. Path Instantiation . . . . . . . . . . . . . . . . . . . 11 87 3.7. Service Mapping . . . . . . . . . . . . . . . . . . . . . 11 88 3.8. Network Slice-Flow Aggregate Relationships . . . . . . . 11 89 4. Network Resource Partition Modes . . . . . . . . . . . . . . 12 90 4.1. Data plane Network Resource Partition Mode . . . . . . . 12 91 4.2. Control Plane Network Resource Partition Mode . . . . . . 13 92 4.3. Data and Control Plane Network Resource Partition Mode . 15 93 5. Network Resource Partition Instantiation . . . . . . . . . . 15 94 5.1. NRP Policy Definition . . . . . . . . . . . . . . . . . . 15 95 5.1.1. Network Resource Partition Data Plane Selector . . . 16 96 5.1.2. Network Resource Partition Resource Reservation . . . 19 97 5.1.3. Network Resource Partition Per Hop Behavior . . . . . 20 98 5.1.4. Network Resource Partition Topology . . . . . . . . . 21 99 5.2. Network Resource Partition Boundary . . . . . . . . . . . 21 100 5.2.1. Network Resource Partition Edge Nodes . . . . . . . . 21 101 5.2.2. Network Resource Partition Interior Nodes . . . . . . 22 102 5.2.3. Network Resource Partition Incapable Nodes . . . . . 22 103 5.2.4. Combining Network Resource Partition Modes . . . . . 23 104 5.3. Mapping Traffic on Slice-Flow Aggregates . . . 
. . . . . 24 105 6. Path Selection and Instantiation . . . . . . . . . . . . . . 24 106 6.1. Applicability of Path Selection to Slice-Flow 107 Aggregates . . . . . . . . . . . . . . . . . . . . . . . 24 108 6.2. Applicability of Path Control Technologies to Slice-Flow 109 Aggregates . . . . . . . . . . . . . . . . . . . . . . . 25 110 6.2.1. RSVP-TE Based Slice-Flow Aggregate Paths . . . . . . 25 111 6.2.2. SR Based Slice-Flow Aggregate Paths . . . . . . . . . 25 112 7. Network Resource Partition Protocol Extensions . . . . . . . 26 113 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 26 114 9. Security Considerations . . . . . . . . . . . . . . . . . . . 27 115 10. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 27 116 11. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 27 117 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 28 118 12.1. Normative References . . . . . . . . . . . . . . . . . . 28 119 12.2. Informative References . . . . . . . . . . . . . . . . . 29 120 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 31 122 1. Introduction 124 Network slicing allows a Service Provider to create independent 125 logical networks on top of a common or shared physical network 126 infrastructure. Such network slices can be offered to customers or 127 used internally by the Service Provider to enhance the delivery of 128 their service offerings. A Service Provider can also use network 129 slicing to structure and organize the elements of its infrastructure. 130 This document provides a path control technology (e.g., RSVP, SR, or 131 other) agnostic solution that a Service Provider can deploy to 132 realize network slicing in IP/MPLS networks. 134 [I-D.ietf-teas-ietf-network-slices] provides the definition of a 135 network slice for use within the IETF and discusses the general 136 framework for requesting and operating IETF Network Slices, their 137 characteristics, and the necessary system components and interfaces. 138 It also discusses the function of an IETF Network Slice Controller 139 and the requirements on its northbound and southbound interfaces. 141 This document introduces the notion of a Slice-Flow Aggregate, which 142 comprises one or more IETF network slice traffic streams. It also 143 describes the Network Resource Partition (NRP) and the NRP Policy 144 that can be used to instantiate control and data plane behaviors on 145 select topological elements associated with the NRP that supports a 146 Slice-Flow Aggregate - refer to Section 5.1 for further details. 148 The IETF Network Slice Controller is responsible for the aggregation 149 of multiple IETF network slice traffic streams into a Slice-Flow Aggregate, 150 and for maintaining the mapping required between them. The 151 mechanisms used by the controller to determine the mapping of one or 152 more IETF network slices to a Slice-Flow Aggregate are outside the 153 scope of this document. The focus of this document is on the 154 mechanisms required at the device level to address the requirements 155 of network slicing in packet networks. 157 In a Diffserv (DS) domain [RFC2475], packets requiring the same 158 forwarding treatment (scheduling and drop policy) are classified and 159 marked with the respective Class Selector (CS) Codepoint (or the 160 Traffic Class (TC) field for MPLS packets [RFC5462]) at the DS domain 161 ingress nodes. 
Such packets are said to belong to a Behavior 162 Aggregate (BA) that has a common set of behavioral characteristics 163 or a common set of delivery requirements. At transit nodes, the CS 164 is inspected to determine the specific forwarding treatment to be 165 applied before the packet is forwarded. A similar approach is 166 adopted in this document to realize network slicing. The solution 167 proposed in this document does not mandate Diffserv to be enabled in 168 the network to provide a specific forwarding treatment. 170 When logical networks associated with an NRP are realized on top of a 171 shared physical network infrastructure, it is important to steer 172 traffic on the specific network resource partition that is allocated 173 for a given Slice-Flow Aggregate. In packet networks, the packets of 174 a specific Slice-Flow Aggregate may be identified by one or more 175 specific fields carried within the packet. An NRP ingress boundary 176 node (where Slice-Flow Aggregate traffic enters the NRP) populates 177 the respective field(s) in packets that are mapped to a Slice-Flow 178 Aggregate in order to allow interior NRP nodes to identify and apply 179 the specific Per NRP Hop Behavior (NRP-PHB) associated with the 180 Slice-Flow Aggregate. The NRP-PHB defines the scheduling treatment 181 and, in some cases, the packet drop probability. 183 If Diffserv is enabled within the network, the Slice-Flow Aggregate 184 traffic can further carry a Diffserv CS to enable differentiation of 185 forwarding treatments for packets within a Slice-Flow Aggregate. 187 For example, when using MPLS as a dataplane, it is possible to 188 identify packets belonging to the same Slice-Flow Aggregate by 189 carrying an identifier in an MPLS Label Stack Entry (LSE). 190 Additional Diffserv classification may be indicated in the Traffic 191 Class (TC) bits of the global MPLS label to allow further 192 differentiation of forwarding treatments for traffic traversing the 193 same NRP. 195 This document covers different modes of NRPs and discusses how each 196 mode can ensure proper placement of Slice-Flow Aggregate paths and 197 respective treatment of Slice-Flow Aggregate traffic. 199 1.1. Terminology 201 The reader is expected to be familiar with the terminology specified 202 in [I-D.ietf-teas-ietf-network-slices]. 204 The following terminology is used in the document: 206 IETF Network Slice: 207 refer to the definition of 'IETF network slice' in 208 [I-D.ietf-teas-ietf-network-slices]. 210 IETF Network Slice Controller (NSC): 211 refer to the definition in [I-D.ietf-teas-ietf-network-slices]. 213 Network Resource Partition: 214 the set of network resources that are used to support a Slice-Flow 215 Aggregate to meet the requested SLOs and SLEs. 217 Slice-Flow Aggregate: 218 a collection of packets that match an NRP Policy's selection 219 criteria and are given the same forwarding treatment; a Slice-Flow 220 Aggregate comprises one or more IETF network slice traffic 221 streams; the mapping of one or more IETF network slices to a 222 Slice-Flow Aggregate is maintained by the IETF Network Slice 223 Controller. 225 Network Resource Partition Policy (NRP Policy): 226 a policy construct that enables instantiation of mechanisms in 227 support of IETF network slice specific control and data plane 228 behaviors on select topological elements; the enforcement of an 229 NRP Policy results in the creation of an NRP. 
231 NRP Identifier (NRP-ID): 232 an identifier that is globally unique within an NRP domain and 233 that can be used in the control plane to identify the resources 234 associated with the NRP. 236 NRP Capable Node: 237 a node that supports one of the NRP modes described in this 238 document. 240 NRP Incapable Node: 241 a node that does not support any of the NRP modes described in 242 this document. 244 Slice-Flow Aggregate Path: 245 a path that is setup over the NRP that is associated with a 246 specific Slice-Flow Aggregate. 248 Slice-Flow Aggregate Packet: 249 a packet that traverses over the NRP that is associated with a 250 specific Slice-Flow Aggregate. 252 NRP Topology: 253 a set of topological elements associated with a Network Resource 254 Partition. 256 NRP state aware TE (NRP-TE): 257 a mechanism for TE path selection that takes into account the 258 available network resources associated with a specific NRP. 260 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 261 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 262 "OPTIONAL" in this document are to be interpreted as described in 263 BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all 264 capitals, as shown here. 266 1.2. Acronyms and Abbreviations 268 BA: Behavior Aggregate 270 CS: Class Selector 272 NRP-PHB: NRP Per Hop Behavior as described in Section 5.1.3 274 SAS: Slice-Flow Aggregate Selector 276 SASL: Slice-Flow Aggregate Selector Label as described in 277 Section 5.1.1 279 SLA: Service Level Agreements 281 SLO: Service Level Objectives 283 SLE: Service Level Expectations 284 Diffserv: Differentiated Services 286 MPLS: Multiprotocol Label Switching 288 LSP: Label Switched Path 290 RSVP: Resource Reservation Protocol 292 TE: Traffic Engineering 294 SR: Segment Routing 296 VRF: VPN Routing and Forwarding 298 AC: Attachment Circuit 300 CE: Customer Edge 302 PE: Provider Edge 304 PCEP: Path Computation Element (PCE) Communication Protocol (PCEP) 306 2. Network Resource Slicing Membership 308 An NRP that supports a Slice-Flow Aggregate can be instantiated over 309 parts of an IP/MPLS network (e.g., all or specific network resources 310 in the access, aggregation, or core network), and can stretch across 311 multiple domains administered by a provider. The NRP topology may be 312 comprised of dedicated and/or shared network resources (e.g., in 313 terms of processing power, storage, and bandwidth). 315 The physical network resources may be fully dedicated to a specific 316 Slice-Flow Aggregate. For example, traffic belonging to a Slice-Flow 317 Aggregate can traverse dedicated network resources without being 318 subjected to contention from traffic of other Slice-Flow Aggregates. 319 Dedicated physical network resource slicing allows for simple 320 partitioning of the physical network resources amongst Slice-Flow 321 Aggregates without the need to distinguish packets traversing the 322 dedicated network resources since only one Slice-Flow Aggregate 323 traffic stream can traverse the dedicated resource at any time. 325 To optimize network utilization, sharing of the physical network 326 resources may be desirable. In such case, the same physical network 327 resource capacity is divided among multiple NRPs that support 328 multiple Slice-Flow Aggregates. 
The shared physical network 329 resources can be partitioned in the data plane (for example by 330 applying hardware policers and shapers) and/or partitioned in the 331 control plane by providing a logical representation of the physical 332 link that has a subset of the network resources available to it. 334 3. IETF Network Slice Realization 336 Figure 1 describes the steps required to realize an IETF network 337 slice service in a provider network using the solution proposed in 338 this document. Each of the steps is further elaborated on in a 339 subsequent section. 341 -- -- -- 342 ---------- |CE| |CE| |CE| 343 | Network | -- -- -- 344 | Slice | AC : AC : AC : 345 | Orchstr | ---------------------- ------- 346 ---------- ( |PE|....|PE|....|PE| ) ( IETF ) 347 | IETF ( --: -- :-- ) ( Network ) 348 | Network ( :............: ) ( Slice ) 349 | Slice Svc ( IETF Network Slice ) ( ) Customer 350 | Req ---------------------- ------- View 351 ..|....................................\........./.................. 352 --v---------- ----> Slice-Flow \ / Controller 353 |Controllers| | Aggregation Mapping v v View 354 | ------- | | ----------------------------------------- 355 | |IETF | |-- ( |PE|.......|PE|........|PE|.......|PE| ) 356 | |Network| | ( --: -- :-- -- ) 357 | |Slice | | ( :...................: ) 358 | |Cntrlr | | ( Slice-Flow Aggregate ) 359 | |(NSC) | | ----------------------------------------- 360 | ------- |---------. 361 | ------- | | Path Placement 362 | | | | v 363 | | | | ----------------------------------------- 364 | | | | ( |PE|....-..|PE| |PE|.......|PE| ) 365 | |Network| | ( -- |P| --......-...-- - :-- ) 366 | |Cntrlr | | ( -:.........|P|.......|P|..: ) 367 | |(NC) | | ( Path Set - - ) 368 | | | | ----------------------------------------- 369 | | | |-------. 370 | | | | | Apply Topology Filters 371 | | | | v 372 | ------- | ----------------------------- -------- 373 | | (|PE|..-..|PE|... ..|PE|..|PE|) ( Policy ) 374 ----------- ( :-- |P| -- :-: -- :-- ) ( Filter ) 375 | | | ( :.- -:.......|P| :- ) ( Topology ) 376 | | | ( |P|...........:-:.......|P| ) ( ) 377 | | \ ( - Policy Filter Topology ) -------- 378 | | \ ----------------------------- A 379 | | \ A / 380 ..............\.......................\............../.............. 381 | | Path v Service Mapping \ / Physical N/w 382 \ \Inst ------------------------------------------------ 383 \ \ ( |PE|.....-.....|PE|....... |PE|.......|PE| ) 384 \ \ ( -- |P| -- :-...:-- -..:-- ) 385 NRP \ --->( : -:..............|P|.........|P| ) 386 Policy\ ( -.......................:-:..- - ) 387 Inst ----->( |P|..........................|P|......: ) 388 ( - - ) 389 ------------------------------------------------ 391 Figure 1: IETF network slice realization steps. 393 3.1. Network Topology Filters 395 The Physical Network may be filtered into a number of Policy Filter 396 Topologies. Filter actions may include selection of specific nodes 397 and links according to their capabilities and are based on network- 398 wide policies. The resulting topologies can be used to host IETF 399 Network Slices and provide a useful way for the network operator to 400 know that all of the resources they are using to plan a network slice 401 meet specific SLOs. This step can be done offline during planning 402 activity, or could be performed dynamically as new demands arise. 404 Section 5.1.4 describes how topology filters can be associated with 405 the NRP instantiated by the NRP Policy. 407 3.2. 
IETF Network Slice Service Request 409 The customer requests an IETF Network Slice Service specifying the 410 CE-AC-PE points of attachment, the connectivity matrix, and the SLOs/ 411 SLEs as described in [I-D.ietf-teas-ietf-network-slices]. These 412 capabilities are always provided based on a Service Level Agreement 413 (SLA) between the network slice customer and the provider. 415 This defines the traffic flows that need to be supported when the 416 slice is realized. Depending on the mechanism and encoding of the 417 Attachment Circuit (AC), the IETF Network Slice Service may also 418 include information that will allow the operator's controllers to 419 configure the PEs to determine what customer traffic is intended for 420 this IETF Network Slice. 422 IETF Network Slice Service Requests are likely to arrive at various 423 times in the life of the network, and may also be modified. 425 3.3. Slice-Flow Aggregation Mapping 427 A network may be called upon to support very many IETF Network 428 Slices, and this could present scaling challenges in the operation of 429 the network. In order to overcome this, the IETF Network Slice 430 streams may be aggregated into groups according to similar 431 characteristics. 433 A Slice-Flow Aggregate is a construct that comprises the traffic 434 flows of one or more IETF Network Slices. The mapping of IETF 435 Network Slices into a Slice-Flow Aggregate is a matter of local 436 operator policy and is a function executed by the Controller. The Slice- 437 Flow Aggregate may be preconfigured, created on demand, or modified 438 dynamically. 440 3.4. Path Placement over NRP Topology 442 Depending on the underlying network technology, the paths are 443 selected in the network in order to best deliver the SLOs for the 444 different services carried by the Slice-Flow Aggregate. The path 445 placement function (carried out on the ingress node or by a controller) is 446 performed on the Policy Filter Topology that is selected to support 447 the Slice-Flow Aggregate. 449 Note that this step may indicate the need to increase the capacity of 450 the underlying Policy Filter Topology or to create a new Policy 451 Filter Topology. 453 3.5. NRP Policy Installation 455 A Controller function programs the physical network with policies for 456 handling the traffic flows belonging to the Slice-Flow Aggregate. 457 These policies instruct underlying routers how to handle traffic for 458 a specific Slice-Flow Aggregate: the routers correlate markers 459 present in the packets that belong to the Slice-Flow Aggregate. The 460 way in which the NRP Policy is installed in the routers and the way 461 that the traffic is marked is implementation specific. The NRP 462 Policy instantiation in the network is further described in 463 Section 5. 465 3.6. Path Instantiation 467 Depending on the underlying network technology, a Controller function 468 may install the forwarding state specific to the Slice-Flow Aggregate 469 so that traffic is routed along paths derived in the Path Placement 470 step described in Section 3.4. The way in which the paths are 471 instantiated is implementation specific. 473 3.7. Service Mapping 475 The edge points (PEs) can be configured to support the network slice 476 service by mapping the customer traffic to Slice-Flow Aggregates, 477 possibly using information supplied when the IETF network slice 478 service was requested. 
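The following sketch (in Python, for illustration only) shows one way such a service mapping could be represented; the attachment circuit names, Slice-Flow Aggregate names, and SAS values are hypothetical and are not defined by this document.

   # Illustrative sketch only: attachment circuits, aggregate names,
   # and SAS values below are hypothetical examples, not assigned values.

   SERVICE_MAPPING = {
       # (PE, attachment circuit) -> Slice-Flow Aggregate
       ("PE1", "ge-0/0/1.100"): "sfa-gold",
       ("PE1", "ge-0/0/1.200"): "sfa-bronze",
       ("PE2", "ae0.300"):      "sfa-gold",
   }

   SFA_TO_SAS = {
       # Slice-Flow Aggregate -> SAS value imposed on its packets
       "sfa-gold":   1001,
       "sfa-bronze": 1002,
   }

   def classify_and_mark(pe: str, ac: str) -> tuple[str, int]:
       """Return the Slice-Flow Aggregate and the SAS to impose for
       traffic received on a given PE attachment circuit."""
       sfa = SERVICE_MAPPING[(pe, ac)]
       return sfa, SFA_TO_SAS[sfa]

   if __name__ == "__main__":
       print(classify_and_mark("PE1", "ge-0/0/1.100"))   # ('sfa-gold', 1001)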
The edge points MAY also be instructed to 479 mark the packets so that the network routers will know which policies 480 and routing instructions to apply. 482 3.8. Network Slice-Flow Aggregate Relationships 484 The following describes the generalization relationships between the 485 IETF network slice and different parts of the solution as described 486 in Figure 1. 488 o A customer may request 1 or more IETF Network Slices. 490 o Any given Attachment Circuit (AC) may support the traffic for 1 or 491 more IETF Network Slices, but if there is more than one IETF Network 492 Slice using a single AC, the IETF Network Slice Service request must 493 include enough information to allow the edge nodes to demultiplex the 494 traffic for the different IETF Network Slices. 496 o By definition, multiple IETF Network Slices may be mapped to a 497 single Slice-Flow Aggregate. However, it is possible for a Slice- 498 Flow Aggregate to contain just a single IETF Network Slice. 500 o The physical network may be filtered to multiple Policy Filter 501 Topologies. Each such Policy Filter Topology facilitates planning 502 the placement and support of the Slice-Flow Aggregate by presenting 503 only the subset of links and nodes that meet specific criteria. 504 Note, however, that a network operator does not need to derive any 505 Policy Filter Topologies and may choose to operate directly on the full 506 physical network. 508 o It is anticipated that there may be very many IETF Network Slices 509 supported by a network operator over a single physical network. A 510 network may support a limited number of Slice-Flow Aggregates, with 511 each of the Slice-Flow Aggregates grouping any number of the IETF 512 Network Slice streams. 514 4. Network Resource Partition Modes 516 An NRP Policy can be used to dictate whether the network resource 517 partitioning of the shared network resources among multiple Slice- 518 Flow Aggregates can be achieved: 520 a) in data plane only, 522 b) in control plane only, or 524 c) in both control and data planes. 526 4.1. Data plane Network Resource Partition Mode 528 The physical network resources can be partitioned on network devices 529 by applying a Per Hop forwarding Behavior (PHB) onto packets that 530 traverse the network devices. In the Diffserv model, a Class 531 Selector Codepoint (CS) is carried in the packet and is used by 532 transit nodes to apply the PHB that determines the scheduling 533 treatment and drop probability for packets. 535 When data plane NRP mode is applied, packets need to be forwarded on 536 the specific NRP that supports the Slice-Flow Aggregate to ensure the 537 proper forwarding treatment dictated in the NRP Policy is applied 538 (refer to Section 5.1 below). In this case, a Slice-Flow Aggregate 539 Selector (SAS) MUST be carried in each packet to identify the Slice- 540 Flow Aggregate that it belongs to. 542 The ingress node of an NRP domain MAY also add an SAS to each Slice- 543 Flow Aggregate packet. The transit nodes within an NRP domain MAY 544 use the SAS to associate packets with a Slice-Flow Aggregate and to 545 determine the Network Resource Partition Per Hop Behavior (NRP-PHB) 546 that is applied to the packet (refer to Section 5.1.3 for further 547 details). The CS MAY be used to apply a Diffserv PHB onto the 548 packet to allow differentiation of traffic treatment within the same 549 Slice-Flow Aggregate. 551 When data plane only NRP mode is used, routers may rely on a network 552 state independent view of the topology to determine the best paths. 553 In this case, the best path selection dictates the forwarding path of 554 packets to the destination. The SAS field carried in each packet 555 determines the specific NRP-PHB treatment along the selected path. 
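A minimal sketch of this data plane behavior at a transit node is shown below (Python, illustrative only); the SAS values, aggregate names, and default class are assumptions rather than defined values.

   # Illustrative sketch: the SAS values and aggregate names are
   # hypothetical, and the structures do not model a real forwarder.

   SAS_TO_SFA = {
       # SAS carried in the packet -> Slice-Flow Aggregate
       1001: "sfa-gold",
       1002: "sfa-bronze",
   }

   def classify(packet: dict) -> tuple[str, str]:
       """Return (Slice-Flow Aggregate, Diffserv CS) for a received packet.

       The NRP-PHB is selected by the Slice-Flow Aggregate; the CS only
       differentiates treatment within that aggregate."""
       sfa = SAS_TO_SFA.get(packet.get("sas"), "sfa-default")
       cs = packet.get("cs", "BE")
       return sfa, cs

   if __name__ == "__main__":
       pkt = {"sas": 1001, "cs": "EF", "dst": "192.0.2.1"}
       print(classify(pkt))   # ('sfa-gold', 'EF')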
557 For example, the Segment-Routing Flexible Algorithm 558 [I-D.ietf-lsr-flex-algo] may be deployed in a network to steer 559 packets on the IGP computed lowest cumulative delay path. An NRP 560 Policy may be used to allow links along the least latency path to 561 share their data plane resources amongst multiple Slice-Flow 562 Aggregates. In this case, the packets that are steered on a specific 563 NRP carry the SAS that enables routers (along with the Diffserv CS) 564 to determine the NRP-PHB to enforce on the Slice-Flow Aggregate 565 traffic streams. 567 4.2. Control Plane Network Resource Partition Mode 569 Multiple NRPs can be realized over the same set of physical 570 resources. Each NRP is identified by an identifier (NRP-ID) that is 571 globally unique within the NRP domain. The NRP state reservations 572 for each NRP can be maintained on the network element or on a 573 controller. 575 The network reservation states for a specific partition can be 576 represented in a topology that contains all or a subset of the 577 physical network elements (nodes and links) and reflect the network 578 state reservations in that NRP. The logical network resources that 579 appear in the NRP topology can reflect a part, whole, or in-excess of 580 the physical network resource capacity (e.g., when oversubscription 581 is desirable). 583 For example, the physical link bandwidth can be divided into 584 fractions, each dedicated to an NRP that supports a Slice-Flow 585 Aggregate. The topology associated with the NRP supporting a Slice- 586 Flow Aggregate can be used by routing protocols, or by the ingress/ 587 PCE when computing NRP state aware TE paths. 589 To perform NRP state aware Traffic Engineering (NRP-TE), the resource 590 reservation on each link needs to be NRP aware. The NRP reservation 591 state can be managed locally on the device or off device (e.g., on a 592 controller). Details of required IGP extensions to support NRP-TE 593 are described in [I-D.bestbar-lsr-slice-aware-te]. 595 The same physical link may be a member of multiple NRP Policies that 596 instantiate different NRPs. The NRP reservable or utilized bandwidth 597 on such a link is updated (and may be advertised) whenever new paths 598 are placed in the network. The NRP reservation state, in this case, 599 MAY be maintained on each device or off the device on a resource 600 reservation manager that holds reservation states for those links in 601 the network. 603 Multiple NRPs that support Slice-Flow Aggregates can form a group and 604 share the available network resources allocated to each. In this 605 case, a node can update the reservable bandwidth for each NRP to take 606 into consideration the available bandwidth from other NRPs in the 607 same group. 609 For illustration purposes, Figure 2 describes bandwidth partitioning 610 or sharing amongst a group of NRPs. In Figure 2a, the NRPs 611 identified by the following NRP-IDs: NRP1, NRP2, NRP3 and NRP4 do 612 not share any bandwidth with each other. In Figure 2b, the 613 NRPs NRP1 and NRP2 can share the available bandwidth portion 614 allocated to each amongst them. Similarly, NRP3 and NRP4 can share 615 amongst themselves any available bandwidth allocated to them, but 616 they cannot share available bandwidth allocated to NRP1 or NRP2. 
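The sketch below (Python, illustrative only) captures the admission logic implied by this kind of grouping; the NRP identifiers, allocations, and group membership are hypothetical.

   # Illustrative sketch of per-NRP bandwidth accounting on one link.
   # NRP names, allocations, and group membership are hypothetical.

   ALLOCATED = {"NRP1": 200, "NRP2": 100, "NRP3": 300, "NRP4": 150}  # Mbps
   RESERVED  = {"NRP1":   0, "NRP2":   0, "NRP3":   0, "NRP4":   0}

   # As in Figure 2 (b): NRP1/NRP2 share unused bandwidth, as do NRP3/NRP4.
   GROUPS = [{"NRP1", "NRP2"}, {"NRP3", "NRP4"}]

   def can_admit(nrp: str, demand: int, share: bool) -> bool:
       """Check whether 'demand' Mbps can be reserved for 'nrp'.

       Without sharing (Figure 2 (a)) each NRP is limited to its own
       allocation; with sharing (Figure 2 (b)) the limit is the combined
       allocation of the NRPs in the same group."""
       if not share:
           return RESERVED[nrp] + demand <= ALLOCATED[nrp]
       group = next(g for g in GROUPS if nrp in g)
       used = sum(RESERVED[m] for m in group)
       limit = sum(ALLOCATED[m] for m in group)
       return used + demand <= limit

   if __name__ == "__main__":
       RESERVED["NRP2"] = 100                      # NRP2 fully reserved
       print(can_admit("NRP2", 50, share=False))   # False
       print(can_admit("NRP2", 50, share=True))    # True: borrows from NRP1

Whether such accounting is performed on the device or on a central resource reservation manager is a deployment choice, as noted above.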
In 617 both cases, the Max Reservable Bandwidth may exceed the actual 618 physical link resource capacity to allow for over subscription. 620 I-----------------------------I I-----------------------------I 621 <--NRP1-> I I-----------------I I 622 I---------I I I <-NRP1-> I I 623 I I I I I-------I I I 624 I---------I I I I I I I 625 I I I I-------I I I 626 <-----NRP2------> I I I I 627 I-----------------I I I <-NRP2-> I I 628 I I I I I---------I I I 629 I-----------------I I I I I I I 630 I I I I---------I I I 631 <---NRP3----> I I I I 632 I-------------I I I NRP1 + NRP2 I I 633 I I I I-----------------I I 634 I-------------I I I I 635 I I I I 636 <---NRP4----> I I-----------------I I 637 I-------------I I I <-NRP3-> I I 638 I I I I I-------I I I 639 I-------------I I I I I I I 640 I I I I-------I I I 641 I NRP1+NRP2+NRP3+NRP4 I I I I 642 I I I <-NRP4-> I I 643 I-----------------------------I I I---------I I I 644 <--Max Reservable Bandwidth--> I I I I I 645 I I---------I I I 646 I I I 647 I NRP3 + NRP4 I I 648 I-----------------I I 649 I NRP1+NRP2+NRP3+NRP4 I 650 I I 651 I-----------------------------I 652 <--Max Reservable Bandwidth--> 654 (a) No bandwidth sharing (b) Sharing bandwidth between 655 between NRPs. NRPs of the same group. 657 Figure 2: Bandwidth isolation/sharing among NRPs. 659 4.3. Data and Control Plane Network Resource Partition Mode 661 In order to support strict guarantees for Slice-Flow Aggregates, the 662 network resources can be partitioned in both the control plane and 663 data plane. 665 The control plane partitioning allows the creation of customized 666 topologies per NRP that each supports a Slice-Flow Aggregate. The 667 ingress routers or a Path Computation Engine (PCE) may use the 668 customized topologies and the NRP state to determine optimal path 669 placement for specific demand flows using NRP-TE. 671 The data plane partitioning provides isolation for Slice-Flow 672 Aggregate traffic, and protection when resource contention occurs due 673 to bursts of traffic from other Slice-Flow Aggregate traffic that 674 traverses the same shared network resource. 676 5. Network Resource Partition Instantiation 678 A network slice can span multiple technologies and multiple 679 administrative domains. Depending on the network slice customer 680 requirements, a network slice can be differentiated from other 681 network slices in terms of data, control, and management planes. 683 The customer of a network slice service expresses their intent by 684 specifying requirements rather than mechanisms to realize the slice 685 as described in Section 3.2. 687 The network slice controller is fed with the network slice service 688 intent and realizes it with an appropriate Network Resource Partition 689 Policy (NRP Policy). Multiple IETF network slices MAY be mapped to 690 the same Slice-Flow Aggregate as described in Section 3.3. 692 The network wide consistent NRP Policy definition is distributed to 693 the devices in the network as shown in Figure 1. The specification 694 of the network slice intent on the northbound interface of the 695 controller and the mechanism used to map the network slice to a 696 Slice-Flow Aggregate are outside the scope of this document and will 697 be addressed in separate documents. 699 5.1. 
NRP Policy Definition 701 The NRP Policy is a network-wide construct that is supplied to network 702 devices, and may include rules that control the following: 704 * Data plane specific policies: This includes the SAS, any firewall 705 rules or flow-spec filters, and QoS profiles associated with the 706 NRP Policy and any classes within it. 708 * Control plane specific policies: This includes bandwidth 709 reservations, any network resource sharing amongst NRP Policies, 710 and reservation preference to prioritize reservations of a 711 specific NRP over others. 713 * Topology membership policies: This defines the topology filter 714 policies that dictate node/link/function membership to a specific 715 NRP. 717 There is a desire for flexibility in realizing network slices to 718 support the services across networks consisting of implementations 719 from multiple vendors. These networks may also be grouped into 720 disparate domains and deploy various path control technologies and 721 tunnel techniques to carry traffic across the network. It is 722 expected that a standardized data model for NRP Policy will 723 facilitate the instantiation and management of the NRP on the 724 topological elements selected by the NRP Policy topology filter. A 725 YANG data model for the Network Resource Partition Policy 726 instantiation on the controller and network devices is described in 727 [I-D.bestbar-teas-yang-slice-policy]. 729 It is also possible to distribute the NRP Policy to network devices 730 using several mechanisms, including protocols such as NETCONF or 731 RESTCONF, or exchanging it using a suitable routing protocol that 732 network devices participate in (such as IGP(s) or BGP). The 733 extensions to enable specific protocols to carry an NRP Policy 734 definition will be described in separate documents. 736 5.1.1. Network Resource Partition Data Plane Selector 738 A router MUST be able to identify a packet belonging to a Slice-Flow 739 Aggregate before it can apply the associated forwarding treatment or 740 NRP-PHB. One or more fields within the packet MAY be used as an SAS 741 to do this. 743 Forwarding Address Based Selector: 745 It is possible to assign a different forwarding address (or MPLS 746 forwarding label in the case of an MPLS network) for each Slice-Flow 747 Aggregate on a specific node in the network. [RFC3031] states in 748 Section 2.1 that: 'Some routers analyze a packet's network layer 749 header not merely to choose the packet's next hop, but also to 750 determine a packet's "precedence" or "class of service"'. 751 Assigning a unique forwarding address (or MPLS forwarding label) 752 to each Slice-Flow Aggregate allows Slice-Flow Aggregate packets 753 destined to a node to be distinguished by the destination address 754 (or MPLS forwarding label) that is carried in the packet. 756 This approach requires maintaining per Slice-Flow Aggregate state 757 for each destination in the network in both the control and data 758 plane and on each router in the network. For example, consider a 759 network slicing provider with a network composed of 'N' nodes, 760 each with 'K' adjacencies to its neighbors. Assuming a node can 761 be reached over 'M' different Slice-Flow Aggregates, the node 762 assigns and advertises reachability to 'M' unique forwarding 763 addresses, or MPLS forwarding labels (one per Slice-Flow Aggregate). Similarly, each node 764 assigns a unique forwarding address (or MPLS forwarding label) for 765 each of its 'K' adjacencies to enable strict steering over the 766 adjacency for each slice. 
The total number of control and data 767 plane states that need to be stored and programmed in a router's 768 forwarding is (N+K)*M states. Hence, as 'N', 'K', and 'M' 769 parameters increase, this approach suffers from scalability 770 challenges in both the control and data planes. 772 Global Identifier Based Selector: 774 An NRP Policy MAY include a Global Identifier SAS (GISS) field as 775 defined in [I-D.kompella-mpls-mspl4fa] that is carried in each 776 packet in order to associate it to the NRP supporting a Slice-Flow 777 Aggregate, independent of the forwarding address or MPLS 778 forwarding label that is bound to the destination. Routers within 779 the NRP domain can use the forwarding address (or MPLS forwarding 780 label) to determine the forwarding next-hop(s), and use the GISS 781 field in the packet to infer the specific forwarding treatment 782 that needs to be applied on the packet. 784 The GISS can be carried in one of multiple fields within the 785 packet, depending on the dataplane used. For example, in MPLS 786 networks, the GISS can be encoded within an MPLS label that is 787 carried in the packet's MPLS label stack. All packets that belong 788 to the same Slice-Flow Aggregate MAY carry the same GISS in the 789 MPLS label stack. It is also possible to have multiple GISS's map 790 to the same Slice-Flow Aggregate. 792 The GISS can be encoded in an MPLS label and may appear in several 793 positions in the MPLS label stack. For example, the VPN service 794 label may act as a GISS to allow VPN packets to be mapped to the 795 Slice-Flow Aggregate. In this case, a single VPN service label 796 acting as a GISS MAY be allocated by all Egress PEs of a VPN. 797 Alternatively, multiple VPN service labels MAY act as GISS's that 798 map a single VPN to the same Slice-Flow Aggregate to allow for 799 multiple Egress PEs to allocate different VPN service labels for a 800 VPN. In other cases, a range of VPN service labels acting as 801 multiple GISS's MAY map multiple VPN traffic to a single Slice- 802 Flow Aggregate. An example of such deployment is shown in 803 Figure 3. 805 SR Adj-SID: GISS (VPN service label) on PE2: 1001 806 9012: P1-P2 807 9023: P2-PE2 809 /-----\ /-----\ /-----\ /-----\ 810 | PE1 | ----- | P1 | ------ | P2 |------ | PE2 | 811 \-----/ \-----/ \-----/ \-----/ 813 In 814 packet: 815 +------+ +------+ +------+ +------+ 816 | IP | | 9012 | | 9023 | | 1001 | 817 +------+ +------+ +------+ +------+ 818 | Pay- | | 9023 | | 1001 | | IP | 819 | Load | +------+ +------+ +------+ 820 +----- + | 1001 | | IP | | Pay- | 821 +------+ +------+ | Load | 822 | IP | | Pay- | +------+ 823 +------+ | Load | 824 | Pay- | +------+ 825 | Load | 826 +------+ 828 Figure 3: GISS or VPN label at bottom of label stack. 830 In some cases, the position of the GISS may not be at a fixed 831 position in the MPLS label header. In this case, the GISS label 832 can show up in any position in the MPLS label stack. To enable a 833 transit router to identify the position of the GISS label, a 834 special purpose label (ideally a base special purpose label 835 (bSPL)) can be used to indicate the presence of a GISS in the MPLS 836 label stack. [I-D.kompella-mpls-mspl4fa] proposes a new bSPL 837 called Forwarding Actions Identifier (FAI) that is assigned to 838 alert of the presence of multiple actions and action data 839 (including the presence of the GISS). 
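For illustration only, the following Python sketch shows how a transit implementation might scan a received label stack for such an indicator; the numeric label values, including the stand-in value for the indicator itself, are hypothetical and are not assigned by [I-D.kompella-mpls-mspl4fa] or this document.

   # Illustrative sketch only: label values are hypothetical; the real
   # bSPL value and forwarding-actions encoding are defined elsewhere.

   HYPOTHETICAL_FAI_LABEL = 8    # stand-in for a base special purpose label

   def find_giss(label_stack: list[int]) -> int | None:
       """Scan an MPLS label stack (top first) and return the GISS carried
       in the label that follows the FAI-like indicator, if present."""
       for i, label in enumerate(label_stack):
           if label == HYPOTHETICAL_FAI_LABEL and i + 1 < len(label_stack):
               return label_stack[i + 1]   # forwarding actions label / GISS
       return None

   if __name__ == "__main__":
       # Transport label(s), then the indicator, then the GISS.
       print(find_giss([9023, HYPOTHETICAL_FAI_LABEL, 1001]))   # 1001
       print(find_giss([9012, 9023, 1001]))                     # None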
The NRP ingress boundary 840 node, in this case, imposes two labels: the FAI label and a 841 forwarding actions label that includes the GISS to identify the 842 Slice-Flow Aggregate packets as shown in Figure 4. 844 [I-D.decraene-mpls-slid-encoded-entropy-label-id] also proposes to 845 repurpose the ELI/EL [RFC6790] to carry the Slice Identifier in 846 order to minimize the size of the MPLS stack and ease incremental 847 deployment. 849 SR Adj-SID: GISS: 1001 850 9012: P1-P2 851 9023: P2-PE2 853 /-----\ /-----\ /-----\ /-----\ 854 | PE1 | ----- | P1 | ------ | P2 |------ | PE2 | 855 \-----/ \-----/ \-----/ \-----/ 857 In 858 packet: 859 +------+ +------+ +------+ +------+ 860 | IP | | 9012 | | 9023 | | FAI | 861 +------+ +------+ +------+ +------+ 862 | Pay- | | 9023 | | FAI | | 1001 | 863 | Load | +------+ +------+ +------+ 864 +------+ | FAI | | 1001 | | IP | 865 +------+ +------+ +------+ 866 | 1001 | | IP | | Pay- | 867 +------+ +------+ | Load | 868 | IP | | Pay- | +------+ 869 +------+ | Load | 870 | Pay- | +------+ 871 | Load | 872 +------+ 874 Figure 4: FAI and GISS label in the label stack. 876 When the slice is realized over an IP dataplane, the GISS can be 877 encoded in the IP header. For example, the GISS can be encoded in 878 portion of the IPv6 Flow Label field as described in 879 [I-D.filsfils-spring-srv6-stateless-slice-id]. 881 5.1.2. Network Resource Partition Resource Reservation 883 Bandwidth and network resource allocation strategies for slice 884 policies are essential to achieve optimal placement of paths within 885 the network while still meeting the target SLOs. 887 Resource reservation allows for the managing of available bandwidth 888 and for prioritization of existing allocations to enable preference- 889 based preemption when contention on a specific network resource 890 arises. Sharing of a network resource's available bandwidth amongst 891 a group of NRPs may also be desirable. For example, a Slice-Flow 892 Aggregate may not be using all of the NRP reservable bandwidth; this 893 allows other NRPs in the same group to use the available bandwidth 894 resources for other Slice-Flow Aggregates. 896 Congestion on shared network resources may result from sub-optimal 897 placement of paths in different slice policies. When this occurs, 898 preemption of some Slice-Flow Aggregate paths may be desirable to 899 alleviate congestion. A preference-based allocation scheme enables 900 prioritization of Slice-Flow Aggregate paths that can be preempted. 902 Since network characteristics and its state can change over time, the 903 NRP topology and its network state need to be propagated in the 904 network to enable ingress TE routers or Path Computation Engine 905 (PCEs) to perform accurate path placement based on the current state 906 of the NRP network resources. 908 5.1.3. Network Resource Partition Per Hop Behavior 910 In Diffserv terminology, the forwarding behavior that is assigned to 911 a specific class is called a Per Hop Behavior (PHB). The PHB defines 912 the forwarding precedence that a marked packet with a specific CS 913 receives in relation to other traffic on the Diffserv-aware network. 915 The NRP Per Hop Behavior (NRP-PHB) is the externally observable 916 forwarding behavior applied to a specific packet belonging to a 917 Slice-Flow Aggregate. The goal of an NRP-PHB is to provide a 918 specified amount of network resources for traffic belonging to a 919 specific Slice-Flow Aggregate. 
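One simple way to model an NRP-PHB is sketched below (Python, illustrative only): an aggregate resource commitment for the Slice-Flow Aggregate that may be subdivided per Diffserv class; all names and rates are assumptions.

   # Illustrative model of an NRP-PHB: an aggregate commitment for the
   # Slice-Flow Aggregate, optionally subdivided per Diffserv class.
   # All values are hypothetical.

   from dataclasses import dataclass, field

   @dataclass
   class NrpPhb:
       sfa: str                    # Slice-Flow Aggregate served by this PHB
       aggregate_rate_mbps: int    # resources committed to the aggregate
       per_cs_share: dict[str, int] = field(default_factory=dict)  # percent

       def cs_rate(self, cs: str) -> float:
           """Rate available to one Diffserv class within the aggregate."""
           share = self.per_cs_share.get(cs, 0)
           return self.aggregate_rate_mbps * share / 100.0

   gold = NrpPhb(sfa="sfa-gold", aggregate_rate_mbps=400,
                 per_cs_share={"EF": 30, "AF1": 50, "BE": 20})

   if __name__ == "__main__":
       print(gold.cs_rate("EF"))   # 120.0 Mbps of the 400 Mbps aggregate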
A single NRP may also support 920 multiple forwarding treatments or services that can be carried over 921 the same logical network. 923 The Slice-Flow Aggregate traffic may be identified at NRP ingress 924 boundary nodes by carrying an SAS to allow routers to apply a specific 925 forwarding treatment that guarantees the SLA(s). 927 With Differentiated Services (Diffserv) it is possible to carry 928 multiple services over a single converged network. Packets requiring 929 the same forwarding treatment are marked with a CS at domain ingress 930 nodes. Up to eight classes or Behavior Aggregates (BAs) may be 931 supported for a given Forwarding Equivalence Class (FEC) [RFC2475]. 932 To support multiple forwarding treatments over the same Slice-Flow 933 Aggregate, a Slice-Flow Aggregate packet MAY also carry a Diffserv CS 934 to identify the specific Diffserv forwarding treatment to be applied 935 on the traffic belonging to the same NRP. 937 At transit nodes, the CS field carried inside the packets is used to 938 determine the specific PHB, which determines the forwarding and 939 scheduling treatment before packets are forwarded and, in some cases, the 940 drop probability for each packet. 942 5.1.4. Network Resource Partition Topology 944 A key element of the NRP Policy is a customized topology that may 945 include the full physical network topology or a subset of it. The NRP 946 topology could also span multiple administrative domains and/or 947 multiple dataplane technologies. 949 An NRP topology can overlap or share a subset of links with another 950 NRP topology. A number of topology filtering policies can be defined 951 as part of the NRP Policy to limit the specific topology elements 952 that belong to the NRP. For example, a topology filtering policy can 953 leverage Resource Affinities as defined in [RFC2702] to include or 954 exclude certain links that the NRP is instantiated on in support of 955 the Slice-Flow Aggregate. 957 The NRP Policy may also include a reference to a predefined topology 958 (e.g., derived from a Flexible Algorithm Definition (FAD) as defined 959 in [I-D.ietf-lsr-flex-algo], or a Multi-Topology ID as defined in 960 [RFC4915]). A YANG data model that covers generic topology filters is 961 described in [I-D.bestbar-teas-yang-topology-filter]. Also, the Path 962 Computation Element (PCE) Communication Protocol (PCEP) extensions to 963 carry topology filters are defined in [I-D.xpbs-pce-topology-filter]. 965 5.2. Network Resource Partition Boundary 967 A network slice originates at the edge nodes of a network slice 968 provider. Traffic that is steered over the corresponding NRP 969 supporting a Slice-Flow Aggregate may traverse NRP capable as well as 970 NRP incapable interior nodes. 972 The network slice may encompass one or more domains administered by a 973 provider, for example, an organization's intranet or an ISP. The 974 network provider is responsible for ensuring that adequate network 975 resources are provisioned and/or reserved to support the SLAs offered 976 by the network end-to-end. 978 5.2.1. Network Resource Partition Edge Nodes 980 NRP edge nodes sit at the boundary of a network slice provider 981 network and receive traffic that requires steering over network 982 resources specific to an NRP that supports a Slice-Flow Aggregate. 
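As an illustration of the classification such edge nodes may perform (described further in this section), the following Python sketch matches an inbound packet's 5-tuple against operator-configured rules to select a Slice-Flow Aggregate; the rules, prefixes, and aggregate names are hypothetical.

   import ipaddress

   # Illustrative edge classification rules: match fields of the inbound
   # packet's 5-tuple to a Slice-Flow Aggregate. Rules and addresses are
   # hypothetical; only a subset of the 5-tuple is matched in this
   # simple example, and unmatched traffic falls back to a default.

   RULES = [
       {"src": "203.0.113.0/24", "proto": "udp", "dport": 5000,
        "sfa": "sfa-gold"},
       {"src": "198.51.100.0/24", "proto": None, "dport": None,
        "sfa": "sfa-bronze"},
   ]

   def classify_5tuple(src: str, dst: str, proto: str,
                       sport: int, dport: int) -> str:
       for rule in RULES:
           if ipaddress.ip_address(src) not in ipaddress.ip_network(rule["src"]):
               continue
           if rule["proto"] is not None and rule["proto"] != proto:
               continue
           if rule["dport"] is not None and rule["dport"] != dport:
               continue
           return rule["sfa"]
       return "sfa-default"

   if __name__ == "__main__":
       print(classify_5tuple("203.0.113.7", "192.0.2.1", "udp", 40000, 5000))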
983 These edge nodes are responsible for identifying Slice-Flow Aggregate 984 specific traffic flows by possibly inspecting multiple fields from 985 inbound packets (e.g., implementations may inspect IP traffic's 986 network 5-tuple in the IP and transport protocol headers) to decide 987 on which NRP it can be steered. 989 Network slice ingress nodes may condition the inbound traffic at 990 network boundaries in accordance with the requirements or rules of 991 each service's SLAs. The requirements and rules for network slice 992 services are set using mechanisms which are outside the scope of this 993 document. 995 When data plane NRP mode is employed, the NRP ingress nodes are 996 responsible for adding a suitable SAS onto packets that belong to 997 specific Slice-Flow Aggregate. In addition, edge nodes MAY mark the 998 corresponding Diffserv CS to differentiate between different types of 999 traffic carried over the same Slice-Flow Aggregate. 1001 5.2.2. Network Resource Partition Interior Nodes 1003 An NRP interior node receives slice traffic and MAY be able to 1004 identify the packets belonging to a specific Slice-Flow Aggregate by 1005 inspecting the SAS field carried inside each packet, or by inspecting 1006 other fields within the packet that may identify the traffic streams 1007 that belong to a specific Slice-Flow Aggregate. For example, when 1008 data plane NRP mode is applied, interior nodes can use the SAS 1009 carried within the packet to apply the corresponding NRP-PHB 1010 forwarding behavior. Nodes within the network slice provider network 1011 may also inspect the Diffserv CS within each packet to apply a per 1012 Diffserv class PHB within the NRP Policy, and allow differentiation 1013 of forwarding treatments for packets forwarded over the same NRP that 1014 supports the Slice-Flow Aggregate. 1016 5.2.3. Network Resource Partition Incapable Nodes 1018 Packets that belong to a Slice-Flow Aggregate may need to traverse 1019 nodes that are NRP incapable. In this case, several options are 1020 possible to allow the slice traffic to continue to be forwarded over 1021 such devices and be able to resume the NRP forwarding treatment once 1022 the traffic reaches devices that are NRP-capable. 1024 When data plane NRP mode is employed, packets carry a SAS to allow 1025 slice interior nodes to identify them. To support end-to-end network 1026 slicing, the SAS MUST be maintained in the packets as they traverse 1027 devices within the network - including NRP capable and incapable 1028 devices. 1030 For example, when the SAS is an MPLS label at the bottom of the MPLS 1031 label stack, packets can traverse over devices that are NRP incapable 1032 without any further considerations. On the other hand when the SASL 1033 is at the top of the MPLS label stack, packets can be bypassed (or 1034 tunneled) over the NRP incapable devices towards the next device that 1035 supports NRP as shown in Figure 5. 1037 SR Node-SID: SASL: 1001 @@@: NRP Policy enforced 1038 1601: P1 ...: NRP Policy not enforced 1039 1602: P2 1040 1603: P3 1041 1604: P4 1042 1605: P5 1044 @@@@@@@@@@@@@@ ........................ 1045 . 1046 /-----\ /-----\ /-----\ . 1047 | P1 | ----- | P2 | ----- | P3 | . 1048 \-----/ \-----/ \-----/ . 
1049 | @@@@@@@@@@ 1050 | 1051 /-----\ /-----\ 1052 | P4 | ------ | P5 | 1053 \-----/ \-----/ 1055 +------+ +------+ +------+ 1056 | 1001 | | 1604 | | 1001 | 1057 +------+ +------+ +------+ 1058 | 1605 | | 1001 | | IP | 1059 +------+ +------+ +------+ 1060 | IP | | 1605 | | Pay- | 1061 +------+ +------+ | Load | 1062 | Pay- | | IP | +------+ 1063 | Load | +------+ 1064 +----- + | Pay- | 1065 | Load | 1066 +------+ 1068 Figure 5: Extending network slice over NRP incapable device(s). 1070 5.2.4. Combining Network Resource Partition Modes 1072 It is possible to employ a combination of the NRP modes that were 1073 discussed in Section 4 to realize a network slice. For example, data 1074 and control plane NRP modes can be employed in parts of a network, 1075 while control plane NRP mode can be employed in the other parts of 1076 the network. The path selection, in such a case, can take into account 1077 the available NRP network resources. The SAS carried within packets 1078 allows transit nodes to enforce the corresponding NRP-PHB on the parts 1079 of the network that apply the data plane NRP mode. The SAS can be 1080 maintained while traffic traverses nodes that do not enforce data 1081 plane NRP mode, and so slice PHB enforcement can resume once traffic 1082 traverses capable nodes. 1084 5.3. Mapping Traffic on Slice-Flow Aggregates 1086 The usual techniques to steer traffic onto paths can be applicable 1087 when steering traffic over paths established for a specific Slice- 1088 Flow Aggregate. 1090 For example, one or more (layer-2 or layer-3) VPN services can be 1091 directly mapped to paths established for a Slice-Flow Aggregate. In 1092 this case, the per Virtual Routing and Forwarding (VRF) instance 1093 traffic that arrives on the Provider Edge (PE) router over external 1094 interfaces can be directly mapped to a specific Slice-Flow Aggregate 1095 path. External interfaces can be further partitioned (e.g., using 1096 VLANs) to allow mapping one or more VLANs to specific Slice-Flow 1097 Aggregate paths. 1099 Another option is to steer traffic to specific destinations directly 1100 over multiple Slice-Flow Aggregate paths. This allows traffic arriving on any 1101 external interface and targeted to such destinations to be directly 1102 steered over those paths. 1104 A third option is to utilize a data plane 1105 firewall filter or classifier to enable matching of several fields in 1106 the incoming packets to decide whether the packet belongs to a 1107 specific Slice-Flow Aggregate. This option allows for applying a 1108 rich set of rules to identify specific packets to be mapped to a 1109 Slice-Flow Aggregate. However, it requires data plane network 1110 resources to be able to perform the additional checks in hardware. 1112 6. Path Selection and Instantiation 1114 6.1. Applicability of Path Selection to Slice-Flow Aggregates 1116 The path selection in the network can be network state dependent, or 1117 network state independent as described in Section 5.1 of 1118 [I-D.ietf-teas-rfc3272bis]. The latter is the choice commonly used 1119 by IGPs when selecting a best path to a destination prefix, while the 1120 former is used by ingress TE routers, or Path Computation Engines 1121 (PCEs) when optimizing the placement of a flow based on the current 1122 network resource utilization. 1124 When path selection is network state dependent, the path computation 1125 can leverage Traffic Engineering mechanisms (e.g., as defined in 1126 [RFC2702]) to compute feasible paths taking into account the incoming 1127 traffic demand rate and the current state of the network. This allows 1128 avoiding overly utilized links and reduces the chance of congestion 1129 on traversed links. 1131 To enable TE path placement, the link state is advertised with 1132 current reservations, thereby reflecting the available bandwidth on 1133 each link. Such link reservations may be maintained centrally on a 1134 network-wide network resource manager, or distributed on devices (as 1135 usually done with RSVP). TE extensions exist today to allow IGPs 1136 (e.g., [RFC3630] and [RFC5305]), and BGP-LS [RFC7752] to advertise 1137 such link state reservations. 1139 When the network resource reservations are maintained for NRPs, the 1140 link state can carry per NRP state (e.g., reservable bandwidth). 1141 This allows path computation to take into account the specific 1142 network resources available for an NRP. In this case, we refer to 1143 the process of path placement and path provisioning as NRP aware TE 1144 (NRP-TE). 
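A simplified sketch of such NRP aware path computation is shown below (Python, illustrative only): links that lack sufficient reservable bandwidth for the requested NRP are pruned, and a shortest path is computed over what remains; the topology, metrics, and bandwidth figures are hypothetical.

   import heapq

   # Illustrative NRP aware path computation: prune links whose remaining
   # per-NRP reservable bandwidth is below the demand, then run a
   # shortest-path computation on the pruned topology. The topology,
   # metrics, and bandwidth figures are hypothetical.

   # (node A, node B): {"metric": IGP cost, "nrp_avail": {NRP-ID: Mbps}}
   LINKS = {
       ("PE1", "P1"): {"metric": 10, "nrp_avail": {"NRP1": 500, "NRP2": 100}},
       ("P1", "P2"):  {"metric": 10, "nrp_avail": {"NRP1": 50,  "NRP2": 100}},
       ("P1", "P3"):  {"metric": 20, "nrp_avail": {"NRP1": 500, "NRP2": 100}},
       ("P3", "P2"):  {"metric": 20, "nrp_avail": {"NRP1": 500, "NRP2": 100}},
       ("P2", "PE2"): {"metric": 10, "nrp_avail": {"NRP1": 500, "NRP2": 100}},
   }

   def nrp_shortest_path(src, dst, nrp, demand_mbps):
       # Keep only links that can carry the demand within the requested NRP.
       adj = {}
       for (a, b), attrs in LINKS.items():
           if attrs["nrp_avail"].get(nrp, 0) >= demand_mbps:
               adj.setdefault(a, []).append((b, attrs["metric"]))
               adj.setdefault(b, []).append((a, attrs["metric"]))
       # Plain Dijkstra on the pruned topology.
       queue, seen = [(0, src, [src])], set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nbr, metric in adj.get(node, []):
               if nbr not in seen:
                   heapq.heappush(queue, (cost + metric, nbr, path + [nbr]))
       return None, []

   if __name__ == "__main__":
       # The direct P1-P2 link lacks NRP1 bandwidth, so the path detours.
       print(nrp_shortest_path("PE1", "PE2", "NRP1", 100))
       # (60, ['PE1', 'P1', 'P3', 'P2', 'PE2'])

The same pruning step can equally be applied by an ingress TE router or a PCE operating on an NRP topology received from the network.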
1146 6.2. Applicability of Path Control Technologies to Slice-Flow 1147 Aggregates 1149 The NRP modes described in this document are agnostic to the 1150 technology used to set up paths that carry Slice-Flow Aggregate 1151 traffic. One or more paths connecting the endpoints of the mapped 1152 IETF network slices may be selected to steer the corresponding 1153 traffic streams over the resources allocated for the NRP that 1154 supports a Slice-Flow Aggregate. 1156 The feasible paths can be computed using the NRP topology and network 1157 state subject to the optimization metrics and constraints. 1159 6.2.1. RSVP-TE Based Slice-Flow Aggregate Paths 1161 RSVP-TE [RFC3209] can be used to signal LSPs over the computed 1162 feasible paths in order to carry the Slice-Flow Aggregate traffic. 1163 The specific extensions to the RSVP-TE protocol required to enable 1164 signaling of NRP aware RSVP LSPs are outside the scope of this 1165 document. 1167 6.2.2. SR Based Slice-Flow Aggregate Paths 1169 Segment Routing (SR) [RFC8402] can be used to set up and steer traffic 1170 over the computed Slice-Flow Aggregate feasible paths. 1172 The SR architecture defines a number of building blocks that can be 1173 leveraged to support the realization of NRPs that support Slice-Flow 1174 Aggregates in an SR network. 1176 Such building blocks include: 1178 * SR Policy with or without Flexible Algorithm. 1180 * Steering of services (e.g., VPN) traffic over SR paths 1182 * SR Operations, Administration, and Maintenance (OAM) and Performance 1183 Management (PM) 1185 SR allows a headend node to steer packets onto specific SR paths 1186 using a Segment Routing Policy (SR Policy). The SR policy supports 1187 various optimization objectives and constraints and can be used to 1188 steer Slice-Flow Aggregate traffic in the SR network. 1190 The SR policy can be instantiated with or without the IGP Flexible 1191 Algorithm (Flex-Algorithm) feature. It may be possible to dedicate a 1192 single SR Flex-Algorithm to compute and instantiate SR paths for the 1193 traffic of one Slice-Flow Aggregate. In this case, the SR Flex-Algorithm 1194 computed paths and Flex-Algorithm SR SIDs are not shared by the traffic of other 1195 Slice-Flow Aggregates. 
7. Network Resource Partition Protocol Extensions

Routing protocols may need to be extended to carry additional per-NRP link state. For example, [RFC5305], [RFC3630], and [RFC7752] define the IS-IS, OSPF, and BGP-LS protocol extensions that exchange network link state information to allow ingress TE routers and PCE(s) to perform proper path placement in the network. The extensions required to support network slicing may be defined in other documents and are outside the scope of this document.

The instantiation of an NRP Policy may need to be automated. Multiple options are possible to facilitate the automated distribution of an NRP Policy to capable devices.

For example, a YANG data model for the NRP Policy may be supported on network devices and controllers. A suitable transport (e.g., NETCONF [RFC6241], RESTCONF [RFC8040], or gRPC) may be used to enable configuration and retrieval of state information for NRP policies on network devices. The NRP Policy YANG data model is outside the scope of this document and is defined in [I-D.bestbar-teas-yang-slice-policy].
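As an illustration of such automation (a sketch only, assuming a RESTCONF-enabled device), the following Python fragment pushes a hypothetical NRP Policy instance to a device using the "requests" library. The module name "example-nrp-policy", the resource path, the payload structure, and the device address are invented for this example; the actual data model is the one defined in [I-D.bestbar-teas-yang-slice-policy].

   import json
   import requests

   DEVICE = "https://p1.example.net"   # hypothetical device address
   AUTH = ("admin", "admin")           # placeholder credentials

   # Invented payload shape; not the model of the referenced draft.
   nrp_policy = {
       "example-nrp-policy:nrp-policies": {
           "nrp-policy": [
               {
                   "name": "nrp-blue",
                   "dataplane-selector": {"mpls-label": 1001},
                   "resources": {"reservable-bandwidth": "10G"},
               }
           ]
       }
   }

   resp = requests.put(
       f"{DEVICE}/restconf/data/example-nrp-policy:nrp-policies",
       headers={"Content-Type": "application/yang-data+json"},
       data=json.dumps(nrp_policy),
       auth=AUTH,
       verify=False,  # lab sketch only; verify certificates in practice
   )
   resp.raise_for_status()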
8. IANA Considerations

This document has no IANA actions.

9. Security Considerations

The main goal of network slicing is to allow for varying treatment of traffic from multiple different network slices that utilize a common network infrastructure, and to allow different levels of service to be provided for traffic traversing a given network resource.

A variety of techniques may be used to achieve this, but the end result will be that some packets may be mapped to specific resources and may receive different (e.g., better) service treatment than others. The mapping of network traffic to a specific NRP is indicated primarily by the SAS, and hence an adversary may be able to utilize resources allocated to a specific NRP by injecting packets that carry the same SAS.

Such theft of service may become a denial-of-service attack when the modified or injected traffic depletes the resources available to forward legitimate traffic belonging to a specific NRP.

The defense against this type of theft-of-service and denial-of-service attack combines traffic conditioning at NRP domain boundaries with mechanisms that protect the security and integrity of the network infrastructure within an NRP domain.

10. Acknowledgement

The authors would like to thank Krzysztof Szarkowicz, Swamy SRK, Navaneetha Krishnan, Prabhu Raj Villadathu Karunakaran, Jie Dong, and Mohamed Boucadair for their review of this document and for providing valuable feedback on it. The authors would also like to thank Adrian Farrel for the detailed discussions that resulted in Section 3.

11. Contributors

The following individuals contributed to this document:

   Colby Barth
   Juniper Networks
   Email: cbarth@juniper.net

   Srihari R. Sangli
   Juniper Networks
   Email: ssangli@juniper.net

   Chandra Ramachandran
   Juniper Networks
   Email: csekar@juniper.net

12. References

12.1. Normative References

   [I-D.bestbar-teas-yang-slice-policy]
              Saad, T., Beeram, V. P., Wen, B., Ceccarelli, D., Peng, S., Chen, R., Contreras, L. M., and X. Liu, "YANG Data Model for Slice Policy", Work in Progress, Internet-Draft, draft-bestbar-teas-yang-slice-policy-02, 25 October 2021.

   [I-D.ietf-lsr-flex-algo]
              Psenak, P., Hegde, S., Filsfils, C., Talaulikar, K., and A. Gulko, "IGP Flexible Algorithm", Work in Progress, Internet-Draft, draft-ietf-lsr-flex-algo-18, 25 October 2021.

   [I-D.kompella-mpls-mspl4fa]
              Kompella, K., Beeram, V. P., Saad, T., and I. Meilik, "Multi-purpose Special Purpose Label for Forwarding Actions", Work in Progress, Internet-Draft, draft-kompella-mpls-mspl4fa-01, 12 July 2021.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

   [RFC3031]  Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol Label Switching Architecture", RFC 3031, DOI 10.17487/RFC3031, January 2001.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering (TE) Extensions to OSPF Version 2", RFC 3630, DOI 10.17487/RFC3630, September 2003.

   [RFC4915]  Psenak, P., Mirtorabi, S., Roy, A., Nguyen, L., and P. Pillay-Esnault, "Multi-Topology (MT) Routing in OSPF", RFC 4915, DOI 10.17487/RFC4915, June 2007.

   [RFC5305]  Li, T. and H. Smit, "IS-IS Extensions for Traffic Engineering", RFC 5305, DOI 10.17487/RFC5305, October 2008.

   [RFC6790]  Kompella, K., Drake, J., Amante, S., Henderickx, W., and L. Yong, "The Use of Entropy Labels in MPLS Forwarding", RFC 6790, DOI 10.17487/RFC6790, November 2012.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and S. Ray, "North-Bound Distribution of Link-State and Traffic Engineering (TE) Information Using BGP", RFC 7752, DOI 10.17487/RFC7752, March 2016.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017.

   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L., Decraene, B., Litkowski, S., and R. Shakir, "Segment Routing Architecture", RFC 8402, DOI 10.17487/RFC8402, July 2018.

12.2. Informative References

   [I-D.bestbar-lsr-slice-aware-te]
              Britto, W., Shetty, R., Barth, C., Wen, B., Peng, S., and R. Chen, "IGP Extensions for Support of Slice Aggregate Aware Traffic Engineering", Work in Progress, Internet-Draft, draft-bestbar-lsr-slice-aware-te-00, 22 February 2021.

   [I-D.bestbar-lsr-spring-sa]
              Saad, T., Beeram, V. P., Chen, R., Peng, S., Wen, B., and D. Ceccarelli, "IGP Extensions for SR Slice Aggregate SIDs", Work in Progress, Internet-Draft, draft-bestbar-lsr-spring-sa-01, 16 September 2021.

   [I-D.bestbar-spring-scalable-ns]
              Saad, T., Beeram, V. P., Chen, R., Peng, S., Wen, B., and D. Ceccarelli, "Scalable Network Slicing over SR Networks", Work in Progress, Internet-Draft, draft-bestbar-spring-scalable-ns-02, 16 September 2021.
   [I-D.bestbar-teas-yang-topology-filter]
              Beeram, V. P., Saad, T., Gandhi, R., and X. Liu, "YANG Data Model for Topology Filter", Work in Progress, Internet-Draft, draft-bestbar-teas-yang-topology-filter-02, 25 October 2021.

   [I-D.decraene-mpls-slid-encoded-entropy-label-id]
              Decraene, B., Filsfils, C., Henderickx, W., Saad, T., Beeram, V. P., and L. Jalil, "Using Entropy Label for Network Slice Identification in MPLS networks.", Work in Progress, Internet-Draft, draft-decraene-mpls-slid-encoded-entropy-label-id-02, 6 August 2021.

   [I-D.filsfils-spring-srv6-stateless-slice-id]
              Filsfils, C., Clad, F., Camarillo, P., Raza, K., Voyer, D., and R. Rokui, "Stateless and Scalable Network Slice Identification for SRv6", Work in Progress, Internet-Draft, draft-filsfils-spring-srv6-stateless-slice-id-04, 30 July 2021.

   [I-D.ietf-teas-ietf-network-slices]
              Farrel, A., Gray, E., Drake, J., Rokui, R., Homma, S., Makhijani, K., Contreras, L. M., and J. Tantsura, "Framework for IETF Network Slices", Work in Progress, Internet-Draft, draft-ietf-teas-ietf-network-slices-05, 25 October 2021.

   [I-D.ietf-teas-rfc3272bis]
              Farrel, A., "Overview and Principles of Internet Traffic Engineering", Work in Progress, Internet-Draft, draft-ietf-teas-rfc3272bis-13, 8 November 2021.

   [I-D.xpbs-pce-topology-filter]
              Xiong, Q., Peng, S., Beeram, V. P., Saad, T., and M. Koldychev, "PCEP Extensions for Topology Filter", Work in Progress, Internet-Draft, draft-xpbs-pce-topology-filter-01, 8 October 2021.

   [RFC2475]  Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, DOI 10.17487/RFC2475, December 1998.

   [RFC2702]  Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, DOI 10.17487/RFC2702, September 1999.

   [RFC5462]  Andersson, L. and R. Asati, "Multiprotocol Label Switching (MPLS) Label Stack Entry: "EXP" Field Renamed to "Traffic Class" Field", RFC 5462, DOI 10.17487/RFC5462, February 2009.

   [RFC6241]  Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., and A. Bierman, Ed., "Network Configuration Protocol (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011.

   [RFC8040]  Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017.

Authors' Addresses

   Tarek Saad
   Juniper Networks
   Email: tsaad@juniper.net

   Vishnu Pavan Beeram
   Juniper Networks
   Email: vbeeram@juniper.net

   Bin Wen
   Comcast
   Email: Bin_Wen@cable.comcast.com

   Daniele Ceccarelli
   Ericsson
   Email: daniele.ceccarelli@ericsson.com

   Joel Halpern
   Ericsson
   Email: joel.halpern@ericsson.com

   Shaofu Peng
   ZTE Corporation
   Email: peng.shaofu@zte.com.cn

   Ran Chen
   ZTE Corporation
   Email: chen.ran@zte.com.cn

   Xufeng Liu
   Volta Networks
   Email: xufeng.liu.ietf@gmail.com

   Luis M. Contreras
   Telefonica
   Email: luismiguel.contrerasmurillo@telefonica.com

   Reza Rokui
   Nokia
   Email: reza.rokui@nokia.com

   Luay Jalil
   Verizon
   Email: luay.jalil@verizon.com