2 TEAS Working Group T. Saad 3 Internet-Draft V. Beeram 4 Intended status: Standards Track Juniper Networks 5 Expires: June 25, 2021 B. Wen 6 Comcast 7 D. Ceccarelli 8 J. Halpern 9 Ericsson 10 S. Peng 11 R. Chen 12 ZTE Corporation 13 X. Liu 14 Volta Networks 15 December 22, 2020 17 Realizing Network Slices in IP/MPLS Networks 18 draft-bestbar-teas-ns-packet-01 20 Abstract 22 Network slicing provides the ability to partition a physical network 23 into multiple logical networks of varying sizes, structures, and 24 functions so that each slice can be dedicated to specific services or 25 customers. Network slices need to operate in parallel while 26 providing slice elasticity in terms of network resource allocation. 27 The Differentiated Services (Diffserv) model allows for carrying 28 multiple services on top of a single physical network by relying on 29 compliant nodes to apply specific forwarding treatment (scheduling 30 and drop policy) to packets that carry the respective Diffserv 31 code point. This document proposes a solution based on the Diffserv 32 model to realize network slicing in IP/MPLS networks. 34 Status of This Memo 36 This Internet-Draft is submitted in full conformance with the 37 provisions of BCP 78 and BCP 79. 39 Internet-Drafts are working documents of the Internet Engineering 40 Task Force (IETF). Note that other groups may also distribute 41 working documents as Internet-Drafts. The list of current Internet- 42 Drafts is at https://datatracker.ietf.org/drafts/current/. 44 Internet-Drafts are draft documents valid for a maximum of six months 45 and may be updated, replaced, or obsoleted by other documents at any 46 time.
It is inappropriate to use Internet-Drafts as reference 47 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on June 25, 2021. 50 Copyright Notice 52 Copyright (c) 2020 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (https://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 68 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 4 69 1.2. Acronyms and Abbreviations . . . . . . . . . . . . . . . 6 70 2. Network Resource Slicing Membership . . . . . . . . . . . . . 6 71 2.1. Dedicated Network Resources . . . . . . . . . . . . . . . 6 72 2.2. Shared Network Resources . . . . . . . . . . . . . . . . 7 73 3. Path Selection . . . . . . . . . . . . . . . . . . . . . . . 7 74 4. Slice Policy Modes . . . . . . . . . . . . . . . . . . . . . 8 75 4.1. Data plane Slice Policy Mode . . . . . . . . . . . . . . 8 76 4.2. Control Plane Slice Policy Mode . . . . . . . . . . . . . 9 77 4.3. Data and Control Plane Slice Policy Mode . . . . . . . . 11 78 5. Slice Policy Instantiation . . . . . . . . . . . . . . . . . 11 79 5.1. Slice Policy Definition . . . . . . . . . . . . . . . . . 12 80 5.1.1. Slice Policy Data Plane Selector . . . . . . . . . . 13 81 5.1.2. Slice Policy Resource Reservation . . . . . . . . . . 17 82 5.1.3. Slice Policy Per Hop Behavior . . . . . . . . . . . . 18 83 5.1.4. Slice Policy Topology . . . . . . . . . . . . . . . . 19 84 5.2. Slice Policy Boundary . . . . . . . . . . . . . . . . . . 19 85 5.2.1. Slice Policy Edge Nodes . . . . . . . . . . . . . . . 19 86 5.2.2. Slice Policy Interior Nodes . . . . . . . . . . . . . 20 87 5.2.3. Slice Policy Incapable Nodes . . . . . . . . . . . . 20 88 5.2.4. Combining Slice Policy Modes . . . . . . . . . . . . 21 89 5.3. Mapping Traffic on Slice Aggregates . . . . . . . . . . . 22 90 6. Control Plane Extensions . . . . . . . . . . . . . . . . . . 22 91 7. Applicability to Path Control Technologies . . . . . . . . . 23 92 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 93 9. Security Considerations . . . . . . . . . . . . . . . . . . . 23 94 10. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 24 95 11. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 24 96 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 24 97 12.1. Normative References . . . . . . . . . . . . . . . . . . 24 98 12.2. Informative References . . . . . . . . . . . . . . . . . 25 99 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 26 101 1. Introduction 103 Network slicing allows a Service Provider to create independent and 104 logical networks on top of a common or shared physical network 105 infrastructure. Such network slices can be offered to customers or 106 used internally by the Service Provider to facilitate or enhance 107 their service offerings. 
A Service Provider can also use network 107 slicing to structure and organize the elements of its infrastructure. 108 This document provides a path control technology agnostic solution 109 that a Service Provider can deploy to realize network slicing in IP/ 110 MPLS networks. 112 The definition of network slice for use within the IETF and the 113 characteristics of IETF network slice are specified in 114 [I-D.nsdt-teas-ietf-network-slice-definition]. A framework for 115 reusing IETF VPN and traffic-engineering technologies to realize IETF 116 network slices is discussed in [I-D.nsdt-teas-ns-framework]. These 117 documents also discuss the function of an IETF Network Slice 118 Controller and the requirements on its northbound and southbound 119 interfaces. 121 This document introduces the notion of a slice aggregate, which 122 comprises one or more IETF network slice traffic streams. It 123 describes how a slice policy can be used to realize a slice aggregate 124 by instantiating specific control and data plane behaviors on select 125 topological elements in IP/MPLS networks. The onus is on the IETF 126 Network Slice Controller to maintain the mapping between one or more 127 IETF network slices and a slice aggregate. The mechanisms used by 128 the controller to determine the mapping are outside the scope of this 129 document. The focus of this document is on the mechanisms required 130 at the device level to address the requirements of network slicing in 131 packet networks. 133 In a Differentiated Services (Diffserv) domain [RFC2475], packets 134 requiring the same forwarding treatment (scheduling and drop policy) 135 are classified and marked with a Class Selector (CS) at domain 136 ingress nodes. At transit nodes, the CS field inside the packet is 137 inspected to determine the specific forwarding treatment to be 138 applied before the packet is forwarded further. Similar principles 139 are adopted by this document to realize network slicing. 141 When logical networks representing slice aggregates are realized on 142 top of a shared physical network infrastructure, it is important to 143 steer traffic onto the specific network resources allocated for the 144 slice aggregate. In packet networks, the packets that traverse a 145 specific slice aggregate MAY be identified by one or more specific 146 fields carried within the packet. A slice policy ingress boundary 147 node populates the respective field(s) in packets that enter a slice 148 aggregate to allow interior slice policy nodes to identify those 149 packets and apply the specific Per Hop Behavior (PHB) that is 150 associated with the slice aggregate. The PHB defines the scheduling 151 treatment and, in some cases, the packet drop probability. 153 The slice aggregate traffic may further carry a Diffserv CS to allow 154 differentiation of forwarding treatments for packets within a slice 155 aggregate. For example, when using MPLS as a dataplane, it is 156 possible to identify packets belonging to the same slice aggregate by 157 carrying a global MPLS label in the label stack that identifies the 158 slice aggregate in each packet. Additional Diffserv classification 159 may be indicated in the Traffic Class (TC) bits of the global MPLS 160 label to allow further differentiation of forwarding treatments for 161 traffic traversing the same slice aggregate network resources.
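   As a non-normative illustration of the MPLS encoding described
   above, the following Python sketch shows how an ingress node might
   impose a global slice selector label, with additional Diffserv
   classification carried in its TC bits, on packets entering a slice
   aggregate.  The label value, the CS-to-TC mapping, and all helper
   names are invented for this example and are not defined by this
   document.

      # Minimal sketch: impose a slice selector label carrying the
      # Diffserv class in its TC bits, followed by forwarding labels.
      # All values below are examples only.

      SS_LABEL = 1001                            # assumed global SS label
      CS_TO_TC = {"EF": 5, "AF41": 4, "BE": 0}   # example CS-to-TC mapping

      def encode_entry(label, tc, s_bit, ttl=64):
          """Pack one 32-bit MPLS label stack entry (RFC 3032 layout)."""
          return (label << 12) | (tc << 9) | (s_bit << 8) | ttl

      def impose_slice_stack(forwarding_labels, diffserv_cs):
          tc = CS_TO_TC.get(diffserv_cs, 0)
          stack = [encode_entry(SS_LABEL, tc, 0)]      # SS label on top
          for i, label in enumerate(forwarding_labels):
              s = 1 if i == len(forwarding_labels) - 1 else 0
              stack.append(encode_entry(label, tc, s))
          return stack

      # Example: two forwarding labels below the slice selector label.
      stack = impose_slice_stack([9012, 9023], "AF41")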
164 This document covers different modes of slice policy and discusses 165 how each slice policy mode can ensure proper placement of slice 166 aggregate paths and respective treatment of slice aggregate traffic. 168 1.1. Terminology 170 The reader is expected to be familiar with the terminology specified 171 in [I-D.nsdt-teas-ietf-network-slice-definition] and 172 [I-D.nsdt-teas-ns-framework]. 174 The following terminology is used in the document: 176 IETF network slice: 177 a well-defined composite of a set of endpoints, the connectivity 178 requirements between subsets of these endpoints, and associated 179 service requirements; the term 'network slice' in this document 180 refers to 'IETF network slice' 181 [I-D.nsdt-teas-ietf-network-slice-definition]. 183 Slice: 184 a set of characteristics and behaviors that separate one type of 185 user-traffic from another 186 [I-D.nsdt-teas-ietf-network-slice-definition]. 188 IETF Network Slice Controller (NSC): 189 controller that is used to realize an IETF network slice 190 [I-D.nsdt-teas-ietf-network-slice-definition]. 192 Slice policy: 193 a policy construct that enables instantiation of mechanisms in 194 support of IETF network slice specific control and data plane 195 behaviors on select topological elements; the enforcement of a 196 slice policy results in the creation of a slice aggregate. 198 Slice aggregate: 199 a collection of packets that match a slice policy's selection 200 criteria and are given the same forwarding treatment; a slice 201 aggregate comprises one or more IETF network slice traffic 202 streams; the mapping of one or more IETF network slices to a slice 203 aggregate is maintained by the IETF Network Slice Controller. 205 Slice policy capable node: 206 a node that supports one of the slice policy modes described in 207 this document. 209 Slice policy incapable node: 210 a node that does not support any of the slice policy modes 211 described in this document. 213 Slice aggregate traffic: 214 traffic that is forwarded over network resources associated with a 215 specific slice aggregate. 217 Slice aggregate path: 218 a path that is set up over network resources associated with a 219 specific slice aggregate. 221 Slice aggregate packet: 222 a packet that traverses network resources associated with a 223 specific slice aggregate. 225 Slice policy topology: 226 a set of topological elements associated with a slice policy. 228 Slice aggregate aware TE: 229 a mechanism for TE path selection that takes into account the 230 available network resources associated with a specific slice 231 aggregate. 233 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 234 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 235 "OPTIONAL" in this document are to be interpreted as described in BCP 236 14 [RFC2119] [RFC8174] when, and only when, they appear in all 237 capitals, as shown here. 239 1.2. Acronyms and Abbreviations 241 BA: Behavior Aggregate 243 CS: Class Selector 245 SS: Slice Selector 247 S-PHB: Slice policy Per Hop Behavior as described in Section 5.1.3 249 SSL: Slice Selector Label as described in Section 5.1.1 251 SSLI: Slice Selector Label Indicator 253 SLA: Service Level Agreement 255 SLO: Service Level Objective 257 Diffserv: Differentiated Services 259 MPLS: Multiprotocol Label Switching 261 LSP: Label Switched Path 263 RSVP: Resource Reservation Protocol 265 TE: Traffic Engineering 267 SR: Segment Routing 269 VRF: Virtual Routing and Forwarding 271 2.
Network Resource Slicing Membership 273 A slice aggregate can span multiple parts of an IP/MPLS network 274 (e.g., all or specific network resources in the access, aggregation, 275 or core network), and can stretch across multiple operator domains. 276 A slice policy topology may include all or a sub-set of the physical 277 nodes and links of an IP/MPLS network; it may comprise 278 dedicated and/or shared network resources (e.g., in terms of 279 processing power, storage, and bandwidth). 281 2.1. Dedicated Network Resources 283 Physical network resources may be fully dedicated to a specific slice 284 aggregate. For example, traffic belonging to a slice aggregate can 285 traverse dedicated network resources without being subjected to contention 286 from traffic of other slice aggregates. Dedicated network resource 287 slicing allows for simple partitioning of the physical network 288 resources amongst slice aggregates without the need to distinguish 289 packets traversing the dedicated network resources since only one 290 slice aggregate traffic stream can traverse the dedicated resource at 291 any time. 293 2.2. Shared Network Resources 295 To optimize network utilization, sharing of the physical network 296 resources may be desirable. In such a case, the same physical network 297 resource capacity is divided among multiple slice aggregates. Shared 298 network resources can be partitioned in the data plane (for example 299 by applying hardware policers and shapers) and/or partitioned in the 300 control plane by providing a logical representation of the physical 301 link that has a subset of the network resources available to it. 303 3. Path Selection 305 Path selection in a network can be network state dependent, or 306 network state independent as described in Section 5.1 of 307 [I-D.draft-ietf-teas-rfc3272bis-09]. The latter is the choice 308 commonly used by IGPs when selecting a best path to a destination 309 prefix, while the former is used by ingress TE routers or Path 310 Computation Elements (PCEs) when optimizing the placement of a flow 311 based on the current network resource utilization. 313 For example, when steering traffic on a delay optimized path, the IGP 314 can use its link state database's view of the network topology to 315 compute a path optimizing for the delay metric of each link in the 316 network, resulting in the lowest cumulative delay path. 318 When path selection is network state dependent, the path computation 319 can leverage Traffic Engineering mechanisms (e.g., as defined in 320 [RFC2702]) to compute feasible paths taking into account the incoming 321 traffic demand rate and current state of the network. This allows 322 path placement to avoid overly utilized links and reduces the chance of congestion 323 on traversed links. 325 To enable TE path placement, the link state is advertised with 326 current reservations, thereby reflecting the available bandwidth on 327 each link. Such link reservations may be maintained centrally on a 328 network-wide network resource manager, or distributed on devices (as 329 usually done with RSVP). TE extensions exist today to allow IGPs 330 (e.g., [RFC3630] and [RFC5305]), and BGP-LS [RFC7752] to advertise 331 such link state reservations. 333 When network resource reservations are also slice aggregate aware, 334 the link state can carry per slice aggregate state (e.g., reservable 335 bandwidth).
This allows path computation to take into account the 336 specific network resources available for a slice aggregate when 337 determining the path for a specific flow. In this case, we refer to 338 the process of path placement and path provisioning as slice 339 aggregate aware TE. 341 4. Slice Policy Modes 343 A slice policy can be used to dictate whether the partitioning of the 344 shared network resources amongst multiple slice aggregates can be 345 achieved by realizing slice aggregates in: 347 a) data plane only, or 349 b) control plane only, or 351 c) both control and data planes. 353 4.1. Data plane Slice Policy Mode 355 The physical network resources can be partitioned on network devices 356 by applying a Per Hop forwarding Behavior (PHB) onto packets that 357 traverse the network devices. In the Diffserv model, a Class 358 Selector (CS) is carried in the packet and is used by transit nodes 359 to apply the PHB that determines the scheduling treatment and drop 360 probability for packets. 362 When data plane slice policy mode is applied, packets need to be 363 forwarded on the specific slice aggregate network resources and need 364 to receive the specific forwarding treatment that is dictated in the 365 slice policy (refer to Section 5.1 below). A Slice Selector (SS) 366 MUST be carried in each packet to identify the slice aggregate that 367 it belongs to. 369 The ingress node of a slice policy domain, in addition to marking 370 packets with a Diffserv CS, MAY also add an SS to each slice 371 aggregate packet. The transit nodes within a slice policy domain MAY 372 use the SS to associate packets with a slice aggregate and to 373 determine the Slice policy Per Hop Behavior (S-PHB) that is applied 374 to the packet (refer to Section 5.1.3 for further details). The CS 375 MAY be used to apply a Diffserv PHB to the packet to allow 376 differentiation of traffic treatment within the same slice aggregate. 378 When data plane only slice policy mode is used, routers may rely on a 379 network state independent view of the topology to determine the best 380 paths to reach destinations. In this case, the best path selection 381 dictates the forwarding path of packets to the destination. The SS 382 field carried in each packet determines the specific S-PHB treatment 383 along the selected path. 385 For example, the Segment-Routing Flexible Algorithm 386 [I-D.ietf-lsr-flex-algo] may be deployed in a network to steer 387 packets on the IGP computed lowest cumulative delay path. A slice 388 policy may be used to allow links along the least latency path to 389 share their data plane resources amongst multiple slice aggregates. In 390 this case, the packets that are steered on a specific slice policy 391 carry the SS field that enables routers (along with the Diffserv CS) 392 to determine the S-PHB and enforce the forwarding treatment of slice aggregate traffic streams. 394 4.2. Control Plane Slice Policy Mode 396 The physical network resources in the network can be logically 397 partitioned by having a representation of network resources appear in 398 a virtual topology. The virtual topology can contain all or a subset 399 of the physical network resources. The logical network resources 400 that appear in the virtual topology can reflect a part of, the whole of, or an amount in 401 excess of the physical network resource capacity (when 402 oversubscription is desirable). For example, a physical link 403 bandwidth can be divided into fractions, each dedicated to a slice 404 aggregate.
Each fraction of the physical link bandwidth MAY be 405 represented as a logical link in a virtual topology that is used when 406 determining paths associated with a specific slice aggregate. The 407 virtual topology associated with the slice policy can be used by 408 routing protocols, or by the ingress/PCE when computing slice 409 aggregate aware TE paths. 411 To perform network state dependent path computation in this mode 412 (slice aggregate aware TE), the resource reservation on each link 413 needs to be slice aggregate aware. Multiple slice policies may be 414 applied on the same physical link. The slice aggregate network 415 resource availability on links is updated (and may eventually be 416 advertised in the network) when new paths are placed in the network. 417 The slice aggregate resource reservation, in this case, can be 418 maintained on each device or be centralized on a resource reservation 419 manager that holds reservation states on links in the network. 421 Multiple slice aggregates can form a group and share the available 422 network resources allocated to each slice aggregate. In this case, a 423 node can update the reservable bandwidth for each slice aggregate to 424 take into consideration the available bandwidth from other slice 425 aggregates in the same group. 427 For illustration purposes, the diagram below represents bandwidth 428 isolation or sharing amongst a group of slice aggregates. In 429 Figure 1a, the slice aggregates S_AGG1, S_AGG2, S_AGG3, and S_AGG4 430 do not share any bandwidth with each other. In Figure 1b, the 431 slice aggregates S_AGG1 and S_AGG2 can share between them the available 432 bandwidth portion allocated to each. Similarly, S_AGG3 and S_AGG4 433 can share amongst themselves any available bandwidth allocated to 434 them, but they cannot share available bandwidth allocated to S_AGG1 435 or S_AGG2. In both cases, the Max Reservable Bandwidth may exceed 436 the actual physical link resource capacity to allow for 437 oversubscription. 439 I-----------------------------I I-----------------------------I 440 <--S_AGG1-> I I-----------------I I 441 I---------I I I <-S_AGG1-> I I 442 I I I I I-------I I I 443 I---------I I I I I I I 444 I I I I-------I I I 445 <-----S_AGG2------> I I I I 446 I-----------------I I I <-S_AGG2-> I I 447 I I I I I---------I I I 448 I-----------------I I I I I I I 449 I I I I---------I I I 450 <---S_AGG3----> I I I I 451 I-------------I I I S_AGG1 + S_AGG2 I I 452 I I I I-----------------I I 453 I-------------I I I I 454 I I I I 455 <---S_AGG4----> I I-----------------I I 456 I-------------I I I <-S_AGG3-> I I 457 I I I I I-------I I I 458 I-------------I I I I I I I 459 I I I I-------I I I 460 I S_AGG1+S_AGG2+S_AGG3+S_AGG4 I I I I 461 I I I <-S_AGG4-> I I 462 I-----------------------------I I I---------I I I 463 <--Max Reservable Bandwidth--> I I I I I 464 I I---------I I I 465 I I I 466 I S_AGG3 + S_AGG4 I I 467 I-----------------I I 468 I S_AGG1+S_AGG2+S_AGG3+S_AGG4 I 469 I I 470 I-----------------------------I 471 <--Max Reservable Bandwidth--> 473 (a) No bandwidth sharing (b) Sharing bandwidth between 474 between slice aggregates. slice aggregates of the 475 same group 477 Figure 1: Bandwidth Isolation/Sharing. 479 4.3. Data and Control Plane Slice Policy Mode 481 In order to support strict guarantees for slice aggregates, the 482 network resources can be partitioned in both the control plane and 483 data plane.
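   The following non-normative Python sketch illustrates one way a
   control plane might account for such partitioned reservations,
   including the bandwidth sharing groups shown in Figure 1b.  The
   class, its attributes, and all values are invented for illustration
   and do not prescribe an implementation.

      # Illustrative only: per-link admission of slice aggregate
      # reservations with sharing groups, loosely modeling Figure 1b.
      # Names and numbers are invented; allocations may sum to more
      # than the physical capacity when oversubscription is allowed.

      class SliceAggregateLink:
          def __init__(self, allocations, groups):
              self.alloc = dict(allocations)  # per slice aggregate share
              self.groups = groups            # e.g. [{"S_AGG1", "S_AGG2"}]
              self.used = {agg: 0 for agg in self.alloc}

          def admit(self, agg, bw):
              group = next((g for g in self.groups if agg in g), {agg})
              # A member may borrow bandwidth left unused by the rest
              # of its group, but never from slice aggregates outside it.
              group_alloc = sum(self.alloc[a] for a in group)
              group_used = sum(self.used[a] for a in group)
              if group_used + bw <= group_alloc:
                  self.used[agg] += bw
                  return True
              return False

      link = SliceAggregateLink(
          allocations={"S_AGG1": 30, "S_AGG2": 30,
                       "S_AGG3": 20, "S_AGG4": 20},
          groups=[{"S_AGG1", "S_AGG2"}, {"S_AGG3", "S_AGG4"}])
      assert link.admit("S_AGG1", 45)      # borrows S_AGG2's headroom
      assert not link.admit("S_AGG3", 45)  # cannot borrow from group 1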
485 The control plane partitioning allows the creation of customized 486 topologies per slice aggregate that routers or a Path Computation 487 Element (PCE) can use to determine optimal path placement for specific 488 demand flows (Slice aggregate aware TE). 490 The data plane partitioning protects slice aggregate traffic from 491 network resource contention that could occur due to bursts in traffic 492 from other slice aggregates traversing the same shared network 493 resource. 495 5. Slice Policy Instantiation 497 A network slice can span multiple technologies and multiple 498 administrative domains. Depending on the network slice consumer's 499 requirements, a network slice can be differentiated from other 500 network slices in terms of data, control, or management planes. 502 The consumer of a network slice expresses their intent by specifying 503 requirements rather than mechanisms to realize the slice. The 504 requirements for a network slice can vary and can be expressed in 505 terms of connectivity needs between end-points (point-to-point, 506 point-to-multipoint or multipoint-to-multipoint) with customizable 507 network capabilities that may include data speed, quality, latency, 508 reliability, security, and services (refer to 509 [I-D.nsdt-teas-ietf-network-slice-definition] for more details). 510 These capabilities are always provided based on a Service Level 511 Agreement (SLA) between the network slice consumer and the provider. 513 The onus is on the network slice controller to consume the service 514 layer slice intent and realize it with an appropriate slice policy. 515 Multiple IETF network slices can be mapped to the same slice policy 516 resulting in a slice aggregate. The network-wide consistent slice 517 policy definition is distributed to the devices in the network as 518 shown in Figure 2. The specification of the network slice intent on 519 the northbound interface of the controller and the mechanism used to 520 map the network slice to a slice policy are outside the scope of this 521 document. 523 | 524 | Slice Intent 525 +---------------+ 526 | Network Slice | 527 | Controller | 528 +---------------+ 529 | 530 | Slice Policy 531 | 532 | 533 XXXX|XXXXXX 534 XX /| XX 535 XX / | XX 536 XX / | XX 537 XXXX v v XXXX 538 XXX Ingress All XXX 539 XXX node(s) nodes XXX 540 XXX XXX 541 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 542 <----------- Path Control ---------> 543 RSVP-TE/SR-Policy/SR-FlexAlgo .. 545 Figure 2: Slice Policy Instantiation. 547 5.1. Slice Policy Definition 549 The slice policy is a network-wide construct that is consumed by 550 network devices, and may include rules that control the following: 552 o Data plane specific policies: This includes the SS, any firewall 553 rules or flow-spec filters, and QoS profiles associated with the 554 slice policy and any classes within it. 556 o Control plane specific policies: This includes guaranteed 557 bandwidth, any network resource sharing amongst slice policies, 558 and reservation preference to prioritize any reservations of a 559 specific slice policy over others. 561 o Topology membership policies: This defines policies that dictate 562 node/link/function network resource topology association for a 563 specific slice policy. 565 There is a desire for flexibility in realizing network slices to 566 support the services across networks consisting of products from 567 multiple vendors.
These networks may also be grouped into disparate 567 domains and deploy various path control technologies and tunnel 568 techniques to carry traffic across the network. It is expected that 569 a standardized data model for slice policy will facilitate the 570 instantiation and management of slice aggregates on slice policy 571 capable nodes. 573 It is also possible to distribute the slice policy to network devices 574 using several mechanisms, including protocols such as NETCONF or 575 RESTCONF, or exchanging it using a suitable routing protocol that 576 network devices participate in (such as IGP(s) or BGP). 579 5.1.1. Slice Policy Data Plane Selector 581 A router MUST be able to identify a packet belonging to a slice 582 aggregate before it can apply the proper forwarding treatment or 583 S-PHB associated with the slice policy. One or more fields within 584 the packet MAY be used as an SS to do this. 586 Forwarding Address Slice Selector: 588 One approach to distinguish packets targeted to a destination but 589 belonging to different slice aggregates is to assign multiple 590 forwarding addresses (or multiple MPLS label bindings in the case 591 of an MPLS network) for the same node - one for each slice aggregate 592 on which traffic can be steered towards the destination. For 593 example, when realizing a network slice over an IP dataplane, the 594 same destination can be assigned multiple IP addresses (or 595 multiple SRv6 locators in the case of an SRv6 network) to enable 596 steering of traffic to the same destination over multiple slice 597 policies. 599 Similarly, for an MPLS dataplane, [RFC3031] states in Section 2.1 600 that: 'Some routers analyze a packet's network layer header not 601 merely to choose the packet's next hop, but also to determine a 602 packet's "precedence" or "class of service"'. In such case, the 603 same destination can be assigned multiple MPLS label bindings 604 corresponding to an LSP that traverses network resources of a 605 specific slice aggregate towards the destination. 607 The slice aggregate specific forwarding address (or MPLS 608 forwarding label) can be carried in the packet to allow (IP or 609 MPLS) routers along the path to identify the packets and apply the 610 respective S-PHB and forwarding treatment. This approach requires 611 maintaining per slice aggregate state for each destination in the 612 network in both the control and data plane and on each router in 613 the network. 615 For example, consider a network slicing provider with a network 616 composed of 'N' nodes, each with 'K' adjacencies to its neighbors. 617 Assuming a node is reachable in as many as 'M' slice policies, the 618 node will have to assign and advertise reachability for 'M' unique 619 forwarding addresses, or MPLS forwarding labels. Similarly, each 620 node will have to assign a unique forwarding address (or MPLS 621 forwarding label) for each of its 'K' adjacencies to enable strict 622 steering over each. Consequently, the control plane at any node 623 in the network will need to store as many as (N+K)*M states. In 624 addition, a node will have to store and program (N+K)*M forwarding 625 addresses or label entries in its Forwarding Information Base 626 (FIB) to realize this. Therefore, as 'N', 'K', and 'M' parameters 627 increase, this approach will have scalability challenges both in 628 the control and data planes.
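   For concreteness, in an illustrative network of N=1,000 nodes, each
   with K=10 adjacencies and reachable in M=8 slice policies, every
   node would need to maintain as many as (1,000+10)*8 = 8,080 control
   plane states and program a comparable number of FIB entries;
   doubling the number of slice policies doubles that burden on every
   node.  (These figures are examples only.)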
630 Global Identifier Slice Selector: 632 A slice policy can include a global Slice Selector (SS) field that is 633 carried in each packet to identify the packets belonging to a 634 specific slice aggregate, independent of the forwarding address or 635 MPLS forwarding label that is bound to the destination. Routers 636 within the slice policy domain can use the forwarding address (or 637 MPLS forwarding label) to determine the forwarding path, and use 638 the SS field in the packet to determine the specific S-PHB that 639 gets applied on the packet. This approach allows better scaling 640 since it relies on a single forwarding address or MPLS label 641 binding to be used, independent of the number of slice policies 642 required along the path. In this case, the additional SS field 643 will need to be carried and maintained in each packet while it 644 traverses the slice policy domain. 646 The SS can be carried in one of multiple fields within the packet, 647 depending on the dataplane type used. For example, in MPLS 648 networks, the SS can be represented as a global MPLS label that is 649 carried in the packet's MPLS label stack. All packets that belong 650 to the same slice aggregate MAY carry the same SS label in the 651 MPLS label stack. It is possible, as well, to have multiple SS 652 labels that map to the same slice policy S-PHB. 654 The MPLS SS Label (SSL) may appear in several positions in the 655 MPLS label stack. For example, the MPLS SSL can be maintained at 656 the top of the label stack while the packet is forwarded along the 657 MPLS path. In this case, the forwarding at each hop is determined 658 by the forwarding label that resides below the SSL. Figure 3 659 shows an example where the SSL appears at the top of the MPLS label 660 stack in a packet. PE1 is a slice policy edge node that receives 661 the packet that needs to be steered over a slice specific MPLS 662 Path. PE1 computes the SR Path composed of the Label Segment- 663 List={9012, 9023}. It imposes an SSL 1001 corresponding to Slice- 664 ID 1001 followed by the SR Path Segment-List. At P1, the top 665 label sets the context of the packet to Slice-ID=1001. The 666 forwarding of the packet is determined by inspecting the 667 forwarding label (below the SSL) within the context of SSL. 669 SR Adj-SID: SSL: 1001 670 9012: P1-P2 671 9023: P2-PE2 673 /-----\ /-----\ /-----\ /-----\ 674 | PE1 | ----- | P1 | ------ | P2 |------ | PE2 | 675 \-----/ \-----/ \-----/ \-----/ 677 In 678 packet: 679 +------+ +------+ +------+ +------+ 680 | IP | | 1001 | | 1001 | | 1001 | 681 +------+ +------+ +------+ +------+ 682 | Pay- | | 9012 | | 9023 | | IP | 683 | Load | +------+ +------+ +------+ 684 +----- + | 9023 | | IP | | Pay- | 685 +------+ +------+ | Load | 686 | IP | | Pay- | +------+ 687 +------+ | Load | 688 | Pay- | +------+ 689 | Load | 690 +------+ 692 Figure 3: SSL at top of label stack. 694 The SSL can also reside at the bottom of the label stack. For 695 example, the VPN service label may also be used as an SSL, which 696 allows steering of traffic towards one or more egress PEs over the 697 same slice aggregate. In such cases, one or more service labels 698 MAY be mapped to the same slice aggregate. The same VPN label may 699 also be allocated on all egress PEs so it can serve as a single 700 SSL for a specific slice policy. Alternatively, a range of VPN 701 labels may be mapped to a single slice aggregate to allow carrying 702 multiple VPNs over the same slice aggregate as shown in Figure 4.
704 SR Adj-SID: SSL (VPN) on PE2: 1001 705 9012: P1-P2 706 9023: P2-PE2 708 /-----\ /-----\ /-----\ /-----\ 709 | PE1 | ----- | P1 | ------ | P2 |------ | PE2 | 710 \-----/ \-----/ \-----/ \-----/ 712 In 713 packet: 714 +------+ +------+ +------+ +------+ 715 | IP | | 9012 | | 9023 | | 1001 | 716 +------+ +------+ +------+ +------+ 717 | Pay- | | 9023 | | 1001 | | IP | 718 | Load | +------+ +------+ +------+ 719 +----- + | 1001 | | IP | | Pay- | 720 +------+ +------+ | Load | 721 | IP | | Pay- | +------+ 722 +------+ | Load | 723 | Pay- | +------+ 724 | Load | 725 +------+ 727 Figure 4: SSL or VPN label at bottom of label stack. 729 In some cases, the position of the SSL may not be at a fixed place 730 in the MPLS label stack. In this case, transit routers cannot 731 expect the SSL at a fixed place in the MPLS label stack. This can 732 be addressed by introducing a new Special Purpose Label from the 733 reserved label space called a Slice Selector Label Indicator 734 (SSLI). The slice policy ingress boundary node, in this case, 735 will need to impose at least two additional MPLS labels (SSLI + 736 SSL) to identify the slice aggregate that the packets belong to as 737 shown in Figure 5. 739 SR Adj-SID: SSLI/SSL: SSLI/1001 740 9012: P1-P2 741 9023: P2-PE2 743 /-----\ /-----\ /-----\ /-----\ 744 | PE1 | ----- | P1 | ------ | P2 |------ | PE2 | 745 \-----/ \-----/ \-----/ \-----/ 747 In 748 packet: 749 +------+ +------+ +------+ +------+ 750 | IP | | 9012 | | 9023 | | SSLI | 751 +------+ +------+ +------+ +------+ 752 | Pay- | | 9023 | | SSLI | | 1001 | 753 | Load | +------+ +------+ +------+ 754 +------+ | SSLI | | 1001 | | IP | 755 +------+ +------+ +------+ 756 | 1001 | | IP | | Pay- | 757 +------+ +------+ | Load | 758 | IP | | Pay- | +------+ 759 +------+ | Load | 760 | Pay- | +------+ 761 | Load | 762 +------+ 764 Figure 5: SSLI and SSL at bottom of label stack. 766 When the slice is realized over an IP dataplane, the SS can be 767 encoded in the IP header. For example, the SS can be encoded in a 768 portion of the IPv6 Flow Label field as described in 769 [I-D.filsfils-spring-srv6-stateless-slice-id]. 771 5.1.2. Slice Policy Resource Reservation 773 Bandwidth and network resource allocation strategies for slice 774 policies are essential to achieve optimal placement of paths within 775 the network while still meeting the target SLOs. 777 Resource reservation allows for the management of available bandwidth 778 and for the prioritization of existing allocations to enable preference- 779 based preemption when contention on a specific network resource 780 arises. Sharing of a network resource's available bandwidth amongst 781 a group of slice policies may also be desirable. For example, a 782 slice aggregate may not always be using all of its reservable 783 bandwidth; this allows other slice policies in the same group to use 784 the available bandwidth resources. 786 Congestion on shared network resources may result from sub-optimal 787 placement of paths in different slice policies. When this occurs, 788 preemption of some slice aggregate specific paths may be desirable to 789 alleviate congestion. A preference-based allocation scheme enables 790 prioritization of slice aggregate paths that can be preempted.
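   As a non-normative sketch of such a preference-based scheme, the
   following Python fragment admits a new slice aggregate path on a
   congested link by preempting lower-preference paths first; the data
   structures and field names are invented for illustration.

      # Illustrative only: preference-based preemption on a congested
      # link. Higher "pref" values win; all names are invented here.

      def admit_with_preemption(reservations, capacity, new_resv):
          """reservations: list of {"path": str, "bw": int, "pref": int}.
          Returns (admitted, list of preempted path names)."""
          used = sum(r["bw"] for r in reservations)
          if used + new_resv["bw"] <= capacity:
              reservations.append(new_resv)
              return True, []
          preempted = []
          # Preempt the lowest-preference paths first, and only paths
          # whose preference is strictly lower than the new path's.
          for r in sorted(reservations, key=lambda r: r["pref"]):
              if r["pref"] >= new_resv["pref"]:
                  break
              preempted.append(r)
              used -= r["bw"]
              if used + new_resv["bw"] <= capacity:
                  for p in preempted:
                      reservations.remove(p)
                  reservations.append(new_resv)
                  return True, [p["path"] for p in preempted]
          return False, []

      resvs = [{"path": "lsp1", "bw": 40, "pref": 1},
               {"path": "lsp2", "bw": 40, "pref": 7}]
      print(admit_with_preemption(resvs, 100,
                                  {"path": "lsp3", "bw": 50, "pref": 5}))
      # -> (True, ['lsp1'])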
792 Since network characteristics and state can change over time, the 793 slice policy topology and its state also need to be propagated in 794 the network to enable ingress TE routers or Path Computation 795 Elements (PCEs) to perform accurate path placement based on the current state 796 of the slice policy network resources. 798 5.1.3. Slice Policy Per Hop Behavior 800 In Diffserv terminology, the forwarding behavior that is assigned to 801 a specific class is called a Per Hop Behavior (PHB). The PHB defines 802 the forwarding precedence that a marked packet with a specific CS 803 receives in relation to other traffic on the Diffserv-aware network. 805 A Slice policy Per Hop Behavior (S-PHB) is the externally observable 806 forwarding behavior applied to a specific packet belonging to a slice 807 aggregate. The goal of an S-PHB is to provide a specified amount of 808 network resources for traffic belonging to a specific slice 809 aggregate. A single slice policy may also support multiple 810 forwarding treatments or services that can be carried over the same 811 logical network. 813 The slice aggregate traffic may be identified at slice policy ingress 814 boundary nodes by carrying an SS to allow routers to apply a specific 815 forwarding treatment that guarantees the SLA(s). 817 With Differentiated Services (Diffserv) it is possible to carry 818 multiple services over a single converged network. Packets requiring 819 the same forwarding treatment are marked with a Class Selector (CS) 820 at domain ingress nodes. Up to eight classes or Behavior Aggregates 821 (BAs) may be supported for a given Forwarding Equivalence Class (FEC) 822 [RFC2475]. To support multiple forwarding treatments over the same 823 slice aggregate, a slice aggregate packet MAY also carry a Diffserv 824 CS to identify the specific Diffserv forwarding treatment to be 825 applied on the traffic belonging to the same slice policy. 827 At transit nodes, the CS field carried inside each packet is used to 828 determine the specific PHB, which dictates the forwarding and 829 scheduling treatment before packets are forwarded and, in some cases, 830 the drop probability for each packet. 832 5.1.4. Slice Policy Topology 834 A key element of the slice policy is a customized topology that may 835 include the full physical network topology or a subset of it. The 836 slice policy topology could also span multiple administrative domains 837 and/or multiple dataplane technologies. 839 A slice policy topology can overlap or share a subset of links with 840 another slice policy topology. A number of topology filtering 841 policies can be defined as part of the slice policy to limit the 842 specific topology elements that belong to a slice policy. For 843 example, a topology filtering policy can leverage Resource Affinities 844 as defined in [RFC2702] to include or exclude certain links for a 845 specific slice aggregate. The slice policy may also include a 846 reference to a predefined topology (e.g. derived from a Flexible 847 Algorithm Definition (FAD) as defined in [I-D.ietf-lsr-flex-algo], or 848 a Multi-Topology ID as defined in [RFC4915]). 850 5.2. Slice Policy Boundary 852 A network slice originates at the edge nodes of a network slice 853 provider. Traffic that is steered over the corresponding slice 854 policy may traverse slice policy capable interior nodes, as well as 855 slice policy incapable interior nodes.
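   Before detailing the roles of each node type, the following
   non-normative Python sketch illustrates the per-packet decision
   described in Section 5.1.3 as a slice policy capable node might
   apply it: the SS selects the slice aggregate S-PHB, and the
   Diffserv CS differentiates treatment within it.  The table
   contents and all names are invented for illustration.

      # Illustrative only: per-packet decision at a slice policy
      # capable node. The SS-to-S-PHB table and names are invented.

      S_PHB_TABLE = {1001: "s-agg1-queue", 1002: "s-agg2-queue"}
      DEFAULT_PHB = "best-effort-queue"

      def classify(packet):
          """Return (queue, drop_precedence) for a packet, using the SS
          to pick the slice aggregate S-PHB and the CS within it."""
          ss = packet.get("ss")            # absent on unmarked traffic
          queue = S_PHB_TABLE.get(ss, DEFAULT_PHB)
          # The CS differentiates treatment within the slice aggregate.
          drop_precedence = packet.get("cs", 0)
          return queue, drop_precedence

      print(classify({"ss": 1001, "cs": 3}))  # ('s-agg1-queue', 3)
      print(classify({"cs": 3}))              # falls back to default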
857 The network slice may encompass one or more administrative domains; 858 for example, an organization's intranet or an ISP. The 859 administration of the network is responsible for ensuring that 860 adequate network resources are provisioned and/or reserved to support 861 the SLAs offered by the network end-to-end. 863 5.2.1. Slice Policy Edge Nodes 865 Slice policy edge nodes sit at the boundary of a network slice 866 provider network and receive traffic that requires steering over 867 network resources specific to a slice aggregate. These edge nodes 868 are responsible for identifying slice aggregate specific traffic 869 flows by possibly inspecting multiple fields from inbound packets 870 (e.g. implementations may inspect IP traffic's network 5-tuple in the 871 IP and transport protocol headers) to decide on which slice policy the 872 traffic can be steered. 874 Network slice ingress nodes may condition the inbound traffic at 875 network boundaries in accordance with the requirements or rules of 876 each service's SLAs. The requirements and rules for network slice 877 services are set using mechanisms which are outside the scope of this 878 document. 880 When data plane slice policy is applied, the slice policy ingress 881 boundary nodes are responsible for adding a suitable SS onto packets 882 that belong to a specific slice aggregate. In addition, edge nodes MAY 883 mark the corresponding Diffserv CS to differentiate between different 884 types of traffic carried over the same slice aggregate. 886 5.2.2. Slice Policy Interior Nodes 888 A slice policy interior node receives slice traffic and MAY be able 889 to identify the packets belonging to a specific slice aggregate by 890 inspecting the SS field carried inside each packet, or by inspecting 891 other fields within the packet that may identify the traffic streams 892 that belong to a specific slice aggregate. For example, when data 893 plane slice policy is applied, interior nodes can use the SS carried 894 within the packet to apply the corresponding S-PHB forwarding 895 behavior. Nodes within the network slice provider network may also 896 inspect the Diffserv CS within each packet to apply a per Diffserv 897 class PHB within the slice policy, and allow differentiation of 898 forwarding treatments for packets forwarded over the same slice 899 aggregate network resources. 901 5.2.3. Slice Policy Incapable Nodes 903 Packets that belong to a slice aggregate may need to traverse nodes 904 that are slice policy incapable. In this case, several options are 905 possible to allow the slice traffic to continue to be forwarded over 906 such devices and be able to resume the slice policy forwarding 907 treatment once the traffic reaches devices that are slice policy 908 capable. 910 When data plane slice policy is applied, packets carry an SS to allow 911 slice interior nodes to identify them. To enable end-to-end network 912 slicing, the SS MUST be maintained in the packets as they traverse 913 devices within the network - including slice policy incapable 914 devices. 916 For example, when the SS is an MPLS label at the bottom of the MPLS 917 label stack, packets can traverse over devices that are slice policy 918 incapable without any further considerations. On the other hand, 919 when the SSL is at the top of the MPLS label stack, packets can be 920 tunneled over the slice policy incapable devices (bypassing them) 921 towards the next device that supports the slice policy, as shown in 922 Figure 6.
924 SR Node-SID: SSL: 1001 @@@: slice policy enforced 925 1601: P1 ...: slice policy not enforced 926 1602: P2 927 1603: P3 928 1604: P4 929 1605: P5 931 @@@@@@@@@@@@@@ ........................ 932 . 933 /-----\ /-----\ /-----\ . 934 | P1 | ----- | P2 | ----- | P3 | . 935 \-----/ \-----/ \-----/ . 936 | @@@@@@@@@@ 937 | 938 /-----\ /-----\ 939 | P4 | ------ | P5 | 940 \-----/ \-----/ 942 +------+ +------+ +------+ 943 | 1001 | | 1604 | | 1001 | 944 +------+ +------+ +------+ 945 | 1605 | | 1001 | | IP | 946 +------+ +------+ +------+ 947 | IP | | 1605 | | Pay- | 948 +------+ +------+ | Load | 949 | Pay- | | IP | +------+ 950 | Load | +------+ 951 +----- + | Pay- | 952 | Load | 953 +------+ 955 Figure 6: Extending network slice over slice policy incapable 956 device(s). 958 5.2.4. Combining Slice Policy Modes 960 It is possible to employ a combination of the slice policy modes that 961 were discussed in Section 4 to realize a network slice. For example, 962 data and control plane slice policy mode can be employed in parts of 963 a network, while control plane slice policy mode can be employed in 964 other parts of the network. The path selection, in such case, 965 can take into account the slice aggregate specific available network 966 resources. The SS carried within packets allows transit nodes to 967 enforce the corresponding S-PHB on the parts of the network that 968 apply the data plane slice policy mode. The SS can be maintained 969 while traffic traverses nodes that do not enforce data plane slice 970 policy mode, and so S-PHB enforcement can resume once traffic 971 traverses slice policy capable nodes. 973 5.3. Mapping Traffic on Slice Aggregates 975 The usual techniques to steer traffic onto paths are applicable 976 when steering traffic over paths established for a specific slice 977 aggregate. 979 For example, one or more (layer-2 or layer-3) VPN services can be 980 directly mapped to paths established for a slice aggregate. In this 981 case, the per Virtual Routing and Forwarding (VRF) instance traffic 982 that arrives on the Provider Edge (PE) router over external 983 interfaces can be directly mapped to a specific slice aggregate path. 984 External interfaces can be further partitioned (e.g. using VLANs) to 985 allow mapping one or more VLANs to specific slice aggregate paths. 987 Another option is to steer traffic to specific destinations directly 988 over multiple slice policies. This allows traffic arriving on any 989 external interface and targeted to such destinations to be directly 990 steered over the slice paths. 992 A third option is to utilize a data plane 993 firewall filter or classifier to enable matching of several fields in 994 the incoming packets to decide whether the packet is steered on a 995 specific slice aggregate. This option allows for applying a rich set 996 of rules to identify specific packets to be mapped to a slice 997 aggregate. However, it requires data plane network resources to be 998 able to perform the additional checks in hardware. 1000 6. Control Plane Extensions 1002 Routing protocols may need to be extended to carry additional per 1003 slice aggregate link state. For example, [RFC5305], [RFC3630], and 1004 [RFC7752] are IS-IS, OSPF, and BGP protocol extensions to exchange 1005 network link state information to allow ingress TE routers and PCE(s) 1006 to do proper path placement in the network.
The extensions required 1007 to support network slicing may be defined in other documents, and are 1008 outside the scope of this document. 1010 The instantiation of a slice policy may need to be automated. 1011 Multiple options are possible to facilitate automated 1012 distribution of a slice policy to capable devices. 1014 For example, a YANG data model for the slice policy may be supported 1015 on network devices and controllers. A suitable transport (e.g. 1016 NETCONF [RFC6241], RESTCONF [RFC8040], or gRPC) may be used to enable 1017 configuration and retrieval of state information for slice policies 1018 on network devices. The slice policy YANG data model may be defined 1019 in a separate document and is outside the scope of this document. 1021 7. Applicability to Path Control Technologies 1023 The slice policy modes described in this document are agnostic to the 1024 technology used to setup paths that carry slice aggregate traffic. 1025 One or more paths connecting the endpoints of the mapped IETF network 1026 slices may be selected to steer the corresponding traffic streams 1027 over the resources allocated for the slice aggregate. 1029 For example, once the feasible paths within a slice policy topology 1030 are selected, it is possible to use the RSVP-TE protocol [RFC3209] to 1031 set up or signal the LSPs that would be used to carry slice aggregate 1032 traffic. Specific extensions to the RSVP-TE protocol to enable signaling 1033 of slice aggregate aware RSVP LSPs are outside the scope of this 1034 document. 1036 Alternatively, Segment Routing (SR) [RFC8402] may be used and the 1037 feasible paths can be realized by steering over specific segments or 1038 segment-lists using an SR policy. Further details on how the slice 1039 policy modes presented in this document can be realized over an SR 1040 network will be discussed in a separate document. 1042 8. IANA Considerations 1044 This document has no IANA actions. 1046 9. Security Considerations 1048 The main goal of network slicing is to allow for varying treatment of 1049 traffic from multiple different network slices that are utilizing a 1050 common network infrastructure and to allow for different levels of 1051 services to be provided for traffic traversing a given network 1052 resource. 1054 A variety of techniques may be used to achieve this, but the end 1055 result will be that some packets may be mapped to specific resources 1056 and may receive different (e.g., better) service treatment than 1057 others. The mapping of network traffic to a specific slice policy is 1058 indicated primarily by the SS, and hence an adversary may be able to 1059 utilize resources allocated to a specific slice policy by injecting 1060 packets that carry the same SS field. 1062 Such theft-of-service may become a denial-of-service attack when the 1063 modified or injected traffic depletes the resources available to 1064 forward legitimate traffic belonging to a specific slice policy. 1066 The defense against this type of theft and denial-of-service attacks 1067 consists of a combination of traffic conditioning at slice policy 1068 domain boundaries and mechanisms that ensure the security and integrity of the network 1069 infrastructure within a slice policy domain. 1071 10. Acknowledgement 1073 The authors would like to thank Krzysztof Szarkowicz, Swamy SRK, 1074 Navaneetha Krishnan, Prabhu Raj Villadathu Karunakaran and Jie Dong 1075 for their review of this document, and for providing valuable 1076 feedback on it. 1078 11.
Contributors 1080 The following individuals contributed to this document: 1082 Colby Barth 1083 Juniper Networks 1084 Email: cbarth@juniper.net 1086 Srihari R. Sangli 1087 Juniper Networks 1088 Email: ssangli@juniper.net 1090 Chandra Ramachandran 1091 Juniper Networks 1092 Email: csekar@juniper.net 1094 12. References 1096 12.1. Normative References 1098 [I-D.filsfils-spring-srv6-stateless-slice-id] 1099 Filsfils, C., Clad, F., Camarillo, P., and K. Raza, 1100 "Stateless and Scalable Network Slice Identification for 1101 SRv6", draft-filsfils-spring-srv6-stateless-slice-id-01 1102 (work in progress), July 2020. 1104 [I-D.ietf-lsr-flex-algo] 1105 Psenak, P., Hegde, S., Filsfils, C., Talaulikar, K., and 1106 A. Gulko, "IGP Flexible Algorithm", draft-ietf-lsr-flex- 1107 algo-13 (work in progress), October 2020. 1109 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1110 Requirement Levels", BCP 14, RFC 2119, 1111 DOI 10.17487/RFC2119, March 1997, 1112 . 1114 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 1115 Label Switching Architecture", RFC 3031, 1116 DOI 10.17487/RFC3031, January 2001, 1117 . 1119 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 1120 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 1121 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 1122 . 1124 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 1125 (TE) Extensions to OSPF Version 2", RFC 3630, 1126 DOI 10.17487/RFC3630, September 2003, 1127 . 1129 [RFC4915] Psenak, P., Mirtorabi, S., Roy, A., Nguyen, L., and P. 1130 Pillay-Esnault, "Multi-Topology (MT) Routing in OSPF", 1131 RFC 4915, DOI 10.17487/RFC4915, June 2007, 1132 . 1134 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 1135 Engineering", RFC 5305, DOI 10.17487/RFC5305, October 1136 2008, . 1138 [RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and 1139 S. Ray, "North-Bound Distribution of Link-State and 1140 Traffic Engineering (TE) Information Using BGP", RFC 7752, 1141 DOI 10.17487/RFC7752, March 2016, 1142 . 1144 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1145 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1146 May 2017, . 1148 [RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L., 1149 Decraene, B., Litkowski, S., and R. Shakir, "Segment 1150 Routing Architecture", RFC 8402, DOI 10.17487/RFC8402, 1151 July 2018, . 1153 12.2. Informative References 1155 [I-D.draft-ietf-teas-rfc3272bis-09] 1156 Farrel, A., "Overview and Principles of Internet Traffic 1157 Engineering", draft-ietf-teas-rfc3272bis-09 (work in 1158 progress), December 2020. 1160 [I-D.nsdt-teas-ietf-network-slice-definition] 1161 Rokui, R., Homma, S., Makhijani, K., Contreras, L., and J. 1162 Tantsura, "Definition of IETF Network Slices", draft-nsdt- 1163 teas-ietf-network-slice-definition-02 (work in progress), 1164 December 2020. 1166 [I-D.nsdt-teas-ns-framework] 1167 Gray, E. and J. Drake, "Framework for Transport Network 1168 Slices", draft-nsdt-teas-ns-framework-04 (work in 1169 progress), July 2020. 1171 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 1172 and W. Weiss, "An Architecture for Differentiated 1173 Services", RFC 2475, DOI 10.17487/RFC2475, December 1998, 1174 . 1176 [RFC2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. 1177 McManus, "Requirements for Traffic Engineering Over MPLS", 1178 RFC 2702, DOI 10.17487/RFC2702, September 1999, 1179 . 
1181 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., 1182 and A. Bierman, Ed., "Network Configuration Protocol 1183 (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011, 1184 . 1186 [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF 1187 Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017, 1188 . 1190 Authors' Addresses 1192 Tarek Saad 1193 Juniper Networks 1195 Email: tsaad@juniper.net 1197 Vishnu Pavan Beeram 1198 Juniper Networks 1200 Email: vbeeram@juniper.net 1202 Bin Wen 1203 Comcast 1205 Email: Bin_Wen@cable.comcast.com 1206 Daniele Ceccarelli 1207 Ericsson 1209 Email: daniele.ceccarelli@ericsson.com 1211 Joel Halpern 1212 Ericsson 1214 Email: joel.halpern@ericsson.com 1216 Shaofu Peng 1217 ZTE Corporation 1219 Email: peng.shaofu@zte.com.cn 1221 Ran Chen 1222 ZTE Corporation 1224 Email: chen.ran@zte.com.cn 1226 Xufeng Liu 1227 Volta Networks 1229 Email: xufeng.liu.ietf@gmail.com