TEAS Working Group                                        Haomian Zheng
Internet Draft                                             Xianlong Luo
                                                                 Yi Lin
Category: Informational                             Huawei Technologies
                                                              Yang Zhao
                                                           China Mobile
                                                              Yunbin Xu
                                                                  CAICT
                                                         Sergio Belotti
                                                          Dieter Beller
                                                                  Nokia
Expires: January 12, 2022                                 July 11, 2021

    Interworking of GMPLS Control and Centralized Controller System

             draft-ietf-teas-gmpls-controller-inter-work-06

Abstract

Generalized Multi-Protocol Label Switching (GMPLS) control allows each network element (NE) to perform local resource discovery, routing and signaling in a distributed manner.

On the other hand, with the development of software-defined transport networking technology, a set of NEs can be controlled via centralized controller hierarchies to address the issues arising from multi-domain, multi-vendor and multi-technology environments. An example of such a centralized architecture is the ACTN controller hierarchy described in RFC 8453.

Rather than competing with each other, the distributed and the centralized control planes have their own advantages and should be complementary in the system. This document describes how the GMPLS distributed control plane can interwork with a centralized controller system in a transport network.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on January 12, 2022.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.
65 This document is subject to BCP 78 and the IETF Trust's Legal 66 Provisions Relating to IETF Documents 67 (http://trustee.ietf.org/license-info) in effect on the date of 68 publication of this document. Please review these documents 69 carefully, as they describe your rights and restrictions with 70 respect to this document. Code Components extracted from this 71 document must include Simplified BSD License text as described in 72 Section 4.e of the Trust Legal Provisions and are provided without 73 warranty as described in the Simplified BSD License. 75 Table of Contents 77 1. Introduction .................................................. 3 78 2. Overview ...................................................... 4 79 2.1. Overview of GMPLS Control Plane ............................. 4 80 2.2. Overview of Centralized Controller System ................... 4 81 2.3. GMPLS Control Interwork with Centralized Controller System .. 5 82 3. Discovery Options ............................................. 6 83 3.1. LMP ...................................................... 6 84 4. Routing Options ............................................... 6 85 4.1. OSPF-TE .................................................. 7 86 4.2. ISIS-TE .................................................. 7 87 4.3. Netconf/RESTconf ......................................... 7 88 5. Path Computation .............................................. 7 89 5.1. Constraint-based Path Computing in GMPLS Control ......... 7 90 5.2. Path Computation Element (PCE) ........................... 8 91 6. Signaling Options ............................................. 8 92 6.1. RSVP-TE .................................................. 8 93 7. Interworking Scenarios ........................................ 9 94 7.1. Topology Collection & Synchronization .................... 9 95 7.2. Multi-domain Service Provisioning ........................ 9 96 7.3. Multi-layer Service Provisioning ........................ 12 97 7.3.1. Multi-layer Path Computation ....................... 13 98 7.3.2. Cross-layer Path Creation .......................... 15 99 7.3.3. Link Discovery ..................................... 16 100 7.4. Recovery ................................................ 16 101 7.4.1. Span Protection .................................... 17 102 7.4.2. LSP Protection ..................................... 17 103 7.4.3. Single-domain LSP Restoration ...................... 17 104 7.4.4. Multi-domain LSP Restoration ....................... 18 105 7.4.5. Fast Reroute ....................................... 21 106 7.5. Controller Reliability .................................. 22 107 8. Manageability Considerations ................................. 22 108 9. Security Considerations ...................................... 23 109 10. IANA Considerations ......................................... 23 110 11. References .................................................. 23 111 11.1. Normative References ................................... 23 112 11.2. Informative References ................................. 25 113 12. Authors' Addresses .......................................... 27 115 1. Introduction 117 Generalized Multi-Protocol Label Switching (GMPLS) [RFC3945] extends 118 MPLS to support different classes of interfaces and switching 119 capabilities such as Time-Division Multiplex Capable (TDM), Lambda 120 Switch Capable (LSC), and Fiber-Switch Capable (FSC). 
Each network 121 element (NE) running a GMPLS control plane collects network 122 information from other NEs and supports service provisioning through 123 signaling in a distributed manner. More generic description for 124 Traffic-engineering networking information exchange can be found in 125 [RFC7926]. 127 On the other hand, Software-Defined Networking (SDN) technologies 128 have been introduced to control the transport network in a 129 centralized manner. Central controllers can collect network 130 information from each node and provision services to corresponding 131 nodes. One of the examples is the Abstraction and Control of Traffic 132 Engineered Networks (ACTN) [RFC8453], which defines a hierarchical 133 architecture with Provisioning Network Controller (PNC), Multi- 134 domain Service Coordinator (MDSC) and Customer Network Controller 135 (CNC) as central controllers for different network abstraction 136 levels. A Path Computation Element (PCE) based approach has been 137 proposed as Application-Based Network Operations (ABNO) in 138 [RFC7491]. 140 In such centralized controller architectures, GMPLS can be applied 141 for the NE-level control. A central controller may support GMPLS 142 enabled domains and may interact with a GMPLS enabled domain where 143 the GMPLS control plane does the service provisioning from ingress 144 to egress. In this case the centralized controller sends the request 145 to the ingress node and does not have to configure all NEs along the 146 path through the domain from ingress to egress thus leveraging the 147 GMPLS control plane. This document describes how GMPLS control 148 interworks with centralized controller system in transport network. 150 2. Overview 152 In this section, overviews of GMPLS control plane and centralized 153 controller system are discussed as well as the interactions between 154 the GMPLS control plane and centralized controllers. 156 2.1. Overview of GMPLS Control Plane 158 GMPLS separates the control plane and the data plane to support 159 time-division, wavelength, and spatial switching, which are 160 significant in transport networks. For the NE level control in 161 GMPLS, each node runs a GMPLS control plane instance. 162 Functionalities such as service provisioning, protection, and 163 restoration can be performed via GMPLS communication among multiple 164 NEs. At the same time, the controller can also collect node and 165 link resources in the network to construct the network topology and 166 compute routing paths for serving service requests. 168 Several protocols have been designed for GMPLS control [RFC3945] 169 including link management [RFC4204], signaling [RFC3471], and 170 routing [RFC4202] protocols. The controllers applying these 171 protocols communicate with each other to exchange resource 172 information and establish Label Switched Paths (LSPs). In this way, 173 controllers in different nodes in the network have the same view of 174 the network topology and provision services based on local policies. 176 2.2. Overview of Centralized Controller System 178 With the development of SDN technologies, a centralized controller 179 architecture has been introduced to transport networks. One example 180 architecture can be found in ACTN [RFC8453]. In such systems, a 181 controller is aware of the network topology and is responsible for 182 provisioning incoming service requests. 184 Multiple hierarchies of controllers are designed at different levels 185 implementing different functions. 
This kind of architecture enables 186 multi-vendor, multi-domain, and multi-technology control. For 187 example, a higher-level controller coordinates several lower-level 188 controllers controlling different domains, for topology collection 189 and service provisioning. Vendor-specific features can be abstracted 190 between controllers, and standard API (e.g., generated from 191 RESTconf/YANG) is used. 193 2.3. GMPLS Control Interwork with Centralized Controller System 195 Besides the GMPLS and the interactions among the controller 196 hierarchies, it is also necessary for the controllers to communicate 197 with the network elements. Within each domain, GMPLS control can be 198 applied to each NE. The bottom-level central controller can act as a 199 NE to collect network information and initiate LSP. Figure 1 shows 200 an example of GMPLS interworking with centralized controllers (ACTN 201 terminologies are used in the figure). 203 +-------------------+ 204 | Orchestrator | 205 +-------------------+ 206 ^ ^ ^ 207 | | | 208 +-------------+ | +-------------+ 209 | |RESTConf/YANG models | 210 V V V 211 +----------+ +----------+ +----------+ 212 |Controller| |Controller| |Controller| 213 +----------+ +----------+ +----------+ 214 ^ ^ ^ ^ ^ 215 | | | | | 216 Netconf| |PCEP Netconf| |PCEP | IF* 217 /YANG | | /YANG | | | 218 V V V V V 219 .----------. Inter- .----------. Inter- .----------. 220 / \ domain / \ domain / \ 221 | | link | LMP | link | LMP | 222 | |======| OSPF-TE |======| OSPF-TE | 223 | | | RSVP-TE | | RSVP-TE | 224 \ / \ / \ / 225 `----------` `----------` `----------` 226 Non-GMPLS domain 1 GMPLS domain 2 GMPLS domain 3 228 Figure 1: Example of GMPLS/non-GMPLS interworks with Controllers 230 Figure 1 shows the scenario with two GMPLS domains and one non-GMPLS 231 domain. This system supports the interworking among non-GMPLS 232 domain, GMPLS domain and the controller hierarchies. For domain 1, 233 the network element were not enabled with GMPLS so the control can 234 be purely from the controller, via Netconf/YANG and/or PCEP. For 235 domain 2 and 3, each domain has the GMPLS control plane enabled at 236 the physical network level. The PNC can exploit GMPLS capability 237 implemented in the domain to listen to the IGP routing protocol 238 messages (OSPF LSAs for example) that the GMPLS control plane 239 instances are disseminating into the network and thus learn the 240 network topology. For path computation in the domain with PNC 241 implementing a PCE, PCCs (e.g. NEs, other controller/PCE) use PCEP 242 to ask the PNC for a path and get replies. The MDSC communicates 243 with PNCs using for example REST/RESTConf based on YANG data models. 244 As a PNC has learned its domain topology, it can report the topology 245 to the MDSC. When a service arrives, the MDSC computes the path and 246 coordinates PNCs to establish the corresponding LSP segment. 248 Alternatively, the NETCONF protocol can be used to retrieve topology 249 information utilizing the e.g. [RFC8795] Yang model and the 250 technology-specific YANG model augmentations required for the 251 specific network technology. The PNC can retrieve topology 252 information from any NE (the GMPLS control plane instance of each NE 253 in the domain has the same topological view), construct the topology 254 of the domain and export an abstracted view to the MDSC. Based on 255 the topology retrieved from multiple PNCs, the MDSC can create 256 topology graph of the multi-domain network, and can use it for path 257 computation. 
To set up a service, the MDSC can exploit, for example, the [TE-Tunnel] YANG model together with the technology-specific YANG model augmentations.

3. Discovery Options

In GMPLS control, the link connectivity needs to be verified between each pair of nodes. In this way, link resources, which are fundamental resources in the network, are discovered by both ends of the link.

3.1. LMP

The Link Management Protocol (LMP) [RFC4204] runs between a pair of nodes and is used to manage TE links. In addition to the setup and maintenance of control channels, LMP can be used to verify data link connectivity and to correlate link properties.

4. Routing Options

In GMPLS control, link state information is flooded within the network as defined in [RFC4202]. Each node in the network can build the network topology according to the flooded link state information. Routing protocols such as OSPF-TE [RFC4203] and ISIS-TE [RFC5307] have been extended to support different interfaces in GMPLS.

In a centralized controller system, the central controller can be attached to the GMPLS network and passively receive the information flooded in the network. In this way, the central controller can construct and update the network topology.

4.1. OSPF-TE

OSPF-TE is introduced for TE networks in [RFC3630]. OSPF extensions have been defined in [RFC4203] to advertise link state information in GMPLS networks. Based on this work, the OSPF protocol has been further extended to support technology-specific routing. The routing extensions for OTN, WSON and optical flexi-grid networks are defined in [RFC7138], [RFC7688] and [RFC8363], respectively.

4.2. ISIS-TE

ISIS-TE is introduced for TE networks in [RFC5305], is extended to support GMPLS routing functions in [RFC5307], and is updated by [RFC7074] to support the latest GMPLS Switching Capability and Type fields.

4.3. Netconf/RESTconf

The NETCONF [RFC6241] and RESTCONF [RFC8040] protocols were originally designed for network configuration. These protocols can also be used for topology retrieval by using topology-related YANG models, such as [RFC8345] and [RFC8795]. In addition, they provide a powerful notification mechanism that allows the client to be informed about topology changes.

5. Path Computation

Once a controller learns the network topology, it can utilize the available resources to serve service requests by performing path computation. Due to abstraction, a controller may not have sufficient information to compute the optimal path. In this case, the controller can interact with other controllers by sending YANG path computation requests [PAT-COMP] to obtain a set of candidate paths and then, based on its own constraints, policy and specific knowledge (e.g., the cost of access links), choose the most suitable path for end-to-end service setup.

Path computation is one of the key functions of the various types of controllers. In the given architecture, several different components may have the capability to compute paths.

5.1. Constraint-based Path Computing in GMPLS Control

In GMPLS control, a routing path is computed by the ingress node [RFC3473], based on the ingress node's TED. Constraint-based path computation is performed according to the local policy of the ingress node.
5.2. Path Computation Element (PCE)

The PCE has been introduced in [RFC4655] as a functional component that provides path computation services in a network. In [RFC5440], path computation is accomplished by using the Traffic Engineering Database (TED), which maintains the link resources in the network. The emergence of the PCE has improved the quality of network planning and offline computation, but there is a risk that a computed path is infeasible when there is a diversity requirement, because a stateless PCE has no knowledge of previously computed paths.

To address this issue, the stateful PCE has been proposed in [RFC8231]. Besides the TED, an additional LSP Database (LSP-DB) is introduced to record each LSP computed by the PCE. In this way, the PCE can easily determine the relationship between the path being computed and previously computed paths. In this approach, the PCE provides computed paths to the PCC, and the PCC then decides which path is deployed and when it is established.

With PCE initiation [RFC8281], the PCE is allowed to trigger the PCC to set up, maintain, and tear down PCE-initiated LSPs under the stateful PCE model. This allows a dynamic network that is centrally controlled and deployed.

In a centralized controller system, the PCE can be implemented in a central controller, and the central controller performs path computation according to its local policies. Alternatively, the PCE can be placed outside of the central controller. In this case, the central controller acts as a PCC and requests path computation from the PCE through PCEP. One such reference architecture can be found in [RFC7491].

6. Signaling Options

Signaling mechanisms are used to set up LSPs in GMPLS control. Messages are sent hop by hop between the ingress node and the egress node of the LSP to allocate labels. Once the labels are allocated along the path, the LSP setup is accomplished. Signaling protocols such as RSVP-TE [RFC3473] have been extended to support different interfaces in GMPLS.

6.1. RSVP-TE

RSVP-TE is introduced in [RFC3209] and extended to support GMPLS signaling in [RFC3473]. Several label formats are defined for a generalized label request, a generalized label, suggested labels and label sets. Based on [RFC3473], RSVP-TE has been extended to support technology-specific signaling. The RSVP-TE extensions for OTN, WSON and optical flexi-grid networks are defined in [RFC7139], [RFC7689], and [RFC7792], respectively.

7. Interworking Scenarios

7.1. Topology Collection & Synchronization

Topology information is necessary on both network elements and controllers. The topology on a network element is usually raw information, while the topology on a controller can be either raw or abstracted. Three different abstraction methods are described in [RFC8453], and different controllers can select the appropriate method depending on the application.

When there are changes in the network topology, the impacted network element(s) need to report the changes to all the other network elements, as well as to the controller, to synchronize the topology information. The inter-NE synchronization can be achieved via the protocols described in Sections 3 and 4.
The topology synchronization between NEs and controllers can be achieved either by routing protocols (OSPF-TE) or PCEP-LS [PCEP-LS], or by NETCONF protocol notifications with the corresponding YANG models.

7.2. Multi-domain Service Provisioning

Based on the topology information on controllers and network elements, service provisioning can be performed. Various methods have been specified for single-domain service provisioning, for example using PCEP and RSVP-TE.

Multi-domain service provisioning requires coordination among the controller hierarchies. Given a service request, the end-to-end delivery procedure may include interactions at any level (i.e., interface) in the hierarchy of controllers (e.g., the MPI and SBI in ACTN). The computation of a cross-domain path is usually performed by a controller that has a global view of the topologies. The configuration is then decomposed and delegated to the lower-level controllers, which configure the network elements to set up the path.

A combination of centralized and distributed protocols may be necessary for the interaction between network elements and controllers. Several methods can be used to create the inter-domain path:

1) With end-to-end RSVP-TE session:

In this method, the SDN controller of the source domain triggers the source node to create the end-to-end RSVP-TE session, and the assignment and distribution of the labels on the inter-domain links are done by the border nodes of each domain, using the RSVP-TE protocol. Therefore, this method requires the interworking of the RSVP-TE protocol between different domains.

There are two possible methods:

1.1) One single end-to-end RSVP-TE session

In this method, a single end-to-end RSVP-TE session from the source NE to the destination NE is used to create the inter-domain path. A typical example is the PCE initiation scenario, in which a PCEP message (PCInitiate) is sent from the controller to the ingress node, which then triggers an RSVP-TE procedure along the path. Similarly, the interaction between the controller and the ingress node of a domain can be achieved using the NETCONF protocol with the corresponding YANG models, with the setup then completed by running RSVP-TE among the network elements.

1.2) LSP Stitching

The LSP stitching method defined in [RFC5150] can also be used to create the end-to-end LSP. That is, when the source node receives an end-to-end path creation request (e.g., via PCEP or NETCONF), the source node starts an end-to-end RSVP-TE session along the end points of the LSP segments (referred to as S-LSPs in [RFC5150]) of each domain, to assign the labels on the inter-domain links between each pair of neighboring S-LSPs and to stitch the end-to-end LSP to each S-LSP. See Figure 2 for an example. Note that the S-LSP in each domain can either be created by each domain controller in advance, or be created dynamically, triggered by the end-to-end RSVP-TE session.
467 +-----------------+ +----------------+ +-----------------+ 468 |Client | | | | Client| 469 |Signal Domain 1| | Domain 2 | |Domain 3 Signal| 470 | | | | | | | | 471 |+-+-+ | | | | +-+-+| 472 || | | +--+ +--+| |+--+ +--+ +--+| |+--+ +--+ | | || 473 || | | | | | || || | | | | || || | | | | | || 474 || ******************************************************** || 475 || | | | | || || | | | | || || | | | | || 476 |+---+ +--+ +--+| |+--+ +--+ +--+| |+--+ +--+ +---+| 477 +-----------------+ +----------------+ +-----------------+ 478 | . . . . . . | 479 | .<-S-LSP 1->. .<- S-LSP 2 -->. .<-S-LSP 3->. | 480 | . . . . | 481 |-------------->.---->.------------->.---->.-------------->| 482 |<--------------.<----.<-------------.<----.<--------------| 483 | End-to-end RSVP-TE session for LSP stitching | 485 Figure 2: LSP stitching 486 2) Without end-to-end RSVP-TE session: 488 In this method, each SDN controller is responsible to create the 489 path segment within its domain. The boarder node does not need to 490 communicate with other boarder nodes in other domains for the 491 distribution of labels on inter-domain links, so end-to-end RSVP-TE 492 session through multiple domains is not required, and the 493 interworking of RSVP-TE protocol between different domains is not 494 needed. 496 Note that path segments in the source domain and the destination 497 domain are "asymmetrical" segments, because the configuration of 498 client signal mapping into server layer tunnel is needed at only one 499 end of the segment, while configuration of server layer cross- 500 connect is needed at the other end of the segment. For example, the 501 path segment 1 and 3 in Figure 3 are asymmetrical segments, because 502 one end of the segment requires mapping GE into ODU0, while the 503 other end of the segment requires setting up ODU0 cross-connect. 505 +-----------------+ +----------------+ +-----------------+ 506 |Client | | | | Client| 507 |Signal Domain 1| | Domain 2 | |Domain 3 Signal| 508 |(GE) | | | | (GE) | 509 | | ODU0 tunnel| | | | | | 510 |+-+-+ ^ | | | | +-+-+| 511 || | | +--+ |+--+| |+--+ +--+ +--+| |+--+ +--+ | | || 512 || | | | | || || || | | | | || || | | | | | || 513 || ******************************************************** || 514 || | | | | || . || | | | | || . || | | | | || 515 |+---+ +--+ +--+| . |+--+ +--+ +--+| . |+--+ +--+ +---+| 516 +-----------------+ . +----------------+ . +-----------------+ 517 . . . . 518 .<-Path Segment 1->.<--Path Segment 2-->.<-Path Segment 3->. 519 . . . . 521 Figure 3: Example of asymmetrical path segment 523 The PCEP / GMPLS protocols should support creation of such 524 asymmetrical segment. 526 Note also that mechanisms to assign the labels in the inter-domain 527 links are also needed to be considered. There are two possible 528 methods: 530 2.1) Inter-domain labels assigned by NEs: 532 The concept of Stitching Label that allows stitching local path 533 segments was introduced in [RFC5150] and [sPCE-ID], in order to form 534 the inter-domain path crossing several different domains. It also 535 describes the BRPC and H-PCE PCInitiate procedure, i.e., the ingress 536 boarder node of each downstream domain assigns the stitching label 537 for the inter-domain link between the downstream domain and its 538 upstream neighbor domain, and this stitching label will be passed to 539 the upstream neighbor domain by PCE protocol, which will be used for 540 the path segment creation in the upstream neighbor domain. 
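The ordering implied by this procedure can be sketched informally as follows: starting from the destination domain, each domain creates its path segment, and the ingress border node of that domain assigns a stitching label for the inter-domain link towards its upstream neighbor, which the upstream domain then uses when creating its own segment. The following Python fragment is purely illustrative; the data structures and function names are hypothetical and do not correspond to any PCEP or RSVP-TE encoding.

   # Conceptual sketch only: downstream-to-upstream handoff of stitching
   # labels.  Domain names, label values and helper functions are
   # illustrative and do not correspond to actual PCEP objects.

   def create_segment(domain, downstream_stitching_label):
       """Create the S-LSP segment in 'domain'.  The egress of the segment
       is bound to the stitching label received from the downstream domain
       (None in the destination domain).  The ingress border node of this
       domain then assigns a new stitching label for the inter-domain link
       towards its upstream neighbor."""
       print(f"{domain}: create segment, egress label = {downstream_stitching_label}")
       return f"stitching-label({domain})"

   def provision_inter_domain_path(domains_src_to_dst):
       """Walk the domains from destination to source, handing each newly
       assigned stitching label to the upstream neighbor domain."""
       label = None
       for domain in reversed(domains_src_to_dst):
           label = create_segment(domain, label)

   provision_inter_domain_path(["Domain 1", "Domain 2", "Domain 3"])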
542 2.2) Inter-domain labels assigned by SDN controller: 544 If the resource of inter-domain links are managed by the multi- 545 domain SDN controller, each single-domain SDN controller can provide 546 to the multi-domain SDN controller the list of available labels 547 (e.g. timeslots if OTN is the scenario) using IETF Topology model 548 and related technology specific extension. Once that multi-domain 549 SDN controller has computed e2e path RSVP-TE or PCEP can be used in 550 the different domains to setup related segment tunnel consisting 551 with label inter-domain information, e.g. for PCEP the label ERO can 552 be included in the PCInitiate message to indicate the inter-domain 553 labels, so that each boarder node of each domain can configure the 554 correct cross-connect within itself. 556 7.3. Multi-layer Service Provisioning 558 GMPLS can interwork with centralized controller system in multi- 559 layer networks. 561 +--------------+ 562 | Multi-layer | 563 |SDN Controller| 564 +-----+--+-----+ 565 | | Higher-layer Network 566 | | .--------------------. 567 | | +--------------+ / \ 568 | | | Higher-layer | | +--+ Link +--+ | 569 | +-->|SDN Controller+----->| | |**********| | | 570 | +--------------+ | +--+ +--+ | 571 | \ . . / 572 | `--.------------.---` 573 | . . 574 | .---.------------.---. 575 | +--------------+ / . . \ 576 | | Lower-layer | | +--+ +--+ +--+ | 577 +----->|SDN Controller+----->| | ============== | | 578 +--------------+ | +--+ +--+ +--+ | 579 \ H-LSP / 580 `-------------------` 581 Lower-layer Network 583 Figure 4: Example of GMPLS-SDN interworking in multi-layer network 584 An example with two layers of network is shown in Figure 4. In this 585 example, the GMPLS control plane is enabled in each layer network, 586 and interworks with the SDN controller of its domain (higher-layer 587 SDN controller and lower-layer SDN controller, respectively). The 588 multi-layer SDN controller, which acts as the Orchestrator, is used 589 to coordinate the control of the multi-layer network. 591 7.3.1. Multi-layer Path Computation 593 [RFC5623] describes three inter-layer path computation models and 594 four inter-layer path control models: 596 - 3 Path computation: 598 o Single PCE path computation model 600 o Multiple PCE path computation with inter-PCE communication 601 model 603 o Multiple PCE path computation without inter-PCE communication 604 model 606 - 4 Path control: 608 o PCE-VNTM cooperation model 610 o Higher-layer signaling trigger model 612 o NMS-VNTM cooperation model (integrated flavor) 614 o NMS-VNTM cooperation model (separate flavor) 616 Section 4.2.4 of [RFC5623] also provides all the possible 617 combinations of inter-layer path computation and inter-layer path 618 control models. 620 To apply [RFC5623] in multi-layer network with GMPLS-SDN 621 interworking, the higher-layer SDN controller and the lower-layer 622 SDN controller can act as the PCE Hi and PCE Lo respectively, and 623 typically, the multi-layer SDN controller can act as a VNTM because 624 it has the abstracted view of both the higher-layer and lower-layer 625 networks. 
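As an informal illustration of this role mapping, the following Python sketch shows one possible interaction in which the multi-layer SDN controller, acting in the VNTM role, first consults the higher-layer topology (the PCE Hi view) and, if no higher-layer route exists, consults the lower-layer topology (the PCE Lo view) for a route that could become an H-LSP and hence a new higher-layer link. The topologies, names and return values are hypothetical and purely illustrative.

   # Illustrative sketch of VNTM-coordinated multi-layer path computation.
   # Topologies are simple adjacency sets; all names are hypothetical.
   from collections import deque

   def find_path(topology, src, dst):
       """Breadth-first search returning a node list, or None if unreachable."""
       queue, seen = deque([[src]]), {src}
       while queue:
           path = queue.popleft()
           if path[-1] == dst:
               return path
           for nxt in topology.get(path[-1], ()):
               if nxt not in seen:
                   seen.add(nxt)
                   queue.append(path + [nxt])
       return None

   def vntm_compute(higher_topo, lower_topo, src, dst):
       """VNTM role: use the higher-layer view (PCE Hi) first; otherwise use
       the lower-layer view (PCE Lo) to find a route that could become an
       H-LSP, i.e. a new link between src and dst in the higher layer."""
       path_hi = find_path(higher_topo, src, dst)
       if path_hi:
           return path_hi, None          # no new H-LSP needed
       h_lsp = find_path(lower_topo, src, dst)
       return ([src, dst] if h_lsp else None), h_lsp

   # Example: no higher-layer route from A to C, so a lower-layer route
   # A-B-C is proposed as an H-LSP that would add a higher-layer link A-C.
   higher = {"A": set(), "C": set()}
   lower = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
   print(vntm_compute(higher, lower, "A", "C"))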
627 Table 1 shows all possible combinations of path computation and path 628 control models in multi-layer network with GMPLS-SDN interworking: 630 Table 1: Combinations of path computation and path control models 631 --------------------------------------------------------- 632 | Path computation |Single PCE | Multiple | Multiple | 633 | \ | (Not | PCE with | PCE w/o | 634 | Path control |applicable)| inter-PCE | inter-PCE | 635 |---------------------+-----------+-----------+-----------| 636 | PCE-VNTM | ...... | | | 637 | cooperation | . -- . | Yes | Yes | 638 | | . . | | | 639 |---------------------+--.----.---+-----------+-----------| 640 | Higher-layer | . . | | | 641 | signaling trigger | . -- . | Yes | Yes | 642 | | . . | | | 643 |---------------------+--.----.---+-----------+-----------| 644 | NMS-VNTM | . . | .........|....... | 645 | cooperation | . -- . | .Yes | No . | 646 | (integrated flavor) | . . | . | . | 647 |---------------------+--.----.---+--.--------+------.----| 648 | NMS-VNTM | . . | . | . | 649 | cooperation | . -- . | .No | Yes. | 650 | (separate flavor) | ...... | .........|....... | 651 ---------------------+----|------+--------|--+----------- 652 V V 653 Not applicable because Typical models to be used 654 there are multiple PCEs 656 Note that: 658 - Since there is one PCE in each layer network, the path computation 659 model "Single PCE path computation" is not applicable. 661 - For the other two path computation models "Multiple PCE with 662 inter-PCE" and "Multiple PCE w/o inter-PCE", the possible 663 combinations are the same as defined in [RFC5623]. More 664 specifically: 666 o The path control models "NMS-VNTM cooperation (integrated 667 flavor)" and "NMS-VNTM cooperation (separate flavor)" are the 668 typical models to be used in multi-layer network with GMPLS-SDN 669 interworking. This is because in these two models, the path 670 computation is triggered by the NMS or VNTM. And in SDN 671 centralized controller system, the path computation requests 672 are typically from the multi-layer SDN controller (acts as 673 VNTM). 675 o For the other two path control models "PCE-VNTM cooperation" 676 and "Higher-layer signaling trigger", the path computation is 677 triggered by the NEs, i.e., NE performs PCC functions. These 678 two models are still possible to be used, although they are not 679 the main methods. 681 7.3.2. Cross-layer Path Creation 683 In a multi-layer network, a lower-layer LSP in the lower-layer 684 network can be created, which will construct a new link in the 685 higher-layer network. Such lower-layer LSP is called Hierarchical 686 LSP, or H-LSP for short, see [RFC6107]. 688 The new link constructed by the H-LSP then can be used by the 689 higher-layer network to create new LSPs. 691 As described in [RFC5212], two methods are introduced to create the 692 H-LSP: the static (pre-provisioned) method and the dynamic 693 (triggered) method. 695 1) Static (pre-provisioned) method 697 In this method, the H-LSP in the lower layer network is created in 698 advance. After that, the higher layer network can create LSPs using 699 the resource of the link constructed by the H-LSP. 701 The multi-layer SDN controller is responsible to decide the creation 702 of H-LSP in the lower layer network if it acts as a VNTM. It then 703 requests the lower-layer SDN controller to create the H-LSP via, for 704 example, MPI interface under the ACTN architecture. See Section 705 3.3.2 of [TE-Tunnel]. 
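As a rough illustration of such a request over a RESTCONF-based MPI, the following Python sketch shows the multi-layer SDN controller asking the lower-layer SDN controller to pre-provision an H-LSP as a TE tunnel. The controller address, resource path and attribute names are assumptions loosely based on the [TE-Tunnel] work in progress; they are illustrative only and not a normative encoding.

   # Hypothetical sketch: the multi-layer controller asks the lower-layer
   # controller to pre-provision an H-LSP as a TE tunnel over RESTCONF.
   # The address, resource path and attribute names are assumptions
   # loosely based on the [TE-Tunnel] model, not a normative encoding.
   import requests

   LOWER_LAYER_PNC = "https://pnc-lower.example.net"   # assumed address

   def create_h_lsp(name, src_node, dst_node):
       tunnel = {
           "ietf-te:tunnel": [{
               "name": name,
               "source": src_node,        # illustrative attribute names
               "destination": dst_node,
           }]
       }
       resp = requests.post(
           f"{LOWER_LAYER_PNC}/restconf/data/ietf-te:te/tunnels",
           json=tunnel,
           headers={"Content-Type": "application/yang-data+json"},
           auth=("mdsc", "secret"),       # placeholder credentials
           timeout=10,
       )
       resp.raise_for_status()            # 201 Created expected on success

   create_h_lsp("h-lsp-1", "nodeA", "nodeZ")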
707 The lower-layer SDN controller can trigger the GMPLS control plane 708 to create the H-LSP. As a typical example, the PCInitiate message 709 can be used for the communication between the lower-layer SDN 710 controller and the source node of the H-LSP. 712 And the source node of the H-LSP can trigger the RSVP-TE signaling 713 procedure to create the H-LSP, as described in [RFC6107]. 715 2) Dynamic (triggered) method 717 In this method, the signaling of LSP creation in the higher layer 718 network will trigger the creation of H-LSP in the lower layer 719 network dynamically, if it is necessary. 721 In this case, after the cross-layer path is computed, the multi- 722 layer SDN controller requests the higher-layer SDN controller for 723 the cross-layer LSP creation. As a typical example, the MPI 724 interface under the ACTN architecture could be used. 726 The higher-layer SDN controller can trigger the GMPLS control plane 727 to create the LSP in the higher-layer network. As a typical example, 728 the PCInitiate message can be used for the communication between the 729 higher-layer SDN controller and the source node of the Higher-layer 730 LSP, as described in Section 4.3 of [RFC8282]. At least two sets of 731 ERO information should be included to indicate the routes of higher- 732 layer LSP and lower-layer H-LSP. 734 The source node of the Higher-layer LSP follows the procedure 735 defined in Section 4 of [RFC6001], to trigger the GMPLS control 736 plane in both higher-layer network and lower-layer network to create 737 the higher-layer LSP and the lower-layer H-LSP. 739 On success, the source node of the H-LSP should report the 740 information of the H-LSP to the lower-layer SDN controller via, for 741 example, PCRpt message. 743 7.3.3. Link Discovery 745 If the higher-layer network and the lower-layer network are under 746 the same GMPLS control plane instance, the H-LSP can be an FA-LSP. 747 Then the information of the link constructed by this FA-LSP, called 748 FA, can be advertised in the routing instance, so that the higher- 749 layer SDN controller can be aware of this new FA. [RFC4206] and the 750 following updates to it (including [RFC6001] and [RFC6107]) describe 751 the detail extensions to support advertisement of an FA. 753 If the higher-layer network and the lower-layer network are under 754 separated GMPLS control plane instances, after an H-LSP is created 755 in the lower-layer network, the link discovery procedure defined in 756 LMP protocol ([RFC4204]) will be triggered in the higher-layer 757 network to discover the information of the link constructed by the 758 H-LSP. The information of this new link will be advertised to the 759 higher-layer SDN controller. 761 7.4. Recovery 763 The GMPLS recovery functions are described in [RFC4426]. Span 764 protection, end-to-end protection and restoration, are discussed 765 with different protection schemes and message exchange requirements. 766 Related RSVP-TE extensions to support end-to-end recovery is 767 described in [RFC4872]. The extensions in [RFC4872] include 768 protection, restoration, preemption, and rerouting mechanisms for an 769 end-to-end LSP. Besides end-to-end recovery, a GMPLS segment 770 recovery mechanism is defined in [RFC4873], which also intends to be 771 compatible with Fast Reroute (FRR) (see [RFC4090] which defines 772 RSVP-TE extensions for the FRR mechanism, and [RFC8271] which 773 described the updates of GMPLS RSVP-TE protocol for FRR of GMPLS TE- 774 LSPs). 776 7.4.1. 
Span Protection

Span protection refers to the protection of the link between two neighboring switches. The main protocol requirements include:

- Link management: link property correlation for the link protection type;

- Routing: announcement of the link protection type;

- Signaling: indication of the link protection requirement for the LSP.

GMPLS already supports the above requirements, and there are no new requirements in the scenario of interworking between GMPLS and a centralized controller system.

7.4.2. LSP Protection

LSP protection includes end-to-end and segment LSP protection. For both cases:

- In the provisioning phase:

In both single-domain and multi-domain scenarios, the disjoint path computation can be done by the centralized controller system, as it has a global view of topology and resources. The path creation can then be done by the procedure described in Section 7.2.

- In the protection switchover phase:

In both single-domain and multi-domain scenarios, the existing standards provide distributed ways to trigger the protection switchover, for example, the data plane Automatic Protection Switching (APS) mechanism described in [G.808.1], or the GMPLS Notify mechanism described in [RFC4872] and [RFC4873]. In the scenario of interworking between GMPLS and a centralized controller system, it is recommended to still use these distributed mechanisms rather than a centralized mechanism (i.e., the controller triggering the protection switchover). This can significantly shorten the protection switching time.

7.4.3. Single-domain LSP Restoration

- Pre-planned LSP rerouting (including shared-mesh restoration):

In pre-planned rerouting, the protecting LSP is established only in the control plane during the provisioning phase, and is activated in the data plane once a failure occurs.

In the scenario of interworking between GMPLS and a centralized controller system, the route of the protecting LSP can be computed by the centralized controller system. This has the advantage of making better use of network resources, especially for the resource sharing used in shared-mesh restoration.

- Full LSP rerouting:

In full LSP rerouting, the normal traffic is switched to an alternate LSP that is fully established only after the failure occurs.

As described in [RFC4872] and [RFC4873], the alternate route can be computed on demand when the failure occurs, or pre-computed and stored before the failure occurs.

In a fully distributed scenario, the pre-computation method offers a faster restoration time, but carries the risk that the pre-computed alternate route becomes out of date due to changes in the network.

In the scenario of interworking between GMPLS and a centralized controller system, the pre-computation of the alternate route can take place in the centralized controller (and the route may be stored in the controller or in the head-end node of the LSP). In this way, any change in the network can trigger the centralized controller to refresh the alternate route. This ensures that the alternate route does not become out of date.

7.4.4. Multi-domain LSP Restoration

A working LSP may traverse multiple domains, each of which may or may not support a GMPLS distributed control plane.
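The following Python sketch gives a simplified preview of the decision logic discussed in the remainder of this section: intra-domain segment rerouting is attempted first in a GMPLS-capable domain, escalation to inter-domain or end-to-end rerouting is coordinated by the controller hierarchy, and recovery inside non-GMPLS domains is left to domain-specific means. The domain model and function names are hypothetical and purely illustrative.

   # Simplified decision sketch for restoring a multi-domain working LSP.
   # Domain capabilities and function names are hypothetical; the cases
   # correspond to the end-to-end and segment rerouting methods discussed
   # in this section.

   def restore(working_lsp, failed_domain, domains):
       """Pick a restoration action for a failure inside 'failed_domain'."""
       if domains[failed_domain]["gmpls"]:
           # GMPLS domain: try intra-domain segment rerouting first ([RFC4873]).
           if try_segment_rerouting(failed_domain, working_lsp):
               return "intra-domain segment rerouting"
           # Otherwise escalate: the controller hierarchy coordinates an
           # inter-domain rerouting or a full end-to-end rerouting.
           return "inter-domain or end-to-end rerouting via controllers"
       # Non-GMPLS domain: segment recovery is left to domain-specific means.
       return "domain-specific recovery (out of scope)"

   def try_segment_rerouting(domain, lsp):
       # Placeholder: in practice the domain's GMPLS control plane attempts
       # the segment rerouting and reports the result to its controller.
       return False

   print(restore({"name": "lsp-1"}, "domain-2",
                 {"domain-2": {"gmpls": True}}))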
859 In the case that all the domains support GMPLS, both the end-to-end 860 rerouting method and the domain segment rerouting method could be 861 used. 863 In the case that only some of the domains support GMPLS, the domain 864 segment rerouting method could be used in those GMPLS domains. For 865 other domains which do not support GMPLS, other mechanisms may be 866 used to protect the LSP segments, which are out of scope of this 867 document. 869 1) End-to-end rerouting: 871 In this case, failure occurs on the working LSP inside any domain or 872 on the inter-domain links will trigger the end-to-end restoration. 874 In both pre-planned and full LSP rerouting, the end-to-end 875 protecting LSP could be computed by the centralized controller 876 system, and could be created by the procedure described in Section 877 7.2. Note that the end-to-end protecting LSP may traverse different 878 domains from the working LSP, depending on the result of multi- 879 domain path computation for the protecting LSP. 881 +-------------+ +-------------+ +-------------+ 882 | Domain 1 | | Domain 2 | | Domain 3 | 883 |+---+ +---+| |+---+ +---+| |+---+ +---+| 884 || | | || || | | || || | | || 885 || ===/~/======/~~~/================================ || 886 || * | | || || | | || || | | * || 887 |+-*-+ +---+| |+---+ +---+| |+---+ +-*-+| 888 | * | +-------------+ | * | 889 | * | | * | 890 | * | +-------------+ | * | 891 | * | | Domain 4 | | * | 892 |+-*-+ +---+| |+---+ +---+| |+---+ +-*-+| 893 || * | | || || | | || || | | * || 894 || ************************************************* || 895 || | | || || | | || || | | || 896 |+---+ +---+| |+---+ +---+| |+---+ +---+| 897 +-------------+ +-------------+ +-------------+ 899 ====: Working LSP ****: Protecting LSP /~/: Failure 901 Figure 5: End-to-end restoration 903 2) Domain segment rerouting: 905 2.1) Intra-domain rerouting: 907 If failure occurs on the working LSP segment in a GMPLS domain, the 908 segment rerouting ([RFC4873]) could be used for the working LSP 909 segment in that GMPLS domain. Figure 6 shows an example of intra- 910 domain rerouting. 912 +-------------+ +-------------+ +-------------+ 913 | Domain 1 | | Domain 2 | | Domain 3 | 914 |+---+ +---+| |+---+ +---+| |+---+ +---+| 915 || | | || || | | || || | | || 916 || ===/~/=========================================== || 917 || * | | * || || | | || || | | || 918 |+-*-+ +-*-+| |+---+ +---+| |+---+ +---+| 919 | * * | +-------------+ | | 920 | * * | | | 921 | * * | +-------------+ | | 922 | * * | | Domain 4 | | | 923 |+-*-+ +-*-+| |+---+ +---+| |+---+ +---+| 924 || * | | * || || | | || || | | || 925 || ********* || || | | || || | | || 926 || | | || || | | || || | | || 927 |+---+ +---+| |+---+ +---+| |+---+ +---+| 928 +-------------+ +-------------+ +-------------+ 930 ====: Working LSP ****: Rerouting LSP segment /~/: Failure 932 Figure 6: Intra-domain segment rerouting 934 2.2) Inter-domain rerouting: 936 If intra-domain segment rerouting failed (e.g., due to lack of 937 resource in that domain), or if failure occurs on the working LSP on 938 an inter-domain link, the centralized controller system may 939 coordinate with other domain(s), to find an alternative path or path 940 segment to bypass the failure, and then trigger the inter-domain 941 rerouting procedure. Note that the rerouting path or path segment 942 may traverse different domains from the working LSP. 
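The coordination described above can be sketched, in a purely illustrative way, as follows: a domain controller reports a failed segment rerouting to the multi-domain orchestrator, which computes a rerouting plan (possibly involving a domain not used by the working LSP) and asks each involved domain controller to create its path segment. The classes and method names below are hypothetical placeholders for the MPI/PCEP interactions and do not define any protocol.

   # Conceptual sketch of the coordination steps for inter-domain
   # rerouting.  Controller objects and method names are hypothetical
   # placeholders for the interactions described in this section.

   class DomainController:
       def __init__(self, name):
           self.name = name
       def report_segment_rerouting_failed(self, orchestrator, lsp):
           # e.g. conveyed with a PCRpt message and relayed over the MPI
           orchestrator.handle_segment_failure(self.name, lsp)
       def create_segment(self, lsp, hops):
           print(f"{self.name}: create rerouted segment {hops} for {lsp}")

   class Orchestrator:
       def __init__(self, controllers):
           self.controllers = {c.name: c for c in controllers}
       def handle_segment_failure(self, domain, lsp):
           # Compute a rerouting path across domains (e.g. per [PAT-COMP]);
           # here the result is hard-coded to bypass the failed domain.
           plan = {"Domain 1": ["A", "B'"], "Domain 4": ["X", "Y"],
                   "Domain 3": ["C'", "D"]}
           for name, hops in plan.items():
               self.controllers[name].create_segment(lsp, hops)

   domains = [DomainController(n) for n in
              ("Domain 1", "Domain 2", "Domain 3", "Domain 4")]
   orch = Orchestrator(domains)
   domains[1].report_segment_rerouting_failed(orch, "lsp-1")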
944 For inter-domain rerouting, the interaction between GMPLS and 945 centralized controller system is needed: 947 - Report of the result of intra-domain segment rerouting to its 948 domain SDN controller, and then to the multi-domain orchestrator. 949 The former one could be supported by the PCRpt message in 950 [RFC8231], while the latter one could be supported by the MPI 951 interface of ACTN. 953 - Report of inter-domain link failure to the two domain SDN 954 controllers (by which the two ends of the inter-domain link are 955 controlled respectively), and then to the multi-domain 956 orchestrator. The former one could be done as described in Section 957 7.1 of this document, while the latter one could be supported by 958 the MPI interface of ACTN. 960 - Computation of rerouting path or path segment crossing multi- 961 domains by the centralized controller system (see [PAT-COMP]); 962 - Creation of rerouting path segment in each related domain. The 963 multi-domain orchestrator can send the path segment rerouting 964 request to each related domain SDN controller via MPI interface, 965 and then each domain SDN controller can trigger the creation of 966 rerouting path segment in its domain. Note that the ingress and/or 967 egress node of the rerouting path segment may be different from 968 the working LSP segment in each related domain (e.g., Domain 1 and 969 Domain 2 in Figure 7). Note also that the rerouting path segment 970 may traverse a new domain which the working LSP does not traverse 971 (e.g., Domain 4 in Figure 7). 973 +------------+ 974 |Multi-domain| 975 |Orchestrator| 976 +-----.------+ 977 .................................................. 978 . . . . 979 +-----V------+ +-----V------+ +-----V------+ +-----V------+ 980 | Domain | | Domain | | Domain | | Domain | 981 |Controller 1| |Controller 2| |Controller 3| |Controller 4| 982 +-----.------+ +------.-----+ +-----.------+ +-----.------+ 983 . . . . 984 +-----V-------+ +----V--------+ . +------V------+ 985 | Domain 1 | | Domain 2 | . | Domain 3 | 986 |+---+ +---+| |+---+ +---+| . |+---+ +---+| 987 || | | || || | | || . || | | || 988 || ============/~/========================================== || 989 || * | | || || | | * || . || | | || 990 |+-*-+ +---+| |+---+ +-*-+| . |+---+ +---+| 991 | * | +----------*--+ . | | 992 | * | ***** . | | 993 | * | +----------*-----V----+ | | 994 | * | | *Domain 4 | | | 995 |+-*-+ +---+| |+---+ +-*-+ +---+| |+---+ +---+| 996 || * | | || || | | * | | || || | | || 997 || ******************************* | | || || | | || 998 || | | || || | | | | || || | | || 999 |+---+ +---+| |+---+ +---+ +---+| |+---+ +---+| 1000 +-------------+ +---------------------+ +-------------+ 1002 ====: Working LSP ****: Rerouting LSP segment /~/: Failure 1004 Figure 7: Inter-domain segment rerouting 1006 7.4.5. Fast Reroute 1008 [RFC4090] defines two methods of fast reroute, the one-to-one backup 1009 method and the facility backup method. For both methods: 1011 1) Path computation of protecting LSP: 1013 In Section 6.2 of [RFC4090], the protecting LSP (detour LSP in one- 1014 to-one backup, or bypass tunnel in facility backup) could be 1015 computed by the Point of Local Repair (PLR) using, for example, 1016 Constraint-based Shortest Path First (CSPF) computation. 
In the scenario of interworking between GMPLS and a centralized controller system, the protecting LSP could also be computed by the centralized controller system, as it has a global view of the network topology, resources and LSP information.

2) Protecting LSP creation:

In the scenario of interworking between GMPLS and a centralized controller system, the protecting LSP could still be created by the RSVP-TE signaling protocol as described in [RFC4090] and [RFC8271].

In addition, if the protecting LSP is computed by the centralized controller system, the Secondary Explicit Route Object defined in [RFC4873] could be used to explicitly indicate the route of the protecting LSP.

3) Failure detection and traffic switchover:

If a PLR detects that a failure has occurred, it is recommended to still use the distributed mechanisms described in [RFC4090] to switch the traffic to the related detour LSP or bypass tunnel, rather than a centralized mechanism. This can significantly shorten the protection switching time.

7.5. Controller Reliability

Given its important role in the network, the reliability of the controller is critical. Even if a controller is shut down, the network should continue to operate. This can be achieved either by controller backup or by functionality backup. Several controller backup or federation mechanisms exist in the literature. It also improves reliability to back up some functions in the network elements, to guarantee the performance of the network.

8. Manageability Considerations

Each entity in the network, including both controllers and network elements, should be managed properly, as it will interact with other entities. The manageability considerations for controller hierarchies and for network elements still apply respectively. Manageability is also required for the protocols applied in the network.

The responsibility of each entity should be clarified. The control of functions and policies among different controllers should be kept consistent via a proper negotiation process.

9. Security Considerations

This document describes the interworking between GMPLS and controller hierarchies. The security requirements of both systems still apply respectively. The protocols referenced in this document also have various security considerations, which are also expected to be satisfied.

Other considerations on the interface between the controller and the network elements are also important. Such security includes functions to authenticate and authorize control access to the controller from multiple network elements. Security mechanisms on the controller are also required to safeguard the underlying network elements against attacks on the control plane and/or unauthorized usage of data transport resources.

10. IANA Considerations

This document requires no IANA actions.

11. References

11.1. Normative References

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.

[RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions", RFC 3473, January 2003.

[RFC3630] Katz, D., Kompella, K., and D.
Yeung, "Traffic Engineering 1097 (TE) Extensions to OSPF Version 2", RFC 3630, September 1098 2003. 1100 [RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label 1101 Switching (GMPLS) Architecture", RFC 3945, October 2004. 1103 [RFC4090] Pan, P., Ed., Swallow, G., Ed., and A. Atlas, Ed., "Fast 1104 Reroute Extensions to RSVP-TE for LSP Tunnels", RFC 4090, 1105 May 2005. 1107 [RFC4203] Kompella, K., Ed. and Y. Rekhter, Ed., "OSPF Extensions in 1108 Support of Generalized Multi-Protocol Label Switching 1109 (GMPLS)", RFC 4203, October 2005. 1111 [RFC4206] Kompella, K. and Rekhter Y., "Label Switched Paths (LSP) 1112 Hierarchy with Generalized Multi-Protocol Label Switching 1113 (GMPLS) Traffic Engineering (TE)", RFC 4206, October 2005. 1115 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 1116 Element (PCE)-Based Architecture", RFC 4655, August 2006. 1118 [RFC4872] Lang, J., Ed., Rekhter, Y., Ed., and D. Papadimitriou, 1119 Ed., "RSVP-TE Extensions in Support of End-to-End 1120 Generalized Multi-Protocol Label Switching (GMPLS) 1121 Recovery", RFC 4872, May 2007. 1123 [RFC4873] Berger, L., Bryskin, I., Papadimitriou, D., and A. Farrel, 1124 "GMPLS Segment Recovery", RFC 4873, May 2007. 1126 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 1127 Engineering", RFC 5305, October 2008. 1129 [RFC5307] Kompella, K., Ed. and Y. Rekhter, Ed., "IS-IS Extensions 1130 in Support of Generalized Multi-Protocol Label Switching 1131 (GMPLS)", RFC 5307, October 2008. 1133 [RFC5440] Vasseur, JP., Ed. and JL. Le Roux, Ed., "Path Computation 1134 Element (PCE) Communication Protocol (PCEP)", RFC 5440, 1135 March 2009. 1137 [RFC6001] Papadimitriou D., Vigoureux M., Shiomoto K., Brungard D. 1138 and Le Roux JL., "Generalized MPLS (GMPLS) Protocol 1139 Extensions for Multi-Layer and Multi-Region Networks 1140 (MLN/MRN)", RFC 6001, October 2010. 1142 [RFC6107] Shiomoto K. and Farrel A., "Procedures for Dynamically 1143 Signaled Hierarchical Label Switched Paths", RFC 6107, 1144 February 2011. 1146 [RFC6241] Enns, R., Bjorklund, M., Schoenwaelder J., Bierman A., 1147 "Network Configuration Protocol (NETCONF)", RFC 6241, June 1148 2011. 1150 [RFC7074] Berger, L. and J. Meuric, "Revised Definition of the GMPLS 1151 Switching Capability and Type Fields", RFC 7074, November 1152 2013. 1154 [RFC7491] King, D., Farrel, A., "A PCE-Based Architecture for 1155 Application-Based Network Operations", RFC7491, March 1156 2015. 1158 [RFC7926] Farrel, A., Drake, J., Bitar, N., Swallow, G., Ceccarelli, 1159 D. and Zhang, X., "Problem Statement and Architecture for 1160 Information Exchange between Interconnected Traffic- 1161 Engineered Networks", RFC7926, July 2016. 1163 [RFC8040] Bierman, A., Bjorklund, M., Watsen, K., "RESTCONF 1164 Protocol", RFC 8040, January 2017. 1166 [RFC8271] Taillon M., Saad T., Gandhi R., Ali Z. and Bhatia M., 1167 "Updates to the Resource Reservation Protocol for Fast 1168 Reroute of Traffic Engineering GMPLS Label Switched 1169 Paths", RFC 8271, October 2017. 1171 [RFC8282] Oki E., Takeda T., Farrel A. and Zhang F., "Extensions to 1172 the Path Computation Element Communication Protocol (PCEP) 1173 for Inter-Layer MPLS and GMPLS Traffic Engineering", RFC 1174 8282, December 2017. 1176 [RFC8453] Ceccarelli, D. and Y. Lee, "Framework for Abstraction and 1177 Control of Traffic Engineered Networks", RFC 8453, August 1178 2018. 
1180 [RFC8795] Liu, X., Bryskin, I., Beeram, V., Saad, T., Shah, H., 1181 Gonzalez De Dios, O., "YANG Data Model for Traffic 1182 Engineering (TE) Topologies", RFC8795, August 2020. 1184 11.2. Informative References 1186 [RFC3471] Berger, L., Ed., "Generalized Multi-Protocol Label 1187 Switching (GMPLS) Signaling Functional Description", RFC 1188 3471, January 2003. 1190 [RFC4202] Kompella, K., Ed. and Y. Rekhter, Ed., "Routing Extensions 1191 in Support of Generalized Multi-Protocol Label Switching 1192 (GMPLS)", RFC 4202, October 2005. 1194 [RFC4204] Lang, J., Ed., "Link Management Protocol (LMP)", RFC 4204, 1195 October 2005. 1197 [RFC4426] Lang, J., Ed., Rajagopalan, B., Ed., and D. Papadimitriou, 1198 Ed., "Generalized Multi-Protocol Label witching (GMPLS) 1199 Recovery Functional Specification", RFC 4426, March 2006. 1201 [RFC5150] Ayyangar, A., Kompella, K., Vasseur, J.P., Farrel, A., 1202 "Label Switched Path Stitching with Generalized 1203 Multiprotocol Label Switching Traffic Engineering (GMPLS 1204 TE)", RFC 5150, February, 2008. 1206 [RFC5212] Shiomoto K., Papadimitriou D., Le Roux JL., Vigoureux M. 1207 and Brungard D., "Requirements for GMPLS-Based Multi- 1208 Region and Multi-Layer Networks (MRN/MLN)", RFC 5212, July 1209 2008. 1211 [RFC5623] Oki E., Takeda T., Le Roux JL. and Farrel A., "Framework 1212 for PCE-Based Inter-Layer MPLS and GMPLS Traffic 1213 Engineering", RFC 5623, September 2009. 1215 [RFC7138] Ceccarelli, D., Ed., Zhang, F., Belotti, S., Rao, R., and 1216 J. Drake, "Traffic Engineering Extensions to OSPF for 1217 GMPLS Control of Evolving G.709 Optical Transport 1218 Networks", RFC 7138, March 2014. 1220 [RFC7139] Zhang, F., Ed., Zhang, G., Belotti, S., Ceccarelli, D., 1221 and K. Pithewan, "GMPLS Signaling Extensions for Control 1222 of Evolving G.709 Optical Transport Networks", RFC 7139, 1223 March 2014. 1225 [RFC7688] Lee, Y., Ed. and G. Bernstein, Ed., "GMPLS OSPF 1226 Enhancement for Signal and Network Element Compatibility 1227 for Wavelength Switched Optical Networks", RFC 7688, 1228 November 2015. 1230 [RFC7689] Bernstein, G., Ed., Xu, S., Lee, Y., Ed., Martinelli, G., 1231 and H. Harai, "Signaling Extensions for Wavelength 1232 Switched Optical Networks", RFC 7689, November 2015. 1234 [RFC7792] Zhang, F., Zhang, X., Farrel, A., Gonzalez de Dios, O., 1235 and D. Ceccarelli, "RSVP-TE Signaling Extensions in 1236 Support of Flexi-Grid Dense Wavelength Division 1237 Multiplexing (DWDM) Networks", RFC 7792, March 2016. 1239 [RFC8231] Crabbe, E., Minei, I., Medved, J., and R. Varga, "Path 1240 Computation Element Communication Protocol (PCEP) 1241 Extensions for Stateful PCE", RFC 8231, September 2017. 1243 [RFC8281] Crabbe, E., Minei, I., Sivabalan, S., and R. Varga, "PCEP 1244 Extensions for PCE-initiated LSP Setup in a Stateful PCE 1245 Model", RFC 8281, October 2017. 1247 [RFC8345] Clemm, A., Medved, J., Varga, R., Bahadur, N., 1248 Ananthakrishnan, H., Liu, X., "A YANG Data Model for 1249 Network Topologies", RFC 8345, March 2018. 1251 [RFC8363] Zhang, X., Zheng, H., Casellas, R., Dios, O., and D. 1252 Ceccarelli, "GMPLS OSPF-TE Extensions in support of Flexi- 1253 grid DWDM networks", RFC8363, February 2017. 1255 [PAT-COMP] Busi, I., Belotti, S., Lopez, V., Gonzalez de Dios, O., 1256 Sharma, A., Shi, Y., Vilalta, R., Setheraman, K., "Yang 1257 model for requesting Path Computation", draft-ietf-teas- 1258 yang-path-computation, work in progress. 
1260 [PCEP-LS] Dhody, D., Lee, Y., Ceccarelli, D., "PCEP Extensions for 1261 Distribution of Link-State and TE Information", draft- 1262 dhodylee-pce-pcep-ls, work in progress. 1264 [TE-Tunnel] Saad, T. et al., "A YANG Data Model for Traffic 1265 Engineering Tunnels and Interfaces", draft-ietf-teas-yang- 1266 te, work in progress. 1268 [sPCE-ID] Dugeon, O. et al., "PCEP Extension for Stateful Inter- 1269 Domain Tunnels", draft-ietf-pce-stateful-interdomain, work 1270 in progress. 1272 [G.808.1] ITU-T, "Generic protection switching - Linear trail and 1273 subnetwork protection", G.808.1, May 2014. 1275 12. Authors' Addresses 1277 Haomian Zheng 1278 Huawei Technologies 1279 H1, Huawei Xiliu Beipo Village, Songshan Lake 1280 Dongguan 1281 Guangdong, 523808 China 1282 Email: zhenghaomian@huawei.com 1284 Xianlong Luo 1285 Huawei Technologies 1286 G1, Huawei Xiliu Beipo Village, Songshan Lake 1287 Dongguan 1288 Guangdong, 523808 China 1289 Email: luoxianlong@huawei.com 1291 Yunbin Xu 1292 CAICT 1293 Email: xuyunbin@caict.ac.cn 1295 Yang Zhao 1296 China Mobile 1297 Email: zhaoyangyjy@chinamobile.com 1299 Sergio Belotti 1300 Nokia 1301 Email: sergio.belotti@nokia.com 1302 Dieter Beller 1303 Nokia 1304 Email: Dieter.Beller@nokia.com 1306 Yi Lin 1307 Huawei Technologies 1308 H1, Huawei Xiliu Beipo Village, Songshan Lake 1309 Dongguan 1310 Guangdong, 523808 China 1311 Email: yi.lin@huawei.com