TEAS Working Group                                        Haomian Zheng
Internet Draft                                             Xianlong Luo
                                                                 Yi Lin
Category: Informational                             Huawei Technologies
                                                              Yang Zhao
                                                            China Mobile
                                                               Yunbin Xu
                                                                   CAICT
                                                          Sergio Belotti
                                                           Dieter Beller
                                                                   Nokia
Expires: September 08, 2022                               March 07, 2022

    Interworking of GMPLS Control and Centralized Controller System

            draft-ietf-teas-gmpls-controller-inter-work-08

Abstract

Generalized Multi-Protocol Label Switching (GMPLS) control allows each network element (NE) to perform local resource discovery, routing and signaling in a distributed manner.

On the other hand, with the development of software-defined transport networking technology, a set of NEs can be controlled via centralized controller hierarchies to address the issues arising from multi-domain, multi-vendor, and multi-technology environments. An example of such a centralized architecture is the ACTN controller hierarchy described in RFC 8453.

Rather than competing with each other, the distributed and the centralized control planes have their own advantages and should be complementary in the system. This document describes how the GMPLS distributed control plane can interwork with a centralized controller system in a transport network.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on September 08, 2022.

Copyright Notice

Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.
65 This document is subject to BCP 78 and the IETF Trust's Legal 66 Provisions Relating to IETF Documents 67 (http://trustee.ietf.org/license-info) in effect on the date of 68 publication of this document. Please review these documents 69 carefully, as they describe your rights and restrictions with 70 respect to this document. Code Components extracted from this 71 document must include Simplified BSD License text as described in 72 Section 4.e of the Trust Legal Provisions and are provided without 73 warranty as described in the Simplified BSD License. 75 Table of Contents 77 1. Introduction .................................................. 3 78 2. Overview ...................................................... 4 79 2.1. Overview of GMPLS Control Plane ............................. 4 80 2.2. Overview of Centralized Controller System ................... 4 81 2.3. GMPLS Control Interwork with Centralized Controller System .. 5 82 3. Discovery Options ............................................. 7 83 3.1. LMP ...................................................... 7 84 4. Routing Options ............................................... 7 85 4.1. OSPF-TE .................................................. 7 86 4.2. ISIS-TE .................................................. 7 87 4.3. Netconf/RESTconf ......................................... 8 88 5. Path Computation .............................................. 8 89 5.1. Constraint-based Path Computing in GMPLS Control ......... 8 90 5.2. Path Computation Element (PCE) ........................... 8 91 6. Signaling Options ............................................. 9 92 6.1. RSVP-TE .................................................. 9 93 7. Interworking Scenarios ........................................ 9 94 7.1. Topology Collection & Synchronization .................... 9 95 7.2. Multi-domain Service Provisioning ....................... 10 96 7.3. Multi-layer Service Provisioning ........................ 13 97 7.3.1. Multi-layer Path Computation ....................... 14 98 7.3.2. Cross-layer Path Creation .......................... 16 99 7.3.3. Link Discovery ..................................... 17 100 7.4. Recovery ................................................ 17 101 7.4.1. Span Protection .................................... 18 102 7.4.2. LSP Protection ..................................... 18 103 7.4.3. Single-domain LSP Restoration ...................... 18 104 7.4.4. Multi-domain LSP Restoration ....................... 19 105 7.4.5. Fast Reroute ....................................... 23 106 7.5. Controller Reliability .................................. 23 107 8. Manageability Considerations ................................. 23 108 9. Security Considerations ...................................... 24 109 10. IANA Considerations.......................................... 24 110 11. References .................................................. 24 111 11.1. Normative References ................................... 24 112 11.2. Informative References ................................. 26 113 12. Authors' Addresses .......................................... 28 115 1. Introduction 117 Generalized Multi-Protocol Label Switching (GMPLS) [RFC3945] extends 118 MPLS to support different classes of interfaces and switching 119 capabilities such as Time-Division Multiplex Capable (TDM), Lambda 120 Switch Capable (LSC), and Fiber-Switch Capable (FSC). 
Each network 121 element (NE) running a GMPLS control plane collects network 122 information from other NEs and supports service provisioning through 123 signaling in a distributed manner. More generic description for 124 Traffic-engineering networking information exchange can be found in 125 [RFC7926]. 127 On the other hand, Software-Defined Networking (SDN) technologies 128 have been introduced to control the transport network in a 129 centralized manner. Centralized controllers can collect network 130 information from each node and provision services to corresponding 131 nodes. One of the examples is the Abstraction and Control of Traffic 132 Engineered Networks (ACTN) [RFC8453], which defines a hierarchical 133 architecture with Provisioning Network Controller (PNC), Multi- 134 domain Service Coordinator (MDSC) and Customer Network Controller 135 (CNC) as centralized controllers for different network abstraction 136 levels. A Path Computation Element (PCE) based approach has been 137 proposed as Application-Based Network Operations (ABNO) in 138 [RFC7491]. 140 In such centralized controller architectures, GMPLS can be applied 141 for the NE-level control. A centralized controller may support GMPLS 142 enabled domains and may interact with a GMPLS enabled domain where 143 the GMPLS control plane does the service provisioning from ingress 144 to egress. In this case the centralized controller sends the request 145 to the ingress node and does not have to configure all NEs along the 146 path through the domain from ingress to egress thus leveraging the 147 GMPLS control plane. This document describes how GMPLS control 148 interworks with centralized controller system in transport network. 150 2. Overview 152 In this section, overviews of GMPLS control plane and centralized 153 controller system are discussed as well as the interactions between 154 the GMPLS control plane and centralized controllers. 156 2.1. Overview of GMPLS Control Plane 158 GMPLS separates the control plane and the data plane to support 159 time-division, wavelength, and spatial switching, which are 160 significant in transport networks. For the NE level control in 161 GMPLS, each node runs a GMPLS control plane instance. 162 Functionalities such as service provisioning, protection, and 163 restoration can be performed via GMPLS communication among multiple 164 NEs. At the same time, the controller can also collect node and link 165 resources in the network to construct the network topology and 166 compute routing paths for serving service requests. 168 Several protocols have been designed for GMPLS control [RFC3945] 169 including link management [RFC4204], signaling [RFC3471], and 170 routing [RFC4202] protocols. The controllers applying these 171 protocols communicate with each other to exchange resource 172 information and establish Label Switched Paths (LSPs). In this way, 173 controllers in different nodes in the network have the same view of 174 the network topology and provision services based on local policies. 176 2.2. Overview of Centralized Controller System 178 With the development of SDN technologies, a centralized controller 179 architecture has been introduced to transport networks. One example 180 architecture can be found in ACTN [RFC8453]. In such systems, a 181 controller is aware of the network topology and is responsible for 182 provisioning incoming service requests. 184 Multiple hierarchies of controllers are designed at different levels 185 implementing different functions. 
This kind of architecture enables 186 multi-vendor, multi-domain, and multi-technology control. For 187 example, a higher-level controller coordinates several lower-level 188 controllers controlling different domains, for topology collection 189 and service provisioning. Vendor-specific features can be abstracted 190 between controllers, and standard API (e.g., generated from 191 RESTconf/YANG) is used. 193 2.3. GMPLS Control Interwork with Centralized Controller System 195 Besides the GMPLS and the interactions among the controller 196 hierarchies, it is also necessary for the controllers to communicate 197 with the network elements. Within each domain, GMPLS control can be 198 applied to each NE. The bottom-level centralized controller can act 199 as a NE to collect network information and initiate LSP. Figure 1 200 shows an example of GMPLS interworking with centralized controllers 201 (ACTN terminologies are used in the figure). 203 +-------------------+ 204 | Orchestrator | 205 | (MDSC) | 206 +-------------------+ 207 ^ ^ ^ 208 | | | 209 +-------------+ | +-------------+ 210 | |RESTConf/YANG models | 211 V V V 212 +-------------+ +-------------+ +-------------+ 213 |Controller(N)| |Controller(G)| |Controller(G)| 214 | (PNC) | | (PNC) | | (PNC) | 215 +-------------+ +-------------+ +-------------+ 216 ^ ^ ^ ^ ^ ^ 217 | | | | | | 218 Netconf| |PCEP Netconf| |PCEP Netconf| |PCEP 219 /YANG | | /YANG | | /YANG | | 220 V V V V V V 221 .----------. Inter- .----------. Inter- .----------. 222 / \ domain / \ domain / \ 223 | | link | LMP | link | LMP | 224 | |======| OSPF-TE |======| OSPF-TE | 225 | | | RSVP-TE | | RSVP-TE | 226 \ / \ / \ / 227 `----------` `----------` `----------` 228 Non-GMPLS domain 1 GMPLS domain 2 GMPLS domain 3 230 Figure 1: Example of GMPLS/non-GMPLS interworks with Controllers 232 Figure 1 shows the scenario with two GMPLS domains and one non-GMPLS 233 domain. This system supports the interworking among non-GMPLS 234 domain, GMPLS domain and the controller hierarchies. For domain 1, 235 the network element were not enabled with GMPLS so the control can 236 be purely from the controller, via Netconf/YANG and/or PCEP. For 237 domains 2 and 3, each domain has the GMPLS control plane enabled at 238 the physical network level. The PNC can exploit GMPLS capability 239 implemented in the domain to listen to the IGP routing protocol 240 messages (OSPF LSAs for example) that the GMPLS control plane 241 instances are disseminating into the network and thus learn the 242 network topology. For path computation in the domain with PNC 243 implementing a PCE, PCCs (e.g. NEs, other controller/PCE) use PCEP 244 to ask the PNC for a path and get replies. The MDSC communicates 245 with PNCs using for example REST/RESTConf based on YANG data models. 246 As a PNC has learned its domain topology, it can report the topology 247 to the MDSC. When a service arrives, the MDSC computes the path and 248 coordinates PNCs to establish the corresponding LSP segment. 250 Alternatively, the NETCONF protocol can be used to retrieve topology 251 information utilizing the e.g. [RFC8795] Yang model and the 252 technology-specific YANG model augmentations required for the 253 specific network technology. The PNC can retrieve topology 254 information from any NE (the GMPLS control plane instance of each NE 255 in the domain has the same topological view), construct the topology 256 of the domain and export an abstracted view to the MDSC. 
Based on the topology retrieved from multiple PNCs, the MDSC can create a topology graph of the multi-domain network and use it for path computation. To set up a service, the MDSC can exploit, for example, the [TE-Tunnel] YANG model together with the technology-specific YANG model augmentations.

This document focuses on the interworking between the GMPLS and the centralized controller system, including:

- The interworking between the GMPLS domain(s) and the centralized controller(s) (including the orchestrator, if it exists) controlling the GMPLS domain(s).

- The interworking between a non-GMPLS domain (which is controlled by a centralized controller system) and a GMPLS domain, through the controller hierarchy architecture.

For convenience, this document uses the following terms for the controller and the orchestrator:

- Controller(G): A domain controller controlling a GMPLS domain (the controller(G) of the GMPLS domains 2 and 3 in Figure 1);

- Controller(N): A domain controller controlling a non-GMPLS domain (the controller(N) of the non-GMPLS domain 1 in Figure 1);

- H-Controller(G): A domain controller controlling the higher-layer GMPLS domain, in the context of multi-layer networks;

- L-Controller(G): A domain controller controlling the lower-layer GMPLS domain, in the context of multi-layer networks;

- H-Controller(N): A domain controller controlling the higher-layer non-GMPLS domain, in the context of multi-layer networks;

- L-Controller(N): A domain controller controlling the lower-layer non-GMPLS domain, in the context of multi-layer networks;

- Orchestrator(MD): An orchestrator used to orchestrate the multi-domain networks;

- Orchestrator(ML): An orchestrator used to orchestrate the multi-layer networks.

3. Discovery Options

In GMPLS control, the link connectivity needs to be verified between each pair of nodes. In this way, link resources, which are fundamental resources in the network, are discovered by both ends of the link.

3.1. LMP

The Link Management Protocol (LMP) [RFC4204] runs between a pair of nodes and is used to manage TE links. In addition to the setup and maintenance of control channels, LMP can be used to verify the data link connectivity and to correlate the link properties.

4. Routing Options

In GMPLS control, link state information is flooded within the network as defined in [RFC4202]. Each node in the network can build the network topology according to the flooded link state information. Routing protocols such as OSPF-TE [RFC4203] and ISIS-TE [RFC5307] have been extended to support different interfaces in GMPLS.

In a centralized controller system, the centralized controller can be attached to the GMPLS network and passively receive the information flooded in the network. In this way, the centralized controller can construct and update the network topology.

4.1. OSPF-TE

OSPF-TE is introduced for TE networks in [RFC3630]. OSPF extensions have been defined in [RFC4203] to enable the advertisement of link state information for GMPLS networks. Based on this work, the OSPF protocol has been further extended to support technology-specific routing. The routing extensions for OTN, WSON, and optical flexi-grid networks are defined in [RFC7138], [RFC7688], and [RFC8363], respectively.

4.2. ISIS-TE

ISIS-TE is introduced for TE networks in [RFC5305] and is extended to support GMPLS routing functions in [RFC5307], which has been updated by [RFC7074] to support the latest GMPLS Switching Capability and Type fields.

4.3. Netconf/RESTconf

NETCONF [RFC6241] and RESTCONF [RFC8040] were originally designed for network configuration. These protocols can also be used for topology retrieval by using topology-related YANG models, such as [RFC8345] and [RFC8795]. In addition, they provide a powerful notification mechanism that allows a client to be notified about topology changes.
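As a purely illustrative sketch of such a retrieval, the Python fragment below issues a RESTCONF GET for the generic topology tree rooted at ietf-network:networks ([RFC8345], augmented by [RFC8795]) and summarizes the nodes and links it finds. The controller address, the credentials, and the assumption that the PNC exposes a RESTCONF server under the default /restconf root are hypothetical; a deployment may equally use NETCONF, and would normally also request the technology-specific augmentations.

   # Illustrative sketch only: retrieve the ietf-network topology tree
   # from a (hypothetical) RESTCONF-capable PNC and summarize it.
   import requests

   PNC = "https://pnc.example.net"   # hypothetical controller address
   URL = PNC + "/restconf/data/ietf-network:networks"

   resp = requests.get(
       URL,
       auth=("admin", "admin"),      # hypothetical credentials
       headers={"Accept": "application/yang-data+json"},
       verify=False)
   resp.raise_for_status()

   networks = resp.json()["ietf-network:networks"].get("network", [])
   for net in networks:
       nodes = net.get("node", [])
       links = net.get("ietf-network-topology:link", [])
       print(net.get("network-id"), ":", len(nodes), "nodes,",
             len(links), "links")

The same subtree can be obtained over NETCONF with a <get> operation and a subtree filter, and topology changes can then be received asynchronously through the notification mechanisms mentioned above.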
5. Path Computation

Once a controller learns the network topology, it can utilize the available resources to serve service requests by performing path computation. Due to abstraction, the controllers may not have sufficient information to compute the optimal path. In this case, the controller can interact with other controllers by sending YANG path computation requests [PAT-COMP] to compute a set of potentially optimal paths, and then, based on its own constraints, policies, and specific knowledge (e.g., the cost of access links), choose the most suitable path for the end-to-end service path setup.

Path computation is one of the key objectives in various types of controllers. In the given architecture, several different components may have the capability to compute paths.

5.1. Constraint-based Path Computing in GMPLS Control

In GMPLS control, a routing path is computed by the ingress node [RFC3473], based on the ingress node's TED. Constraint-based path computation is performed according to the local policy of the ingress node.

5.2. Path Computation Element (PCE)

PCE has been introduced in [RFC4655] as a functional component that provides services to compute paths in a network. In [RFC5440], the path computation is accomplished by using the Traffic Engineering Database (TED), which maintains the link resources in the network. The emergence of PCE efficiently improves the quality of network planning and offline computation, but there is a risk that the computed path may be infeasible when there is a diversity requirement, because a stateless PCE has no knowledge of previously computed paths.

To address this issue, stateful PCE has been proposed in [RFC8231]. Besides the TED, an additional LSP Database (LSP-DB) is introduced to record each LSP computed by the PCE. In this way, the PCE can easily figure out the relationship between the path being computed and previously computed paths. In this approach, the PCE provides computed paths to the PCC, and the PCC then decides which path is deployed and when it is established.

In PCE Initiation [RFC8281], the PCE is allowed to trigger the PCC to set up, maintain, and tear down PCE-initiated LSPs under the stateful PCE model. This allows a dynamic network that is centrally controlled and deployed.

In a centralized controller system, the PCE can be implemented in a centralized controller, and the centralized controller performs path computation according to its local policies. Alternatively, the PCE can be placed outside of the centralized controller. In this case, the centralized controller acts as a PCC to request path computation from the PCE through PCEP. One reference architecture can be found in [RFC7491].
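As a simplified illustration of why the LSP-DB matters for diversity, the following Python sketch keeps a record of previously computed LSPs and checks whether a candidate path is link-disjoint from one of them. The data structures and names are purely illustrative and do not correspond to the PCEP encodings of [RFC8231].

   # Simplified illustration of stateful-PCE bookkeeping; not a PCEP
   # implementation.
   from typing import Dict, List, Tuple

   Link = Tuple[str, str]            # an undirected link (node A, node B)

   class StatefulPCE:
       def __init__(self) -> None:
           # LSP-DB: LSP name -> ordered list of links used by that LSP
           self.lsp_db: Dict[str, List[Link]] = {}

       def store_lsp(self, name: str, links: List[Link]) -> None:
           """Record a computed LSP, as a stateful PCE would."""
           self.lsp_db[name] = links

       def is_link_disjoint(self, candidate: List[Link],
                            existing_lsp: str) -> bool:
           """Check a candidate path against an LSP stored in the LSP-DB.

           A stateless PCE could not perform this check by itself,
           because it keeps no record of previously computed paths.
           """
           used = {frozenset(l) for l in self.lsp_db.get(existing_lsp, [])}
           return all(frozenset(l) not in used for l in candidate)

   # Example usage with hypothetical node names:
   pce = StatefulPCE()
   pce.store_lsp("LSP-1", [("A", "B"), ("B", "C")])
   print(pce.is_link_disjoint([("A", "D"), ("D", "C")], "LSP-1"))  # True
   print(pce.is_link_disjoint([("A", "B"), ("B", "C")], "LSP-1"))  # False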
6. Signaling Options

Signaling mechanisms are used to set up LSPs in GMPLS control. Messages are sent hop by hop between the ingress node and the egress node of the LSP to allocate labels. Once the labels are allocated along the path, the LSP setup is accomplished. Signaling protocols such as RSVP-TE [RFC3473] have been extended to support different interfaces in GMPLS.

6.1. RSVP-TE

RSVP-TE is introduced in [RFC3209] and extended to support GMPLS signaling in [RFC3473]. Several label formats are defined, covering the generalized label request, the generalized label, the suggested label, and label sets. Based on [RFC3473], RSVP-TE has been extended to support technology-specific signaling. The RSVP-TE extensions for OTN, WSON, and optical flexi-grid networks are defined in [RFC7139], [RFC7689], and [RFC7792], respectively.

7. Interworking Scenarios

7.1. Topology Collection & Synchronization

Topology information is needed on both network elements and controllers. The topology on a network element is usually raw information, while the topology on a controller can be either raw or abstracted. Three different abstraction methods are described in [RFC8453], and different controllers can select the corresponding method depending on the application.

When there are changes in the network topology, the impacted network element(s) need to report the changes to all the other network elements, as well as to the controller, to synchronize the topology information. The inter-NE synchronization can be achieved via the protocols mentioned in Sections 3 and 4. The topology synchronization between NEs and controllers can be achieved either by routing protocols such as OSPF-TE, by PCEP-LS [PCEP-LS], or by NETCONF notifications with the corresponding YANG models.

7.2. Multi-domain Service Provisioning

Based on the topology information on controllers and network elements, service provisioning can be performed. A number of methods have been specified for single-domain service provisioning, such as using PCEP and RSVP-TE.

Multi-domain service provisioning requires coordination within the controller hierarchy. Given a service request, the end-to-end delivery procedure may include interactions at any level (i.e., interface) in the hierarchy of controllers (e.g., the MPI and SBI in ACTN). The computation of a cross-domain path is usually performed by the controllers that have a global view of the topologies. The resulting configuration is then decomposed and delegated to the lower-level controllers, which configure the network elements to set up the path.

A combination of centralized and distributed protocols may be necessary for the interaction between the network elements and the controllers. Several methods can be used to create the inter-domain path:

1) With end-to-end RSVP-TE session:

In this method, all the domains need to support the RSVP-TE protocol and thus need to be GMPLS domains. The Controller(G) of the source domain triggers the source node to create the end-to-end RSVP-TE session, and the assignment and distribution of the labels on the inter-domain links are done by the border nodes of each domain, using the RSVP-TE protocol. Therefore, this method requires RSVP-TE interworking between the different domains.
479 There are two possible methods: 481 1.1) One single end-to-end RSVP-TE session 483 In this method, an end-to-end RSVP-TE session from the source node 484 to the destination node will be used to create the inter-domain 485 path. A typical example would be the PCE Initiation scenario, in 486 which a PCE message (PCInitiate) is sent from the controller(G) to 487 the source node, and then trigger an RSVP procedure along the path. 488 Similarly, the interaction between the controller and the source 489 node of the source domain can be achieved by Netconf protocol with 490 corresponding YANG models, and then completed by running RSVP among 491 the network elements. 493 1.2) LSP Stitching 495 The LSP stitching method defined in [RFC5150] can also be used to 496 create the end-to-end LSP. I.e., when the source node receives an 497 end-to-end path creation request (e.g., using PCEP or Netconf 498 protocol), the source node starts an end-to-end RSVP-TE session 499 along the end points of each LSP segment (refers to S-LSP in 500 [RFC5150]) of each domain, to assign the labels on the inter-domain 501 links between each pair of neighbor S-LSPs, and stitch the end-to- 502 end LSP to each S-LSP. See Figure 2 as an example. Note that the S- 503 LSP in each domain can be either created by its Controller(G) in 504 advance, or created dynamically triggered by the end-to-end RSVP-TE 505 session. 507 +------------------------+ 508 | Orchestrator(MD) | 509 +-----------+------------+ 510 | 511 +---------------+ +------V-------+ +---------------+ 512 | Controller(G) | | Controller(G)| | Controller(G) | 513 +-------+-------+ +------+-------+ +-------+-------+ 514 | | | 515 +--------V--------+ +-------V--------+ +--------V--------+ 516 |Client | | | | Client| 517 |Signal Domain 1| | Domain 2 | |Domain 3 Signal| 518 | | | | | | | | 519 |+-+-+ | | | | +-+-+| 520 || | | +--+ +--+| |+--+ +--+ +--+| |+--+ +--+ | | || 521 || | | | | | || || | | | | || || | | | | | || 522 || ******************************************************** || 523 || | | | | || || | | | | || || | | | | || 524 |+---+ +--+ +--+| |+--+ +--+ +--+| |+--+ +--+ +---+| 525 +-----------------+ +----------------+ +-----------------+ 526 | . . . . . . | 527 | .<-S-LSP 1->. .<- S-LSP 2 -->. .<-S-LSP 3->. | 528 | . . . . | 529 |-------------->.---->.------------->.---->.-------------->| 530 |<--------------.<----.<-------------.<----.<--------------| 531 | End-to-end RSVP-TE session for LSP stitching | 533 Figure 2: LSP stitching 535 2) Without end-to-end RSVP-TE session: 537 In this method, each domain can be a GMPLS domain or a non-GMPLS 538 domain. Each controller (may be a Controller(G) or a Controller(N)) 539 is responsible to create the path segment within its domain. The 540 boarder node does not need to communicate with other boarder nodes 541 in other domains for the distribution of labels on inter-domain 542 links, so end-to-end RSVP-TE session through multiple domains is not 543 required, and the interworking of RSVP-TE protocol between different 544 domains is not needed. 546 Note that path segments in the source domain and the destination 547 domain are "asymmetrical" segments, because the configuration of 548 client signal mapping into server layer tunnel is needed at only one 549 end of the segment, while configuration of server layer cross- 550 connect is needed at the other end of the segment. 
For example, the 551 path segment 1 and 3 in Figure 3 are asymmetrical segments, because 552 one end of the segment requires mapping GE into ODU0, while the 553 other end of the segment requires setting up ODU0 cross-connect. 555 +------------------------+ 556 | Orchestrator(MD) | 557 +-----------+------------+ 558 | 559 +---------------+ +------V-------+ +---------------+ 560 | Controller | | Controller | | Controller | 561 +-------+-------+ +------+-------+ +-------+-------+ 562 | | | 563 +--------V--------+ +-------V--------+ +--------V--------+ 564 |Client | | | | Client| 565 |Signal Domain 1| | Domain 2 | |Domain 3 Signal| 566 |(GE) | | | | (GE) | 567 | | ODU0 tunnel| | | | | | 568 |+-+-+ ^ | | | | +-+-+| 569 || | | +--+ |+--+| |+--+ +--+ +--+| |+--+ +--+ | | || 570 || | | | | || || || | | | | || || | | | | | || 571 || ******************************************************** || 572 || | | | | || . || | | | | || . || | | | | || 573 |+---+ +--+ +--+| . |+--+ +--+ +--+| . |+--+ +--+ +---+| 574 +-----------------+ . +----------------+ . +-----------------+ 575 . . . . 576 .<-Path Segment 1->.<--Path Segment 2-->.<-Path Segment 3->. 578 Figure 3: Example of asymmetrical path segment 580 The PCEP / GMPLS protocols should support creation of such 581 asymmetrical segment. 583 Note also that mechanisms to assign the labels in the inter-domain 584 links are also needed to be considered. There are two possible 585 methods: 587 2.1) Inter-domain labels assigned by NEs: 589 The concept of Stitching Label that allows stitching local path 590 segments was introduced in [RFC5150] and [sPCE-ID], in order to form 591 the inter-domain path crossing several different domains. It also 592 describes the BRPC and H-PCE PCInitiate procedure, i.e., the ingress 593 node of each downstream domain assigns the stitching label for the 594 inter-domain link between the downstream domain and its upstream 595 neighbor domain, and this stitching label will be passed to the 596 upstream neighbor domain by PCE protocol, which will be used for the 597 path segment creation in the upstream neighbor domain. 599 2.2) Inter-domain labels assigned by controller: 601 If the resource of inter-domain links are managed by the 602 orchestrator(MD), each domain controller can provide to the 603 orchestrator(MD) the list of available labels (e.g. timeslots if OTN 604 is the scenario) using IETF Topology model and related technology 605 specific extension. Once that the orchestrator(MD) has computed the 606 E2E path, RSVP-TE or PCEP can be used in the different domains to 607 setup related segment tunnel consisting with label inter-domain 608 information, e.g. for PCEP the label ERO can be included in the 609 PCInitiate message to indicate the inter-domain labels, so that each 610 boarder node of each domain can configure the correct cross-connect 611 within itself. 613 7.3. Multi-layer Service Provisioning 615 GMPLS can interwork with centralized controller system in multi- 616 layer networks. 618 +----------------+ 619 |Orchestrator(ML)| 620 +------+--+------+ 621 | | Higher-layer Network 622 | | .--------------------. 623 | | / \ 624 | | +--------------+ | +--+ Link +--+ | 625 | +-->| H-Controller +----->| | |**********| | | 626 | +--------------+ | +--+ +--+ | 627 | \ . . / 628 | `--.------------.---` 629 | . . 630 | .---.------------.---. 631 | / . . 
\ 632 | +--------------+ | +--+ +--+ +--+ | 633 +----->| L-Controller +----->| | ============== | | 634 +--------------+ | +--+ +--+ +--+ | 635 \ H-LSP / 636 `-------------------` 637 Lower-layer Network 639 Figure 4: GMPLS-controller interworking in multi-layer networks 640 An example with two layers of network is shown in Figure 4. In this 641 example, the GMPLS control plane is enabled in at least one layer 642 network (otherwise it is out of scope of this document), and 643 interworks with the controller of its domain (H-Controller and L- 644 Controller, respectively). The Orchestrator(ML) is used to 645 coordinate the control of the multi-layer network. 647 7.3.1. Multi-layer Path Computation 649 [RFC5623] describes three inter-layer path computation models and 650 four inter-layer path control models: 652 - 3 Path computation: 654 o Single PCE path computation model 656 o Multiple PCE path computation with inter-PCE communication 657 model 659 o Multiple PCE path computation without inter-PCE communication 660 model 662 - 4 Path control: 664 o PCE-VNTM cooperation model 666 o Higher-layer signaling trigger model 668 o NMS-VNTM cooperation model (integrated flavor) 670 o NMS-VNTM cooperation model (separate flavor) 672 Section 4.2.4 of [RFC5623] also provides all the possible 673 combinations of inter-layer path computation and inter-layer path 674 control models. 676 To apply [RFC5623] in multi-layer network with GMPLS-controller 677 interworking, the H-Controller and the L-Controller can act as the 678 PCE Hi and PCE Lo respectively, and typically, the Orchestrator(ML) 679 can act as a VNTM because it has the abstracted view of both the 680 higher-layer and lower-layer networks. 682 Table 1 shows all possible combinations of path computation and path 683 control models in multi-layer network with GMPLS-controller 684 interworking: 686 Table 1: Combinations of path computation and path control models 688 --------------------------------------------------------- 689 | Path computation |Single PCE | Multiple | Multiple | 690 | \ | (Not | PCE with | PCE w/o | 691 | Path control |applicable)| inter-PCE | inter-PCE | 692 |---------------------+-----------+-----------+-----------| 693 | PCE-VNTM | ...... | | | 694 | cooperation | . -- . | Yes | Yes | 695 | | . . | | | 696 |---------------------+--.----.---+-----------+-----------| 697 | Higher-layer | . . | | | 698 | signaling trigger | . -- . | Yes | Yes | 699 | | . . | | | 700 |---------------------+--.----.---+-----------+-----------| 701 | NMS-VNTM | . . | .........|....... | 702 | cooperation | . -- . | .Yes | No . | 703 | (integrated flavor) | . . | . | . | 704 |---------------------+--.----.---+--.--------+------.----| 705 | NMS-VNTM | . . | . | . | 706 | cooperation | . -- . | .No | Yes. | 707 | (separate flavor) | ...... | .........|....... | 708 ---------------------+----|------+--------|--+----------- 709 V V 710 Not applicable because Typical models to be used 711 there are multiple PCEs 713 Note that: 715 - Since there is one PCE in each layer network, the path computation 716 model "Single PCE path computation" is not applicable. 718 - For the other two path computation models "Multiple PCE with 719 inter-PCE" and "Multiple PCE w/o inter-PCE", the possible 720 combinations are the same as defined in [RFC5623]. 
More 721 specifically: 723 o The path control models "NMS-VNTM cooperation (integrated 724 flavor)" and "NMS-VNTM cooperation (separate flavor)" are the 725 typical models to be used in multi-layer network with GMPLS- 726 controller interworking. This is because in these two models, 727 the path computation is triggered by the NMS or VNTM. And in 728 the centralized controller system, the path computation 729 requests are typically from the Orchestrator(ML) (acts as 730 VNTM). 732 o For the other two path control models "PCE-VNTM cooperation" 733 and "Higher-layer signaling trigger", the path computation is 734 triggered by the NEs, i.e., NE performs PCC functions. These 735 two models are still possible to be used, although they are not 736 the main methods. 738 7.3.2. Cross-layer Path Creation 740 In a multi-layer network, a lower-layer LSP in the lower-layer 741 network can be created, which will construct a new link in the 742 higher-layer network. Such lower-layer LSP is called Hierarchical 743 LSP, or H-LSP for short, see [RFC6107]. 745 The new link constructed by the H-LSP then can be used by the 746 higher-layer network to create new LSPs. 748 As described in [RFC5212], two methods are introduced to create the 749 H-LSP: the static (pre-provisioned) method and the dynamic 750 (triggered) method. 752 1) Static (pre-provisioned) method 754 In this method, the H-LSP in the lower-layer network is created in 755 advance. After that, the higher-layer network can create LSPs using 756 the resource of the link constructed by the H-LSP. 758 The Orchestrator(ML) is responsible to decide the creation of H-LSP 759 in the lower-layer network if it acts as a VNTM. It then requests 760 the L-Controller to create the H-LSP via, for example, MPI interface 761 under the ACTN architecture. See Section 3.3.2 of [TE-Tunnel]. 763 If the lower-layer network is a GMPLS domain, the L-Controller(G) 764 can trigger the GMPLS control plane to create the H-LSP. As a 765 typical example, the PCInitiate message can be used for the 766 communication between the L-Controller and the source node of the H- 767 LSP. And the source node of the H-LSP can trigger the RSVP-TE 768 signaling procedure to create the H-LSP, as described in [RFC6107]. 770 If the lower-layer network is a non-GMPLS domain, other methods may 771 be used by the L-Controller(N) to create the H-LSP, which is out of 772 scope of this document. 774 2) Dynamic (triggered) method 776 In this method, the signaling of LSP creation in the higher-layer 777 network will trigger the creation of H-LSP in the lower-layer 778 network dynamically, if it is necessary. Therefore, both the higher- 779 layer and lower-layer networks need to support the RSVP-TE protocol 780 and thus need to be GMPLS domains. 782 In this case, after the cross-layer path is computed, the 783 Orchestrator(ML) requests the H-Controller(G) for the cross-layer 784 LSP creation. As a typical example, the MPI interface under the ACTN 785 architecture could be used. 787 The H-Controller(G) can trigger the GMPLS control plane to create 788 the LSP in the higher-layer network. As a typical example, the 789 PCInitiate message can be used for the communication between the H- 790 Controller(G) and the source node of the Higher-layer LSP, as 791 described in Section 4.3 of [RFC8282]. At least two sets of ERO 792 information should be included to indicate the routes of higher- 793 layer LSP and lower-layer H-LSP. 
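As a purely illustrative sketch of the information such a request needs to carry, the Python fragment below models a cross-layer setup request with two explicit routes, one for the higher-layer LSP and one for the lower-layer H-LSP, and checks that the H-LSP spans two consecutive hops of the higher-layer route (since the H-LSP will appear there as a single link). The structure and names are hypothetical and are not the PCEP objects defined in [RFC8282].

   # Purely illustrative representation of a cross-layer setup request
   # carrying two explicit routes; not a PCEP object encoding.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class CrossLayerSetupRequest:
       tunnel_name: str
       higher_layer_ero: List[str] = field(default_factory=list)
       h_lsp_ero: List[str] = field(default_factory=list)

   def h_lsp_consistent(req: CrossLayerSetupRequest) -> bool:
       """The H-LSP must start and end on two consecutive hops of the
       higher-layer route, where it will be used as a single link."""
       ends = (req.h_lsp_ero[0], req.h_lsp_ero[-1])
       hops = req.higher_layer_ero
       return any((hops[i], hops[i + 1]) == ends
                  for i in range(len(hops) - 1))

   req = CrossLayerSetupRequest(
       tunnel_name="tunnel-1",                      # hypothetical name
       higher_layer_ero=["NE-A", "NE-B", "NE-E"],
       h_lsp_ero=["NE-B", "NE-C", "NE-D", "NE-E"])
   print(h_lsp_consistent(req))                     # True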
The source node of the higher-layer LSP follows the procedure defined in Section 4 of [RFC6001] to trigger the GMPLS control plane in both the higher-layer network and the lower-layer network to create the higher-layer LSP and the lower-layer H-LSP.

On success, the source node of the H-LSP should report the information of the H-LSP to the L-Controller(G) via, for example, a PCRpt message.

7.3.3. Link Discovery

If the higher-layer network and the lower-layer network are under the same GMPLS control plane instance, the H-LSP can be an FA-LSP. The information of the link constructed by this FA-LSP, called an FA, can then be advertised in the routing instance, so that the H-Controller can be aware of this new FA. [RFC4206] and the subsequent updates to it (including [RFC6001] and [RFC6107]) describe the detailed extensions to support advertisement of an FA.

If the higher-layer network and the lower-layer network are under separate GMPLS control plane instances, or if one of the two layer networks is a non-GMPLS domain, then after an H-LSP is created in the lower-layer network, the link discovery procedure will be triggered in the higher-layer network to discover the information of the link constructed by the H-LSP. The LMP protocol defined in [RFC4204] can be used if the higher-layer network supports GMPLS. The information of this new link will be advertised to the H-Controller.

7.4. Recovery

The GMPLS recovery functions are described in [RFC4426]. Span protection, end-to-end protection, and restoration are discussed with different protection schemes and message exchange requirements. The related RSVP-TE extensions to support end-to-end recovery are described in [RFC4872]. The extensions in [RFC4872] include protection, restoration, preemption, and rerouting mechanisms for an end-to-end LSP. Besides end-to-end recovery, a GMPLS segment recovery mechanism is defined in [RFC4873], which is also intended to be compatible with Fast Reroute (FRR) (see [RFC4090], which defines RSVP-TE extensions for the FRR mechanism, and [RFC8271], which describes updates to the GMPLS RSVP-TE protocol for FRR of GMPLS TE-LSPs).

7.4.1. Span Protection

Span protection refers to the protection of the link between two neighboring switches. The main protocol requirements include:

- Link management: link property correlation on the link protection type;

- Routing: announcement of the link protection type;

- Signaling: indication of the link protection requirement for that LSP.

GMPLS already supports the above requirements, and there are no new requirements in the scenario of interworking between GMPLS and a centralized controller system.

7.4.2. LSP Protection

LSP protection includes end-to-end and segment LSP protection. For both cases:

- In the provisioning phase:

  In both single-domain and multi-domain scenarios, the disjoint path computation can be done by the centralized controller system, as it has the global topology and resource view. The path creation can then be done by the procedure described in Section 7.2.

- In the protection switchover phase:

  In both single-domain and multi-domain scenarios, the existing standards provide a distributed way to trigger the protection switchover, for example, the data plane Automatic Protection Switching (APS) mechanism described in [G.808.1], or the GMPLS Notify mechanism described in [RFC4872] and [RFC4873]. In the scenario of interworking between GMPLS and a centralized controller system, it is recommended to still use these distributed mechanisms rather than a centralized mechanism (i.e., the controller triggering the protection switchover). This can significantly shorten the protection switching time.
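As a minimal sketch of how a controller with a global topology view might compute a pair of link-disjoint working and protection paths, the Python fragment below uses a simple two-step heuristic: compute the working path, remove its links, and compute again. The topology, node names, and metrics are hypothetical, and a practical implementation would normally use an optimal disjoint-pair algorithm (e.g., Suurballe or Bhandari), since this greedy approach can fail even when a disjoint pair exists.

   # Illustrative link-disjoint working/protection computation on a
   # global topology view (greedy two-step heuristic, not optimal).
   import networkx as nx

   g = nx.Graph()
   # Hypothetical topology: (node A, node B, TE metric)
   for a, b, cost in [("A", "B", 1), ("B", "C", 1), ("A", "D", 2),
                      ("D", "E", 2), ("E", "C", 2), ("B", "E", 1)]:
       g.add_edge(a, b, weight=cost)

   def disjoint_pair(graph, src, dst):
       working = nx.shortest_path(graph, src, dst, weight="weight")
       pruned = graph.copy()
       pruned.remove_edges_from(zip(working, working[1:]))
       try:
           protection = nx.shortest_path(pruned, src, dst, weight="weight")
       except nx.NetworkXNoPath:
           protection = None   # no disjoint path found by this heuristic
       return working, protection

   print(disjoint_pair(g, "A", "C"))
   # (['A', 'B', 'C'], ['A', 'D', 'E', 'C'])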
7.4.3. Single-domain LSP Restoration

- Pre-planned LSP rerouting (including shared-mesh restoration):

  In pre-planned rerouting, the protecting LSP is established only in the control plane during the provisioning phase and will be activated in the data plane once a failure occurs.

  In the scenario of interworking between GMPLS and a centralized controller system, the route of the protecting LSP can be computed by the centralized controller system. This has the advantage of making better use of network resources, especially for the resource sharing in shared-mesh restoration.

- Full LSP rerouting:

  In full LSP rerouting, the normal traffic will be switched to an alternate LSP that is fully established only after the failure occurs.

  As described in [RFC4872] and [RFC4873], the alternate route can be computed on demand upon failure occurrence, or pre-computed and stored before a failure occurs.

  In a fully distributed scenario, the pre-computation method offers a faster restoration time, but has the risk that the pre-computed alternate route may become out of date due to changes in the network.

  In the scenario of interworking between GMPLS and a centralized controller system, the pre-computation of the alternate route can take place in the centralized controller (and the route may be stored in the controller or in the head-end node of the LSP). In this way, any change in the network can trigger a refresh of the alternate route by the centralized controller. This ensures that the alternate route does not become out of date.

7.4.4. Multi-domain LSP Restoration

A working LSP may traverse multiple domains, each of which may or may not support a GMPLS distributed control plane.

In the case that all the domains support GMPLS, both the end-to-end rerouting method and the domain segment rerouting method can be used.

In the case that only some of the domains support GMPLS, the domain segment rerouting method can be used in those GMPLS domains. For the other domains, which do not support GMPLS, other mechanisms may be used to protect the LSP segments; these are out of scope of this document.

1) End-to-end rerouting:

In this case, a failure occurring on the working LSP inside any domain or on an inter-domain link will trigger the end-to-end restoration.

In both pre-planned and full LSP rerouting, the end-to-end protecting LSP can be computed by the centralized controller system and can be created by the procedure described in Section 7.2. Note that the end-to-end protecting LSP may traverse different domains from the working LSP, depending on the result of the multi-domain path computation for the protecting LSP.

943 +----------------+ 944 |Orchestrator(MD)| 945 +-------.--------+ 946 ............................................ 947 . . . .
948 +----V-----+ +----V-----+ +----V-----+ +----V-----+ 949 |Controller| |Controller| |Controller| |Controller| 950 | (G) 1 | | (G) 2 | | (G) 3 | | (G) 4 | 951 +----.-----+ +-------.--+ +-------.--+ +----.-----+ 952 . . . . 953 +----V--------+ +-V-----------+ . +-------V-----+ 954 | Domain 1 | | Domain 2 | . | Domain 4 | 955 |+---+ +---+| |+---+ +---+| . |+---+ +---+| 956 || ===/~/======/~~~/================================ || 957 |+-*-+ +---+| |+---+ +---+| . |+---+ +-*-+| 958 | * | +-------------+ . | * | 959 | * | . | * | 960 | * | +-------------+ . | * | 961 | * | | Domain 3 <... | * | 962 |+-*-+ +---+| |+---+ +---+| |+---+ +-*-+| 963 || ************************************************* || 964 |+---+ +---+| |+---+ +---+| |+---+ +---+| 965 +-------------+ +-------------+ +-------------+ 967 ====: Working LSP ****: Protecting LSP /~/: Failure 969 Figure 5: End-to-end restoration 971 2) Domain segment rerouting: 973 2.1) Intra-domain rerouting: 975 If failure occurs on the working LSP segment in a GMPLS domain, the 976 segment rerouting ([RFC4873]) could be used for the working LSP 977 segment in that GMPLS domain. Figure 6 shows an example of intra- 978 domain rerouting. 980 The intra-domain rerouting of a non-GMPLS domain is out of scope of 981 this document. 983 +----------------+ 984 |Orchestrator(MD)| 985 +-------.--------+ 986 ............................................ 987 . . . . 988 +----V-----+ +----V-----+ +----V-----+ +----V-----+ 989 |Controller| |Controller| |Controller| |Controller| 990 | (G) 1 | |(G)/(N) 2 | |(G)/(N) 3 | |(G)/(N) 4 | 991 +----.-----+ +-------.--+ +-------.--+ +----.-----+ 992 . . . . 993 +----V--------+ +-V-----------+ . +-------V-----+ 994 | Domain 1 | | Domain 2 | . | Domain 4 | 995 |+---+ +---+| |+---+ +---+| . |+---+ +---+| 996 || ===/~/=========================================== || 997 |+-*-+ +-*-+| |+---+ +---+| . |+---+ +---+| 998 | * * | +-------------+ . | | 999 | * * | . | | 1000 | * * | +-------------+ . | | 1001 | * * | | Domain 3 <... | | 1002 |+-*-+ +-*-+| |+---+ +---+| |+---+ +---+| 1003 || ********* || || | | || || | | || 1004 |+---+ +---+| |+---+ +---+| |+---+ +---+| 1005 +-------------+ +-------------+ +-------------+ 1007 ====: Working LSP ****: Rerouting LSP segment /~/: Failure 1009 Figure 6: Intra-domain segment rerouting 1011 2.2) Inter-domain rerouting: 1013 If intra-domain segment rerouting failed (e.g., due to lack of 1014 resource in that domain), or if failure occurs on the working LSP on 1015 an inter-domain link, the centralized controller system may 1016 coordinate with other domain(s), to find an alternative path or path 1017 segment to bypass the failure, and then trigger the inter-domain 1018 rerouting procedure. Note that the rerouting path or path segment 1019 may traverse different domains from the working LSP. 1021 The domains involved in the inter-domain rerouting procedure need to 1022 be GMPLS domains, which support the RSVP-TE signaling for the 1023 creation of rerouting LSP segment. 1025 For inter-domain rerouting, the interaction between GMPLS and 1026 centralized controller system is needed: 1028 - Report of the result of intra-domain segment rerouting to its 1029 Controller(G), and then to the Orchestrator(MD). The former one 1030 could be supported by the PCRpt message in [RFC8231], while the 1031 latter one could be supported by the MPI interface of ACTN. 
1033 - Report of inter-domain link failure to the two Controllers (e.g., 1034 Controller(G) 1 and Controller(G) 2 in Figure 7) by which the two 1035 ends of the inter-domain link are controlled respectively, and 1036 then to the Orchestrator(MD). The former one could be done as 1037 described in Section 7.1 of this document, while the latter one 1038 could be supported by the MPI interface of ACTN. 1040 - Computation of rerouting path or path segment crossing multi- 1041 domains by the centralized controller system (see [PAT-COMP]); 1043 - Creation of rerouting LSP segment in each related domain. The 1044 Orchestrator(MD) can send the LSP segment rerouting request to the 1045 source Controller(G) (e.g., Controller(G) 1 in Figure 7) via MPI 1046 interface, and then the Controller(G) can trigger the creation of 1047 rerouting LSP segment through multiple GMPLS domains using GMPLS 1048 rerouting signaling. Note that the rerouting LSP segment may 1049 traverse a new domain which the working LSP does not traverse 1050 (e.g., Domain 3 in Figure 7). 1052 +----------------+ 1053 |Orchestrator(MD)| 1054 +-------.--------+ 1055 .................................................. 1056 . . . . 1057 +-----V------+ +-----V------+ +-----V------+ +-----V------+ 1058 | Controller | | Controller | | Controller | | Controller | 1059 | (G) 1 | | (G) 2 | | (G) 3 | | (G)/(N) 4 | 1060 +-----.------+ +------.-----+ +-----.------+ +-----.------+ 1061 . . . . 1062 +-----V-------+ +----V--------+ . +------V------+ 1063 | Domain 1 | | Domain 2 | . | Domain 4 | 1064 |+---+ +---+| |+---+ +---+| . |+---+ +---+| 1065 || | | || || | | || . || | | || 1066 || ============/~/========================================== || 1067 || * | | || || | | * || . || | | || 1068 |+-*-+ +---+| |+---+ +-*-+| . |+---+ +---+| 1069 | * | +----------*--+ . | | 1070 | * | ***** . | | 1071 | * | +----------*-----V----+ | | 1072 | * | | *Domain 3 | | | 1073 |+-*-+ +---+| |+---+ +-*-+ +---+| |+---+ +---+| 1074 || * | | || || | | * | | || || | | || 1075 || ******************************* | | || || | | || 1076 || | | || || | | | | || || | | || 1077 |+---+ +---+| |+---+ +---+ +---+| |+---+ +---+| 1078 +-------------+ +---------------------+ +-------------+ 1080 ====: Working LSP ****: Rerouting LSP segment /~/: Failure 1082 Figure 7: Inter-domain segment rerouting 1083 7.4.5. Fast Reroute 1085 [RFC4090] defines two methods of fast reroute, the one-to-one backup 1086 method and the facility backup method. For both methods: 1088 1) Path computation of protecting LSP: 1090 In Section 6.2 of [RFC4090], the protecting LSP (detour LSP in one- 1091 to-one backup, or bypass tunnel in facility backup) could be 1092 computed by the Point of Local Repair (PLR) using, for example, 1093 Constraint-based Shortest Path First (CSPF) computation. In the 1094 scenario of interworking between GMPLS and centralized controller 1095 system, the protecting LSP could also be computed by the centralized 1096 controller system, as it has the global view of the network 1097 topology, resource and information of LSPs. 1099 2) Protecting LSP creation: 1101 In the scenario of interworking between GMPLS and centralized 1102 controller system, the Protecting LSP could still be created by the 1103 RSVP-TE signaling protocol as described in [RFC4090] and [RFC8271]. 
In addition, if the protecting LSP is computed by the centralized controller system, the Secondary Explicit Route Object defined in [RFC4873] could be used to explicitly indicate the route of the protecting LSP.

3) Failure detection and traffic switchover:

If a PLR detects that a failure has occurred, it is recommended to still use the distributed mechanisms described in [RFC4090] to switch the traffic to the related detour LSP or bypass tunnel, rather than a centralized mechanism. This can significantly shorten the protection switching time.

7.5. Controller Reliability

Given its important role in the network, the reliability of the controller is critical. If a controller is shut down, the network should continue to operate. This can be achieved either by backing up the controller itself or by backing up its functionality. Several controller backup or federation mechanisms exist in the literature. It is also more reliable to have some functions backed up in the network elements, to guarantee the performance of the network.

8. Manageability Considerations

Each entity in the network, including both controllers and network elements, should be managed properly, as it will interact with other entities. The manageability considerations for controller hierarchies and for network elements still apply respectively. Manageability is also required for the protocols applied in the network.

The responsibility of each entity should be clarified. The control of functions and policies among different controllers should be kept consistent via a proper negotiation process.

9. Security Considerations

This document describes the interworking between GMPLS and controller hierarchies. The security requirements of both systems still apply respectively. The protocols referenced in this document also have various security considerations, which are also expected to be satisfied.

Other considerations on the interface between the controller and the network element are also important. Such security includes the functions to authenticate and authorize the control access to the controller from multiple network elements. Security mechanisms on the controller are also required to safeguard the underlying network elements against attacks on the control plane and/or unauthorized usage of data transport resources.

10. IANA Considerations

This document requires no IANA actions.

11. References

11.1. Normative References

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.

[RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions", RFC 3473, January 2003.

[RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering (TE) Extensions to OSPF Version 2", RFC 3630, September 2003.

[RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label Switching (GMPLS) Architecture", RFC 3945, October 2004.

[RFC4090] Pan, P., Ed., Swallow, G., Ed., and A. Atlas, Ed., "Fast Reroute Extensions to RSVP-TE for LSP Tunnels", RFC 4090, May 2005.

[RFC4203] Kompella, K., Ed. and Y.
Rekhter, Ed., "OSPF Extensions in 1185 Support of Generalized Multi-Protocol Label Switching 1186 (GMPLS)", RFC 4203, October 2005. 1188 [RFC4206] Kompella, K. and Rekhter Y., "Label Switched Paths (LSP) 1189 Hierarchy with Generalized Multi-Protocol Label Switching 1190 (GMPLS) Traffic Engineering (TE)", RFC 4206, October 2005. 1192 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 1193 Element (PCE)-Based Architecture", RFC 4655, August 2006. 1195 [RFC4872] Lang, J., Ed., Rekhter, Y., Ed., and D. Papadimitriou, 1196 Ed., "RSVP-TE Extensions in Support of End-to-End 1197 Generalized Multi-Protocol Label Switching (GMPLS) 1198 Recovery", RFC 4872, May 2007. 1200 [RFC4873] Berger, L., Bryskin, I., Papadimitriou, D., and A. Farrel, 1201 "GMPLS Segment Recovery", RFC 4873, May 2007. 1203 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 1204 Engineering", RFC 5305, October 2008. 1206 [RFC5307] Kompella, K., Ed. and Y. Rekhter, Ed., "IS-IS Extensions 1207 in Support of Generalized Multi-Protocol Label Switching 1208 (GMPLS)", RFC 5307, October 2008. 1210 [RFC5440] Vasseur, JP., Ed. and JL. Le Roux, Ed., "Path Computation 1211 Element (PCE) Communication Protocol (PCEP)", RFC 5440, 1212 March 2009. 1214 [RFC6001] Papadimitriou D., Vigoureux M., Shiomoto K., Brungard D. 1215 and Le Roux JL., "Generalized MPLS (GMPLS) Protocol 1216 Extensions for Multi-Layer and Multi-Region Networks 1217 (MLN/MRN)", RFC 6001, October 2010. 1219 [RFC6107] Shiomoto K. and Farrel A., "Procedures for Dynamically 1220 Signaled Hierarchical Label Switched Paths", RFC 6107, 1221 February 2011. 1223 [RFC6241] Enns, R., Bjorklund, M., Schoenwaelder J., Bierman A., 1224 "Network Configuration Protocol (NETCONF)", RFC 6241, June 1225 2011. 1227 [RFC7074] Berger, L. and J. Meuric, "Revised Definition of the GMPLS 1228 Switching Capability and Type Fields", RFC 7074, November 1229 2013. 1231 [RFC7491] King, D., Farrel, A., "A PCE-Based Architecture for 1232 Application-Based Network Operations", RFC7491, March 1233 2015. 1235 [RFC7926] Farrel, A., Drake, J., Bitar, N., Swallow, G., Ceccarelli, 1236 D. and Zhang, X., "Problem Statement and Architecture for 1237 Information Exchange between Interconnected Traffic- 1238 Engineered Networks", RFC7926, July 2016. 1240 [RFC8040] Bierman, A., Bjorklund, M., Watsen, K., "RESTCONF 1241 Protocol", RFC 8040, January 2017. 1243 [RFC8271] Taillon M., Saad T., Gandhi R., Ali Z. and Bhatia M., 1244 "Updates to the Resource Reservation Protocol for Fast 1245 Reroute of Traffic Engineering GMPLS Label Switched 1246 Paths", RFC 8271, October 2017. 1248 [RFC8282] Oki E., Takeda T., Farrel A. and Zhang F., "Extensions to 1249 the Path Computation Element Communication Protocol (PCEP) 1250 for Inter-Layer MPLS and GMPLS Traffic Engineering", RFC 1251 8282, December 2017. 1253 [RFC8453] Ceccarelli, D. and Y. Lee, "Framework for Abstraction and 1254 Control of Traffic Engineered Networks", RFC 8453, August 1255 2018. 1257 [RFC8795] Liu, X., Bryskin, I., Beeram, V., Saad, T., Shah, H., 1258 Gonzalez De Dios, O., "YANG Data Model for Traffic 1259 Engineering (TE) Topologies", RFC8795, August 2020. 1261 11.2. Informative References 1263 [RFC3471] Berger, L., Ed., "Generalized Multi-Protocol Label 1264 Switching (GMPLS) Signaling Functional Description", RFC 1265 3471, January 2003. 1267 [RFC4202] Kompella, K., Ed. and Y. Rekhter, Ed., "Routing Extensions 1268 in Support of Generalized Multi-Protocol Label Switching 1269 (GMPLS)", RFC 4202, October 2005. 
1271 [RFC4204] Lang, J., Ed., "Link Management Protocol (LMP)", RFC 4204, 1272 October 2005. 1274 [RFC4426] Lang, J., Ed., Rajagopalan, B., Ed., and D. Papadimitriou, 1275 Ed., "Generalized Multi-Protocol Label witching (GMPLS) 1276 Recovery Functional Specification", RFC 4426, March 2006. 1278 [RFC5150] Ayyangar, A., Kompella, K., Vasseur, J.P., Farrel, A., 1279 "Label Switched Path Stitching with Generalized 1280 Multiprotocol Label Switching Traffic Engineering (GMPLS 1281 TE)", RFC 5150, February, 2008. 1283 [RFC5212] Shiomoto K., Papadimitriou D., Le Roux JL., Vigoureux M. 1284 and Brungard D., "Requirements for GMPLS-Based Multi- 1285 Region and Multi-Layer Networks (MRN/MLN)", RFC 5212, July 1286 2008. 1288 [RFC5623] Oki E., Takeda T., Le Roux JL. and Farrel A., "Framework 1289 for PCE-Based Inter-Layer MPLS and GMPLS Traffic 1290 Engineering", RFC 5623, September 2009. 1292 [RFC7138] Ceccarelli, D., Ed., Zhang, F., Belotti, S., Rao, R., and 1293 J. Drake, "Traffic Engineering Extensions to OSPF for 1294 GMPLS Control of Evolving G.709 Optical Transport 1295 Networks", RFC 7138, March 2014. 1297 [RFC7139] Zhang, F., Ed., Zhang, G., Belotti, S., Ceccarelli, D., 1298 and K. Pithewan, "GMPLS Signaling Extensions for Control 1299 of Evolving G.709 Optical Transport Networks", RFC 7139, 1300 March 2014. 1302 [RFC7688] Lee, Y., Ed. and G. Bernstein, Ed., "GMPLS OSPF 1303 Enhancement for Signal and Network Element Compatibility 1304 for Wavelength Switched Optical Networks", RFC 7688, 1305 November 2015. 1307 [RFC7689] Bernstein, G., Ed., Xu, S., Lee, Y., Ed., Martinelli, G., 1308 and H. Harai, "Signaling Extensions for Wavelength 1309 Switched Optical Networks", RFC 7689, November 2015. 1311 [RFC7792] Zhang, F., Zhang, X., Farrel, A., Gonzalez de Dios, O., 1312 and D. Ceccarelli, "RSVP-TE Signaling Extensions in 1313 Support of Flexi-Grid Dense Wavelength Division 1314 Multiplexing (DWDM) Networks", RFC 7792, March 2016. 1316 [RFC8231] Crabbe, E., Minei, I., Medved, J., and R. Varga, "Path 1317 Computation Element Communication Protocol (PCEP) 1318 Extensions for Stateful PCE", RFC 8231, September 2017. 1320 [RFC8281] Crabbe, E., Minei, I., Sivabalan, S., and R. Varga, "PCEP 1321 Extensions for PCE-initiated LSP Setup in a Stateful PCE 1322 Model", RFC 8281, October 2017. 1324 [RFC8345] Clemm, A., Medved, J., Varga, R., Bahadur, N., 1325 Ananthakrishnan, H., Liu, X., "A YANG Data Model for 1326 Network Topologies", RFC 8345, March 2018. 1328 [RFC8363] Zhang, X., Zheng, H., Casellas, R., Dios, O., and D. 1329 Ceccarelli, "GMPLS OSPF-TE Extensions in support of Flexi- 1330 grid DWDM networks", RFC8363, February 2017. 1332 [PAT-COMP] Busi, I., Belotti, S., Lopez, V., Gonzalez de Dios, O., 1333 Sharma, A., Shi, Y., Vilalta, R., Setheraman, K., "Yang 1334 model for requesting Path Computation", draft-ietf-teas- 1335 yang-path-computation, work in progress. 1337 [PCEP-LS] Dhody, D., Lee, Y., Ceccarelli, D., "PCEP Extensions for 1338 Distribution of Link-State and TE Information", draft- 1339 dhodylee-pce-pcep-ls, work in progress. 1341 [TE-Tunnel] Saad, T. et al., "A YANG Data Model for Traffic 1342 Engineering Tunnels and Interfaces", draft-ietf-teas-yang- 1343 te, work in progress. 1345 [sPCE-ID] Dugeon, O. et al., "PCEP Extension for Stateful Inter- 1346 Domain Tunnels", draft-ietf-pce-stateful-interdomain, work 1347 in progress. 1349 [G.808.1] ITU-T, "Generic protection switching - Linear trail and 1350 subnetwork protection", G.808.1, May 2014. 1352 12. 
Authors' Addresses 1354 Haomian Zheng 1355 Huawei Technologies 1356 H1, Huawei Xiliu Beipo Village, Songshan Lake 1357 Dongguan 1358 Guangdong, 523808 China 1359 Email: zhenghaomian@huawei.com 1361 Xianlong Luo 1362 Huawei Technologies 1363 G1, Huawei Xiliu Beipo Village, Songshan Lake 1364 Dongguan 1365 Guangdong, 523808 China 1366 Email: luoxianlong@huawei.com 1368 Yunbin Xu 1369 CAICT 1370 Email: xuyunbin@caict.ac.cn 1372 Yang Zhao 1373 China Mobile 1374 Email: zhaoyangyjy@chinamobile.com 1376 Sergio Belotti 1377 Nokia 1378 Email: sergio.belotti@nokia.com 1379 Dieter Beller 1380 Nokia 1381 Email: Dieter.Beller@nokia.com 1383 Yi Lin 1384 Huawei Technologies 1385 H1, Huawei Xiliu Beipo Village, Songshan Lake 1386 Dongguan 1387 Guangdong, 523808 China 1388 Email: yi.lin@huawei.com