TEAS Working Group                                      Italo Busi (Ed.)
Internet Draft                                                     Huawei
Intended status: Standards Track                     Sergio Belotti (Ed.)
Expires: August 2021                                                Nokia
                                                             Victor Lopez
                                                               Telefonica
                                                            Anurag Sharma
                                                                   Google
                                                                  Yan Shi
                                                             China Unicom

                                                         February 8, 2021

            YANG Data Model for requesting Path Computation
               draft-ietf-teas-yang-path-computation-12

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

This Internet-Draft will expire on August 8, 2021.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Abstract

There are scenarios, typically in a hierarchical Software-Defined Networking (SDN) context, where the topology information provided by a Traffic Engineering (TE) network provider may not be sufficient for its client to perform end-to-end path computation. In these cases, the client would need to request the provider to calculate some (partial) feasible paths.

This document defines a YANG data model for a Remote Procedure Call (RPC) to request path computation. This model complements the solution, defined in RFC XXXX, to configure a TE tunnel path in "compute-only" mode.

[RFC EDITOR NOTE: Please replace RFC XXXX with the RFC number of draft-ietf-teas-yang-te once it has been published.]

Moreover, this document describes some use cases where a path computation request, via YANG-based protocols (e.g., NETCONF or RESTCONF), can be needed.

Table of Contents

1. Introduction...................................................3
   1.1. Terminology...............................................5
   1.2. Tree Diagram..............................................5
   1.3. Prefixes in Data Node Names...............................6
2. Use Cases......................................................6
   2.1. Packet/Optical Integration................................6
   2.2. Multi-domain TE networks.................................11
   2.3. Data Center Interconnections.............................13
   2.4. Backward Recursive Path Computation scenario.............15
   2.5. Hierarchical PCE scenario................................16
3. Motivations...................................................18
   3.1. Motivation for a YANG Model..............................18
      3.1.1. Benefits of common data models......................18
      3.1.2. Benefits of a single interface......................19
      3.1.3. Extensibility.......................................20
   3.2. Interactions with TE topology............................20
      3.2.1. TE topology aggregation.............................21
      3.2.2. TE topology abstraction.............................24
      3.2.3. Complementary use of TE topology and path computation..26
   3.3. Path Computation RPC.....................................28
      3.3.1. Temporary reporting of the computed path state......30
4. Path computation and optimization for multiple paths..........32
5. YANG data model for requesting Path Computation...............33
   5.1. Synchronization of multiple path computation requests....34
   5.2. Returned metric values...................................36
   5.3. Multiple Paths Requests for the same TE tunnel...........38
   5.4. Multi-Layer Path Computation.............................42
6. YANG data model for TE path computation.......................43
   6.1. Tree diagram.............................................43
   6.2. YANG module..............................................57
7. Security Considerations.......................................80
8. IANA Considerations...........................................81
9. References....................................................81
   9.1. Normative References.....................................81
   9.2. Informative References...................................83
Appendix A. Examples of dimensioning the "detailed connectivity matrix"..85
Acknowledgments..................................................90
Contributors.....................................................90
Authors' Addresses...............................................91

1. Introduction

There are scenarios, typically in a hierarchical Software-Defined Networking (SDN) context, where the topology information provided by a Traffic Engineering (TE) network provider may not be sufficient for its client to perform end-to-end path computation. In these cases, the client would need to request the provider to calculate some (partial) feasible paths, complementing its topology knowledge, to make its end-to-end path computation feasible.

This type of scenario can be applied to different interfaces in different reference architectures:

o The Application-Based Network Operations (ABNO) control interface [RFC7491], in which an Application Service Coordinator can request the ABNO Controller to take charge of the path calculation (see Figure 1 in [RFC7491]).

o Abstraction and Control of TE Networks (ACTN) [RFC8453], where a controller hierarchy is defined: the need for path computation arises on the interface between the Customer Network Controller (CNC) and the Multi-Domain Service Coordinator (MDSC), called the CNC-MDSC Interface (CMI), and on the interface between the MDSC and the Provisioning Network Controller (PNC), called the MDSC-PNC Interface (MPI). [RFC8454] describes an information model for the path computation request.

Multiple protocol solutions can be used for communication between different controller hierarchical levels. This document assumes that the controllers communicate using YANG-based protocols (e.g., NETCONF [RFC6241] or RESTCONF [RFC8040]).

Path Computation Elements (PCEs), controllers and orchestrators perform their operations based on Traffic Engineering Databases (TEDs). Such TEDs can be described, in a technology-agnostic way, with the YANG data model for TE Topologies [RFC8795]. Furthermore, the technology-specific details of the TED are modeled in the augmented TE topology models, e.g., [OTN-TOPO] for Optical Transport Network (OTN) Optical Data Unit (ODU) technologies.

The availability of such topology models allows the TED to be provided using YANG-based protocols (e.g., NETCONF or RESTCONF). Furthermore, it enables a PCE/controller to perform the necessary abstractions or modifications and to offer this customized topology to another PCE/controller or to a higher-level orchestrator.

The tunnels that can be provided over the networks described with the topology models can also be set up, deleted and modified via YANG-based protocols (e.g., NETCONF or RESTCONF) using the TE tunnel YANG data model [TE-TUNNEL].

This document defines a YANG data model [RFC7950] for an RPC to request path computation, which complements the solution defined in [TE-TUNNEL] to configure a TE tunnel path in "compute-only" mode.

The YANG data model definition does not make any assumption about whether the client or the server implements a "PCE" functionality, as defined in [RFC4655], and the Path Computation Element Communication Protocol (PCEP), as defined in [RFC5440].
180 Moreover, this document describes some use cases where a path 181 computation request, via YANG-based protocols (e.g., NETCONF or 182 RESTCONF), can be needed. 184 The YANG data model defined in this document conforms to the Network 185 Management Datastore Architecture [RFC8342]. 187 1.1. Terminology 189 TED: The traffic engineering database is a collection of all TE 190 information about all TE nodes and TE links in a given network. 192 PCE: A Path Computation Element (PCE) is an entity that is capable 193 of computing a network path or route based on a network graph, and 194 of applying computational constraints during the computation. The 195 PCE entity is an application that can be located within a network 196 node or component, on an out-of-network server, etc. For example, a 197 PCE would be able to compute the path of a TE Label Switched Path 198 (LSP) by operating on the TED and considering bandwidth and other 199 constraints applicable to the TE LSP service request. [RFC4655]. 201 Domain: TE information is the data relating to nodes and TE links 202 that is used in the process of selecting a TE path. TE information 203 is usually only available within a network. We call such a zone of 204 visibility of TE information a domain. An example of a domain may 205 be an IGP area or an Autonomous System. [RFC7926] 207 The terminology for describing YANG data models is found in 208 [RFC7950]. 210 1.2. Tree Diagram 212 Tree diagrams used in this document follow the notation defined in 213 [RFC8340]. 215 1.3. Prefixes in Data Node Names 217 In this document, names of data nodes and other data model objects 218 are prefixed using the standard prefix associated with the 219 corresponding YANG imported modules, as shown in Table 1. 221 +---------------+--------------------------+-----------------+ 222 | Prefix | YANG module | Reference | 223 +---------------+--------------------------+-----------------+ 224 | inet | ietf-inet-types | [RFC6991] | 225 | te-types | ietf-te-types | [RFC8776] | 226 | te | ietf-te | [TE-TUNNEL] | 227 | te-pc | ietf-te-path-computation | this document | 228 +---------------+--------------------------+-----------------+ 230 Table 1: Prefixes and corresponding YANG modules 232 2. Use Cases 234 This section presents some use cases, where a client needs to 235 request underlying SDN controllers for path computation. 237 The use of the YANG data model defined in this document is not 238 restricted to these use cases but can be used in any other use case 239 when deemed useful. 241 The presented uses cases have been grouped, depending on the 242 different underlying topologies: a) Packet/Optical Integration; b) 243 multi-domain Traffic Engineered (TE) Networks; and c) Data Center 244 Interconnections. Use cases d) and e) respectively present how to 245 apply this YANG data model for standard multi-domain PCE i.e. 246 Backward Recursive Path Computation [RFC5441] and Hierarchical PCE 247 [RFC6805]. 249 2.1. Packet/Optical Integration 251 In this use case, an optical domain is used to provide connectivity 252 to some nodes of a packet domain (see Figure 1). 254 +----------------+ 255 | | 256 | Packet/Optical | 257 | Coordinator | 258 | | 259 +---+------+-----+ 260 | | 261 +------------+ | 262 | +-----------+ 263 +------V-----+ | 264 | | +------V-----+ 265 | Packet | | | 266 | Domain | | Optical | 267 | Controller | | Domain | 268 | | | Controller | 269 +------+-----+ +-------+----+ 270 | | 271 .........V......................... 
| 272 : packet domain : | 273 +----+ +----+ | 274 | R1 |= = = = = = = = = = = = = = = =| R2 | | 275 +-+--+ +--+-+ | 276 | : : | | 277 | :................................ : | | 278 | | | 279 | +-----+ | | 280 | ...........| Opt |........... | | 281 | : | C | : | | 282 | : /+--+--+\ : | | 283 | : / | \ : | | 284 | : / | \ : | | 285 | +-----+ / +--+--+ \ +-----+ | | 286 | | Opt |/ | Opt | \| Opt | | | 287 +---| A | | D | | B |---+ | 288 +-----+\ +--+--+ /+-----+ | 289 : \ | / : | 290 : \ | / : | 291 : \ +--+--+ / optical<---------+ 292 : \| Opt |/ domain : 293 :..........| E |..........: 294 +-----+ 296 Figure 1 - Packet/Optical Integration use case 298 Figure 1 as well as Figure 2 below only show a partial view of the 299 packet network connectivity, before additional packet connectivity 300 is provided by the optical network. 302 It is assumed that the Optical Domain Controller provides to the 303 Packet/Optical Coordinator an abstracted view of the optical 304 network. A possible abstraction could be to represent the whole 305 optical network as one "virtual node" with "virtual ports" connected 306 to the access links, as shown in Figure 2. 308 It is also assumed that Packet Domain Controller can provide the 309 Packet/Optical Coordinator the information it needs to set up 310 connectivity between packet nodes through the optical network (e.g., 311 the access links). 313 The path computation request helps the Packet/Optical Coordinator to 314 know the real connections that can be provided by the optical 315 network. 317 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,. 318 , Packet/Optical Coordinator view , 319 , +----+ , . 320 , | | , 321 , | R2 | , . 322 , +----+ +------------ + /+----+ , 323 , | | | |/-----/ / / , . 324 , | R1 |--O VP1 VP4 O / / , 325 , | |\ | | /----/ / , . 326 , +----+ \| |/ / , 327 , / O VP2 VP5 O / , . 328 , / | | +----+ , 329 , / | | | | , . 330 , / O VP3 VP6 O--| R4 | , 331 , +----+ /-----/|_____________| +----+ , . 332 , | |/ +------------ + , 333 , | R3 | , . 334 , +----+ ,,,,,,,,,,,,,,,,, 335 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ,. 336 . Packet Domain Controller view +----+ , 337 only packet nodes and packet links | | , . 338 . with access links to the optical network | R2 | , 339 , +----+ /+----+ , . 340 . , | | /-----/ / / , 341 , | R1 |--- / / , . 342 . , +----+\ /----/ / , 343 , / \ / / , . 344 . , / / , 345 , / +----+ , . 346 . , / | | , 347 , / ---| R4 | , . 348 . , +----+ /-----/ +----+ , 349 , | |/ , . 350 . , | R3 | , 351 , +----+ ,,,,,,,,,,,,,,,,,. 352 .,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , 353 Optical Domain Controller view , . 354 . only optical nodes, +--+ , 355 optical links and /|OF| , . 356 . access links from the +--++--+ / , 357 packet network |OA| \ /-----/ / , . 358 . , ---+--+--\ +--+/ / , 359 , \ | \ \-|OE|-------/ , . 360 . , \ | \ /-+--+ , 361 , \+--+ X | , . 363 . , |OB|-/ \ | , 364 , +--+-\ \+--+ , . 365 . , / \ \--|OD|--- , 366 , /-----/ +--+ +--+ , . 367 . , / |OC|/ , 368 , +--+ , . 369 ., ,,,,,,,,,,,,,,,,,, 370 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , 371 . Actual Physical View +----+ , 372 , +--+ | | , 373 . , /|OF| | R2 | , 374 , +----+ +--++--+ /+----+ , 375 . , | | |OA| \ /-----/ / / , 376 , | R1 |---+--+--\ +--+/ / / , 377 . , +----+\ | \ \-|OE|-------/ / , 378 , / \ | \ /-+--+ / , 379 . , / \+--+ X | / , 380 , / |OB|-/ \ | +----+ , 381 . , / +--+-\ \+--+ | | , 382 , / / \ \--|OD|---| R4 | , 383 . , +----+ /-----/ +--+ +--+ +----+ , 384 , | |/ |OC|/ , 385 . 
, | R3 | +--+ , 386 , +----+ , 387 .,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 389 Figure 2 - Packet and Optical Topology Abstractions 391 In this use case, the Packet/Optical Coordinator needs to set up an 392 optimal underlying path for an IP link between R1 and R2. 394 As depicted in Figure 2, the Packet/Optical Coordinator has only an 395 "abstracted view" of the physical network, and it does not know the 396 feasibility or the cost of the possible optical paths (e.g., VP1-VP4 397 and VP2-VP5), which depend on the current status of the physical 398 resources within the optical network and on vendor-specific optical 399 attributes. 401 The Packet/Optical Coordinator can request the underlying Optical 402 Domain Controller to compute a set of potential optimal paths, 403 taking into account optical constraints. Then, based on its own 404 constraints, policy and knowledge (e.g. cost of the access links), 405 it can choose which one of these potential paths to use to set up 406 the optimal end-to-end path crossing optical network. 408 ............................ 409 : : 410 O VP1 VP4 O 411 cost=10 /:\ /:\ cost=10 412 / : \----------------------/ : \ 413 +----+ / : cost=50 : \ +----+ 414 | |/ : : \| | 415 | R1 | : : | R2 | 416 | |\ : : /| | 417 +----+ \ : /--------------------\ : / +----+ 418 \ : / cost=55 \ : / 419 cost=5 \:/ \:/ cost=5 420 O VP2 VP5 O 421 : : 422 :..........................: 424 Figure 3 - Packet/Optical Integration path computation example 426 For example, in Figure 3, the Packet/Optical Coordinator can request 427 the Optical Domain Controller to compute the paths between VP1-VP4 428 and VP2-VP5 and then decide to set up the optimal end-to-end path 429 using the VP2-VP5 optical path even if this is not the optimal path 430 from the optical domain perspective. 432 Considering the dynamicity of the connectivity constraints of an 433 optical domain, it is possible that a path computed by the Optical 434 Domain Controller when requested by the Packet/Optical Coordinator 435 is no longer valid/available when the Packet/Optical Coordinator 436 requests it to be set up. This is further discussed in section 3.3. 438 2.2. Multi-domain TE networks 440 In this use case there are two TE domains which are interconnected 441 together by multiple inter-domains links. 443 A possible example could be a multi-domain optical network. 445 +--------------+ 446 | Multi-Domain | 447 | Controller | 448 +---+------+---+ 449 | | 450 +------------+ | 451 | +-----------+ 452 +------V-----+ | 453 | | | 454 | TE Domain | +------V-----+ 455 | Controller | | | 456 | 1 | | TE Domain | 457 +------+-----+ | Controller | 458 | | 2 | 459 | +------+-----+ 460 .........V.......... | 461 : : | 462 +-----+ : | 463 | | : .........V.......... 
464 | X | : : : 465 | | +-----+ +-----+ : 466 +-----+ | | | | : 467 : | C |------| E | : 468 +-----+ +-----+ /| | | |\ +-----+ +-----+ 469 | | | |/ +-----+ +-----+ \| | | | 470 | A |----| B | : : | G |----| H | 471 | | | |\ : : /| | | | 472 +-----+ +-----+ \+-----+ +-----+/ +-----+ +-----+ 473 : | | | | : 474 : | D |------| F | : 475 : | | | | +-----+ 476 : +-----+ +-----+ | | 477 : : : | Y | 478 : : : | | 479 : TE domain 1 : : TE domain 2 +-----+ 480 :..................: :.................: 482 Figure 4 - Multi-domain multi-link interconnection 484 In order to set up an end-to-end multi-domain TE path (e.g., between 485 nodes A and H), the Multi-Domain Controller needs to know the 486 feasibility or the cost of the possible TE paths within the two TE 487 domains, which depend on the current status of the physical 488 resources within each TE domain. This is more challenging in case of 489 optical networks because the optimal paths depend also on vendor- 490 specific optical attributes (which may be different in the two 491 domains if they are provided by different vendors). 493 In order to set up a multi-domain TE path (e.g., between nodes A and 494 H), the Multi-Domain Controller can request the TE Domain 495 Controllers to compute a set of intra-domain optimal paths and take 496 decisions based on the information received. For example: 498 o The Multi-Domain Controller asks TE Domain Controllers to provide 499 set of paths between A-C, A-D, E-H and F-H 501 o TE Domain Controllers return a set of feasible paths with the 502 associated costs: the path A-C is not part of this set (in 503 optical networks, it is typical to have some paths not being 504 feasible due to optical constraints that are known only by the 505 Optical Domain Controller) 507 o The Multi-Domain Controller will select the path A-D-F-H since it 508 is the only feasible multi-domain path and then request the TE 509 Domain Controllers to set up the A-D and F-H intra-domain paths 511 o If there are multiple feasible paths, the Multi-Domain Controller 512 can select the optimal path knowing the cost of the intra-domain 513 paths (provided by the TE domain controllers) and the cost of the 514 inter-domain links (known by the Multi-Domain Controller) 516 This approach may have some scalability issues when the number of TE 517 domains is quite big (e.g. 20). 519 In this case, it would be worthwhile using the abstract TE topology 520 information provided by the TE Domain Controllers to limit the 521 number of potential optimal end-to-end paths and then request path 522 computation from fewer TE Domain Controllers in order to decide what 523 the optimal path within this limited set is. 525 For more details, see section 3.2.3. 527 2.3. Data Center Interconnections 529 In these use case, there is a TE domain which is used to provide 530 connectivity between Data Centers which are connected with the TE 531 domain using access links. 533 +--------------+ 534 | Cloud Network| 535 | Orchestrator | 536 +--------------+ 537 | | | | 538 +-------------+ | | +------------------------+ 539 | | +------------------+ | 540 | +--------V---+ | | 541 | | | | | 542 | | TE Network | | | 543 +------V-----+ | Controller | +------V-----+ | 544 | DC | +------------+ | DC | | 545 | Controller | | | Controller | | 546 +------------+ | +-----+ +------------+ | 547 | ....V...| |........ | | 548 | : | P | : | | 549 .....V..... : /+-----+\ : .....V..... 
| 550 : : +-----+ / | \ +-----+ : : | 551 : DC1 || : | |/ | \| | : DC2 || : | 552 : ||||----| PE1 | | | PE2 |---- |||| : | 553 : _|||||| : | |\ | /| | : _|||||| : | 554 : : +-----+ \ +-----+ / +-----+ : : | 555 :.........: : \| |/ : :.........: | 556 :.......| PE3 |.......: | 557 | | | 558 +-----+ +---------V--+ 559 .....|..... | DC | 560 : : | Controller | 561 : DC3 || : +------------+ 562 : |||| : | 563 : _|||||| <------------------+ 564 : : 565 :.........: 567 Figure 5 - Data Center Interconnection use case 569 In this use case, there is the need to transfer data from Data 570 Center 1 (DC1) to either DC2 or DC3 (e.g. workload migration). 572 The optimal decision depends both on the cost of the TE path (DC1- 573 DC2 or DC1-DC3) and of the Data Center resources within DC2 or DC3. 575 The Cloud Network Orchestrator needs to make a decision for optimal 576 connection based on TE network constraints and Data Center 577 resources. It may not be able to make this decision because it has 578 only an abstract view of the TE network (as in use case in 2.1). 580 The Cloud Network Orchestrator can request to the TE Network 581 Controller to compute the cost of the possible TE paths (e.g., DC1- 582 DC2 and DC1-DC3) and to the DC Controller to provide the information 583 it needs about the required Data Center resources within DC2 and DC3 584 and then it can take the decision about the optimal solution based 585 on this information and its policy. 587 2.4. Backward Recursive Path Computation scenario 589 [RFC5441] has defined the Virtual Source Path Tree (VSPT) TLV within 590 PCE Reply Object in order to compute inter-domain paths following a 591 "Backward Recursive Path Computation" (BRPC) method. The main 592 principle is to forward the PCE request message up to the 593 destination domain. Then, each PCE involved in the computation will 594 compute its part of the path and send it back to the requester 595 through PCE Response message. The resulting computation is spread 596 from destination PCE to source PCE. Each PCE is in charge of merging 597 the path it received with the one it calculated. At the end, the 598 source PCE merges its local part of the path with the received one 599 to achieve the end-to-end path. 601 Figure 6 below show a typical BRPC scenario where 3 PCEs cooperate 602 to compute inter-domain paths. 604 +----------------+ +----------------+ 605 | Domain (B) | | Domain (C) | 606 | | | | 607 | /-------|---PCEP---|--------\ | 608 | / | | \ | 609 | (PCE) | | (PCE) | 610 | / <----------> | 611 | / | Inter | | 612 +---|----^-------+ Domain +----------------+ 613 | | Link 614 PCEP | 615 | | Inter-domain Link 616 | | 617 +---|----v-------+ 618 | | | 619 | | Domain (A) | 620 | \ | 621 | (PCE) | 622 | | 623 | | 624 +----------------+ 625 Figure 6 - BRPC Scenario 627 In this use case, a client can use the YANG data model defined in 628 this document to request path computation from the PCE that controls 629 the source of the tunnel. For example, a client can request to the 630 PCE of domain A to compute a path from a source S, within domain A, 631 to a destination D, within domain C. Then PCE of domain A will use 632 PCEP protocol, as per [RFC5441], to compute the path from S to D and 633 in turn gives the final answer to the requester. 635 2.5. Hierarchical PCE scenario 637 [RFC6805] has defined an architecture and extensions to the PCE 638 standard to compute inter-domain path following a hierarchical 639 method. Two new roles have been defined: parent PCE and child PCE. 
The parent PCE is in charge of coordinating the end-to-end path computation. For that purpose, it sends to each child PCE involved in the multi-domain path computation a PCE Request message to obtain the local part of the path. Once it has received all the answers through PCE Response messages, the parent PCE merges the different local parts of the path to build the end-to-end path.

Figure 7 below shows a typical hierarchical scenario where a parent PCE requests an end-to-end path from the different child PCEs. Note that a PCE could independently take the role of child or parent PCE, depending on which PCE requests the path.

    -----------------------------------------------------------------
   | Domain 5                                                        |
   |                             -----                               |
   |                            |PCE 5|                              |
   |                             -----                               |
   |                                                                 |
   |    ----------------   ----------------   ----------------      |
   |   | Domain 1      |  | Domain 2      |  | Domain 3      |      |
   |   |               |  |               |  |               |      |
   |   |     -----     |  |     -----     |  |     -----     |      |
   |   |    |PCE 1|    |  |    |PCE 2|    |  |    |PCE 3|    |      |
   |   |     -----     |  |     -----     |  |     -----     |      |
   |   |               |  |               |  |               |      |
   |   |           ----|  |----       ----|  |----           |      |
   |   |          |BN11+---+BN21|     |BN23+---+BN31|         |      |
   |   |  -        ----|  |----       ----|  |----        -  |      |
   |   | |S|           |  |               |  |           |D| |      |
   |   |  -        ----|  |----       ----|  |----        -  |      |
   |   |          |BN12+---+BN22|     |BN24+---+BN32|         |      |
   |   |           ----|  |----       ----|  |----           |      |
   |   |               |  |               |  |               |      |
   |   |        ----   |  |               |  |   ----        |      |
   |   |       |BN13|  |  |               |  |  |BN33|       |      |
   |    -----------+---    ----------------    ---+-----------      |
   |                \                             /                  |
   |                 \     ----------------     /                   |
   |                  \   |                |   /                    |
   |                   \  |----        ----|  /                     |
   |                    --+BN41|      |BN42+--                      |
   |                      |----        ----|                        |
   |                      |                |                        |
   |                      |      -----     |                        |
   |                      |     |PCE 4|    |                        |
   |                      |      -----     |                        |
   |                      |                |                        |
   |                      | Domain 4       |                        |
   |                       ----------------                         |
   |                                                                 |
    -----------------------------------------------------------------

          Figure 7 - Hierarchical domain topology from [RFC6805]

In this use case, a client can use the YANG data model defined in this document to request from the parent PCE a path from a source S to a destination D. The parent PCE will in turn contact the child PCEs through the PCEP protocol to compute the end-to-end path and then return the computed path to the client, using the YANG data model defined in this document. For example, the YANG data model can be used to request PCE5, acting as parent PCE, to compute a path from source S, within domain 1, to destination D, within domain 3. PCE5 will contact the child PCEs of domains 1, 2 and 3 to obtain the local parts of the end-to-end path through the PCEP protocol. Once it has received the PCE Response messages, it merges the answers to compute the end-to-end path and sends it back to the client.

3. Motivations

This section provides the motivation for the YANG data model defined in this document.

Section 3.1 describes the motivation for a YANG data model to request path computation.

Section 3.2 describes the motivation for a YANG data model which complements the TE topology YANG data model defined in [RFC8795].

Section 3.3 describes the motivation for a YANG RPC which complements the TE tunnel YANG data model defined in [TE-TUNNEL].

3.1. Motivation for a YANG Model

3.1.1. Benefits of common data models

The YANG data model for requesting path computation is closely aligned with the YANG data models that provide (abstract) TE topology information, i.e., [RFC8795], as well as with those that are used to configure and manage TE tunnels, i.e., [TE-TUNNEL].
729 There are many benefits in aligning the data model used for path 730 computation requests with the YANG data models used for TE topology 731 information and for TE tunnels configuration and management: 733 o There is no need for an error-prone mapping or correlation of 734 information. 736 o It is possible to use the same endpoint identifiers in path 737 computation requests and in the topology modeling. 739 o The attributes used for path computation constraints are the same 740 as those used when setting up a TE tunnel. 742 3.1.2. Benefits of a single interface 744 The system integration effort is typically lower if a single, 745 consistent interface is used by controllers, i.e., one data modeling 746 language (i.e., YANG) and a common protocol (e.g., NETCONF or 747 RESTCONF). 749 Practical benefits of using a single, consistent interface include: 751 1. Simple authentication and authorization: The interface between 752 different components has to be secured. If different protocols 753 have different security mechanisms, ensuring a common access 754 control model may result in overhead. For instance, there may be 755 a need to deal with different security mechanisms, e.g., 756 different credentials or keys. This can result in increased 757 integration effort. 759 2. Consistency: Keeping data consistent over multiple different 760 interfaces or protocols is not trivial. For instance, the 761 sequence of actions can matter in certain use cases, or 762 transaction semantics could be desired. While ensuring 763 consistency within one protocol can already be challenging, it is 764 typically cumbersome to achieve that across different protocols. 766 3. Testing: System integration requires comprehensive testing, 767 including corner cases. The more different technologies are 768 involved, the more difficult it is to run comprehensive test 769 cases and ensure proper integration. 771 4. Middle-box friendliness: Provider and consumer of path 772 computation requests may be located in different networks, and 773 middle-boxes such as firewalls, NATs, or load balancers may be 774 deployed. In such environments it is simpler to deploy a single 775 protocol. Also, it may be easier to debug connectivity problems. 777 5. Tooling reuse: Implementers may want to implement path 778 computation requests with tools and libraries that already exist 779 in controllers and/or orchestrators, e.g., leveraging the rapidly 780 growing eco-system for YANG tooling. 782 3.1.3. Extensibility 784 Path computation is only a subset of the typical functionality of a 785 controller. In many use cases, issuing path computation requests 786 comes along with the need to access other functionality on the same 787 system. In addition to obtaining TE topology, for instance also 788 configuration of services (set-up/modification/deletion) may be 789 required, as well as: 791 1. Receiving notifications for topology changes as well as 792 integration with fault management 794 2. Performance management such as retrieving monitoring and 795 telemetry data 797 3. Service assurance, e.g., by triggering OAM functionality 799 4. Other fulfilment and provisioning actions beyond tunnels and 800 services, such as changing QoS configurations 802 YANG is a very extensible and flexible data modeling language that 803 can be used for all these use cases. 805 3.2. 
Interactions with TE topology

The use cases described in section 2 have been described assuming that the topology view exported by each underlying controller to its client is aggregated using the "virtual node model", defined in [RFC7926].

TE topology information, e.g., as provided by [RFC8795], could in theory be used by an underlying controller to provide TE information to its client, thus allowing a PCE available within its client to perform multi-domain path computation on its own, without requesting path computations from the underlying controllers.

In case the client does not implement a PCE function, as discussed in section 1, it could not perform path computation based on TE topology information and would instead need to request path computation from the underlying controllers to get the information it needs to find the optimal end-to-end path.

In case the client implements a PCE function, as discussed in section 1, the TE topology information needs to be complete and accurate, which would lead to scalability issues.

Using TE topology to provide a "virtual link model" aggregation, as described in [RFC7926], may not be sufficient, unless the aggregation provides complete and accurate information, which would still cause scalability issues, as described in section 3.2.1 below.

Using TE topology abstraction, as described in [RFC7926], may lead to computing an unfeasible path, as described in section 3.2.2 below.

Therefore, when computing an optimal multi-domain path, there is a scalability trade-off between providing complete and accurate TE information and the number of path computation requests to the underlying controllers.

The complementary use of TE topology information to reduce the number of path computation requests to the underlying controllers is described in section 3.2.3 below.

3.2.1. TE topology aggregation

Using the TE topology model, as defined in [RFC8795], the underlying controller can export the whole TE domain as a single TE node with a "detailed connectivity matrix" (which provides specific TE attributes, such as delay, Shared Risk Link Groups (SRLGs) and other TE metrics, between each pair of ingress and egress links).

The information provided by the "detailed connectivity matrix" would be equivalent to the information that should be provided by the "virtual link model" as defined in [RFC7926].

For example, in the Packet/Optical Integration use case, described in section 2.1, the Optical Domain Controller can make the information shown in Figure 3 available to the Packet/Optical Coordinator as part of the TE topology information and the Packet/Optical Coordinator could use this information to calculate on its own the optimal path between R1 and R2, without requesting any additional information from the Optical Domain Controller.

However, when designing the amount of information to provide within the "detailed connectivity matrix", there is a tradeoff to be considered between accuracy (i.e., providing "all" the information that might be needed by the PCE available within the client) and scalability.

Figure 8 below shows another example, similar to Figure 3, where there are two possible optical paths between VP1 and VP4 with different properties (e.g., available bandwidth and cost).
               ............................
               :  /--------------------\  :
               : /       cost=65        \ :
               :/    available-bw=10G    \:
               O VP1                  VP4 O
      cost=10 /:\                        /:\ cost=10
             / : \----------------------/ : \
     +----+ /  :         cost=50          :  \ +----+
     |    |/   :     available-bw=2G      :   \|    |
     | R1 |    :                          :    | R2 |
     |    |\   :                          :   /|    |
     +----+ \  :  /--------------------\  :  / +----+
             \ : /       cost=65        \ : /
      cost=5  \:/    available-bw=3G     \:/  cost=5
               O VP2                  VP5 O
               :                          :
               :..........................:

   Figure 8 - Packet/Optical Integration path computation example with
                            multiple choices

If the information in the "detailed connectivity matrix" is not complete/accurate, we can have the following drawbacks:

o If only the VP1-VP4 path with available bandwidth of 2 Gb/s and cost 50 is reported, the client's PCE will fail to compute a 5 Gb/s path between routers R1 and R2, although this would be feasible;

o If only the VP1-VP4 path with available bandwidth of 10 Gb/s and cost 65 is reported, the client's PCE will compute, as optimal, the 1 Gb/s path between R1 and R2 going through the VP2-VP5 path within the optical domain, while the optimal path would actually be the one going through the VP1-VP4 sub-path (with cost 50) within the optical domain.

Reporting all the information, as in Figure 8, using the "detailed connectivity matrix", is quite challenging from a scalability perspective. The amount of this information is not based only on the number of end points (which would scale as N-squared), but also on many other parameters, including the client rate, user constraints/policies for the service (e.g., max latency < N ms, max cost), exclusion policies to route around busy links, min OSNR margin, max pre-FEC BER, etc. All these constraints could be different based on the connectivity requirements.

Examples of how the "detailed connectivity matrix" can be dimensioned are described in Appendix A.

It is also worth noting that the "connectivity matrix" was originally defined for Wavelength Switched Optical Networks (WSON) [RFC7446] to report the connectivity constraints of a physical node within the WDM network: the information it contains is quite "static" and therefore, once retrieved and stored in the TE database, it can always be considered valid and up-to-date for path computation requests.

The "connectivity matrix" is sometimes confused with the "optical reach table", which contains multiple (e.g., k-shortest) regeneration-free reachable paths for every A-Z node combination in the network. Optical reach tables can be calculated offline, utilizing vendor optical design and planning tools, and periodically uploaded to the Controller: these optical path reach tables are fairly static. However, to get the connectivity matrix between any two sites, either a regeneration-free path can be used, if one is available, or multiple regeneration-free paths are concatenated to get from the source to the destination, which can result in a very large number of combinations. Additionally, when the optical path within the optical domain needs to be computed, it can result in different paths based on the input objective, constraints, and network conditions. In summary, even though the "optical reach table" is fairly static, the choice of which regeneration-free paths to use to build the connectivity matrix between any source and destination is very dynamic, and is made using very sophisticated routing algorithms.
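To make the above trade-off concrete, the following small Python sketch (purely illustrative; the dictionaries and the function are invented for this example and are not part of any YANG module) computes the end-to-end costs of Figure 8 and shows how an incomplete "detailed connectivity matrix" leads the client's PCE to select a suboptimal end-to-end path.

   # Illustrative only: costs and bandwidths are taken from Figure 8.
   access_cost = {("R1", "VP1"): 10, ("R1", "VP2"): 5,
                  ("VP4", "R2"): 10, ("VP5", "R2"): 5}

   # Entries of the "detailed connectivity matrix" reported by the Optical
   # Domain Controller: (ingress, egress) -> list of (cost, available bw in Gb/s)
   full_matrix = {("VP1", "VP4"): [(50, 2), (65, 10)],
                  ("VP2", "VP5"): [(65, 3)]}

   def best_end_to_end(matrix, bw_needed):
       """Return (total cost, (ingress, egress)) of the cheapest feasible path."""
       candidates = []
       for (vin, vout), options in matrix.items():
           for cost, bw in options:
               if bw >= bw_needed:
                   total = (access_cost[("R1", vin)] + cost +
                            access_cost[(vout, "R2")])
                   candidates.append((total, (vin, vout)))
       return min(candidates) if candidates else None

   # With the complete matrix, a 1 Gb/s path is best routed via VP1-VP4
   # (total cost 70).
   print(best_end_to_end(full_matrix, 1))
   # If only the 10 Gb/s entry of VP1-VP4 is reported, the client's PCE
   # selects VP2-VP5 instead (total cost 75), i.e., a suboptimal path.
   partial_matrix = {("VP1", "VP4"): [(65, 10)], ("VP2", "VP5"): [(65, 3)]}
   print(best_end_to_end(partial_matrix, 1))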
951 Using the "basic connectivity matrix" with an abstract node to 952 abstract the information regarding the connectivity constraints of 953 an Optical domain, would make this information more "dynamic" since 954 the connectivity constraints of an optical domain can change over 955 time because some optical paths that are feasible at a given time 956 may become unfeasible at a later time when e.g., another optical 957 path is established. 959 The information in the "detailed connectivity matrix" is even more 960 dynamic since the establishment of another optical path may change 961 some of the parameters (e.g., delay or available bandwidth) in the 962 "detailed connectivity matrix" while not changing the feasibility of 963 the path. 965 There is therefore the need to keep the information in the "detailed 966 connectivity matrix" updated which means that there another tradeoff 967 between the accuracy (i.e., providing "all" the information that 968 might be needed by the client's PCE) and having up-to-date 969 information. The more the information is provided and the longer it 970 takes to keep it up-to-date which increases the likelihood that the 971 client's PCE computes paths using not updated information. 973 It seems therefore quite challenging to have a "detailed 974 connectivity matrix" that provides accurate, scalable and updated 975 information to allow the client's PCE to take optimal decisions by 976 its own. 978 Considering the example in Figure 8 with the approach defined in 979 this document, the client, when it needs to set up an end-to-end 980 path, it can request the Optical Domain Controller to compute a set 981 of optimal paths (e.g., for VP1-VP4 and VP2-VP5) and take decisions 982 based on the information received: 984 o When setting up a 5 Gb/s path between routers R1 and R2, the 985 Optical Domain Controller may report only the VP1-VP4 path as the 986 only feasible path: the Packet/Optical Coordinator can 987 successfully set up the end-to-end path passing though this 988 optical path; 990 o When setting up a 1 Gb/s path between routers R1 and R2, the 991 Optical Domain Controller (knowing that the path requires only 1 992 Gb/s) can report both the VP1-VP4 path, with cost 50, and the 993 VP2-VP5 path, with cost 65. The Packet/Optical Coordinator can 994 then compute the optimal path which is passing thought the VP1- 995 VP4 sub-path (with cost 50) within the optical domain. 997 3.2.2. TE topology abstraction 999 Using the TE topology model, as defined in [RFC8795], the underlying 1000 controller can export to its client an abstract TE topology, 1001 composed by a set of TE nodes and TE links, representing the 1002 abstract view of the topology under its control. 1004 For example, in the multi-domain TE network use case, described in 1005 section 2.2, the TE Domain Controller 1 can export a TE topology 1006 encompassing the TE nodes A, B, C and D and the TE links 1007 interconnecting them. In a similar way, the TE Domain Controller 2 1008 can export a TE topology encompassing the TE nodes E, F, G and H and 1009 the TE links interconnecting them. 1011 In this example, for simplicity reasons, each abstract TE node maps 1012 with each physical node, but this is not necessary. 1014 In order to set up a multi-domain TE path (e.g., between nodes A and 1015 H), the Multi-Domain Controller can compute by its own an optimal 1016 end-to-end path based on the abstract TE topology information 1017 provided by the domain controllers. 
For example: 1019 o Multi-Domain Controller can compute, based on its own TED data, 1020 the optimal multi-domain path being A-B-C-E-G-H, and then request 1021 the TE Domain Controllers to set up the A-B-C and E-G-H intra- 1022 domain paths 1024 o But, during path set-up, the TE Domain Controller may find out 1025 that A-B-C intra-domain path is not feasible (as discussed in 1026 section 2.2, in optical networks it is typical to have some paths 1027 not being feasible due to optical constraints that are known only 1028 by the Optical Domain Controller), while only the path A-B-D is 1029 feasible 1031 o So what the Multi-Domain Controller has computed is not good and 1032 it needs to re-start the path computation from scratch 1034 As discussed in section 3.2.1, providing more extensive abstract 1035 information from the TE Domain Controllers to the Multi-Domain 1036 Controller may lead to scalability problems. 1038 In a sense this is similar to the problem of routing and wavelength 1039 assignment within an optical domain. It is possible to do first 1040 routing (step 1) and then wavelength assignment (step 2), but the 1041 chances of ending up with a good path is low. Alternatively, it is 1042 possible to do combined routing and wavelength assignment, which is 1043 known to be a more optimal and effective way for optical path set-up. 1044 Similarly, it is possible to first compute an abstract end-to-end 1045 path within the Multi-Domain Controller (step 1) and then compute an 1046 intra-domain path within each optical domain (step 2), but there are 1047 more chances not to find a path or to get a suboptimal path than 1048 performing multiple per-domain path computations and then stitch 1049 them. 1051 3.2.3. Complementary use of TE topology and path computation 1053 As discussed in section 2.2, there are some scalability issues with 1054 path computation requests in a multi-domain TE network with many TE 1055 domains, in terms of the number of requests to send to the TE Domain 1056 Controllers. It would therefore be worthwhile using the abstract TE 1057 topology information provided by the TE Domain Controllers to limit 1058 the number of requests. 1060 An example can be described considering the multi-domain abstract TE 1061 topology shown in Figure 9. In this example, an end-to-end TE path 1062 between domains A and F needs to be set up. The transit TE domain 1063 should be selected between domains B, C, D and E. 1065 .........B......... 1066 : _ _ _ _ _ _ _ _ : 1067 :/ \: 1068 +---O NOT FEASIBLE O---+ 1069 cost=5| : : | 1070 ......A...... | :.................: | ......F...... 1071 : : | | : : 1072 : O-----+ .........C......... +-----O : 1073 : : : /-------------\ : : : 1074 : : :/ \: : : 1075 : cost<=20 O---------O cost <= 30 O---------O cost<=20 : 1076 : /: cost=5 : : cost=5 :\ : 1077 : /------/ : :.................: : \------\ : 1078 : / : : \ : 1079 :/ cost<=25 : .........D......... : cost<=25 \: 1080 O-----------O-------+ : /-------------\ : +-------O-----------O 1081 :\ : cost=5| :/ \: |cost=5 : /: 1082 : \ : +-O cost <= 30 O-+ : / : 1083 : \------\ : : : : /------/ : 1084 : cost>=30 \: :.................: :/ cost>=30 : 1085 : O-----+ +-----O : 1086 :...........: | .........E......... 
| :...........: 1087 | : /-------------\ : | 1088 cost=5| :/ \: |cost=5 1089 +---O cost >= 30 O---+ 1090 : : 1091 :.................: 1093 Figure 9 - Multi-domain with many domains (Topology information) 1094 The actual cost of each intra-domain path is not known a priori from 1095 the abstract topology information. The Multi-Domain Controller only 1096 knows, from the TE topology provided by the underlying domain 1097 controllers, the feasibility of some intra-domain paths and some 1098 upper-bound and/or lower-bound cost information. With this 1099 information, together with the cost of inter-domain links, the 1100 Multi-Domain Controller can understand by its own that: 1102 o Domain B cannot be selected as the path connecting domains A and 1103 F is not feasible; 1105 o Domain E cannot be selected as a transit domain since it is know 1106 from the abstract topology information provided by domain 1107 controllers that the cost of the multi-domain path A-E-F (which 1108 is 100, in the best case) will be always be higher than the cost 1109 of the multi-domain paths A-D-F (which is 90, in the worst case) 1110 and A-C-F (which is 80, in the worst case). 1112 Therefore, the Multi-Domain Controller can understand by its own 1113 that the optimal multi-domain path could be either A-D-F or A-C-F 1114 but it cannot know which one of the two possible option actually 1115 provides the optimal end-to-end path. 1117 The Multi-Domain Controller can therefore request path computation 1118 only to the TE Domain Controllers A, D, C and F (and not to all the 1119 possible TE Domain Controllers). 1121 .........B......... 1122 : : 1123 +---O O---+ 1124 ......A...... | :.................: | ......F...... 1125 : : | | : : 1126 : O-----+ .........C......... +-----O : 1127 : : : /-------------\ : : : 1128 : : :/ \: : : 1129 : cost=15 O---------O cost = 25 O---------O cost=10 : 1130 : /: cost=5 : : cost=5 :\ : 1131 : /------/ : :.................: : \------\ : 1132 : / : : \ : 1133 :/ cost=10 : .........D......... : cost=15 \: 1134 O-----------O-------+ : /-------------\ : +-------O-----------O 1135 : : cost=5| :/ \: |cost=5 : : 1136 : : +-O cost = 15 O-+ : : 1137 : : : : : : 1138 : : :.................: : : 1139 : O-----+ +-----O : 1140 :...........: | .........E......... | :...........: 1141 | : : | 1142 +---O O---+ 1143 :.................: 1145 Figure 10 - Multi-domain with many domains 1146 (Path Computation information) 1148 Based on these requests, the Multi-Domain Controller can know the 1149 actual cost of each intra-domain paths which belongs to potential 1150 optimal end-to-end paths, as shown in Figure 10, and then compute 1151 the optimal end-to-end path (e.g., A-D-F, having total cost of 50, 1152 instead of A-C-F having a total cost of 70). 1154 3.3. Path Computation RPC 1156 The TE tunnel YANG data model, defined in [TE-TUNNEL], can support 1157 the need to request path computation, as described in section 5.1.2 1158 of [TE-TUNNEL]. 1160 This solution is stateful since the state of each created "compute- 1161 only" TE tunnel path needs to be maintained, in the YANG datastores 1162 (at least in the running datastore and operational datastore), and 1163 updated, when underlying network conditions change. 1165 The RPC mechanism allows requesting path computation using a simple 1166 atomic operation, without creating any state in the YANG datastores, 1167 and it is the natural option/choice, especially with stateless PCE. 
It is very useful to provide both options: using an RPC and setting up TE tunnel paths in "compute-only" mode. It is suggested to use the RPC as much as possible and to rely on "compute-only" TE tunnel paths only when really needed.

Using the RPC solution implies that the underlying controller (e.g., a PNC) computes a path twice during the process of setting up an LSP: at time T1, when its client (e.g., an MDSC) sends a path computation RPC request to it, and later, at time T2, when the same client (MDSC) creates a TE tunnel requesting the set-up of the LSP. The underlying assumption is that, if network conditions have not changed, the same path that has been computed at time T1 is also computed at time T2 by the underlying controller (e.g., PNC) and therefore the path that is set up at time T2 is exactly the same path that has been computed at time T1.

However, since the operation is stateless, there is no guarantee that the returned path would still be available when path set-up is requested: this does not cause major issues when the time between path computation and path set-up is short (especially if compared with the time that would be needed to update the information of a very detailed connectivity matrix).

In most cases, there is even no need to guarantee that the path that has been set up is exactly the same as the path that has been returned by path computation, especially if it has the same or even better metrics. Depending on the abstraction level applied by the server, the client may also not know the actual computed path.

The most important requirement is that the required global objectives (e.g., multi-domain path metrics and constraints) are met. For this reason, a path verification phase is always necessary to verify that the path that has actually been set up meets the global objectives (for example, in a multi-domain network, that the resulting end-to-end path meets the required end-to-end metrics and constraints).

In most cases, even if the path being set up is not exactly the same as the path returned by path computation, its metrics and constraints are "good enough" and the path verification passes successfully. In the few corner cases where the path verification fails, it is possible to repeat the whole process (path computation, path set-up and path verification).

In case it is required to set up at time T2 exactly the same path computed at time T1, the RPC solution should not be used and, instead, a "compute-only" TE tunnel path should be set up, which also allows notifications in case the computed path changes.

In this case, at time T1, the client (MDSC) creates a TE tunnel in compute-only mode in the running datastore and later, at time T2, changes the configuration of that TE tunnel (so that it is no longer in compute-only mode) to trigger the set-up of the LSP over the path which has been computed at time T1 and reported in the operational datastore.

It is worth noting that using the "compute-only" TE tunnel path, although it increases the likelihood that the computed path is available at path set-up, does not guarantee it, because notifications may not be reliable or delivered on time. Path verification is needed also in this case.
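The path verification phase described above can be summarized by the following minimal sketch, which simply compares the metrics of the path that has actually been set up against the required end-to-end objectives. The metric names and values are invented for the example and do not correspond to specific YANG identities.

   # Illustrative only: a minimal "path verification" check.
   def verify_path(actual_metrics, objectives):
       """Return True if every required bound is met by the actual path."""
       return all(actual_metrics.get(name, float("inf")) <= bound
                  for name, bound in objectives.items())

   objectives = {"te-metric": 100, "delay-average": 20}   # required bounds
   actual = {"te-metric": 90, "delay-average": 25}        # retrieved after set-up
   if not verify_path(actual, objectives):
       # corner case: repeat path computation, path set-up and verification
       print("path verification failed: repeat the whole process")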
The solution based on "compute-only" TE tunnel paths also has the following drawbacks:

o Several messages are required for any path computation

o Persistent storage is required in the underlying controller

o Garbage collection is needed for stranded paths

o There is a processing burden to detect changes on the computed paths in order to provide notification updates

3.3.1. Temporary reporting of the computed path state

This section describes an optional extension to the stateless behavior of the path computation RPC, where the underlying controller, after having received a path computation RPC request, maintains some "transient state" associated with the computed path, allowing the client to request the set-up of exactly that path, if still available.

This is similar to the "compute-only" TE tunnel path solution but, to avoid the drawbacks of the stateful approach, it leverages the path computation RPC and the separation between the configuration and the operational datastores, as defined in the NMDA architecture [RFC8342].

The underlying controller, after having computed a path, as requested by a path computation RPC, also creates a TE tunnel instance within the operational datastore to store that computed path. This would be similar to a "compute-only" TE tunnel path, with the only difference that there is no associated TE tunnel instance within the running datastore.

Since the underlying controller stores in the operational datastore the computed path based on the abstract topology it exposes, it also remembers, internally, which actual native path (physical path), within its native (physical) topology, is associated with that compute-only TE tunnel instance.

Afterwards, the client (e.g., MDSC) can request the set-up of that specific path by creating a TE tunnel instance (not in compute-only mode) in the running datastore using the same tunnel-name as the existing TE tunnel in the operational datastore: this will trigger the underlying controller to set up that path, if still available.

There are still cases where the path being set up is not exactly the same as the path that has been computed:

o When the tunnel is configured with path constraints which are not compatible with the computed path;

o When the tunnel set-up is requested after the resources of the computed path are no longer available;

o When the tunnel set-up is requested after the computed path is no longer known (e.g., due to a server reboot) by the underlying controller.

In all these cases, the underlying controller should compute and set up a new path.

Therefore, the "path verification" phase, as described in section 3.3 above, is always needed to check that the path that has been set up is still "good enough".

Since this new approach is not completely stateless, garbage collection is implemented using a timeout that, when it expires, triggers the removal of the computed path from the operational datastore. This operation is fully controlled by the underlying controller without the need for any action to be taken by the client, which is not able to act on the operational datastore. The default value of this timeout is 10 minutes, but a different value may be configured by the client.
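The transient state and its timeout-based garbage collection can be sketched as follows. This is only an illustration of the behavior described above (the class and attribute names are invented for this example); the default timeout of 10 minutes is the one suggested in this section.

   # Illustrative server-side sketch: computed paths are kept as transient
   # state and removed once their timer expires.
   import time

   class TransientPathStore:
       def __init__(self, default_timeout=600):        # 10 minutes by default
           self.default_timeout = default_timeout
           self.paths = {}                              # tunnel name -> (path, expiry)

       def store(self, tunnel_name, computed_path, timeout=None):
           expiry = time.time() + (timeout or self.default_timeout)
           self.paths[tunnel_name] = (computed_path, expiry)

       def collect_garbage(self):
           """Remove computed paths whose timer has expired (stranded paths)."""
           now = time.time()
           for name in [n for n, (_, exp) in self.paths.items() if exp <= now]:
               del self.paths[name]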
1307 In addition, it is possible for the client to tag each path 1308 computation request with a transaction-id, allowing for a faster 1309 removal of all the paths associated with that transaction-id, without 1310 waiting for their timers to expire.

1312 The underlying controller can remove from the operational datastore 1313 all the paths computed with a given transaction-id which have not 1314 been set up, either when it receives a Path Delete RPC request for 1315 that transaction-id or, automatically, right after the set-up of 1316 a path that has been previously computed with that transaction-id.

1318 This possibility is useful when multiple paths are computed but, at 1319 most, only one is set up (e.g., in multi-domain path computation 1320 scenarios). After the selected path has been set up (e.g., 1321 in one domain during multi-domain path set-up), all the other 1322 alternative computed paths can be automatically deleted by the 1323 underlying controller (since they are no longer needed). The client can also 1324 request, using the Path Delete RPC request, that the underlying 1325 controller remove all the computed paths, if none of them is 1326 going to be set up (e.g., in a transit domain not selected by 1327 multi-domain path computation and therefore not automatically 1328 deleted).

1330 This approach is complementary to, and not an alternative to, the timer, 1331 which is always needed to avoid stranded computed paths remaining stored 1332 in the operational datastore when no path is set up and no explicit 1333 Path Delete RPC request is received.

1335 4. Path computation and optimization for multiple paths

1337 There are use cases where it is advantageous to request path 1338 computation for a set of paths, through a network or through a 1339 network domain, using a single request [RFC5440].

1341 In this case, sending a single request for multiple path 1342 computations, instead of sending a separate request for each path 1343 computation, would reduce the protocol overhead and consume 1344 fewer resources (e.g., threads in the client and server).

1346 In the context of a typical multi-domain TE network, there could be 1347 multiple choices for the ingress/egress points of a domain, and the 1348 Multi-Domain Controller needs to request path computation between 1349 all the ingress/egress pairs to select the best pair. For example, 1350 in the scenario of section 2.2, the Multi-Domain Controller needs to 1351 request TE Network Controller 1 to compute the A-C and the A-D 1352 paths and TE Network Controller 2 to compute the E-H and the 1353 F-H paths.

1355 It is also possible that the Multi-Domain Controller receives a 1356 request to set up a group of multiple end-to-end connections. The 1357 Multi-Domain Controller needs to request each TE domain Controller 1358 to compute multiple paths, one (or more) for each end-to-end 1359 connection.

1361 There are also scenarios where it may be necessary to request path 1362 computation for a set of paths in a synchronized fashion.

1364 One example could be computing multiple diverse paths. Computing a 1365 set of diverse paths in an unsynchronized fashion leads to the 1366 possibility of not being able to satisfy the diversity requirement. 1367 In this case, it is preferable to compute a sub-optimal primary path 1368 for which a diversely routed secondary path exists.
1370 There are also scenarios where it is necessary to request the optimization of a 1371 set of paths using objective functions that apply to the whole set 1372 of paths (see [RFC5541]), e.g., to minimize the sum of the costs of 1373 all the computed paths in the set.

1375 5. YANG data model for requesting Path Computation

1377 This document defines a YANG RPC to request path computation as an 1378 "augmentation" of the tunnels-path-compute RPC defined in [TE-TUNNEL]. This model 1379 provides the RPC input attributes that are needed to request path 1380 computation and the RPC output attributes that are needed to report 1381 the computed paths.

1383 augment /te:tunnels-path-compute/te:input/te:path-compute-info: 1384 +-- path-request* [request-id] 1385 | +-- request-id uint32 1386 | ...........

1388 augment /te:tunnels-path-compute/te:output/te:path-compute-result: 1389 +--ro response* [response-id] 1390 +--ro response-id uint32 1391 +--ro computed-paths-properties 1392 | +--ro computed-path-properties* [k-index] 1393 | +--ro k-index uint8 1394 | +--ro path-properties 1395 | ...........

1397 This model extensively re-uses the groupings defined in [TE-TUNNEL] 1398 to ensure maximal commonality of syntax and semantics.

1400 This YANG data model allows one RPC to include multiple path 1401 requests, each path request being identified by a request-id. 1402 Therefore, one RPC can return multiple responses, one for each path 1403 request, each identified by a response-id equal to the 1404 corresponding request-id. Each response reports one or more computed 1405 paths, as requested by the k-requested-paths attribute. By default, 1406 each response reports one computed path.

1408 5.1. Synchronization of multiple path computation requests

1410 The YANG data model permits the synchronization of a set of multiple 1411 path requests (identified by their specific request-id values), all related to an 1412 "svec" container emulating the syntax of the Synchronization VECtor 1413 (SVEC) PCEP object defined in [RFC5440].

1415 +-- synchronization* [] 1416 +-- svec 1417 | +-- relaxable? boolean 1418 | +-- disjointness? te-path-disjointness 1419 | +-- request-id-number* uint32 1420 +-- svec-constraints 1421 | +-- path-metric-bound* [metric-type] 1422 | +-- metric-type identityref 1423 | +-- upper-bound? uint64 1424 +-- path-srlgs-lists 1425 | +-- path-srlgs-list* [usage] 1426 | +-- usage identityref 1427 | +-- values* srlg 1428 +-- path-srlgs-names 1429 | +-- path-srlgs-name* [usage] 1430 | +-- usage identityref 1431 | +-- names* string 1432 +-- exclude-objects 1433 | +-- excludes* [] 1434 | +-- (type)? 1435 | +--:(numbered-node-hop) 1436 | | +-- numbered-node-hop 1437 | | +-- node-id te-node-id 1438 | | +-- hop-type? te-hop-type 1439 | +--:(numbered-link-hop) 1440 | | +-- numbered-link-hop 1441 | | +-- link-tp-id te-tp-id 1442 | | +-- hop-type? te-hop-type 1443 | | +-- direction? te-link-direction 1444 | +--:(unnumbered-link-hop) 1445 | | +-- unnumbered-link-hop 1446 | | +-- link-tp-id te-tp-id 1447 | | +-- node-id te-node-id 1448 | | +-- hop-type? te-hop-type 1449 | | +-- direction? te-link-direction 1450 | +--:(as-number) 1451 | | +-- as-number-hop 1452 | | +-- as-number inet:as-number 1453 | | +-- hop-type? te-hop-type 1454 | +--:(label) 1455 | +-- label-hop 1456 | +-- te-label 1457 | +-- (technology)? 1458 | | +--:(generic) 1459 | | +-- generic? 1460 | | rt-types:generalized-label 1461 | +-- direction? te-label-direction 1462 +-- optimizations 1463 +-- (algorithm)? 1464 +--:(metric) {te-types:path-optimization-metric}?
1465 | +-- optimization-metric* [metric-type] 1466 | +-- metric-type identityref 1467 | +-- weight? uint8 1468 +--:(objective-function) 1469 {te-types:path-optimization-objective- 1470 function}? 1471 +-- objective-function 1472 +-- objective-function-type? identityref 1474 The model, in addition to the metric types, defined in [TE-TUNNEL], 1475 which can be applied to each individual path request, supports also 1476 additional metric types, which apply to a set of synchronized 1477 requests, as referenced in [RFC5541]. These additional metric types 1478 are defined by the following YANG identities: 1480 o svec-metric-type: base YANG identity from which cumulative metric 1481 types identities are derived. 1483 o svec-metric-cumul-te: cumulative TE cost metric type, as defined 1484 in [RFC5541]. 1486 o svec-metric-cumul-igp: cumulative IGP cost metric type, as 1487 defined in [RFC5541]. 1489 o svec-metric-cumul-hop: cumulative Hop metric type, representing 1490 the cumulative version of the Hop metric type defined in 1491 [RFC8776]. 1493 o svec-metric-aggregate-bandwidth-consumption: aggregate bandwidth 1494 consumption metric type, as defined in [RFC5541]. 1496 o svec-metric-load-of-the-most-loaded-link: load of the most loaded 1497 link metric type, as defined in [RFC5541]. 1499 5.2. Returned metric values 1501 This YANG data model provides a way to return the values of the 1502 metrics computed by the path computation in the output of RPC, 1503 together with other important information (e.g. srlg, affinities, 1504 explicit route), emulating the syntax of the "C" flag of the 1505 "METRIC" PCEP object [RFC5440]: 1507 | +--ro path-properties 1508 | +--ro path-metric* [metric-type] 1509 | | +--ro metric-type identityref 1510 | | +--ro accumulative-value? uint64 1511 | +--ro path-affinities-values 1512 | | +--ro path-affinities-value* [usage] 1513 | | +--ro usage identityref 1514 | | +--ro value? admin-groups 1515 | +--ro path-affinity-names 1516 | | +--ro path-affinity-name* [usage] 1517 | | +--ro usage identityref 1518 | | +--ro affinity-name* [name] 1519 | | +--ro name string 1520 | +--ro path-srlgs-lists 1521 | | +--ro path-srlgs-list* [usage] 1522 | | +--ro usage identityref 1523 | | +--ro values* srlg 1524 | +--ro path-srlgs-names 1525 | | +--ro path-srlgs-name* [usage] 1526 | | +--ro usage identityref 1527 | | +--ro names* string 1528 | +--ro path-route-objects 1529 | ........... 1531 It also allows the client to request which information (metrics, 1532 srlg and/or affinities) should be returned: 1534 | +-- request-id uint32 1535 | ........... 1536 | +-- requested-metrics* [metric-type] 1537 | | +-- metric-type identityref 1538 | +-- return-srlgs? boolean 1539 | +-- return-affinities? boolean 1540 | ........... 1542 This feature is essential for path computation in a multi-domain TE 1543 network as described in section 2.2. In this case, the metrics 1544 returned by a path computation requested to a given underlying 1545 controller must be used by the client to compute the best end-to-end 1546 path. If they are missing, the client cannot compare different paths 1547 calculated by the underlying controllers and choose the best one for 1548 the optimal e2e path. 1550 5.3. Multiple Paths Requests for the same TE tunnel 1552 The YANG data model allows including multiple requests for different 1553 paths intended to be used within the same tunnel or within different 1554 tunnels. 
1556 When multiple requested paths are intended to be used within the 1557 same tunnel (e.g., requesting path computation for the primary and 1558 secondary paths of a protected tunnel), the set of attributes that 1559 are intended to be configured on a per-tunnel basis rather than on a 1560 per-path basis is common to all these path requests. These 1561 attributes include both attributes which can be configured only on a 1562 per-tunnel basis (e.g., tunnel-name, source/destination TTP, 1563 encoding and switching-type) as well as attributes which can be 1564 configured also on a per-path basis (e.g., the te-bandwidth or the 1565 associations).

1567 Therefore, a tunnel-attributes list is defined within the path 1568 computation request RPC:

1570 +-- tunnel-attributes* [tunnel-name] 1571 | +-- tunnel-name string 1572 | +-- encoding? identityref 1573 | +-- switching-type? identityref 1574 | ...........

1576 The path requests that are intended to be used within the same 1577 tunnel should reference the same entry in the tunnel-attributes 1578 list. This allows:

1580 o avoiding repeating the same set of per-tunnel parameters on 1581 multiple requested paths;

1583 o the server to understand which attributes are intended to be 1584 configured on a per-tunnel basis (e.g., the te-bandwidth 1585 configured in the tunnel-attributes) and which attributes are 1586 intended to be configured on a per-path basis (e.g., the 1587 te-bandwidth configured in the path-request). This could be useful 1588 especially when the server also creates a TE tunnel instance 1589 within the operational datastore to report the computed paths, as 1590 described in section 3.3.1: in this case, the tunnel-name is also 1591 used as the suggested name for that TE tunnel instance.

1593 The YANG data model also allows including requests for paths 1594 intended to modify existing tunnels (e.g., adding a protection path 1595 to an existing unprotected tunnel). In this case, the per-tunnel 1596 attributes are already provided in the existing TE tunnel instance 1597 and do not need to be re-configured in the path computation request 1598 RPC. Therefore, these requests should reference an existing TE 1599 tunnel instance.

1601 It is also possible to request the computation of paths without indicating in 1602 which tunnel they are intended to be used (e.g., in the case of an 1603 unprotected tunnel). In this case, the per-tunnel attributes could 1604 be provided together with the per-path attributes in the path 1605 request, without using the tunnel-attributes list.

1607 The choices below are defined to distinguish whether the per-tunnel 1608 attributes are configured by value (providing a set of attributes) 1609 or by reference (providing a leafref to either a TE tunnel 1610 instance, if it exists, or to an entry of the tunnel-attributes 1611 list, if the TE tunnel instance does not exist):

1613 | +-- (tunnel-attributes)? 1614 | | +--:(reference) 1615 | | | +-- (tunnel-exist)? 1616 | | | | +--:(tunnel-ref) 1617 | | | | | +-- tunnel-ref te:tunnel-ref 1618 | | | | +--:(tunnel-attributes-ref) 1619 | | | | +-- tunnel-attributes-ref leafref 1620 | | ........... 1621 | | +--:(value) 1622 | | +-- tunnel-name? string 1623 | | ........... 1624 | | +-- encoding? identityref 1625 | | +-- switching-type? identityref 1626 | | ...........

1628 The (value) case provides the set of attributes that are 1629 configured only on a per-tunnel basis (e.g., tunnel-name, 1630 source/destination TTP, encoding and switching-type), as illustrated by the sketch below.
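The following non-normative fragment sketches a single path-request entry using the (value) case, in a JSON-style encoding. All values are examples only, most namespace qualifiers are omitted for readability, and the identity values shown are illustrative examples of identities defined in [RFC8776]; the requested-metrics and return-srlgs leaves, described in section 5.2, are also included.

      "path-request": [
        {
          "request-id": 1,
          "tunnel-name": "example-tunnel-1",
          "encoding": "ietf-te-types:lsp-encoding-packet",
          "switching-type": "ietf-te-types:switching-psc1",
          "source": "192.0.2.1",
          "destination": "192.0.2.2",
          "requested-metrics": [
            { "metric-type": "ietf-te-types:path-metric-te" }
          ],
          "return-srlgs": true
        }
      ]

The corresponding response entry would carry a response-id equal to 1 and report the computed path-properties, including the requested metrics and SRLGs.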
The role of 1631 the path being requested is specified by the (path-role) choice: 1633 | | +-- (path-role)? 1634 | | | +--:(primary-path) 1635 | | | | +-- primary-path-name? string 1636 | | | +--:(secondary-path) 1637 | | | +-- secondary-path-name? string 1639 It is worth noting that a TE tunnel with only one path cannot have 1640 any reverse path. 1642 The (reference) case provides the information needed to associate 1643 multiple path requests that are intended to be used within the same 1644 tunnel. 1646 In order to indicate the role of the path being requested within the 1647 intended tunnel (e.g., primary or secondary path), the 1648 (tunnel-path-role) choice is defined: 1650 | | | +-- (tunnel-path-role) 1651 | | | +--:(primary-path) 1652 | | | | +-- primary-path! 1653 | | | | ........... 1654 | | | +--:(secondary-path) 1655 | | | | +-- secondary-path 1656 | | | | ........... 1657 | | | +--:(primary-reverse-path) 1658 | | | | +-- primary-reverse-path 1659 | | | | ........... 1660 | | | +--:(secondary-reverse-path) 1661 | | | +-- secondary-reverse-path 1662 | | | ........... 1664 The primary-path is a presence container used to indicate that the 1665 requested path is intended to be used as a primary path. It can also 1666 contain some attributes which are configured only on primary paths 1667 (e.g., the k-requested-paths). 1669 The secondary-path container indicates that the requested path is 1670 intended to be used as a secondary path and it contains at least 1671 references to one or more primary paths which can use it as a 1672 candidate secondary path: 1674 | | | | +-- secondary-path 1675 | | | | ........... 1676 | | | | +-- primary-path-ref* [] 1677 | | | | +-- (primary-path-exist)? 1678 | | | | +--:(path-ref) 1679 | | | | | +-- primary-path-ref leafref 1680 | | | | +--:(path-request-ref) 1681 | | | | +-- path-request-ref leafref 1683 A requested secondary path can reference any requested primary 1684 paths, and, in case they are intended to be used within an existing 1685 TE tunnel, it could also reference any existing primary-paths. 1687 Open issue: what happens in the case of a TE tunnel which contains 1688 only one secondary path? 1690 The secondary-path container can also contain some attributes which 1691 are configured only on secondary paths (e.g., the protection-type). 1693 The primary-reverse-path container indicates that the requested path 1694 is intended to be used as a primary reverse path and it contains 1695 only the reference to the primary path which is intended to use it 1696 as a reverse path: 1698 | | | | +-- primary-reverse-path 1699 | | | | +-- (primary-path-exist)? 1700 | | | | +--:(path-ref) 1701 | | | | | +-- primary-path-ref leafref 1702 | | | | +--:(path-request-ref) 1703 | | | | +-- path-request-ref leafref 1705 A requested primary reverse path can reference either a requested 1706 primary path, or, in case it is intended to be used within an 1707 existing TE tunnel, an existing primary-path. 1709 The secondary-reverse-path container indicates that the requested 1710 path is intended to be used as a secondary reverse path and it 1711 contains at least references to one or more primary paths, whose 1712 primary reverse path can use it as a candidate secondary reverse 1713 path: 1715 | | | +-- secondary-reverse-path 1716 | | | ........... 1717 | | | +-- primary-reverse-path-ref* [] 1718 | | | +-- (primary-reverse-path-exist)? 
1719 | | | +--:(path-ref) 1720 | | | | +-- primary-path-ref leafref 1721 | | | +--:(path-request-ref) 1722 | | | +-- path-request-ref leafref

1724 A requested secondary reverse path can reference any requested 1725 primary paths, and, in case they are intended to be used within an 1726 existing TE tunnel, it could also reference existing primary-paths.

1728 The secondary-reverse-path container can also contain some 1729 attributes which are configured only on secondary reverse paths 1730 (e.g., the protection-type).

1732 5.4. Multi-Layer Path Computation

1734 The model supports requesting multi-layer path computation, 1735 following the same approach, based on dependency tunnels, defined 1736 in [TE-TUNNEL].

1738 The tunnel-attributes of a given client-layer path request can 1739 reference server-layer TE tunnels which may already exist in the 1740 YANG datastore or be specified in the tunnel-attributes list within 1741 the same RPC request:

1743 | +-- dependency-tunnels 1744 | | +-- dependency-tunnel* [name] 1745 | | | +-- name -> /te:te/tunnels/tunnel/name 1746 | | | +-- encoding? identityref 1747 | | | +-- switching-type? identityref 1748 | | +-- dependency-tunnel-attributes* [name] 1749 | | +-- name leafref 1750 | | +-- encoding? identityref 1751 | | +-- switching-type? identityref

1753 In a similar way as in [TE-TUNNEL], the server-layer tunnel 1754 attributes should provide the information about what would be the 1755 dynamic link, in the client-layer topology, supported by that tunnel, 1756 if instantiated:

1758 | +-- hierarchical-link 1759 | +-- local-te-node-id? te-types:te-node-id 1760 | +-- local-te-link-tp-id? te-types:te-tp-id 1761 | +-- remote-te-node-id? te-types:te-node-id 1762 | +-- te-topology-identifier 1763 | +-- provider-id? te-global-id 1764 | +-- client-id? te-global-id 1765 | +-- topology-id? te-topology-id

1767 It is worth noting that, since the path computation RPC is stateless, the 1768 dynamic hierarchical links configured for the server-layer tunnel 1769 attributes cannot be used for the path computation of any client-layer 1770 path unless explicitly referenced in the dependency-tunnel- 1771 attributes list within the same RPC request.

1773 6. YANG data model for TE path computation

1775 6.1. Tree diagram

1777 Figure 11 below shows the tree diagram of the YANG data model 1778 defined in the module ietf-te-path-computation.yang.

1780 module: ietf-te-path-computation 1781 augment /te:tunnels-path-compute/te:input/te:path-compute-info: 1782 +-- path-request* [request-id] 1783 | +-- request-id uint32 1784 | +-- (tunnel-attributes)? 1785 | | +--:(reference) 1786 | | | +-- (tunnel-exist)? 1787 | | | | +--:(tunnel-ref) 1788 | | | | | +-- tunnel-ref te:tunnel-ref 1789 | | | | +--:(tunnel-attributes-ref) 1790 | | | | +-- tunnel-attributes-ref leafref 1791 | | | +-- path-name? string 1792 | | | +-- (tunnel-path-role) 1793 | | | +--:(primary-path) 1794 | | | | +-- primary-path! 1795 | | | | +-- preference? uint8 1796 | | | | +-- k-requested-paths? uint8 1797 | | | +--:(secondary-path) 1798 | | | | +-- secondary-path 1799 | | | | +-- preference? uint8 1800 | | | | +-- protection-type? identityref 1801 | | | | +-- restoration-type? identityref 1802 | | | | +-- primary-path-ref* [] 1803 | | | | +-- (primary-path-exist)? 1804 | | | | +--:(path-ref) 1805 | | | | | +-- primary-path-ref leafref 1806 | | | | +--:(path-request-ref) 1807 | | | | +-- path-request-ref leafref 1808 | | | +--:(primary-reverse-path) 1809 | | | | +-- primary-reverse-path 1810 | | | | +-- (primary-path-exist)?
1811 | | | | +--:(path-ref) 1812 | | | | | +-- primary-path-ref leafref 1813 | | | | +--:(path-request-ref) 1814 | | | | +-- path-request-ref leafref 1815 | | | +--:(secondary-reverse-path) 1816 | | | +-- secondary-reverse-path 1817 | | | +-- preference? uint8 1818 | | | +-- protection-type? identityref 1819 | | | +-- restoration-type? identityref 1820 | | | +-- primary-reverse-path-ref* [] 1821 | | | +-- (primary-reverse-path-exist)? 1822 | | | +--:(path-ref) 1823 | | | | +-- primary-path-ref leafref 1824 | | | +--:(path-request-ref) 1825 | | | +-- path-request-ref leafref 1826 | | +--:(value) 1827 | | +-- tunnel-name? string 1828 | | +-- (path-role)? 1829 | | | +--:(primary-path) 1830 | | | | +-- primary-path-name? string 1831 | | | +--:(secondary-path) 1832 | | | +-- secondary-path-name? string 1833 | | +-- k-requested-paths? uint8 1834 | | +-- protection-type? identityref 1835 | | +-- restoration-type? identityref 1836 | | +-- encoding? identityref 1837 | | +-- switching-type? identityref 1838 | | +-- source? inet:ip-address 1839 | | +-- destination? inet:ip-address 1840 | | +-- src-tp-id? binary 1841 | | +-- dst-tp-id? binary 1842 | | +-- bidirectional? boolean 1843 | | +-- te-topology-identifier 1844 | | +-- provider-id? te-global-id 1845 | | +-- client-id? te-global-id 1846 | | +-- topology-id? te-topology-id 1847 | +-- association-objects 1848 | | +-- association-object* [association-key] 1849 | | | +-- association-key string 1850 | | | +-- type? identityref 1851 | | | +-- id? uint16 1852 | | | +-- source 1853 | | | +-- id? te-gen-node-id 1854 | | | +-- type? enumeration 1855 | | +-- association-object-extended* [association-key] 1856 | | +-- association-key string 1857 | | +-- type? identityref 1858 | | +-- id? uint16 1859 | | +-- source 1860 | | | +-- id? te-gen-node-id 1861 | | | +-- type? enumeration 1862 | | +-- global-source? uint32 1863 | | +-- extended-id? yang:hex-string 1864 | +-- optimizations 1865 | | +-- (algorithm)? 1866 | | +--:(metric) {path-optimization-metric}? 1867 | | | +-- optimization-metric* [metric-type] 1868 | | | | +-- metric-type identityref 1869 | | | | +-- weight? uint8 1870 | | | | +-- explicit-route-exclude-objects 1871 | | | | | +-- route-object-exclude-object* [index] 1872 | | | | | +-- index uint32 1873 | | | | | +-- (type)? 1874 | | | | | +--:(numbered-node-hop) 1875 | | | | | | +-- numbered-node-hop 1876 | | | | | | +-- node-id te-node-id 1877 | | | | | | +-- hop-type? te-hop-type 1878 | | | | | +--:(numbered-link-hop) 1879 | | | | | | +-- numbered-link-hop 1880 | | | | | | +-- link-tp-id te-tp-id 1881 | | | | | | +-- hop-type? te-hop-type 1882 | | | | | | +-- direction? te-link- 1883 direction 1884 | | | | | +--:(unnumbered-link-hop) 1885 | | | | | | +-- unnumbered-link-hop 1886 | | | | | | +-- link-tp-id te-tp-id 1887 | | | | | | +-- node-id te-node-id 1888 | | | | | | +-- hop-type? te-hop-type 1889 | | | | | | +-- direction? te-link- 1890 direction 1891 | | | | | +--:(as-number) 1892 | | | | | | +-- as-number-hop 1893 | | | | | | +-- as-number inet:as-number 1894 | | | | | | +-- hop-type? te-hop-type 1895 | | | | | +--:(label) 1896 | | | | | | +-- label-hop 1897 | | | | | | +-- te-label 1898 | | | | | | +-- (technology)? 1899 | | | | | | | +--:(generic) 1900 | | | | | | | +-- generic? 1901 | | | | | | | rt- 1902 types:generalized-label 1903 | | | | | | +-- direction? 1904 | | | | | | te-label-direction 1905 | | | | | +--:(srlg) 1906 | | | | | +-- srlg 1907 | | | | | +-- srlg? 
uint32 1908 | | | | +-- explicit-route-include-objects 1909 | | | | +-- route-object-include-object* [index] 1910 | | | | +-- index uint32 1911 | | | | +-- (type)? 1912 | | | | +--:(numbered-node-hop) 1913 | | | | | +-- numbered-node-hop 1914 | | | | | +-- node-id te-node-id 1915 | | | | | +-- hop-type? te-hop-type 1916 | | | | +--:(numbered-link-hop) 1917 | | | | | +-- numbered-link-hop 1918 | | | | | +-- link-tp-id te-tp-id 1919 | | | | | +-- hop-type? te-hop-type 1920 | | | | | +-- direction? te-link- 1921 direction 1922 | | | | +--:(unnumbered-link-hop) 1923 | | | | | +-- unnumbered-link-hop 1924 | | | | | +-- link-tp-id te-tp-id 1925 | | | | | +-- node-id te-node-id 1926 | | | | | +-- hop-type? te-hop-type 1927 | | | | | +-- direction? te-link- 1928 direction 1929 | | | | +--:(as-number) 1930 | | | | | +-- as-number-hop 1931 | | | | | +-- as-number inet:as-number 1932 | | | | | +-- hop-type? te-hop-type 1933 | | | | +--:(label) 1934 | | | | +-- label-hop 1935 | | | | +-- te-label 1936 | | | | +-- (technology)? 1937 | | | | | +--:(generic) 1938 | | | | | +-- generic? 1939 | | | | | rt- 1940 types:generalized-label 1941 | | | | +-- direction? 1942 | | | | te-label-direction 1943 | | | +-- tiebreakers 1944 | | | +-- tiebreaker* [tiebreaker-type] 1945 | | | +-- tiebreaker-type identityref 1946 | | +--:(objective-function) 1947 | | {path-optimization-objective-function}? 1948 | | +-- objective-function 1949 | | +-- objective-function-type? identityref 1950 | +-- named-path-constraint? leafref 1951 | | {te-types:named-path-constraints}? 1952 | +-- te-bandwidth 1953 | | +-- (technology)? 1954 | | +--:(generic) 1955 | | +-- generic? te-bandwidth 1956 | +-- link-protection? identityref 1957 | +-- setup-priority? uint8 1958 | +-- hold-priority? uint8 1959 | +-- signaling-type? identityref 1960 | +-- path-metric-bounds 1961 | | +-- path-metric-bound* [metric-type] 1962 | | +-- metric-type identityref 1963 | | +-- upper-bound? uint64 1964 | +-- path-affinities-values 1965 | | +-- path-affinities-value* [usage] 1966 | | +-- usage identityref 1967 | | +-- value? admin-groups 1968 | +-- path-affinity-names 1969 | | +-- path-affinity-name* [usage] 1970 | | +-- usage identityref 1971 | | +-- affinity-name* [name] 1972 | | +-- name string 1973 | +-- path-srlgs-lists 1974 | | +-- path-srlgs-list* [usage] 1975 | | +-- usage identityref 1976 | | +-- values* srlg 1977 | +-- path-srlgs-names 1978 | | +-- path-srlgs-name* [usage] 1979 | | +-- usage identityref 1980 | | +-- names* string 1981 | +-- disjointness? te-path- 1982 disjointness 1983 | +-- explicit-route-objects-always 1984 | | +-- route-object-exclude-always* [index] 1985 | | | +-- index uint32 1986 | | | +-- (type)? 1987 | | | +--:(numbered-node-hop) 1988 | | | | +-- numbered-node-hop 1989 | | | | +-- node-id te-node-id 1990 | | | | +-- hop-type? te-hop-type 1991 | | | +--:(numbered-link-hop) 1992 | | | | +-- numbered-link-hop 1993 | | | | +-- link-tp-id te-tp-id 1994 | | | | +-- hop-type? te-hop-type 1995 | | | | +-- direction? te-link-direction 1996 | | | +--:(unnumbered-link-hop) 1997 | | | | +-- unnumbered-link-hop 1998 | | | | +-- link-tp-id te-tp-id 1999 | | | | +-- node-id te-node-id 2000 | | | | +-- hop-type? te-hop-type 2001 | | | | +-- direction? te-link-direction 2002 | | | +--:(as-number) 2003 | | | | +-- as-number-hop 2004 | | | | +-- as-number inet:as-number 2005 | | | | +-- hop-type? te-hop-type 2006 | | | +--:(label) 2007 | | | +-- label-hop 2008 | | | +-- te-label 2009 | | | +-- (technology)? 
2010 | | | | +--:(generic) 2011 | | | | +-- generic? 2012 | | | | rt-types:generalized-label 2013 | | | +-- direction? te-label-direction 2014 | | +-- route-object-include-exclude* [index] 2015 | | +-- explicit-route-usage? identityref 2016 | | +-- index uint32 2017 | | +-- (type)? 2018 | | +--:(numbered-node-hop) 2019 | | | +-- numbered-node-hop 2020 | | | +-- node-id te-node-id 2021 | | | +-- hop-type? te-hop-type 2022 | | +--:(numbered-link-hop) 2023 | | | +-- numbered-link-hop 2024 | | | +-- link-tp-id te-tp-id 2025 | | | +-- hop-type? te-hop-type 2026 | | | +-- direction? te-link-direction 2027 | | +--:(unnumbered-link-hop) 2028 | | | +-- unnumbered-link-hop 2029 | | | +-- link-tp-id te-tp-id 2030 | | | +-- node-id te-node-id 2031 | | | +-- hop-type? te-hop-type 2032 | | | +-- direction? te-link-direction 2033 | | +--:(as-number) 2034 | | | +-- as-number-hop 2035 | | | +-- as-number inet:as-number 2036 | | | +-- hop-type? te-hop-type 2037 | | +--:(label) 2038 | | | +-- label-hop 2039 | | | +-- te-label 2040 | | | +-- (technology)? 2041 | | | | +--:(generic) 2042 | | | | +-- generic? 2043 | | | | rt-types:generalized-label 2044 | | | +-- direction? te-label-direction 2045 | | +--:(srlg) 2046 | | +-- srlg 2047 | | +-- srlg? uint32 2048 | +-- path-in-segment! 2049 | | +-- label-restrictions 2050 | | +-- label-restriction* [index] 2051 | | +-- restriction? enumeration 2052 | | +-- index uint32 2053 | | +-- label-start 2054 | | | +-- te-label 2055 | | | +-- (technology)? 2056 | | | | +--:(generic) 2057 | | | | +-- generic? rt-types:generalized- 2058 label 2059 | | | +-- direction? te-label-direction 2060 | | +-- label-end 2061 | | | +-- te-label 2062 | | | +-- (technology)? 2063 | | | | +--:(generic) 2064 | | | | +-- generic? rt-types:generalized- 2065 label 2066 | | | +-- direction? te-label-direction 2067 | | +-- label-step 2068 | | | +-- (technology)? 2069 | | | +--:(generic) 2070 | | | +-- generic? int32 2071 | | +-- range-bitmap? yang:hex-string 2072 | +-- path-out-segment! 2073 | | +-- label-restrictions 2074 | | +-- label-restriction* [index] 2075 | | +-- restriction? enumeration 2076 | | +-- index uint32 2077 | | +-- label-start 2078 | | | +-- te-label 2079 | | | +-- (technology)? 2080 | | | | +--:(generic) 2081 | | | | +-- generic? rt-types:generalized- 2082 label 2083 | | | +-- direction? te-label-direction 2084 | | +-- label-end 2085 | | | +-- te-label 2086 | | | +-- (technology)? 2087 | | | | +--:(generic) 2088 | | | | +-- generic? rt-types:generalized- 2089 label 2090 | | | +-- direction? te-label-direction 2091 | | +-- label-step 2092 | | | +-- (technology)? 2093 | | | +--:(generic) 2094 | | | +-- generic? int32 2095 | | +-- range-bitmap? yang:hex-string 2096 | +-- requested-metrics* [metric-type] 2097 | | +-- metric-type identityref 2098 | +-- return-srlgs? boolean 2099 | +-- return-affinities? boolean 2100 | +-- requested-state! 2101 | +-- timer? uint16 2102 | +-- transaction-id? string 2103 +-- tunnel-attributes* [tunnel-name] 2104 | +-- tunnel-name string 2105 | +-- encoding? identityref 2106 | +-- switching-type? identityref 2107 | +-- source? inet:ip-address 2108 | +-- destination? inet:ip-address 2109 | +-- src-tp-id? binary 2110 | +-- dst-tp-id? binary 2111 | +-- bidirectional? boolean 2112 | +-- association-objects 2113 | | +-- association-object* [association-key] 2114 | | | +-- association-key string 2115 | | | +-- type? identityref 2116 | | | +-- id? uint16 2117 | | | +-- source 2118 | | | +-- id? te-gen-node-id 2119 | | | +-- type? 
enumeration 2120 | | +-- association-object-extended* [association-key] 2121 | | +-- association-key string 2122 | | +-- type? identityref 2123 | | +-- id? uint16 2124 | | +-- source 2125 | | | +-- id? te-gen-node-id 2126 | | | +-- type? enumeration 2127 | | +-- global-source? uint32 2128 | | +-- extended-id? yang:hex-string 2129 | +-- protection-type? identityref 2130 | +-- restoration-type? identityref 2131 | +-- te-topology-identifier 2132 | | +-- provider-id? te-global-id 2133 | | +-- client-id? te-global-id 2134 | | +-- topology-id? te-topology-id 2135 | +-- te-bandwidth 2136 | | +-- (technology)? 2137 | | +--:(generic) 2138 | | +-- generic? te-bandwidth 2139 | +-- link-protection? identityref 2140 | +-- setup-priority? uint8 2141 | +-- hold-priority? uint8 2142 | +-- signaling-type? identityref 2143 | +-- hierarchy 2144 | +-- dependency-tunnels 2145 | | +-- dependency-tunnel* [name] 2146 | | | +-- name -> /te:te/tunnels/tunnel/name 2147 | | | +-- encoding? identityref 2148 | | | +-- switching-type? identityref 2149 | | +-- dependency-tunnel-attributes* [name] 2150 | | +-- name leafref 2151 | | +-- encoding? identityref 2152 | | +-- switching-type? identityref 2153 | +-- hierarchical-link 2154 | +-- local-te-node-id? te-types:te-node-id 2155 | +-- local-te-link-tp-id? te-types:te-tp-id 2156 | +-- remote-te-node-id? te-types:te-node-id 2157 | +-- te-topology-identifier 2158 | +-- provider-id? te-global-id 2159 | +-- client-id? te-global-id 2160 | +-- topology-id? te-topology-id 2161 +-- synchronization* [] 2162 +-- svec 2163 | +-- relaxable? boolean 2164 | +-- disjointness? te-path-disjointness 2165 | +-- request-id-number* uint32 2166 +-- svec-constraints 2167 | +-- path-metric-bound* [metric-type] 2168 | +-- metric-type identityref 2169 | +-- upper-bound? uint64 2170 +-- path-srlgs-lists 2171 | +-- path-srlgs-list* [usage] 2172 | +-- usage identityref 2173 | +-- values* srlg 2174 +-- path-srlgs-names 2175 | +-- path-srlgs-name* [usage] 2176 | +-- usage identityref 2177 | +-- names* string 2178 +-- exclude-objects 2179 | +-- excludes* [] 2180 | +-- (type)? 2181 | +--:(numbered-node-hop) 2182 | | +-- numbered-node-hop 2183 | | +-- node-id te-node-id 2184 | | +-- hop-type? te-hop-type 2185 | +--:(numbered-link-hop) 2186 | | +-- numbered-link-hop 2187 | | +-- link-tp-id te-tp-id 2188 | | +-- hop-type? te-hop-type 2189 | | +-- direction? te-link-direction 2190 | +--:(unnumbered-link-hop) 2191 | | +-- unnumbered-link-hop 2192 | | +-- link-tp-id te-tp-id 2193 | | +-- node-id te-node-id 2194 | | +-- hop-type? te-hop-type 2195 | | +-- direction? te-link-direction 2196 | +--:(as-number) 2197 | | +-- as-number-hop 2198 | | +-- as-number inet:as-number 2199 | | +-- hop-type? te-hop-type 2200 | +--:(label) 2201 | +-- label-hop 2202 | +-- te-label 2203 | +-- (technology)? 2204 | | +--:(generic) 2205 | | +-- generic? 2206 | | rt-types:generalized-label 2207 | +-- direction? te-label-direction 2208 +-- optimizations 2209 +-- (algorithm)? 2210 +--:(metric) {te-types:path-optimization-metric}? 2211 | +-- optimization-metric* [metric-type] 2212 | +-- metric-type identityref 2213 | +-- weight? uint8 2214 +--:(objective-function) 2215 {te-types:path-optimization-objective- 2216 function}? 2217 +-- objective-function 2218 +-- objective-function-type? 
identityref 2219 augment /te:tunnels-path-compute/te:output/te:path-compute-result: 2220 +--ro response* [response-id] 2221 +--ro response-id uint32 2222 +--ro computed-paths-properties 2223 | +--ro computed-path-properties* [k-index] 2224 | +--ro k-index uint8 2225 | +--ro path-properties 2226 | +--ro path-metric* [metric-type] 2227 | | +--ro metric-type identityref 2228 | | +--ro accumulative-value? uint64 2229 | +--ro path-affinities-values 2230 | | +--ro path-affinities-value* [usage] 2231 | | +--ro usage identityref 2232 | | +--ro value? admin-groups 2233 | +--ro path-affinity-names 2234 | | +--ro path-affinity-name* [usage] 2235 | | +--ro usage identityref 2236 | | +--ro affinity-name* [name] 2237 | | +--ro name string 2238 | +--ro path-srlgs-lists 2239 | | +--ro path-srlgs-list* [usage] 2240 | | +--ro usage identityref 2241 | | +--ro values* srlg 2242 | +--ro path-srlgs-names 2243 | | +--ro path-srlgs-name* [usage] 2244 | | +--ro usage identityref 2245 | | +--ro names* string 2246 | +--ro path-route-objects 2247 | | +--ro path-route-object* [index] 2248 | | +--ro index uint32 2249 | | +--ro (type)? 2250 | | +--:(numbered-node-hop) 2251 | | | +--ro numbered-node-hop 2252 | | | +--ro node-id te-node-id 2253 | | | +--ro hop-type? te-hop-type 2254 | | +--:(numbered-link-hop) 2255 | | | +--ro numbered-link-hop 2256 | | | +--ro link-tp-id te-tp-id 2257 | | | +--ro hop-type? te-hop-type 2258 | | | +--ro direction? te-link-direction 2259 | | +--:(unnumbered-link-hop) 2260 | | | +--ro unnumbered-link-hop 2261 | | | +--ro link-tp-id te-tp-id 2262 | | | +--ro node-id te-node-id 2263 | | | +--ro hop-type? te-hop-type 2264 | | | +--ro direction? te-link-direction 2265 | | +--:(as-number) 2266 | | | +--ro as-number-hop 2267 | | | +--ro as-number inet:as-number 2268 | | | +--ro hop-type? te-hop-type 2269 | | +--:(label) 2270 | | +--ro label-hop 2271 | | +--ro te-label 2272 | | +--ro (technology)? 2273 | | | +--:(generic) 2274 | | | +--ro generic? 2275 | | | rt- 2276 types:generalized-label 2277 | | +--ro direction? 2278 | | te-label-direction 2279 | +--ro te-bandwidth 2280 | | +--ro (technology)? 2281 | | +--:(generic) 2282 | | +--ro generic? te-bandwidth 2283 | +--ro disjointness-type? 2284 | te-types:te-path-disjointness 2285 +--ro computed-path-error-infos 2286 | +--ro computed-path-error-info* [] 2287 | +--ro error-description? string 2288 | +--ro error-timestamp? yang:date-and-time 2289 | +--ro error-reason? identityref 2290 +--ro tunnel-ref? te:tunnel-ref 2291 +--ro (path)? 2292 +--:(primary) 2293 | +--ro primary-path-ref? leafref 2294 +--:(primary-reverse) 2295 | +--ro primary-reverse-path-ref? leafref 2296 +--:(secondary) 2297 | +--ro secondary-path-ref? leafref 2298 +--:(secondary-reverse) 2299 +--ro secondary-reverse-path-ref? leafref 2300 augment /te:tunnels-actions/te:input/te:tunnel-info/te:filter- 2301 type: 2302 +--:(path-compute-transactions) 2303 +-- path-compute-transaction-id* string 2304 augment /te:tunnels-actions/te:output: 2305 +--ro path-computed-delete-result 2306 +--ro path-compute-transaction-id* string 2308 Figure 11 - TE path computation tree diagram 2310 6.2. 
YANG module 2312 file "ietf-te-path-computation@2021-02-08.yang" 2313 module ietf-te-path-computation { 2314 yang-version 1.1; 2315 namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation"; 2316 prefix te-pc; 2318 import ietf-inet-types { 2319 prefix inet; 2320 reference 2321 "RFC6991: Common YANG Data Types"; 2322 } 2323 import ietf-te { 2324 prefix te; 2325 reference 2326 "RFCYYYY: A YANG Data Model for Traffic Engineering Tunnels 2327 and Interfaces"; 2328 } 2330 /* Note: The RFC Editor will replace YYYY with the number assigned 2331 to the RFC once draft-ietf-teas-yang-te becomes an RFC.*/ 2333 import ietf-te-types { 2334 prefix te-types; 2335 reference 2336 "RFC8776: Common YANG Data Types for Traffic Engineering."; 2337 } 2339 organization 2340 "Traffic Engineering Architecture and Signaling (TEAS) 2341 Working Group"; 2342 contact 2343 "WG Web: 2344 WG List: 2346 Editor: Italo Busi 2347 2349 Editor: Sergio Belotti 2350 2352 Editor: Victor Lopez 2353 2355 Editor: Oscar Gonzalez de Dios 2356 2358 Editor: Anurag Sharma 2359 2361 Editor: Yan Shi 2362 2364 Editor: Ricard Vilalta 2365 2367 Editor: Karthik Sethuraman 2368 2370 Editor: Michael Scharf 2371 2373 Editor: Daniele Ceccarelli 2374 2376 "; 2377 description 2378 "This module defines a YANG data model for requesting Traffic 2379 Engineering (TE) path computation. The YANG model defined in 2380 this document is based on RPCs augmenting the RPCs defined in 2381 the generic TE module (ietf-te). 2382 The model fully conforms to the 2383 Network Management Datastore Architecture (NMDA). 2385 Copyright (c) 2021 IETF Trust and the persons 2386 identified as authors of the code. All rights reserved. 2388 Redistribution and use in source and binary forms, with or 2389 without modification, is permitted pursuant to, and subject 2390 to the license terms contained in, the Simplified BSD License 2391 set forth in Section 4.c of the IETF Trust's Legal Provisions 2393 Relating to IETF Documents 2394 (http://trustee.ietf.org/license-info). 
2396 This version of this YANG module is part of RFC XXXX; see 2397 the RFC itself for full legal notices."; 2399 // RFC Ed.: replace XXXX with actual RFC number and remove 2400 // this note 2401 // replace the revision date with the module publication date 2402 // the format is (year-month-day) 2404 revision 2021-02-08 { 2405 description 2406 "Initial revision"; 2407 reference 2408 "RFC XXXX: Yang model for requesting Path Computation"; 2409 } 2411 // RFC Ed.: replace XXXX with actual RFC number and remove 2412 // this note 2414 /* 2415 * Identities 2416 */ 2418 identity svec-metric-type { 2419 description 2420 "Base identity for SVEC metric type."; 2421 reference 2422 "RFC5541: Encoding of Objective Functions in the Path 2423 Computation Element Communication Protocol (PCEP)."; 2424 } 2426 identity svec-metric-cumul-te { 2427 base svec-metric-type; 2428 description 2429 "Cumulative TE cost."; 2430 reference 2431 "RFC5541: Encoding of Objective Functions in the Path 2432 Computation Element Communication Protocol (PCEP)."; 2433 } 2435 identity svec-metric-cumul-igp { 2436 base svec-metric-type; 2437 description 2438 "Cumulative IGP cost."; 2439 reference 2440 "RFC5541: Encoding of Objective Functions in the Path 2441 Computation Element Communication Protocol (PCEP)."; 2442 } 2444 identity svec-metric-cumul-hop { 2445 base svec-metric-type; 2446 description 2447 "Cumulative Hop path metric."; 2448 reference 2449 "RFC8776: Common YANG Data Types for Traffic Engineering."; 2450 } 2452 identity svec-metric-aggregate-bandwidth-consumption { 2453 base svec-metric-type; 2454 description 2455 "Aggregate bandwidth consumption."; 2456 reference 2457 "RFC5541: Encoding of Objective Functions in the Path 2458 Computation Element Communication Protocol (PCEP)."; 2459 } 2461 identity svec-metric-load-of-the-most-loaded-link { 2462 base svec-metric-type; 2463 description 2464 "Load of the most loaded link."; 2465 reference 2466 "RFC5541: Encoding of Objective Functions in the Path 2467 Computation Element Communication Protocol (PCEP)."; 2468 } 2470 identity tunnel-action-path-compute-delete { 2471 base te:tunnel-actions-type; 2472 description 2473 "Action type to delete the transient states 2474 of computed paths, as described in section 3.3.1."; 2475 } 2477 /* 2478 * Groupings 2479 */ 2481 grouping protection-restoration-properties { 2482 description 2483 "This grouping defines the restoration and protection types 2484 for a path in the path computation request."; 2485 leaf protection-type { 2486 type identityref { 2487 base te-types:lsp-protection-type; 2488 } 2489 default "te-types:lsp-protection-unprotected"; 2490 description 2491 "LSP protection type."; 2492 } 2493 leaf restoration-type { 2494 type identityref { 2495 base te-types:lsp-restoration-type; 2496 } 2497 default "te-types:lsp-restoration-restore-any"; 2498 description 2499 "LSP restoration type."; 2500 } 2501 } // grouping protection-restoration-properties 2503 grouping requested-info { 2504 description 2505 "This grouping defines the information (e.g., metrics) 2506 which is requested, in the path computation request, to be 2507 returned in the path computation response."; 2508 list requested-metrics { 2509 key "metric-type"; 2510 description 2511 "The list of the requested metrics. 2512 The metrics listed here must be returned in the response. 
2513 Returning other metrics in the response is optional."; 2514 leaf metric-type { 2515 type identityref { 2516 base te-types:path-metric-type; 2517 } 2518 description 2519 "The metric that must be returned in the response"; 2520 } 2521 } 2522 leaf return-srlgs { 2523 type boolean; 2524 default "false"; 2525 description 2526 "If true, path srlgs must be returned in the response. 2527 If false, returning path srlgs in the response optional."; 2528 } 2529 leaf return-affinities { 2530 type boolean; 2531 default "false"; 2532 description 2533 "If true, path affinities must be returned in the response. 2534 If false, returning path affinities in the response is 2535 optional."; 2536 } 2537 } // grouping requested-info 2539 grouping requested-state { 2540 description 2541 "Configuration for the transient state used 2542 to report the computed path"; 2543 container requested-state { 2544 presence 2545 "Request temporary reporting of the computed path state"; 2546 description 2547 "Configures attributes for the temporary reporting of the 2548 computed path state (e.g., expiration timer)."; 2549 leaf timer { 2550 type uint16; 2551 units "minutes"; 2552 default "10"; 2553 description 2554 "The timeout after which the transient state reporting 2555 the computed path should be removed."; 2556 } 2557 leaf transaction-id { 2558 type string; 2559 description 2560 "The transaction-id associated with this path computation 2561 to be used for fast deletion of the transient states 2562 associated with multiple path computations. 2564 This transaction-id can be used to explicitly delete all 2565 the transient states of all the computed paths associated 2566 with the same transaction-id. 2568 When one path associated with a transaction-id is setup, 2569 the transient states of all the other computed paths 2570 with the same transaction-id are automatically removed. 2572 If not specified, the transient state is removed only 2573 when the timer expires (when the timer is specified) 2574 or not created at all (stateless path computation, 2575 when the timer is not specified)."; 2576 } 2577 } 2578 } // grouping requested-state 2580 grouping reported-state { 2581 description 2582 "This grouping defines the information, returned in the path 2583 computation response, reporting the transient state related 2584 to the computed path"; 2585 leaf tunnel-ref { 2586 type te:tunnel-ref; 2587 description 2588 " 2589 Reference to the tunnel that reports the transient state 2590 of the computed path. 2592 If no transient state is created, this attribute is 2593 omitted. 2594 "; 2595 } 2596 choice path { 2597 description 2598 "The transient state of the computed path can be reported 2599 as a primary, primary-reverse, secondary or 2600 a secondary-reverse path of a te-tunnel"; 2601 case primary { 2602 leaf primary-path-ref { 2603 type leafref { 2604 path "/te:te/te:tunnels/" 2605 + "te:tunnel[te:name=current()/../tunnel-ref]/" 2606 + "te:primary-paths/te:primary-path/" 2607 + "te:name"; 2608 } 2609 must '../tunnel-ref' { 2610 description 2611 "The primary-path name can only be reported 2612 if also the tunnel name is reported."; 2613 } 2614 description 2615 " 2616 Reference to the primary-path that reports 2617 the transient state of the computed path. 2619 If no transient state is created, 2620 this attribute is omitted. 
2621 "; 2623 } 2624 } // case primary 2625 case primary-reverse { 2626 leaf primary-reverse-path-ref { 2627 type leafref { 2628 path "/te:te/te:tunnels/" 2629 + "te:tunnel[te:name=current()/../tunnel-ref]/" 2630 + "te:primary-paths/te:primary-path/" 2631 + "te:name"; 2632 } 2633 must '../tunnel-ref' { 2634 description 2635 "The primary-reverse-path name can only be reported 2636 if also the tunnel name is reported."; 2637 } 2638 description 2639 " 2640 Reference to the primary-reverse-path that reports 2641 the transient state of the computed path. 2643 If no transient state is created, 2644 this attribute is omitted. 2645 "; 2646 } 2647 } // case primary-reverse 2648 case secondary { 2649 leaf secondary-path-ref { 2650 type leafref { 2651 path "/te:te/te:tunnels/" 2652 + "te:tunnel[te:name=current()/../tunnel-ref]/" 2653 + "te:secondary-paths/te:secondary-path/" 2654 + "te:name"; 2655 } 2656 must '../tunnel-ref' { 2657 description 2658 "The secondary-path name can only be reported 2659 if also the tunnel name is reported."; 2660 } 2661 description 2662 " 2663 Reference to the secondary-path that reports 2664 the transient state of the computed path. 2666 If no transient state is created, 2667 this attribute is omitted. 2668 "; 2669 } 2670 } // case secondary 2671 case secondary-reverse { 2672 leaf secondary-reverse-path-ref { 2673 type leafref { 2674 path "/te:te/te:tunnels/" 2675 + "te:tunnel[te:name=current()/../tunnel-ref]/" 2676 + "te:secondary-reverse-paths/" 2677 + "te:secondary-reverse-path/te:name"; 2678 } 2679 must '../tunnel-ref' { 2680 description 2681 "The secondary-reverse-path name can only be reported 2682 if also the tunnel name is reported."; 2683 } 2684 description 2685 " 2686 Reference to the secondary-reverse-path that reports 2687 the transient state of the computed path. 2689 If no transient state is created, 2690 this attribute is omitted. 
2691 "; 2692 } 2693 } // case secondary 2694 } // choice path 2695 } // grouping reported-state 2697 grouping synchronization-constraints { 2698 description 2699 "Global constraints applicable to synchronized path 2700 computation requests."; 2702 container svec-constraints { 2703 description 2704 "global svec constraints"; 2705 list path-metric-bound { 2706 key "metric-type"; 2707 description 2708 "list of bound metrics"; 2709 leaf metric-type { 2710 type identityref { 2711 base svec-metric-type; 2712 } 2713 description 2714 "SVEC metric type."; 2715 reference 2716 "RFC5541: Encoding of Objective Functions in the Path 2717 Computation Element Communication Protocol (PCEP)."; 2718 } 2719 leaf upper-bound { 2720 type uint64; 2721 description 2722 "Upper bound on SVEC metric"; 2723 } 2724 } 2725 } 2726 uses te-types:generic-path-srlgs; 2727 container exclude-objects { 2728 description 2729 "Resources to be excluded"; 2730 list excludes { 2731 description 2732 "List of Explicit Route Objects to always exclude 2733 from synchronized path computation"; 2734 uses te-types:explicit-route-hop; 2735 } 2736 } 2737 } // grouping synchronization-constraints 2739 grouping synchronization-optimization { 2740 description 2741 "Optimizations applicable to synchronized path 2742 computation requests."; 2743 container optimizations { 2744 description 2745 "The objective function container that includes attributes 2746 to impose when computing a synchronized set of paths"; 2747 choice algorithm { 2748 description 2749 "Optimizations algorithm."; 2750 case metric { 2751 if-feature "te-types:path-optimization-metric"; 2752 list optimization-metric { 2753 key "metric-type"; 2754 description 2755 "svec path metric type"; 2756 leaf metric-type { 2757 type identityref { 2758 base svec-metric-type; 2759 } 2760 description 2761 "TE path metric type usable for computing a set of 2762 synchronized requests"; 2763 } 2764 leaf weight { 2765 type uint8; 2766 description 2767 "Metric normalization weight"; 2768 } 2769 } 2770 } 2771 case objective-function { 2772 if-feature 2773 "te-types:path-optimization-objective-function"; 2774 container objective-function { 2775 description 2776 "The objective function container that includes 2777 attributes to impose when computing a TE path"; 2778 leaf objective-function-type { 2779 type identityref { 2780 base te-types:objective-function-type; 2781 } 2782 default "te-types:of-minimize-cost-path"; 2783 description 2784 "Objective function entry"; 2785 } 2786 } 2787 } 2788 } 2789 } 2790 } // grouping synchronization-optimization 2792 grouping synchronization-info { 2793 description 2794 "Information for synchonized path computation requests."; 2795 list synchronization { 2796 description 2797 "List of Synchronization VECtors."; 2798 container svec { 2799 description 2800 "Synchronization VECtor"; 2801 leaf relaxable { 2802 type boolean; 2803 default "true"; 2804 description 2805 "If this leaf is true, path computation process is 2806 free to ignore svec content. 
2807 Otherwise, it must take into account this svec."; 2808 } 2809 uses te-types:generic-path-disjointness; 2810 leaf-list request-id-number { 2811 type uint32; 2812 description 2813 "This list reports the set of path computation 2814 requests that must be synchronized."; 2815 } 2816 } 2817 uses synchronization-constraints; 2818 uses synchronization-optimization; 2820 } 2821 } // grouping synchronization-info 2823 grouping encoding-and-switching-type { 2824 description 2825 "Common grouping to define the LSP encoding and 2826 switching types"; 2827 leaf encoding { 2828 type identityref { 2829 base te-types:lsp-encoding-types; 2830 } 2831 description 2832 "LSP encoding type"; 2833 reference 2834 "RFC3945"; 2835 } 2836 leaf switching-type { 2837 type identityref { 2838 base te-types:switching-capabilities; 2839 } 2840 description 2841 "LSP switching type"; 2842 reference 2843 "RFC3945"; 2844 } 2845 } 2847 grouping tunnel-common-attributes { 2848 description 2849 "Common grouping to define the TE tunnel parameters"; 2850 uses encoding-and-switching-type; 2851 leaf source { 2852 type inet:ip-address; 2853 description 2854 "TE tunnel source address."; 2855 } 2856 leaf destination { 2857 type inet:ip-address; 2858 description 2859 "te-tunnel destination address"; 2860 } 2861 leaf src-tp-id { 2862 type binary; 2863 description 2864 "TE tunnel source termination point identifier."; 2865 } 2866 leaf dst-tp-id { 2867 type binary; 2868 description 2869 "TE tunnel destination termination point identifier."; 2870 } 2871 leaf bidirectional { 2872 type boolean; 2873 default "false"; 2874 description 2875 "TE tunnel bidirectional"; 2876 } 2877 } 2879 /* 2880 * Augment TE RPCs 2881 */ 2883 augment "/te:tunnels-path-compute/te:input/te:path-compute-info" { 2884 description 2885 "Path Computation RPC input"; 2886 list path-request { 2887 key "request-id"; 2888 description 2889 "The list of the requested paths to be computed"; 2890 leaf request-id { 2891 type uint32; 2892 mandatory true; 2893 description 2894 "Each path computation request is uniquely identified 2895 within the RPC request by the request-id-number."; 2896 } 2897 choice tunnel-attributes { 2898 default "value"; 2899 description 2900 "Whether the tunnel attributes are specified by value 2901 within this path computation request or by reference. 
2902 The reference could be either to an existing te-tunnel 2903 or to an entry in the tunnel-attributes list"; 2904 case reference { 2905 choice tunnel-exist { 2906 description 2907 "Whether the tunnel reference is to an existing 2908 te-tunnel or to an entry in the tunnel-attributes 2909 list"; 2910 case tunnel-ref { 2911 leaf tunnel-ref { 2912 type te:tunnel-ref; 2913 mandatory true; 2914 description 2915 "The referenced te-tunnel instance"; 2916 } 2917 } // case tunnel-ref 2918 case tunnel-attributes-ref { 2919 leaf tunnel-attributes-ref { 2920 type leafref { 2921 path "/te:tunnels-path-compute/" 2922 + "te:path-compute-info/" 2923 + "te-pc:tunnel-attributes/te-pc:tunnel-name"; 2924 } 2925 mandatory true; 2926 description 2927 "The referenced te-tunnel instance"; 2928 } 2929 } // case tunnel-attributes-ref 2930 } // choice tunnel-exist 2931 leaf path-name { 2932 type string; 2933 description 2934 "TE path name."; 2935 } 2936 choice tunnel-path-role { 2937 mandatory true; 2938 description 2939 "Whether this path is a primary, or a reverse primary, 2940 or a secondary, or a reverse secondary path"; 2941 case primary-path { 2942 container primary-path { 2943 presence "Indicates that the requested path 2944 is a primary path"; 2945 description 2946 "TE primary path"; 2947 uses te:path-preference; 2948 uses te:k-requested-paths; 2949 } // container primary-path 2950 } // case primary-path 2951 case secondary-path { 2952 container secondary-path { 2953 description 2954 "TE secondary path"; 2955 uses te:path-preference; 2956 uses protection-restoration-properties; 2957 list primary-path-ref { 2958 min-elements 1; 2959 description 2960 "The list of primary paths that reference 2961 this path as a candidate secondary path"; 2962 choice primary-path-exist { 2963 description 2964 "Whether the path reference is to an existing 2965 te-tunnel path or to another path request"; 2966 case path-ref { 2967 leaf primary-path-ref { 2968 type leafref { 2969 path "/te:te/te:tunnels/te:tunnel[te:name" 2970 + "=current()/../../../tunnel-ref]/" 2971 + "te:primary-paths/te:primary-path/" 2972 + "te:name"; 2973 } 2974 must '../../../tunnel-ref' { 2975 description 2976 "The primary-path can be referenced 2977 if also the tunnel is referenced."; 2978 } 2979 mandatory true; 2980 description 2981 "The referenced primary path"; 2982 } 2983 } // case path-ref 2984 case path-request-ref { 2985 leaf path-request-ref { 2986 type leafref { 2987 path "/te:tunnels-path-compute/" 2988 + "te:path-compute-info/" 2989 + "te-pc:path-request/" 2990 + "te-pc:request-id"; 2991 } 2992 mandatory true; 2993 description 2994 "The referenced primary path request"; 2995 } 2996 } // case path-request-ref 2997 } // choice primary-path-exist 2998 } // list primary-path-ref 2999 } // container secondary-path 3000 } // case secondary-path 3001 case primary-reverse-path { 3002 container primary-reverse-path { 3003 description 3004 "TE primary reverse path"; 3005 choice primary-path-exist { 3006 description 3007 "Whether the path reference to the primary paths 3008 for which this path is the reverse-path is to 3009 an existing te-tunnel path or to another path 3010 request"; 3011 case path-ref { 3012 leaf primary-path-ref { 3013 type leafref { 3014 path "/te:te/te:tunnels/te:tunnel[te:name" 3015 + "=current()/../../tunnel-ref]/" 3016 + "te:primary-paths/te:primary-path/" 3017 + "te:name"; 3018 } 3019 must '../../tunnel-ref' { 3020 description 3021 "The primary-path can be referenced 3022 if also the tunnel is referenced."; 3023 } 3024 mandatory 
true; 3025 description 3026 "The referenced primary path"; 3027 } 3028 } // case path-ref 3029 case path-request-ref { 3030 leaf path-request-ref { 3031 type leafref { 3032 path "/te:tunnels-path-compute/" 3033 + "te:path-compute-info/" 3034 + "te-pc:path-request/" 3035 + "te-pc:request-id"; 3036 } 3037 mandatory true; 3038 description 3039 "The referenced primary path request"; 3040 } 3041 } // case path-request-ref 3042 } // choice primary-path-exist 3043 } // container primary-reverse-path 3044 } // case primary-reverse-path 3045 case secondary-reverse-path { 3046 container secondary-reverse-path { 3047 description 3048 "TE secondary reverse path"; 3049 uses te:path-preference; 3050 uses protection-restoration-properties; 3051 list primary-reverse-path-ref { 3052 min-elements 1; 3053 description 3054 "The list of primary reverse paths that 3055 reference this path as a candidate 3056 secondary reverse path"; 3057 choice primary-reverse-path-exist { 3058 description 3059 "Whether the path reference is to an existing 3060 te-tunnel path or to another path request"; 3061 case path-ref { 3062 leaf primary-path-ref { 3063 type leafref { 3064 path "/te:te/te:tunnels/te:tunnel[te:name" 3065 + "=current()/../../../tunnel-ref]/" 3066 + "te:primary-paths/te:primary-path/" 3067 + "te:name"; 3068 } 3069 must '../../../tunnel-ref' { 3070 description 3071 "The primary-path can be referenced 3072 if also the tunnel is referenced."; 3073 } 3074 mandatory true; 3075 description 3076 "The referenced primary path"; 3077 } 3078 } // case path-ref 3079 case path-request-ref { 3080 leaf path-request-ref { 3081 type leafref { 3082 path "/te:tunnels-path-compute/" 3083 + "te:path-compute-info/" 3084 + "te-pc:path-request/" 3085 + "te-pc:request-id"; 3086 } 3087 mandatory true; 3088 description 3089 "The referenced primary reverse path 3090 request"; 3091 } 3092 } // case path-request-ref 3094 } // choice primary-reverse-path-exist 3095 } // list primary-reverse-path-ref 3096 } // container secondary-reverse-path 3097 } // case secondary-reverse-path 3098 } // choice tunnel-path-role 3099 } // case reference 3100 case value { 3101 leaf tunnel-name { 3102 type string; 3103 description 3104 "TE tunnel name."; 3105 } 3106 choice path-role { 3107 default "primary-path"; 3108 description 3109 "Whether this path is a primary or a secondary path"; 3110 case primary-path { 3111 leaf primary-path-name { 3112 type string; 3113 description 3114 "TE path name."; 3115 } 3116 } // case primary-path 3117 case secondary-path { 3118 leaf secondary-path-name { 3119 type string; 3120 description 3121 "TE path name."; 3122 } 3123 } // case secondary-path 3124 } // choice path-role 3125 /* 3126 * Open issue: should protection-restoration-properties be moved 3127 * under secondary-path? 
3128        */
3129       uses te:k-requested-paths;
3130       uses protection-restoration-properties;
3131       uses tunnel-common-attributes;
3132       uses te-types:te-topology-identifier;
3134     } // case value
3135   } // choice tunnel-attributes
3136   uses te:path-compute-info;
3137   uses requested-info;
3138   uses requested-state;
3139 }
3140 list tunnel-attributes {
3141   key "tunnel-name";
3142   description
3143     "Tunnel attributes common to multiple request paths";
3144   leaf tunnel-name {
3145     type string;
3146     description
3147       "TE tunnel name.";
3148   }
3149   uses tunnel-common-attributes;
3150   uses te:tunnel-associations-properties;
3151   uses protection-restoration-properties;
3152   uses te-types:tunnel-constraints;
3153   uses te:tunnel-hierarchy-properties {
3154     augment "hierarchy/dependency-tunnels" {
3155       description
3156         "Augment with the list of dependency tunnel requests.";
3157       list dependency-tunnel-attributes {
3158         key "name";
3159         description
3160           "A tunnel request entry that this tunnel request can
3161            potentially depend on.";
3162         leaf name {
3163           type leafref {
3164             path "/te:tunnels-path-compute/"
3165                + "te:path-compute-info/te-pc:tunnel-attributes/"
3166                + "te-pc:tunnel-name";
3167           }
3168           description
3169             "Dependency tunnel request name.";
3170         }
3171         uses encoding-and-switching-type;
3172       }
3174     }
3175   }
3176 }
3177 uses synchronization-info;
3178 } // path-compute rpc input
3180 augment "/te:tunnels-path-compute/te:output/"
3181       + "te:path-compute-result" {
3182   description
3183     "Path Computation RPC output";
3184   list response {
3185     key "response-id";
3186     config false;
3187     description
3188       "response";
3189     leaf response-id {
3190       type uint32;
3191       description
3192         "The response-id has the same value of the
3193          corresponding request-id.";
3194     }
3195     uses te:path-computation-response;
3196     uses reported-state;
3197   }
3198 } // path-compute rpc output
3200 augment "/te:tunnels-actions/te:input/te:tunnel-info/"
3201       + "te:filter-type" {
3202   description
3203     "Augment Tunnels Action RPC input filter types";
3204   case path-compute-transactions {
3205     when "derived-from-or-self(../te:action-info/te:action, "
3206        + "'tunnel-action-path-compute-delete')";
3207     description
3208       "Path Delete Action RPC";
3209     leaf-list path-compute-transaction-id {
3210       type string;
3211       description
3212         "The list of the transaction-id values of the
3213          transient states to be deleted";
3214     }
3215   }
3216 } // path-delete rpc input
3218 augment "/te:tunnels-actions/te:output" {
3219   description
3220     "Augment Tunnels Action RPC input with path delete result";
3221   container path-computed-delete-result {
3222     description
3223       "Path Delete RPC output";
3224     leaf-list path-compute-transaction-id {
3225       type string;
3226       description
3227         "The list of the transaction-id values of the
3228          transient states that have been successfully deleted";
3229     }
3230   }
3231 } // path-delete rpc output
3232 }
3233
3235              Figure 12 - TE path computation YANG module
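   For illustration only, the following sketch shows how a client
   might invoke the path computation RPC over RESTCONF [RFC8040] with
   JSON encoding.  The controller address, the credentials and the
   abbreviated request body are hypothetical; the exact encoding of
   the RPC input follows the tree diagram in Section 6.1 and the
   RESTCONF rules of [RFC8040], and is not defined by this example.

      # Hypothetical example: invoking the ietf-te:tunnels-path-compute
      # RPC over RESTCONF (RFC 8040).  Host, credentials and the
      # abbreviated input body are illustrative only; the real body
      # must follow the RPC input tree augmented by this module.
      import json
      import requests  # generic HTTP client library

      RESTCONF_ROOT = "https://controller.example.com/restconf"

      rpc_input = {
          "ietf-te:input": {
              # Only a minimal skeleton of the augmented input is
              # shown; tunnel and path attributes are omitted.
              "path-compute-info": {
                  "ietf-te-path-computation:path-request": [
                      {"request-id": 1}
                  ]
              }
          }
      }

      url = RESTCONF_ROOT + "/operations/ietf-te:tunnels-path-compute"
      headers = {
          "Content-Type": "application/yang-data+json",
          "Accept": "application/yang-data+json",
      }
      # Credentials and TLS settings are deployment specific.
      reply = requests.post(url, headers=headers,
                            data=json.dumps(rpc_input),
                            auth=("user", "password"), timeout=30)
      reply.raise_for_status()
      # The reply body carries the path-compute-result with the
      # list of responses defined by this module.
      print(json.dumps(reply.json(), indent=2))

   The same request could equally be sent via NETCONF [RFC6241]; the
   encoding differs, but the data model is the same.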
7. Security Considerations

   This document describes use cases for requesting path computation
   using YANG data models, which could be used at the ABNO Control
   Interface [RFC7491] and/or between controllers in ACTN [RFC8453].
   As such, it does not introduce any new security considerations
   beyond those related to the YANG specification, the ABNO
   architecture and the ACTN framework, defined in [RFC7950],
   [RFC7491] and [RFC8453] respectively.

   The YANG module defined in this document is designed to be accessed
   via the NETCONF protocol [RFC6241] or the RESTCONF protocol
   [RFC8040].  The lowest NETCONF layer is the secure transport layer,
   and the mandatory-to-implement secure transport is Secure Shell
   (SSH) [RFC6242].  The lowest RESTCONF layer is HTTPS, and the
   mandatory-to-implement secure transport is TLS [RFC8446].

   This document also defines common data types using the YANG data
   modeling language.  The definitions themselves have no security
   impact on the Internet, but their usage in concrete YANG modules
   might have.  The security considerations spelled out in the YANG
   specification [RFC7950] apply to this document as well.

   The NETCONF access control model [RFC8341] provides the means to
   restrict access for particular NETCONF or RESTCONF users to a
   preconfigured subset of all available NETCONF or RESTCONF protocol
   operations and content.

   Note - The security analysis of each leaf is for further study.

8. IANA Considerations

   This document registers the following URI in the "ns" subregistry
   within the "IETF XML Registry" [RFC3688]:

      URI: urn:ietf:params:xml:ns:yang:ietf-te-path-computation
      Registrant Contact: The IESG.
      XML: N/A, the requested URI is an XML namespace.

   This document registers a YANG module in the "YANG Module Names"
   registry [RFC7950]:

      name:      ietf-te-path-computation
      namespace: urn:ietf:params:xml:ns:yang:ietf-te-path-computation
      prefix:    te-pc
      reference: this document

9. References

9.1. Normative References

   [RFC3688] Mealling, M., "The IETF XML Registry", RFC 3688, January
             2004.

   [RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label
             Switching (GMPLS) Architecture", RFC 3945, October 2004.

   [RFC5440] Vasseur, JP., Le Roux, JL. et al., "Path Computation
             Element (PCE) Communication Protocol (PCEP)", RFC 5440,
             March 2009.

   [RFC5441] Vasseur, JP., Ed., Zhang, R., Bitar, N., and JL. Le Roux,
             "A Backward-Recursive PCE-Based Computation (BRPC)
             Procedure to Compute Shortest Constrained Inter-Domain
             Traffic Engineering Label Switched Paths", RFC 5441,
             April 2009.

   [RFC5541] Le Roux, JL. et al., "Encoding of Objective Functions in
             the Path Computation Element Communication Protocol
             (PCEP)", RFC 5541, June 2009.

   [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J.,
             Ed., and A. Bierman, Ed., "Network Configuration Protocol
             (NETCONF)", RFC 6241, June 2011.

   [RFC6242] Wasserman, M., "Using the NETCONF Protocol over Secure
             Shell (SSH)", RFC 6242, June 2011.

   [RFC6991] Schoenwaelder, J., "Common YANG Data Types", RFC 6991,
             July 2013.

   [RFC7926] Farrel, A. et al., "Problem Statement and Architecture
             for Information Exchange Between Interconnected Traffic
             Engineered Networks", RFC 7926, July 2016.

   [RFC7950] Bjorklund, M., "The YANG 1.1 Data Modeling Language",
             RFC 7950, August 2016.

   [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
             Protocol", RFC 8040, January 2017.

   [RFC8340] Bjorklund, M. and L. Berger, Ed., "YANG Tree Diagrams",
             BCP 215, RFC 8340, March 2018.

   [RFC8341] Bierman, A. and M. Bjorklund, "Network Configuration
             Access Control Model", RFC 8341, March 2018.

   [RFC8446] Rescorla, E., "The Transport Layer Security (TLS)
             Protocol Version 1.3", RFC 8446, August 2018.

   [RFC8776] Saad, T., Gandhi, R., Liu, X., Beeram, V., and I.
             Bryskin, "Common YANG Data Types for Traffic
             Engineering", RFC 8776, June 2020.

   [RFC8795] Liu, X. et al., "YANG Data Model for Traffic Engineering
             (TE) Topologies", RFC 8795, August 2020.

   [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces", draft-ietf-teas-
             yang-te, work in progress.
9.2. Informative References

   [RFC4655] Farrel, A. et al., "A Path Computation Element (PCE)-
             Based Architecture", RFC 4655, August 2006.

   [RFC6805] King, D., Ed. and A. Farrel, Ed., "The Application of the
             Path Computation Element Architecture to the
             Determination of a Sequence of Domains in MPLS and
             GMPLS", RFC 6805, November 2012.

   [RFC7139] Zhang, F. et al., "GMPLS Signaling Extensions for Control
             of Evolving G.709 Optical Transport Networks", RFC 7139,
             March 2014.

   [RFC7446] Lee, Y. et al., "Routing and Wavelength Assignment
             Information Model for Wavelength Switched Optical
             Networks", RFC 7446, February 2015.

   [RFC7491] Farrel, A. and D. King, "A PCE-Based Architecture for
             Application-Based Network Operations", RFC 7491, March
             2015.

   [RFC8233] Dhody, D. et al., "Extensions to the Path Computation
             Element Communication Protocol (PCEP) to Compute Service-
             Aware Label Switched Paths (LSPs)", RFC 8233, September
             2017.

   [RFC8342] Bjorklund, M. et al., "Network Management Datastore
             Architecture (NMDA)", RFC 8342, March 2018.

   [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for
             Abstraction and Control of TE Networks (ACTN)", RFC 8453,
             August 2018.

   [RFC8454] Lee, Y. et al., "Information Model for Abstraction and
             Control of TE Networks (ACTN)", RFC 8454, September 2018.

   [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
             Transport Network Topology", draft-ietf-ccamp-otn-topo-
             yang, work in progress.

   [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interface
             for the optical transport network", June 2016.

Appendix A. Examples of dimensioning the "detailed connectivity matrix"

   The following table reports the list of possible constraints,
   together with their potential cardinality.

   The maximum number of potential connections to be computed and
   reported is, as a first approximation, the product of all of them.

   Constraint Cardinality
   ---------- -------------------------------------------------------

   End points N(N-1)/2 if connections are bidirectional (OTN and WDM),
              N(N-1) for unidirectional connections.

   Bandwidth  In WDM networks, bandwidth values are expressed in GHz.

              On fixed-grid WDM networks, the central frequencies are
              on a 50GHz grid and the channel width of the
              transmitters is typically 50GHz, such that each central
              frequency can be used, i.e., adjacent channels can be
              placed next to each other in terms of central
              frequencies.

              On flex-grid WDM networks, the central frequencies are
              on a 6.25GHz grid and the channel width of the
              transmitters can be a multiple of 12.5GHz.
              For fixed-grid WDM networks there is typically only one
              possible bandwidth value (i.e., 50GHz), while for flex-
              grid WDM networks there are typically 4 possible
              bandwidth values (e.g., 37.5GHz, 50GHz, 62.5GHz, 75GHz).

              In OTN (ODU) networks, bandwidth values are expressed as
              pairs of ODU type and, in the case of ODUflex, ODU rate
              in bytes/sec, as described in section 5 of [RFC7139].

              For "fixed" ODUk types, 6 bandwidth values are possible
              (i.e., ODU0, ODU1, ODU2, ODU2e, ODU3, ODU4).

              For ODUflex(GFP), up to 80 different bandwidth values
              can be specified, as defined in Table 7-8 of [ITU-T
              G.709-2016].

              For other ODUflex types, like ODUflex(CBR), the number
              of possible bandwidth values depends on the rates of the
              clients that could be mapped over these ODUflex types,
              as shown in Table 7.2 of [ITU-T G.709-2016], which in
              theory could be a continuum of values.  However, since
              different ODUflex bandwidths that use the same number of
              TSs on each link along the path are equivalent for path
              computation purposes, up to 120 different bandwidth
              ranges can be specified.

              Ideas to reduce the number of ODUflex bandwidth values
              in the detailed connectivity matrix to fewer than 100
              are for further study.

              Bandwidth specification for ODUCn is currently for
              further study, but it is expected that further bandwidth
              values can be specified as integer multiples of 100Gb/s.

              In IP networks, bandwidth values are expressed in
              bytes/sec.  In principle, this is a continuum of values,
              but in practice a set of bandwidth ranges can be
              identified, where any bandwidth value inside the same
              range produces the same path.  The number of such ranges
              is the cardinality, which depends on the topology,
              available bandwidth and status of the network.
              Simulations (Note: reference paper submitted for
              publication) show that values for medium-size topologies
              (around 50-150 nodes) are in the range 4-7 (5 on
              average) for each pair of end points.

   Metrics    IGP, TE and hop number are the basic objective metrics
              defined so far.  There are also the 2 objective
              functions defined in [RFC5541]: Minimum Load Path (MLP)
              and Maximum Residual Bandwidth Path (MBP).  Assuming
              that only one metric or objective function can be
              optimized at a time, the total cardinality here is 5.

              With [RFC8233], a number of additional metrics are
              defined, including the Path Delay metric, the Path Delay
              Variation metric and the Path Loss metric, both for
              point-to-point and point-to-multipoint paths.  This
              increases the cardinality to 8.

   Bounds     Each metric can be associated with a bound in order to
              find a path whose total value of that metric is lower
              than the given bound.  This has a potentially very high
              cardinality (as any value for the bound is allowed).  In
              practice, there is a maximum value of the bound (the one
              matching the maximum value of the associated metric)
              beyond which the same path is always returned, and a
              range approach, as for bandwidth in IP, can also be used
              in this case to derive the cardinality.  Assuming a
              cardinality similar to that of bandwidth (say, 5 on
              average), we would have 6 metrics (IGP, TE, hop, path
              delay, path delay variation and path loss; the two
              objective functions of [RFC5541] are not considered
              here, as they are conceived only for optimization) * 5 =
              30 as the cardinality.
   Technology
   constraints For further study.

   Priority   There are 8 values for the set-up priority, which is
              used in path computation to route a path using free
              resources and, where no free resources are available,
              resources used by LSPs having a lower holding priority.

   Local prot It is possible to ask for a locally protected service,
              where all the links used by the path are protected with
              fast reroute (this applies only to IP networks, but line
              protection schemes are available in the other
              technologies as well).  This adds an alternative path
              computation, so the cardinality of this constraint is 2.

   Administrative
   Colors     Administrative colors (a.k.a. affinities) are typically
              assigned to links, but when topology abstraction is used
              affinity information can also appear in the detailed
              connectivity matrix.

              There are 32 bits available for the affinities.  Links
              can be tagged with any combination of these bits, and
              path computation can be constrained to include or
              exclude any or all of them.  The relevant cardinality is
              3 (include-any, exclude-any, include-all) times the 2^32
              possible values.  However, the number of values actually
              used in real networks is quite small.

   Included Resources

              A path computation request can be associated with an
              ordered set of network resources (links, nodes) to be
              included along the computed path.  This constraint would
              have a huge cardinality, as in principle any combination
              of network resources is possible.  However, as long as
              the client does not know the details of the internal
              topology of the domain, it should not include this type
              of constraint at all (see more details below).

   Excluded Resources

              A path computation request can be associated with a set
              of network resources (links, nodes, SRLGs) to be
              excluded from the computed path.  Like included
              resources, this constraint has a potentially very high
              cardinality but, once again, it cannot actually be used
              by the client if the client is not aware of the domain
              topology (see more details below).

   As discussed above, the client can specify include or exclude
   resources depending on the abstract topology information that the
   underlying controller exposes:

   o  In case the underlying controller exposes the entire domain as a
      single abstract TE node with its own external terminations and
      detailed connectivity matrix (whose size we are estimating), no
      other topological details are available.  Therefore, the size of
      the detailed connectivity matrix only depends on the combination
      of the constraints that the client can use in a path computation
      request to its underlying controller.  These constraints cannot
      refer to any details of the internal topology of the domain, as
      those details are not known to the client, and so they do not
      impact the size of the exported detailed connectivity matrix.

   o  In case the underlying controller instead exposes a topology
      including more than one abstract TE node and TE link, together
      with their attributes (e.g., SRLGs and affinities for the
      links), the client knows these details and could therefore
      compute a path across the domain referring to them in the
      constraints.  The detailed connectivity matrices whose size
      needs to be estimated here are the ones relevant to the abstract
      TE nodes exported to the client.
      These detailed connectivity matrices, and therefore their sizes,
      cannot depend on the other abstract TE nodes and TE links, which
      are external to the given abstract node.  They could, however,
      depend on SRLGs (and other attributes, like affinities) that may
      also be present in the portion of the topology represented by
      the abstract node, and that therefore contribute to the size of
      the related detailed connectivity matrix.

   We also do not consider here the possibility of asking for more
   than one path in diversity, or for point-to-multipoint paths, which
   are for further study.

   Considering, for example, an IP domain without SRLGs and
   affinities, we have an estimated number of paths depending on the
   following estimated cardinalities:

      Endpoints = N*(N-1), Bandwidth = 5, Metrics = 6, Bounds = 20,
      Priority = 8, Local prot = 2

   The number of paths to be pre-computed by each IP domain is
   therefore 24960 * N(N-1), where N is the number of domain access
   points.

   This means that with just 4 access points there are nearly 300000
   paths to compute, advertise and maintain (if a change happens in
   the domain, due to a fault or just to the deployment of new
   traffic, a substantial number of paths needs to be recomputed and
   the relevant changes advertised to the client).

   This seems quite challenging.  In fact, if we assume a mean length
   of 1 Kbyte for the JSON describing a path (a quite conservative
   estimate), reporting 300000 paths means transferring and then
   parsing more than 300 Mbytes for each domain.  If we assume that
   20% (to be checked) of these paths change when a new deployment of
   traffic occurs, we have 60 Mbytes of transfer for each domain
   traversed by a new end-to-end path.  If a network has, let's say,
   20 domains (to estimate the load for a non-trivial domain set-up),
   a total initial transfer of 6 Gbytes is needed at the beginning
   and, assuming that 4-5 domains are involved on average in a path
   deployment, 240-300 Mbytes of changes would be advertised to the
   client for each deployment (the sketch at the end of this appendix
   reproduces these figures).

   More bare-bone solutions, removing some more options, can be
   investigated if this is considered unacceptable.  In conclusion, it
   seems that an approach based only on the information provided by
   the detailed connectivity matrix is hardly feasible and could be
   applicable only to small networks with a limited meshing degree
   between domains, giving up a number of path computation features.
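   The figures above can be reproduced with a short back-of-the-
   envelope script.  The sketch below is illustrative only and is not
   part of the data model; the function name is hypothetical and the
   default values simply restate the assumptions made in this appendix
   (24960 combinations per pair of access points, 1 Kbyte of JSON per
   path, 20% of paths changing, 20 domains, and 4-5 domains traversed
   per path set-up).

      # Back-of-the-envelope sizing of the detailed connectivity
      # matrix, using the assumptions stated in this appendix
      # (illustrative only).
      def estimate(access_points: int,
                   combos_per_pair: int = 24960,  # per end-point pair
                   kbytes_per_path: int = 1,      # mean JSON size
                   change_ratio: float = 0.20,    # share of changed paths
                   domains: int = 20,             # domains in the network
                   domains_per_setup: int = 5):   # domains per new path
          pairs = access_points * (access_points - 1)
          paths_per_domain = combos_per_pair * pairs
          mbytes_per_domain = paths_per_domain * kbytes_per_path / 1000
          return {
              "paths per domain": paths_per_domain,
              "initial transfer per domain (MByte)": mbytes_per_domain,
              "initial transfer, whole network (MByte)":
                  mbytes_per_domain * domains,
              "update per domain on new traffic (MByte)":
                  mbytes_per_domain * change_ratio,
              "update per path set-up (MByte)":
                  mbytes_per_domain * change_ratio * domains_per_setup,
          }

      # With 4 access points per domain this gives about 300000 paths
      # and roughly 300 Mbytes per domain, matching the figures above.
      for key, value in estimate(4).items():
          print(f"{key}: {value:,.0f}")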
Acknowledgments

   The authors would like to thank Igor Bryskin and Xian Zhang for
   participating in the initial discussions that triggered this work
   and for providing valuable insights.

   The authors would like to thank the authors of the TE tunnel YANG
   data model [TE-TUNNEL], in particular Igor Bryskin, Vishnu Pavan
   Beeram, Tarek Saad and Xufeng Liu, for their input to the
   discussions and their support in keeping the Path Computation and
   TE tunnel YANG data models consistent.

   The authors would like to thank Adrian Farrel, Dhruv Dhody, Igor
   Bryskin, Julien Meuric and Lou Berger for their valuable input to
   the discussions that clarified that the path being set up is not
   necessarily the same as the path that has been previously computed,
   and, in particular, Dhruv Dhody for his suggestion to describe the
   need for a path verification phase to check that the actual path
   being set up meets the required end-to-end metrics and constraints.

   The authors would like to thank Aihua Guo, Lou Berger, Shaolong
   Gan, Martin Bjorklund and Tom Petch for their useful comments on
   how to define XPath statements in YANG RPCs.

   The authors would like to thank Haomian Zheng, Yanlei Zheng, Tom
   Petch, Aihua Guo and Martin Bjorklund for their review of, and
   valuable comments on, this document.

   This document was prepared using 2-Word-v2.0.template.dot.

Contributors

   Dieter Beller
   Nokia
   Email: dieter.beller@nokia.com

   Gianmarco Bruno
   Ericsson
   Email: gianmarco.bruno@ericsson.com

   Francesco Lazzeri
   Ericsson
   Email: francesco.lazzeri@ericsson.com

   Young Lee
   Huawei
   Email: leeyoung@huawei.com

   Carlo Perocchio
   Ericsson
   Email: carlo.perocchio@ericsson.com

   Olivier Dugeon
   Orange Labs
   Email: olivier.dugeon@orange.com

   Julien Meuric
   Orange Labs
   Email: julien.meuric@orange.com

Authors' Addresses

   Italo Busi (Editor)
   Huawei
   Email: italo.busi@huawei.com

   Sergio Belotti (Editor)
   Nokia
   Email: sergio.belotti@nokia.com

   Victor Lopez
   Telefonica
   Email: victor.lopezalvarez@telefonica.com

   Oscar Gonzalez de Dios
   Telefonica
   Email: oscar.gonzalezdedios@telefonica.com

   Anurag Sharma
   Google
   Email: ansha@google.com

   Yan Shi
   China Unicom
   Email: shiyan49@chinaunicom.cn

   Ricard Vilalta
   CTTC
   Email: ricard.vilalta@cttc.es

   Karthik Sethuraman
   NEC
   Email: karthik.sethuraman@necam.com

   Michael Scharf
   Nokia
   Email: michael.scharf@gmail.com

   Daniele Ceccarelli
   Ericsson
   Email: daniele.ceccarelli@ericsson.com