1  TEAS Working Group                                    Italo Busi (Ed.)
2  Internet Draft                                                   Huawei
3  Intended status: Standard Track                     Sergio Belotti (Ed.)
4  Expires: January 2020                                              Nokia

6                                                              July 8, 2019

8               Yang model for requesting Path Computation
9              draft-ietf-teas-yang-path-computation-06.txt

11 Status of this Memo

13    This Internet-Draft is submitted in full conformance with the
14    provisions of BCP 78 and BCP 79.

16    Internet-Drafts are working documents of the Internet Engineering
17    Task Force (IETF), its areas, and its working groups.  Note that
18    other groups may also distribute working documents as Internet-
19    Drafts.

21    Internet-Drafts are draft documents valid for a maximum of six
22    months and may be updated, replaced, or obsoleted by other documents
23    at any time.  It is inappropriate to use Internet-Drafts as
24    reference material or to cite them other than as "work in progress."

26    The list of current Internet-Drafts can be accessed at
27    http://www.ietf.org/ietf/1id-abstracts.txt

29    The list of Internet-Draft Shadow Directories can be accessed at
30    http://www.ietf.org/shadow.html

32    This Internet-Draft will expire on January 8, 2020.

34 Copyright Notice

36    Copyright (c) 2019 IETF Trust and the persons identified as the
37    document authors. All rights reserved.

39    This document is subject to BCP 78 and the IETF Trust's Legal
40    Provisions Relating to IETF Documents
41    (http://trustee.ietf.org/license-info) in effect on the date of
42    publication of this document.
   Please review these documents
43    carefully, as they describe your rights and restrictions with
44    respect to this document. Code Components extracted from this
45    document must include Simplified BSD License text as described in
46    Section 4.e of the Trust Legal Provisions and are provided without
47    warranty as described in the Simplified BSD License.

49 Abstract

51    There are scenarios, typically in a hierarchical SDN context, where
52    the topology information provided by a TE network provider may not
53    be sufficient for its client to perform end-to-end path computation.
54    In these cases the client would need to request the provider to
55    calculate some (partial) feasible paths.

57    This document defines a YANG data model for a stateless RPC to
58    request path computation. This model complements the stateful
59    solution defined in the TE Tunnel YANG data model.

61    Moreover, this document describes some use cases where a path
62    computation request, via YANG-based protocols (e.g., NETCONF or
63    RESTCONF), may be needed.

65 Table of Contents

67    1. Introduction
68       1.1. Terminology
69    2. Use Cases
70       2.1. Packet/Optical Integration
71       2.2. Multi-domain TE Networks
72       2.3. Data center interconnections
73       2.4. Backward Recursive Path Computation scenario
74       2.5. Hierarchical PCE scenario
75    3. Motivations
76       3.1. Motivation for a YANG Model
77          3.1.1. Benefits of common data models
78          3.1.2. Benefits of a single interface
79          3.1.3. Extensibility
80       3.2. Interactions with TE Topology
81          3.2.1. TE Topology Aggregation
82          3.2.2. TE Topology Abstraction
83          3.2.3. Complementary use of TE topology and path computation
84       3.3. Stateless and Stateful Path Computation
85          3.3.1. Temporary reporting of the computed path state
86    4. Path Computation and Optimization for multiple paths
87    5. YANG Model for requesting Path Computation
88       5.1. Synchronization of multiple path computation requests
89       5.2. Returned metric values
90    6. YANG model for stateless TE path computation
91       6.1. YANG Tree
92       6.2. YANG Module
93    7. Security Considerations
94    8. IANA Considerations
95    9. References
96       9.1. Normative References
97       9.2. Informative References
98    10. Acknowledgments
99    Appendix A. Examples of dimensioning the "detailed connectivity
100      matrix"

102 1. Introduction

104    There are scenarios, typically in a hierarchical SDN context, where
105    the topology information provided by a TE network provider may not
106    be sufficient for its client to perform end-to-end path computation.
107    In these cases the client would need to request the provider to
108    calculate some (partial) feasible paths, complementing its topology
109    knowledge, to make its end-to-end path computation feasible.

111    This type of scenario can apply to different interfaces in
112    different reference architectures:

114    o  ABNO control interface [RFC7491], in which an Application Service
115       Coordinator can request the ABNO Controller to take charge of
116       path computation (see Figure 1 in [RFC7491]).

118    o  ACTN [RFC8453], where a controller hierarchy is defined and the
119       need for path computation arises on both the CMI (the interface
120       between the Customer Network Controller (CNC) and the Multi-
121       Domain Service Coordinator (MDSC)) and the MPI (the interface
122       between the MDSC and the Provisioning Network Controller (PNC)).
123       [RFC8454] describes an information model for the Path
124       Computation request.

125    Multiple protocol solutions can be used for communication between
126    different controller hierarchical levels. This document assumes that
127    the controllers are communicating using YANG-based protocols (e.g.,
128    NETCONF or RESTCONF).

130    Path Computation Elements, Controllers and Orchestrators perform
131    their operations based on Traffic Engineering Databases (TED). Such
132    TEDs can be described, in a technology agnostic way, with the YANG
133    Data Model for TE Topologies [TE-TOPO]. Furthermore, the technology
134    specific details of the TED are modeled in the augmented TE topology
135    models (e.g., [OTN-TOPO] for OTN ODU technologies).

137    The availability of such topology models allows providing the TED
138    using YANG-based protocols (e.g., NETCONF or RESTCONF). Furthermore,
139    it enables a PCE/Controller to perform the necessary abstractions or
140    modifications and to offer this customized topology to another
141    PCE/Controller or higher-level orchestrator.

143    Note: This document assumes that the client of the YANG data model
144    defined in this document may not implement a "PCE" functionality, as
145    defined in [RFC4655].

147    The tunnels that can be provided over the networks described with
148    the topology models can also be set up, deleted and modified via
149    YANG-based protocols (e.g., NETCONF or RESTCONF) using the TE-Tunnel
150    YANG model [TE-TUNNEL].

152    This document proposes a YANG model for a path computation request
153    defined as a stateless RPC, which complements the stateful solution
154    defined in [TE-TUNNEL].

156    Moreover, this document describes some use cases where a path
157    computation request, via YANG-based protocols (e.g., NETCONF or
158    RESTCONF), may be needed.

160 1.1. Terminology

162    TED: The traffic engineering database is a collection of all TE
163    information about all TE nodes and TE links in a given network.

165    PCE: A Path Computation Element (PCE) is an entity that is capable
166    of computing a network path or route based on a network graph, and
167    of applying computational constraints during the computation. The
168    PCE entity is an application that can be located within a network
169    node or component, on an out-of-network server, etc. For example, a
170    PCE would be able to compute the path of a TE LSP by operating on
171    the TED and considering bandwidth and other constraints applicable
172    to the TE LSP service request. [RFC4655]

174 2. Use Cases

176    This section presents some use cases where a client needs to
177    request path computation from the underlying SDN controllers.
179 The use of the YANG model defined in this document is not restricted 180 to these use cases but can be used in any other use case when deemed 181 useful. 183 The presented uses cases have been grouped, depending on the 184 different underlying topologies: a) Packet-Optical integration; b) 185 Multi-domain Traffic Engineered (TE) Networks; and c) Data center 186 interconnections. Use cases d) and e) respectively present how to 187 apply this Yang model for standard multi-domain PCE i.e. Backward 188 Recursive Path Computation [RFC5441] and Hierarchical PCE [RFC6805]. 190 2.1. Packet/Optical Integration 192 In this use case, an Optical network is used to provide connectivity 193 to some nodes of a Packet network (see Figure 1). 195 +----------------+ 196 | | 197 | Packet/Optical | 198 | Coordinator | 199 | | 200 +---+------+-----+ 201 | | 202 +------------+ | 203 | +-----------+ 204 +------V-----+ | 205 | | +------V-----+ 206 | Packet | | | 207 | Network | | Optical | 208 | Controller | | Network | 209 | | | Controller | 210 +------+-----+ +-------+----+ 211 | | 212 .........V......................... | 213 : Packet Network : | 214 +----+ +----+ | 215 | R1 |= = = = = = = = = = = = = = = =| R2 | | 216 +-+--+ +--+-+ | 217 | : : | | 218 | :................................ : | | 219 | | | 220 | +-----+ | | 221 | ...........| Opt |........... | | 222 | : | C | : | | 223 | : /+--+--+\ : | | 224 | : / | \ : | | 225 | : / | \ : | | 226 | +-----+ / +--+--+ \ +-----+ | | 227 | | Opt |/ | Opt | \| Opt | | | 228 +---| A | | D | | B |---+ | 229 +-----+\ +--+--+ /+-----+ | 230 : \ | / : | 231 : \ | / : | 232 : \ +--+--+ / Optical<---------+ 233 : \| Opt |/ Network: 234 :..........| E |..........: 235 +-----+ 237 Figure 1 - Packet/Optical Integration Use Case 239 Figure 1 as well as Figure 2 below only show a partial view of the 240 packet network connectivity, before additional packet connectivity 241 is provided by the Optical network. 243 It is assumed that the Optical network controller provides to the 244 packet/optical coordinator an abstracted view of the Optical 245 network. A possible abstraction could be to represent the whole 246 optical network as one "virtual node" with "virtual ports" connected 247 to the access links, as shown in Figure 2. 249 It is also assumed that Packet network controller can provide the 250 packet/optical coordinator the information it needs to setup 251 connectivity between packet nodes through the Optical network (e.g., 252 the access links). 254 The path computation request helps the coordinator to know the real 255 connections that can be provided by the optical network. 257 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,. 258 , Packet/Optical Coordinator view , 259 , +----+ , . 260 , | | , 261 , | R2 | , . 262 , +----+ +------------ + /+----+ , 263 , | | | |/-----/ / / , . 264 , | R1 |--O VP1 VP4 O / / , 265 , | |\ | | /----/ / , . 266 , +----+ \| |/ / , 267 , / O VP2 VP5 O / , . 268 , / | | +----+ , 269 , / | | | | , . 270 , / O VP3 VP6 O--| R4 | , 271 , +----+ /-----/|_____________| +----+ , . 272 , | |/ +------------ + , 273 , | R3 | , . 274 , +----+ ,,,,,,,,,,,,,,,,, 275 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ,. 276 . Packet Network Controller view +----+ , 277 only packet nodes and packet links | | , . 278 . with access links to the optical network | R2 | , 279 , +----+ /+----+ , . 280 . , | | /-----/ / / , 281 , | R1 |--- / / , . 282 . , +----+\ /----/ / , 283 , / \ / / , . 284 . , / / , 285 , / +----+ , . 286 . , / | | , 287 , / ---| R4 | , . 288 . 
, +----+ /-----/ +----+ , 289 , | |/ , . 290 . , | R3 | , 291 , +----+ ,,,,,,,,,,,,,,,,,. 292 .,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , 293 Optical Network Controller view , . 294 . only optical nodes, +--+ , 295 optical links and /|OF| , . 296 . access links from the +--++--+ / , 297 packet network |OA| \ /-----/ / , . 298 . , ---+--+--\ +--+/ / , 299 , \ | \ \-|OE|-------/ , . 300 . , \ | \ /-+--+ , 301 , \+--+ X | , . 303 . , |OB|-/ \ | , 304 , +--+-\ \+--+ , . 305 . , / \ \--|OD|--- , 306 , /-----/ +--+ +--+ , . 307 . , / |OC|/ , 308 , +--+ , . 309 ., ,,,,,,,,,,,,,,,,,, 310 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , 311 . Actual Physical View +----+ , 312 , +--+ | | , 313 . , /|OF| | R2 | , 314 , +----+ +--++--+ /+----+ , 315 . , | | |OA| \ /-----/ / / , 316 , | R1 |---+--+--\ +--+/ / / , 317 . , +----+\ | \ \-|OE|-------/ / , 318 , / \ | \ /-+--+ / , 319 . , / \+--+ X | / , 320 , / |OB|-/ \ | +----+ , 321 . , / +--+-\ \+--+ | | , 322 , / / \ \--|OD|---| R4 | , 323 . , +----+ /-----/ +--+ +--+ +----+ , 324 , | |/ |OC|/ , 325 . , | R3 | +--+ , 326 , +----+ , 327 .,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 329 Figure 2 - Packet and Optical Topology Abstractions 331 In this use case, the coordinator needs to setup an optimal 332 underlying path for an IP link between R1 and R2. 334 As depicted in Figure 2, the coordinator has only an "abstracted 335 view" of the physical network, and it does not know the feasibility 336 or the cost of the possible optical paths (e.g., VP1-VP4 and VP2- 337 VP5), which depend from the current status of the physical resources 338 within the optical network and on vendor-specific optical 339 attributes. 341 The coordinator can request the underlying Optical domain controller 342 to compute a set of potential optimal paths, taking into account 343 optical constraints. Then, based on its own constraints, policy and 344 knowledge (e.g. cost of the access links), it can choose which one 345 of these potential paths to use to setup the optimal end-to-end path 346 crossing optical network. 348 ............................ 349 : : 350 O VP1 VP4 O 351 cost=10 /:\ /:\ cost=10 352 / : \----------------------/ : \ 353 +----+ / : cost=50 : \ +----+ 354 | |/ : : \| | 355 | R1 | : : | R2 | 356 | |\ : : /| | 357 +----+ \ : /--------------------\ : / +----+ 358 \ : / cost=55 \ : / 359 cost=5 \:/ \:/ cost=5 360 O VP2 VP5 O 361 : : 362 :..........................: 364 Figure 3 - Packet/Optical Path Computation Example 366 For example, in Figure 3, the Coordinator can request the Optical 367 network controller to compute the paths between VP1-VP4 and VP2-VP5 368 and then decide to setup the optimal end-to-end path using the VP2- 369 VP5 Optical path even this is not the optimal path from the Optical 370 domain perspective. 372 Considering the dynamicity of the connectivity constraints of an 373 Optical domain, it is possible that a path computed by the Optical 374 network controller when requested by the Coordinator is no longer 375 valid/available when the Coordinator requests it to be setup up. 376 This is further discussed in section 3.3. 378 2.2. Multi-domain TE Networks 380 In this use case there are two TE domains which are interconnected 381 together by multiple inter-domains links. 383 A possible example could be a multi-domain optical network. 
385 +--------------+ 386 | Multi-domain | 387 | Controller | 388 +---+------+---+ 389 | | 390 +------------+ | 391 | +-----------+ 392 +------V-----+ | 393 | | | 394 | TE Domain | +------V-----+ 395 | Controller | | | 396 | 1 | | TE Domain | 397 +------+-----+ | Controller | 398 | | 2 | 399 | +------+-----+ 400 .........V.......... | 401 : : | 402 +-----+ : | 403 | | : .........V.......... 404 | X | : : : 405 | | +-----+ +-----+ : 406 +-----+ | | | | : 407 : | C |------| E | : 408 +-----+ +-----+ /| | | |\ +-----+ +-----+ 409 | | | |/ +-----+ +-----+ \| | | | 410 | A |----| B | : : | G |----| H | 411 | | | |\ : : /| | | | 412 +-----+ +-----+ \+-----+ +-----+/ +-----+ +-----+ 413 : | | | | : 414 : | D |------| F | : 415 : | | | | +-----+ 416 : +-----+ +-----+ | | 417 : : : | Y | 418 : : : | | 419 : Domain 1 : : Domain 2 +-----+ 420 :..................: :.................: 422 Figure 4 - Multi-domain multi-link interconnection 424 In order to setup an end-to-end multi-domain TE path (e.g., between 425 nodes A and H), the multi-domain controller needs to know the 426 feasibility or the cost of the possible TE paths within the two TE 427 domains, which depend from the current status of the physical 428 resources within each TE network. This is more challenging in case 429 of optical networks because the optimal paths depend also on vendor- 430 specific optical attributes (which may be different in the two 431 domains if they are provided by different vendors). 433 In order to setup a multi-domain TE path (e.g., between nodes A and 434 H), the multi-domain controller can request the TE domain 435 controllers to compute a set of intra-domain optimal paths and take 436 decisions based on the information received. For example: 438 o The multi-domain controller asks TE domain controllers to provide 439 set of paths between A-C, A-D, E-H and F-H 441 o TE domain controllers return a set of feasible paths with the 442 associated costs: the path A-C is not part of this set(in optical 443 networks, it is typical to have some paths not being feasible due 444 to optical constraints that are known only by the optical domain 445 controller) 447 o The multi-domain controller will select the path A-D-F-H since it 448 is the only feasible multi-domain path and then request the TE 449 domain controllers to setup the A-D and F-H intra-domain paths 451 o If there are multiple feasible paths, the multi-domain controller 452 can select the optimal path knowing the cost of the intra-domain 453 paths (provided by the TE domain controllers) and the cost of the 454 inter-domain links (known by the multi-domain controller) 456 This approach may have some scalability issues when the number of TE 457 domains is quite big (e.g. 20). 459 In this case, it would be worthwhile using the abstract TE topology 460 information provided by the TE domain controllers to limit the 461 number of potential optimal end-to-end paths and then request path 462 computation to fewer TE domain controllers in order to decide what 463 the optimal path within this limited set is. 465 For more details, see section 3.2.3. 467 2.3. Data center interconnections 469 In these use case, there is a TE domain which is used to provide 470 connectivity between data centers which are connected with the TE 471 domain using access links. 
473 +--------------+ 474 | Cloud Network| 475 | Orchestrator | 476 +--------------+ 477 | | | | 478 +-------------+ | | +------------------------+ 479 | | +------------------+ | 480 | +--------V---+ | | 481 | | | | | 482 | | TE Network | | | 483 +------V-----+ | Controller | +------V-----+ | 484 | DC | +------------+ | DC | | 485 | Controller | | | Controller | | 486 +------------+ | +-----+ +------------+ | 487 | ....V...| |........ | | 488 | : | P | : | | 489 .....V..... : /+-----+\ : .....V..... | 490 : : +-----+ / | \ +-----+ : : | 491 : DC1 || : | |/ | \| | : DC2 || : | 492 : ||||----| PE1 | | | PE2 |---- |||| : | 493 : _|||||| : | |\ | /| | : _|||||| : | 494 : : +-----+ \ +-----+ / +-----+ : : | 495 :.........: : \| |/ : :.........: | 496 :.......| PE3 |.......: | 497 | | | 498 +-----+ +---------V--+ 499 .....|..... | DC | 500 : : | Controller | 501 : DC3 || : +------------+ 502 : |||| : | 503 : _|||||| <------------------+ 504 : : 505 :.........: 507 Figure 5 - Data Center Interconnection Use Case 509 In this use case, there is need to transfer data from Data Center 1 510 (DC1) to either DC2 or DC3 (e.g. workload migration). 512 The optimal decision depends both on the cost of the TE path (DC1- 513 DC2 or DC1-DC3) and of the data center resources within DC2 or DC3. 515 The cloud network orchestrator needs to make a decision for optimal 516 connection based on TE Network constraints and data centers 517 resources. It may not be able to make this decision because it has 518 only an abstract view of the TE network (as in use case in 2.1). 520 The cloud network orchestrator can request to the TE network 521 controller to compute the cost of the possible TE paths (e.g., DC1- 522 DC2 and DC1-DC3) and to the DC controller to provide the information 523 it needs about the required data center resources within DC2 and DC3 524 and then it can take the decision about the optimal solution based 525 on this information and its policy. 527 2.4. Backward Recursive Path Computation scenario 529 [RFC5441] has defined the Virtual Source Path Tree (VSPT) TLV within 530 PCE Reply Object in order to compute inter-domain paths following a 531 "Backward Recursive Path Computation" (BRPC) method. The main 532 principle is to forward the PCE request message up to the 533 destination domain. Then, each PCE involved in the computation will 534 compute its part of the path and send it back to the requester 535 through PCE Response message. The resulting computation is spread 536 from destination PCE to source PCE. Each PCE is in charge of merging 537 the path it received with the one it calculated. At the end, the 538 source PCE merges its local part of the path with the received one 539 to achieve the end-to-end path. 541 Figure 6 below show a typical BRPC scenario where 3 PCEs cooperate 542 to compute inter-domain paths. 544 +----------------+ +----------------+ 545 | Domain (B) | | Domain (C) | 546 | | | | 547 | /-------|---PCEP---|--------\ | 548 | / | | \ | 549 | (PCE) | | (PCE) | 550 | / <----------> | 551 | / | Inter | | 552 +---|----^-------+ Domain +----------------+ 553 | | Link 554 PCEP | 555 | | Inter-domain Link 556 | | 557 +---|----v-------+ 558 | | | 559 | | Domain (A) | 560 | \ | 561 | (PCE) | 562 | | 563 | | 564 +----------------+ 565 Figure 6 - BRPC Scenario 567 In this use case, a client can use the YANG model defined in this 568 document to request path computation to the PCE that controls the 569 source of the tunnel. 
For example, a client can request to the PCE 570 of domain A to compute a path from a source S, within domain A, to a 571 destination D, within domain C. Then PCE of domain A will use PCEP 572 protocol, as per [RFC5441], to compute the path from S to D and in 573 turn gives the final answer to the requester. 575 2.5. Hierarchical PCE scenario 577 [RFC6805] has defined an architecture and extensions to the PCE 578 standard to compute inter-domain path following a hierarchical 579 method. Two new roles have been defined: Parent PCE and child PCE. 580 The parent PCE is in charge to coordinate the end-to-end path 581 computation. For that purpose it sends to each child PCE involve in 582 the multi-domain path computation a PCE Request message to obtain 583 the local part of the path. Once received all answer through PCE 584 Response message, the Parent PCE will merge the different local 585 parts of the path to achieve the end-to-end path. 587 Figure 7 below shows a typical hierarchical scenario where a Parent 588 PCE request end-to-end path to the different child PCE. Note that a 589 PCE could take independently the role of Child or Parent PCE 590 depending of which PCE will request the path. 592 ----------------------------------------------------------------- 593 | Domain 5 | 594 | ----- | 595 | |PCE 5| | 596 | ----- | 597 | | 598 | ---------------- ---------------- ---------------- | 599 | | Domain 1 | | Domain 2 | | Domain 3 | | 600 | | | | | | | | 601 | | ----- | | ----- | | ----- | | 602 | | |PCE 1| | | |PCE 2| | | |PCE 3| | | 603 | | ----- | | ----- | | ----- | | 604 | | | | | | | | 605 | | ----| |---- ----| |---- | | 606 | | |BN11+---+BN21| |BN23+---+BN31| | | 607 | | - ----| |---- ----| |---- - | | 608 | | |S| | | | | |D| | | 609 | | - ----| |---- ----| |---- - | | 610 | | |BN12+---+BN22| |BN24+---+BN32| | | 611 | | ----| |---- ----| |---- | | 612 | | | | | | | | 613 | | ---- | | | | ---- | | 614 | | |BN13| | | | | |BN33| | | 615 | -----------+---- ---------------- ----+----------- | 616 | \ / | 617 | \ ---------------- / | 618 | \ | | / | 619 | \ |---- ----| / | 620 | ----+BN41| |BN42+---- | 621 | |---- ----| | 622 | | | | 623 | | ----- | | 624 | | |PCE 4| | | 625 | | ----- | | 626 | | | | 627 | | Domain 4 | | 628 | ---------------- | 629 | | 630 ----------------------------------------------------------------- 631 Figure 7 - Hierarchical domain topology from [RFC6805] 633 In this use case, a client can use the YANG model defined in this 634 document to request to the Parent PCE a path from a source S to a 635 destination D. The Parent PCE will in turn contact the child PCEs 636 through PCEP protocol to compute the end-to-end path and then return 637 the computed path to the client, using the YANG model defined in 638 this document. For example the YANG model can be used to request to 639 PCE5 acting as Parent PCE to compute a path from source S, within 640 domain 1, to destination D, within domain 3. PCE5 will contact child 641 PCEs of domain 1, 2 and 3 to obtain local part of the end-to-end 642 path through the PCEP protocol. Once received the PCE Response 643 message, it merges the answers to compute the end-to-end path and 644 send it back to the client. 646 3. Motivations 648 This section provides the motivation for the YANG model defined in 649 this document. 651 Section 3.1 describes the motivation for a YANG model to request 652 path computation. 654 Section 3.2 describes the motivation for a YANG model which 655 complements the TE Topology YANG model defined in [TE-TOPO]. 
657 Section 3.3 describes the motivation for a stateless YANG RPC which 658 complements the TE Tunnel YANG model defined in [TE-TUNNEL]. 660 3.1. Motivation for a YANG Model 662 3.1.1. Benefits of common data models 664 The YANG data model for requesting path computation is closely 665 aligned with the YANG data models that provide (abstract) TE 666 topology information, i.e., [TE-TOPO] as well as that are used to 667 configure and manage TE Tunnels, i.e., [TE-TUNNEL]. 669 There are many benefits in aligning the data model used for path 670 computation requests with the YANG data models used for TE topology 671 information and for TE Tunnels configuration and management: 673 o There is no need for an error-prone mapping or correlation of 674 information. 676 o It is possible to use the same endpoint identifiers in path 677 computation requests and in the topology modeling. 679 o The attributes used for path computation constraints are the same 680 as those used when setting up a TE Tunnel. 682 3.1.2. Benefits of a single interface 684 The system integration effort is typically lower if a single, 685 consistent interface is used by controllers, i.e., one data modeling 686 language (i.e., YANG) and a common protocol (e.g., NETCONF or 687 RESTCONF). 689 Practical benefits of using a single, consistent interface include: 691 1. Simple authentication and authorization: The interface between 692 different components has to be secured. If different protocols 693 have different security mechanisms, ensuring a common access 694 control model may result in overhead. For instance, there may be 695 a need to deal with different security mechanisms, e.g., 696 different credentials or keys. This can result in increased 697 integration effort. 699 2. Consistency: Keeping data consistent over multiple different 700 interfaces or protocols is not trivial. For instance, the 701 sequence of actions can matter in certain use cases, or 702 transaction semantics could be desired. While ensuring 703 consistency within one protocol can already be challenging, it is 704 typically cumbersome to achieve that across different protocols. 706 3. Testing: System integration requires comprehensive testing, 707 including corner cases. The more different technologies are 708 involved, the more difficult it is to run comprehensive test 709 cases and ensure proper integration. 711 4. Middle-box friendliness: Provider and consumer of path 712 computation requests may be located in different networks, and 713 middle-boxes such as firewalls, NATs, or load balancers may be 714 deployed. In such environments it is simpler to deploy a single 715 protocol. Also, it may be easier to debug connectivity problems. 717 5. Tooling reuse: Implementers may want to implement path 718 computation requests with tools and libraries that already exist 719 in controllers and/or orchestrators, e.g., leveraging the rapidly 720 growing eco-system for YANG tooling. 722 3.1.3. Extensibility 724 Path computation is only a subset of the typical functionality of a 725 controller. In many use cases, issuing path computation requests 726 comes along with the need to access other functionality on the same 727 system. In addition to obtaining TE topology, for instance also 728 configuration of services (setup/modification/deletion) may be 729 required, as well as: 731 1. Receiving notifications for topology changes as well as 732 integration with fault management 734 2. Performance management such as retrieving monitoring and 735 telemetry data 737 3. 
Service assurance, e.g., by triggering OAM functionality 739 4. Other fulfilment and provisioning actions beyond tunnels and 740 services, such as changing QoS configurations 742 YANG is a very extensible and flexible data modeling language that 743 can be used for all these use cases. 745 3.2. Interactions with TE Topology 747 The use cases described in section 2 have been described assuming 748 that the topology view exported by each underlying SDN controller to 749 the orchestrator is aggregated using the "virtual node model", 750 defined in [RFC7926]. 752 TE Topology information, e.g., as provided by [TE-TOPO], could in 753 theory be used by an underlying SDN controllers to provide TE 754 information to its client thus allowing a PCE available within its 755 client to perform multi-domain path computation by its own, without 756 requesting path computations to the underlying SDN controllers. 758 In case the client does not implement a PCE function, as discussed 759 in section 1, it could not perform path computation based on TE 760 Topology information and would instead need to request path 761 computation to the underlying controllers to get the information it 762 needs to compute the optimal end-to-end path. 764 This section analyzes the need for a client to request underlying 765 SDN controllers for path computation even in case it implements a 766 PCE functionality, as well as how the TE Topology information and 767 the path computation can be complementary. 769 In nutshell, there is a scalability trade-off between providing all 770 the TE information needed by PCE, when implemented by the client, to 771 take optimal path computation decisions by its own versus sending 772 too many requests to underlying SDN Domain Controllers to compute a 773 set of feasible optimal intra-domain TE paths. 775 3.2.1. TE Topology Aggregation 777 Using the TE Topology model, as defined in [TE-TOPO], the underlying 778 SDN controller can export the whole TE domain as a single abstract 779 TE node with a "detailed connectivity matrix". 781 The concept of a "detailed connectivity matrix" is defined in [TE- 782 TOPO] to provide specific TE attributes (e.g., delay, SRLGs and 783 summary TE metrics) as an extension of the "basic connectivity 784 matrix", which is based on the "connectivity matrix" defined in 785 [RFC7446]. 787 The information provided by the "detailed connectivity matrix" would 788 be equivalent to the information that should be provided by "virtual 789 link model" as defined in [RFC7926]. 791 For example, in the Packet/Optical integration use case, described 792 in section 2.1, the Optical network controller can make the 793 information shown in Figure 3 available to the Coordinator as part 794 of the TE Topology information and the Coordinator could use this 795 information to calculate by its own the optimal path between R1 and 796 R2, without requesting any additional information to the Optical 797 network Controller. 799 However, when designing the amount of information to provide within 800 the "detailed connectivity matrix", there is a tradeoff to be 801 considered between accuracy (i.e., providing "all" the information 802 that might be needed by the PCE available to Orchestrator) and 803 scalability. 805 Figure 8 below shows another example, similar to Figure 3, where 806 there are two possible Optical paths between VP1 and VP4 with 807 different properties (e.g., available bandwidth and cost). 809 ............................ 
810 : /--------------------\ : 811 : / cost=65 \ : 812 :/ available-bw=10G \: 813 O VP1 VP4 O 814 cost=10 /:\ /:\ cost=10 815 / : \----------------------/ : \ 816 +----+ / : cost=50 : \ +----+ 817 | |/ : available-bw=2G : \| | 818 | R1 | : : | R2 | 819 | |\ : : /| | 820 +----+ \ : /--------------------\ : / +----+ 821 \ : / cost=55 \ : / 822 cost=5 \:/ available-bw=3G \:/ cost=5 823 O VP2 VP5 O 824 : : 825 :..........................: 827 Figure 8 - Packet/Optical Path Computation Example with multiple 828 choices 830 Reporting all the information, as in Figure 8, using the "detailed 831 connectivity matrix", is quite challenging from a scalability 832 perspective. The amount of this information is not just based on 833 number of end points (which would scale as N-square), but also on 834 many other parameters, including client rate, user 835 constraints/policies for the service, e.g. max latency < N ms, max 836 cost, etc., exclusion policies to route around busy links, min OSNR 837 margin, max preFEC BER etc. All these constraints could be different 838 based on connectivity requirements. 840 Examples of how the "detailed connectivity matrix" can be 841 dimensioned are described in Appendix A. 843 It is also worth noting that the "connectivity matrix" has been 844 originally defined in WSON, [RFC7446], to report the connectivity 845 constrains of a physical node within the WDM network: the 846 information it contains is pretty "static" and therefore, once taken 847 and stored in the TE data base, it can be always being considered 848 valid and up-to-date in path computation request. 850 Using the "basic connectivity matrix" with an abstract node to 851 abstract the information regarding the connectivity constraints of 852 an Optical domain, would make this information more "dynamic" since 853 the connectivity constraints of an Optical domain can change over 854 time because some optical paths that are feasible at a given time 855 may become unfeasible at a later time when e.g., another optical 856 path is established. The information in the "detailed connectivity 857 matrix" is even more dynamic since the establishment of another 858 optical path may change some of the parameters (e.g., delay or 859 available bandwidth) in the "detailed connectivity matrix" while not 860 changing the feasibility of the path. 862 The "connectivity matrix" is sometimes confused with optical reach 863 table that contain multiple (e.g. k-shortest) regen-free reachable 864 paths for every A-Z node combination in the network. Optical reach 865 tables can be calculated offline, utilizing vendor optical design 866 and planning tools, and periodically uploaded to the Controller: 867 these optical path reach tables are fairly static. However, to get 868 the connectivity matrix, between any two sites, either a regen free 869 path can be used, if one is available, or multiple regen free paths 870 are concatenated to get from src to dest, which can be a very large 871 combination. Additionally, when the optical path within optical 872 domain needs to be computed, it can result in different paths based 873 on input objective, constraints, and network conditions. In summary, 874 even though "optical reachability table" is fairly static, which 875 regen free paths to build the connectivity matrix between any source 876 and destination is very dynamic, and is done using very 877 sophisticated routing algorithms. 
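
   As a purely illustrative sketch of the dimensioning issue discussed
   above (examples are described in Appendix A), the following Python
   fragment estimates how many entries a "detailed connectivity matrix"
   would need if one entry were pre-computed, and kept updated, for every
   combination of end points, client rate and constraint profile. All the
   input values are hypothetical and are only meant to show the
   combinatorial growth of the information to be exported and refreshed.

      # Illustrative only: rough sizing of a "detailed connectivity
      # matrix" when one entry is pre-computed per (source, destination,
      # client rate, constraint profile) combination. All the values
      # below are hypothetical.

      def detailed_matrix_entries(num_endpoints: int,
                                  num_client_rates: int,
                                  num_constraint_profiles: int) -> int:
          """Number of entries to pre-compute and keep up-to-date."""
          endpoint_pairs = num_endpoints * (num_endpoints - 1)  # ordered pairs
          return endpoint_pairs * num_client_rates * num_constraint_profiles

      # Hypothetical abstract node with 20 access points, 4 client rates
      # and 5 constraint profiles (e.g., max latency, max cost, exclusion
      # policy, min OSNR margin, max preFEC BER):
      print(detailed_matrix_entries(20, 4, 5))   # 7600 entries
      # Doubling the access points roughly quadruples the entries:
      print(detailed_matrix_entries(40, 4, 5))   # 31200 entries

   A path computation request, by contrast, only computes the entries
   actually needed for a given request, which is the motivation for the
   complementary use of the two mechanisms discussed in section 3.2.3.
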
879    There is therefore the need to keep the information in the "detailed
880    connectivity matrix" updated, which means that there is another
881    tradeoff between accuracy (i.e., providing "all" the information
882    that might be needed by the client's PCE) and having up-to-date
883    information. The more information is provided, the longer it takes
884    to keep it up-to-date, which increases the likelihood that the
885    client's PCE computes paths using outdated information.

887    It seems therefore quite challenging to have a "detailed
888    connectivity matrix" that provides accurate, scalable and updated
889    information to allow the client's PCE to take optimal decisions on
890    its own.

892    Instead, if the information in the "detailed connectivity matrix" is
893    not complete/accurate, the following drawbacks can occur,
894    considering for example the case in Figure 8:

896    o  If only the VP1-VP4 path with available bandwidth of 2 Gb/s and
897       cost 50 is reported, the client's PCE will fail to compute a 5
898       Gb/s path between routers R1 and R2, although this would be
899       feasible;

901    o  If only the VP1-VP4 path with available bandwidth of 10 Gb/s and
902       cost 60 is reported, the client's PCE will compute, as optimal,
903       the 1 Gb/s path between R1 and R2 going through the VP2-VP5 path
904       within the Optical domain, while the optimal path would actually
905       be the one going through the VP1-VP4 sub-path (with cost 50)
906       within the Optical domain.

908    Using the approach proposed in this document, the client, when it
909    needs to set up an end-to-end path, can request the Optical domain
910    controller to compute a set of optimal paths (e.g., for VP1-VP4 and
911    VP2-VP5) and take decisions based on the information received:

913    o  When setting up a 5 Gb/s path between routers R1 and R2, the
914       Optical domain controller may report the VP1-VP4 path as the
915       only feasible path: the Orchestrator can successfully set up the
916       end-to-end path passing through this Optical path;

918    o  When setting up a 1 Gb/s path between routers R1 and R2, the
919       Optical domain controller (knowing that the path requires only 1
920       Gb/s) can report both the VP1-VP4 path, with cost 50, and the
921       VP2-VP5 path, with cost 65. The Orchestrator can then compute the
922       optimal path, which passes through the VP1-VP4 sub-path (with
923       cost 50) within the Optical domain.

925 3.2.2. TE Topology Abstraction

927    Using the TE Topology model, as defined in [TE-TOPO], the underlying
928    SDN controller can export an abstract TE Topology, composed of a set
929    of TE nodes and TE links, representing the abstract view of the
930    topology controlled by each domain controller.

932    Considering the example in Figure 4, TE domain controller 1 can
933    export a TE Topology encompassing the TE nodes A, B, C and D and the
934    TE Links interconnecting them. In a similar way, TE domain
935    controller 2 can export a TE Topology encompassing the TE nodes E,
936    F, G and H and the TE Links interconnecting them.

938    In this example, for simplicity reasons, each abstract TE node maps
939    to a physical node, but this is not necessary.

941    In order to set up a multi-domain TE path (e.g., between nodes A and
942    H), the multi-domain controller can compute on its own an optimal
943    end-to-end path based on the abstract TE topology information
944    provided by the domain controllers.
For example: 946 o Multi-domain controller's PCE, based on its own information, can 947 compute the optimal multi-domain path being A-B-C-E-G-H, and then 948 request the TE domain controllers to setup the A-B-C and E-G-H 949 intra-domain paths 951 o But, during path setup, the domain controller may find out that 952 A-B-C intra-domain path is not feasible (as discussed in section 953 2.2, in optical networks it is typical to have some paths not 954 being feasible due to optical constraints that are known only by 955 the optical domain controller), while only the path A-B-D is 956 feasible 958 o So what the multi-domain controller computed is not good and need 959 to re-start the path computation from scratch 961 As discussed in section 3.2.1, providing more extensive abstract 962 information from the TE domain controllers to the multi-domain 963 controller may lead to scalability problems. 965 In a sense this is similar to the problem of routing and wavelength 966 assignment within an Optical domain. It is possible to do first 967 routing (step 1) and then wavelength assignment (step 2), but the 968 chances of ending up with a good path is low. Alternatively, it is 969 possible to do combined routing and wavelength assignment, which is 970 known to be a more optimal and effective way for Optical path setup. 971 Similarly, it is possible to first compute an abstract end-to-end 972 path within the multi-domain Orchestrator (step 1) and then compute 973 an intra-domain path within each Optical domain (step 2), but there 974 are more chances not to find a path or to get a suboptimal path that 975 performing per-domain path computation and then stitch them. 977 3.2.3. Complementary use of TE topology and path computation 979 As discussed in section 2.2, there are some scalability issues with 980 path computation requests in a multi-domain TE network with many TE 981 domains, in terms of the number of requests to send to the TE domain 982 controllers. It would therefore be worthwhile using the TE topology 983 information provided by the domain controllers to limit the number 984 of requests. 986 An example can be described considering the multi-domain abstract 987 topology shown in Figure 9. In this example, an end-to-end TE path 988 between domains A and F needs to be setup. The transit domain should 989 be selected between domains B, C, D and E. 991 .........B......... 992 : _ _ _ _ _ _ _ _ : 993 :/ \: 994 +---O NOT FEASIBLE O---+ 995 cost=5| : : | 996 ......A...... | :.................: | ......F...... 997 : : | | : : 998 : O-----+ .........C......... +-----O : 999 : : : /-------------\ : : : 1000 : : :/ \: : : 1001 : cost<=20 O---------O cost <= 30 O---------O cost<=20 : 1002 : /: cost=5 : : cost=5 :\ : 1003 : /------/ : :.................: : \------\ : 1004 : / : : \ : 1005 :/ cost<=25 : .........D......... : cost<=25 \: 1006 O-----------O-------+ : /-------------\ : +-------O-----------O 1007 :\ : cost=5| :/ \: |cost=5 : /: 1008 : \ : +-O cost <= 30 O-+ : / : 1009 : \------\ : : : : /------/ : 1010 : cost>=30 \: :.................: :/ cost>=30 : 1011 : O-----+ +-----O : 1012 :...........: | .........E......... | :...........: 1013 | : /-------------\ : | 1014 cost=5| :/ \: |cost=5 1015 +---O cost >= 30 O---+ 1016 : : 1017 :.................: 1019 Figure 9 - Multi-domain with many domains (Topology information) 1021 The actual cost of each intra-domain path is not known a priori from 1022 the abstract topology information. 
     The Multi-domain controller only
1023    knows, from the TE topology provided by the underlying domain
1024    controllers, the feasibility of some intra-domain paths and some
1025    upper-bound and/or lower-bound cost information. With this
1026    information, together with the cost of inter-domain links, the
1027    Multi-domain controller can understand on its own that:

1029    o  Domain B cannot be selected, as the path connecting domains A
1030       and F through domain B is not feasible;

1032    o  Domain E cannot be selected as a transit domain since it is known
1033       from the abstract topology information provided by the domain
1034       controllers that the cost of the multi-domain path A-E-F (which
1035       is 100, in the best case) will always be higher than the cost
1036       of the multi-domain paths A-D-F (which is 90, in the worst case)
1037       and A-C-F (which is 80, in the worst case)

1039    Therefore, the Multi-domain controller can understand on its own
1040    that the optimal multi-domain path could be either A-D-F or A-C-F,
1041    but it cannot know which one of the two possible options actually
1042    provides the optimal end-to-end path.

1044    The Multi-domain controller can therefore request path computation
1045    only from the TE domain controllers A, C, D and F (and not from all
1046    the possible TE domain controllers).

1048                          .........B.........
1049                          :                 :
1050                      +---O                 O---+
1051    ......A......     |   :.................:   |     ......F......
1052    :           :     |                       |     :           :
1053    :           O-----+   .........C.........   +-----O           :
1054    :           :         : /-------------\ :         :           :
1055    :           :         :/               \:         :           :
1056    :  cost=15  O---------O    cost = 25    O---------O  cost=10  :
1057    :          /: cost=5  :                 : cost=5  :\          :
1058    : /------/  :         :.................:         :  \------\ :
1059    : /         :                                     :         \ :
1060    :/ cost=10  :         .........D.........         :  cost=15 \:
1061    O-----------O-------+ : /-------------\ : +-------O-----------O
1062    :           : cost=5| :/               \: |cost=5 :           :
1063    :           :       +-O    cost = 15    O-+       :           :
1064    :           :         :                 :         :           :
1065    :           :         :.................:         :           :
1066    :           O-----+                         +-----O           :
1067    :...........:     |   .........E.........   |     :...........:
1068                      |   :                 :   |
1069                      +---O                 O---+
1070                          :.................:

1072       Figure 10 - Multi-domain with many domains (Path Computation
1073                               information)

1075    Based on these requests, the Multi-domain controller can know the
1076    actual cost of each intra-domain path which belongs to the potential
1077    optimal end-to-end paths, as shown in Figure 10, and then compute
1078    the optimal end-to-end path (e.g., A-D-F, having a total cost of 50,
1079    instead of A-C-F, having a total cost of 70).

1081 3.3. Stateless and Stateful Path Computation

1083    The TE Tunnel YANG model, defined in [TE-TUNNEL], can support the
1084    need to request path computation.

1086    It is possible to request path computation by configuring a
1087    "compute-only" TE tunnel and retrieving the computed path(s) in the
1088    LSP(s) Record-Route Object (RRO) list, as described in section 3.3.1
1089    of [TE-TUNNEL].

1091    This is a stateful solution, since the state of each created
1092    "compute-only" TE tunnel needs to be maintained and updated when the
1093    underlying network conditions change.

1095    It is very useful to provide options for both stateless and stateful
1096    path computation mechanisms. It is suggested to use stateless
1097    mechanisms as much as possible and to rely on stateful path
1098    computation when really needed.

1100    A stateless RPC allows requesting path computation using a simple
1101    atomic operation, and it is the natural choice, especially
1102    with a stateless PCE.
     The stateless path computation solution assumes
1103    that the underlying SDN controller (e.g., a PNC) will compute a path
1104    twice during the process of setting up an LSP: at time T1, when its
1105    client (e.g., an MDSC) sends a path computation RPC request to it,
1106    and later, at time T2, when the same client (MDSC) creates a
1107    te-tunnel requesting the setup of the LSP. The underlying assumption
1108    is that, if network conditions have not changed, the same path that
1109    has been computed at time T1 is also computed at time T2 by the
1110    underlying SDN controller (e.g., the PNC) and therefore the path
1111    that is set up at time T2 is exactly the same path that has been
1112    computed at time T1.

1114    Since the operation is stateless, there is no guarantee that the
1115    returned path would still be available when path setup is requested:
1116    this does not cause major issues in case the time between path
1117    computation and path setup is short (especially if compared with the
1118    time that would be needed to update the information of a very
1119    detailed connectivity matrix).

1121    In most cases, there is no need to guarantee that the path that has
1122    been set up is exactly the same as the path that has been returned
1123    by path computation, especially if it has the same or even better
1124    metrics. Depending on the abstraction level applied by the server,
1125    the client may also not know the actual computed path.

1127    The most important requirement is that the required global
1128    objectives (e.g., multi-domain path metrics and constraints) are
1129    met. For this reason, a path verification phase is necessary to
1130    verify that the actual path that has been set up meets the global
1131    objectives (for example, in a multi-domain network, that the
1132    resulting end-to-end path meets the required end-to-end metrics and
1133    constraints).

1135    In most cases, even if the path being set up is not exactly the same
1136    as the path returned by path computation, its metrics and
1137    constraints are "good enough" (the path verification passes
1138    successfully). In the few corner cases where the path verification
1139    fails, it is possible to repeat the whole process (path computation,
1140    path setup and path verification).

1142    In case the stateless solution is not sufficient and there is the
1143    need to set up at T2 exactly the same path computed at T1, a
1144    stateful solution, based on a "compute-only" TE tunnel, could be
1145    used to get notifications in case the computed path has changed. In
1146    this case, at time T1, the client (MDSC) creates a te-tunnel in
1147    compute-only mode in the config DS and later, at time T2, changes
1148    the configuration of that te-tunnel (so that it is no longer in
1149    compute-only mode) to trigger the setup of the LSP.

1151    It is worth noting that the stateful solution, although it increases
1152    the likelihood that the computed path is available at path setup,
1153    does not guarantee it, because notifications may not be reliable or
1154    delivered on time. Path verification is also needed when stateful
1155    path computation is used.

1157    Stateful path computation also has the following drawbacks:

1159    o  Several messages are required for any path computation

1161    o  Persistent storage is required in the provider controller

1163    o  Garbage collection is needed for stranded paths
1164    o  There is a processing burden to detect changes on the computed
1165       paths in order to provide notification updates
1167 3.3.1. Temporary reporting of the computed path state

1169    This section describes an optional extension to the stateless
1170    behavior where the underlying SDN controller, after having received
1171    a path computation RPC request, maintains some "temporary state"
1172    associated with the computed path, allowing the client to request
1173    the setup of exactly that path, if still available.

1175    This is similar to the stateful solution but, to avoid the drawbacks
1176    of the stateful approach, it leverages the path computation RPC and
1177    the separation between the configuration and operational DS, as
1178    defined in the NMDA architecture [RFC8342].

1180    The underlying SDN controller, after having computed a path, as
1181    requested by a path computation RPC, also creates a te-tunnel
1182    instance within the operational DS, to store that computed path.
1183    This would be similar to the stateful solution, with the only
1184    difference that there is no associated te-tunnel instance within the
1185    running DS.

1187    Since the underlying SDN controller stores in the operational DS the
1188    computed path based on the abstract topology it exposes, it also
1189    internally remembers the actual native path (physical path), within
1190    its native topology (physical topology), associated with that
1191    compute-only te-tunnel instance.

1193    Afterwards, the client (e.g., the MDSC) can request the setup of
1194    that specific path by creating a te-tunnel instance (not in
1195    compute-only mode) in the running DS using the same tunnel-name as
1196    the existing te-tunnel in the operational datastore: this will
1197    trigger the underlying SDN controller to set up that path, if still
1198    available.

1200    There are still cases where the path being set up is not exactly the
1201    same as the path that has been computed:

1203    o  When the tunnel is configured with path constraints which are not
1204       compatible with the computed path

1206    o  When the tunnel setup is requested after the resources of the
1207       computed path are no longer available

1209    o  When the tunnel setup is requested after the computed path is no
1210       longer known (e.g., due to a server reboot) by the underlying SDN
1211       controller

1213    In all these cases, the underlying SDN controller should compute and
1214    set up a new path.

1216    Therefore, the "path verification" phase, as described in section
1217    3.3 above, is still needed to check that the path that has been set
1218    up is still "good enough".

1220    Since this new approach is not completely stateless, garbage
1221    collection is implemented using a timeout that, when it expires,
1222    triggers the removal of the computed path from the operational DS.
1223    This operation is fully controlled by the underlying SDN controller
1224    without the need for any action to be taken by the client, which is
1225    not able to act on the operational datastore. The default value of
1226    this timeout is 10 minutes but a different value may be configured
1227    by the client.

1229    In addition, it is possible for the client to tag each path
1230    computation request with a transaction-id, allowing for a faster
1231    removal of all the paths associated with that transaction-id,
1232    without waiting for their timers to expire.
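
   The overall client workflow described in this section can be sketched
   as follows. The sketch is purely illustrative: the helper functions
   (path_computation_rpc, create_te_tunnel, path_delete_rpc) and their
   parameters are hypothetical placeholders for the corresponding
   YANG-based operations (e.g., over NETCONF or RESTCONF) and do not
   define any protocol encoding or data model element.

      # Illustrative sketch of the client-side workflow; the controller
      # object and its methods are hypothetical placeholders and are not
      # part of the model defined in this document.

      import uuid

      def setup_best_path(controller, path_requests):
          tid = str(uuid.uuid4())  # transaction-id tagging this computation

          # 1) Request the computation of a set of candidate paths; the
          #    controller stores each computed path as a compute-only
          #    te-tunnel in its operational DS, subject to a timeout.
          responses = controller.path_computation_rpc(
              requests=path_requests, transaction_id=tid)

          # 2) Select one path based on the returned metrics (client policy).
          feasible = [r for r in responses if not r.no_path]
          if not feasible:
              # 3a) Nothing will be set up: release the temporary state
              #     instead of waiting for the timers to expire.
              controller.path_delete_rpc(transaction_id=tid)
              return None
          best = min(feasible, key=lambda r: r.metric)

          # 3b) Create a te-tunnel (not in compute-only mode) in the
          #     running DS with the same tunnel name as the compute-only
          #     te-tunnel stored in the operational DS (here assumed to be
          #     learned from the RPC response or by reading that DS).
          tunnel = controller.create_te_tunnel(name=best.tunnel_name)

          # The other computed paths tagged with the same transaction-id
          # are then removed automatically by the controller; a path
          # verification phase (section 3.3) is still needed afterwards.
          return tunnel
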
1234    The underlying SDN controller can remove from the operational DS all
1235    the paths computed with a given transaction-id which have not been
1236    set up, either when it receives a Path Delete RPC request for that
1237    transaction-id or, automatically, right after the setup of a path
1238    that has been previously computed with that transaction-id.

1240    This possibility is useful when multiple paths are computed but, at
1241    most, only one is set up (e.g., in multi-domain path computation
1242    scenarios). After the selected path has been set up (e.g., in one
1243    domain during multi-domain path setup), all the other alternative
1244    computed paths can be automatically deleted by the underlying SDN
1245    controller (since they are no longer needed). The client can also
1246    request, using the Path Delete RPC request, that the underlying SDN
1247    controller remove all the computed paths, if none of them is going
1248    to be set up (e.g., in a transit domain not being selected by
1249    multi-domain path computation and so not being automatically
1250    deleted).

1252    This approach is complementary to, and not an alternative to, the
1253    timer, which is always needed to avoid stranded computed paths being
1254    stored in the operational DS when no path is set up and no explicit
1255    delete RPC is received.

1257 4. Path Computation and Optimization for multiple paths

1259    There are use cases where it is advantageous to request path
1260    computation for a set of paths, through a network or through a
1261    network domain, using a single request [RFC5440].

1263    In this case, sending a single request for multiple path
1264    computations, instead of sending multiple requests for each path
1265    computation, reduces the protocol overhead and consumes fewer
1266    resources (e.g., threads in the client and server).

1268    In the context of a typical multi-domain TE network, there could be
1269    multiple choices for the ingress/egress points of a domain, and the
1270    Multi-domain controller needs to request path computation between
1271    all the ingress/egress pairs to select the best pair. For example,
1272    in the example of section 2.2, the Multi-domain controller needs to
1273    request the TE network controller 1 to compute the A-C and A-D
1274    paths and the TE network controller 2 to compute the E-H and F-H
1275    paths.

1277    It is also possible that the Multi-domain controller receives a
1278    request to set up a group of multiple end-to-end connections. The
1279    multi-domain controller needs to request each TE domain controller
1280    to compute multiple paths, one (or more) for each end-to-end
1281    connection.

1283    There are also scenarios where it can be needed to request path
1284    computation for a set of paths in a synchronized fashion.

1286    One example could be computing multiple diverse paths. Computing a
1287    set of diverse paths in a non-synchronized fashion leads to the
1288    possibility of not being able to satisfy the diversity requirement.
1289    In this case, it is preferable to compute a sub-optimal primary path
1290    for which a diversely routed secondary path exists.

1292    There are also scenarios where it is needed to optimize a set of
1293    paths using objective functions that apply to the whole set of
1294    paths (see [RFC5541]), e.g., to minimize the sum of the costs of
1295    all the computed paths in the set.

1297 5.
YANG Model for requesting Path Computation
1299 This document defines a YANG stateless RPC to request path
1300 computation as an "augmentation" of the tunnel-rpc defined in [TE-
1301 TUNNEL]. This model provides the RPC input attributes that are
1302 needed to request path computation and the RPC output attributes
1303 that are needed to report the computed paths.
1305 augment /te:tunnels-rpc/te:input/te:tunnel-info:
1306 +---- path-request* [request-id]
1307 ...........
1309 augment /te:tunnels-rpc/te:output/te:result:
1310 +--ro response* [response-id]
1311 +--ro response-id uint32
1312 +--ro (response-type)?
1313 +--:(no-path-case)
1314 | +--ro no-path!
1315 +--:(path-case)
1316 +--ro computed-path
1317 ...........
1319 This model extensively re-uses the groupings defined in [TE-TUNNEL]
1320 to ensure maximal syntax and semantics commonality.
1322 5.1. Synchronization of multiple path computation requests
1324 The YANG model makes it possible to synchronize a set of multiple path
1325 requests (identified by their specific request-id), all related to a "svec"
1326 container emulating the syntax of the "SVEC" PCEP object [RFC5440].
1328 +---- synchronization* [synchronization-id]
1329 +---- synchronization-id uint32
1330 +---- svec
1331 | +---- relaxable? boolean
1332 | +---- disjointness? te-types:te-path-disjointness
1333 | +---- request-id-number* uint32
1334 +---- svec-constraints
1335 | +---- path-metric-bound* [metric-type]
1336 | +---- metric-type identityref
1337 | +---- upper-bound? uint64
1338 +---- path-srlgs-values
1339 | +---- usage? identityref
1340 | +---- values* srlg
1341 +---- path-srlgs-names
1342 | +---- path-srlgs-name* [usage]
1343 | +---- usage identityref
1344 | +---- srlg-name* [name]
1345 | +---- name string
1346 +---- exclude-objects
1347 ...........
1348 +---- optimizations
1349 +---- (algorithm)?
1350 +--:(metric)
1351 | +---- optimization-metric* [metric-type]
1352 | +---- metric-type identityref
1353 | +---- weight? uint8
1354 +--:(objective-function)
1355 +---- objective-function
1356 +---- objective-function-type? identityref
1358 In addition to the metric types defined in [TE-TUNNEL],
1359 which can be applied to each individual path request, the model defines
1360 additional metric types that apply to a set of
1361 synchronized requests, as referenced in [RFC5541].
1363 identity svec-metric-type {
1364 description
1365 "Base identity for svec metric type";
1366 }
1368 identity svec-metric-cumul-te {
1369 base svec-metric-type;
1370 description
1371 "TE cumulative path metric";
1372 }
1374 identity svec-metric-cumul-igp {
1375 base svec-metric-type;
1376 description
1377 "IGP cumulative path metric";
1379 }
1381 identity svec-metric-cumul-hop {
1382 base svec-metric-type;
1383 description
1384 "Hop cumulative path metric";
1385 }
1387 identity svec-metric-aggregate-bandwidth-consumption {
1388 base svec-metric-type;
1389 description
1390 "Cumulative bandwith consumption of the set of
1391 synchronized paths";
1392 }
1394 identity svec-metric-load-of-the-most-loaded-link {
1395 base svec-metric-type;
1396 description
1397 "Load of the most loaded link";
1398 }
1400 5.2. Returned metric values
1402 This YANG model provides a way to return, in the RPC output, the values
1403 of the metrics computed by the path computation, together with
1404 other important information (e.g.
srlg, affinities, explicit route), 1405 emulating the syntax of the "C" flag of the "METRIC" PCEP object 1406 [RFC5440]: 1408 augment /te:tunnels-rpc/te:output/te:result: 1409 +--ro response* [response-id] 1410 +--ro response-id uint32 1411 +--ro (response-type)? 1412 +--:(no-path-case) 1413 | +--ro no-path! 1414 +--:(path-case) 1415 +--ro computed-path 1416 +--ro path-id? yang-types:uuid 1417 +--ro path-properties 1418 +--ro path-metric* [metric-type] 1419 | +--ro metric-type identityref 1420 | +--ro accumulative-value? uint64 1421 +--ro path-affinities-values 1422 | +--ro path-affinities-value* [usage] 1423 | +--ro usage identityref 1424 | +--ro value? admin-groups 1425 +--ro path-affinity-names 1426 | +--ro path-affinity-name* [usage] 1427 | +--ro usage identityref 1428 | +--ro affinity-name* [name] 1429 | +--ro name string 1430 +--ro path-srlgs-values 1431 | +--ro usage? identityref 1432 | +--ro values* srlg 1433 +--ro path-srlgs-names 1434 | +--ro path-srlgs-name* [usage] 1435 | +--ro usage identityref 1436 | +--ro srlg-name* [name] 1437 | +--ro name string 1438 +--ro path-route-objects 1439 ........... 1441 It also allows to request in the input of RPC which information 1442 (metrics, srlg and/or affinities) should be returned: 1444 module: ietf-te-path-computation 1445 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1446 +---- path-request* [request-id] 1447 | +---- request-id uint32 1448 ........... 1449 | +---- requested-metrics* [metric-type] 1450 | | +---- metric-type identityref 1451 | +---- return-srlgs? boolean 1452 | +---- return-affinities? boolean 1453 ........... 1455 This feature is essential for using a stateless path computation in 1456 a multi-domain TE network as described in section 2.2. In this case, 1457 the metrics returned by a path computation requested to a given TE 1458 network controller must be used by the client to compute the best 1459 end-to-end path. If they are missing the client cannot compare 1460 different paths calculated by the TE network controllers and choose 1461 the best one for the optimal e2e path. 1463 6. YANG model for stateless TE path computation 1465 6.1. YANG Tree 1467 Figure 11 below shows the tree diagram of the YANG model defined in 1468 module ietf-te-path-computation.yang. 1470 module: ietf-te-path-computation 1471 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1472 +---- path-request* [request-id] 1473 | +---- request-id uint32 1474 | +---- encoding? identityref 1475 | +---- switching-type? identityref 1476 | +---- source? inet:ip-address 1477 | +---- destination? inet:ip-address 1478 | +---- src-tp-id? binary 1479 | +---- dst-tp-id? binary 1480 | +---- bidirectional? boolean 1481 | +---- te-topology-identifier 1482 | | +---- provider-id? te-global-id 1483 | | +---- client-id? te-global-id 1484 | | +---- topology-id? te-topology-id 1485 | +---- explicit-route-objects-always 1486 | | +---- route-object-exclude-always* [index] 1487 | | | +---- index uint32 1488 | | | +---- (type)? 1489 | | | +--:(numbered-node-hop) 1490 | | | | +---- numbered-node-hop 1491 | | | | +---- node-id te-node-id 1492 | | | | +---- hop-type? te-hop-type 1493 | | | +--:(numbered-link-hop) 1494 | | | | +---- numbered-link-hop 1495 | | | | +---- link-tp-id te-tp-id 1496 | | | | +---- hop-type? te-hop-type 1497 | | | | +---- direction? te-link-direction 1498 | | | +--:(unnumbered-link-hop) 1499 | | | | +---- unnumbered-link-hop 1500 | | | | +---- link-tp-id te-tp-id 1501 | | | | +---- node-id te-node-id 1502 | | | | +---- hop-type? 
te-hop-type 1503 | | | | +---- direction? te-link-direction 1504 | | | +--:(as-number) 1505 | | | | +---- as-number-hop 1506 | | | | +---- as-number inet:as-number 1507 | | | | +---- hop-type? te-hop-type 1508 | | | +--:(label) 1509 | | | +---- label-hop 1510 | | | +---- te-label 1511 | | | +---- (technology)? 1512 | | | | +--:(generic) 1513 | | | | +---- generic? 1514 | | | | rt-types:generalized-label 1515 | | | +---- direction? te-label-direction 1516 | | +---- route-object-include-exclude* [index] 1517 | | +---- explicit-route-usage? identityref 1518 | | +---- index uint32 1519 | | +---- (type)? 1520 | | +--:(numbered-node-hop) 1521 | | | +---- numbered-node-hop 1522 | | | +---- node-id te-node-id 1523 | | | +---- hop-type? te-hop-type 1524 | | +--:(numbered-link-hop) 1525 | | | +---- numbered-link-hop 1526 | | | +---- link-tp-id te-tp-id 1527 | | | +---- hop-type? te-hop-type 1528 | | | +---- direction? te-link-direction 1529 | | +--:(unnumbered-link-hop) 1530 | | | +---- unnumbered-link-hop 1531 | | | +---- link-tp-id te-tp-id 1532 | | | +---- node-id te-node-id 1533 | | | +---- hop-type? te-hop-type 1534 | | | +---- direction? te-link-direction 1535 | | +--:(as-number) 1536 | | | +---- as-number-hop 1537 | | | +---- as-number inet:as-number 1538 | | | +---- hop-type? te-hop-type 1539 | | +--:(label) 1540 | | | +---- label-hop 1541 | | | +---- te-label 1542 | | | +---- (technology)? 1543 | | | | +--:(generic) 1544 | | | | +---- generic? 1545 | | | | rt-types:generalized-label 1546 | | | +---- direction? te-label-direction 1547 | | +--:(srlg) 1548 | | +---- srlg 1549 | | +---- srlg? uint32 1550 | +---- path-constraints 1551 | | +---- te-bandwidth 1552 | | | +---- (technology)? 1553 | | | +--:(generic) 1554 | | | +---- generic? te-bandwidth 1555 | | +---- link-protection? identityref 1556 | | +---- setup-priority? uint8 1557 | | +---- hold-priority? uint8 1558 | | +---- signaling-type? identityref 1559 | | +---- path-metric-bounds 1560 | | | +---- path-metric-bound* [metric-type] 1561 | | | +---- metric-type identityref 1562 | | | +---- upper-bound? uint64 1563 | | +---- path-affinities-values 1564 | | | +---- path-affinities-value* [usage] 1565 | | | +---- usage identityref 1566 | | | +---- value? admin-groups 1567 | | +---- path-affinity-names 1568 | | | +---- path-affinity-name* [usage] 1569 | | | +---- usage identityref 1570 | | | +---- affinity-name* [name] 1571 | | | +---- name string 1572 | | +---- path-srlgs-lists 1573 | | | +---- path-srlgs-list* [usage] 1574 | | | +---- usage identityref 1575 | | | +---- values* srlg 1576 | | +---- path-srlgs-names 1577 | | | +---- path-srlgs-name* [usage] 1578 | | | +---- usage identityref 1579 | | | +---- names* string 1580 | | +---- disjointness? te-path-disjointness 1581 | +---- optimizations 1582 | | +---- (algorithm)? 1583 | | +--:(metric) {path-optimization-metric}? 1584 | | | +---- optimization-metric* [metric-type] 1585 | | | | +---- metric-type 1586 identityref 1587 | | | | +---- weight? uint8 1588 | | | | +---- explicit-route-exclude-objects 1589 | | | | | +---- route-object-exclude-object* [index] 1590 | | | | | +---- index uint32 1591 | | | | | +---- (type)? 1592 | | | | | +--:(numbered-node-hop) 1593 | | | | | | +---- numbered-node-hop 1594 | | | | | | +---- node-id te-node-id 1595 | | | | | | +---- hop-type? te-hop-type 1596 | | | | | +--:(numbered-link-hop) 1597 | | | | | | +---- numbered-link-hop 1598 | | | | | | +---- link-tp-id te-tp-id 1599 | | | | | | +---- hop-type? te-hop-type 1600 | | | | | | +---- direction? 
te-link- 1601 direction 1602 | | | | | +--:(unnumbered-link-hop) 1603 | | | | | | +---- unnumbered-link-hop 1604 | | | | | | +---- link-tp-id te-tp-id 1605 | | | | | | +---- node-id te-node-id 1606 | | | | | | +---- hop-type? te-hop-type 1607 | | | | | | +---- direction? te-link- 1608 direction 1609 | | | | | +--:(as-number) 1610 | | | | | | +---- as-number-hop 1611 | | | | | | +---- as-number inet:as-number 1612 | | | | | | +---- hop-type? te-hop-type 1613 | | | | | +--:(label) 1614 | | | | | | +---- label-hop 1615 | | | | | | +---- te-label 1616 | | | | | | +---- (technology)? 1617 | | | | | | | +--:(generic) 1618 | | | | | | | +---- generic? 1619 | | | | | | | rt- 1620 types:generalized-label 1621 | | | | | | +---- direction? 1622 | | | | | | te-label-direction 1623 | | | | | +--:(srlg) 1624 | | | | | +---- srlg 1625 | | | | | +---- srlg? uint32 1626 | | | | +---- explicit-route-include-objects 1627 | | | | +---- route-object-include-object* [index] 1628 | | | | +---- index uint32 1629 | | | | +---- (type)? 1630 | | | | +--:(numbered-node-hop) 1631 | | | | | +---- numbered-node-hop 1632 | | | | | +---- node-id te-node-id 1633 | | | | | +---- hop-type? te-hop-type 1634 | | | | +--:(numbered-link-hop) 1635 | | | | | +---- numbered-link-hop 1636 | | | | | +---- link-tp-id te-tp-id 1637 | | | | | +---- hop-type? te-hop-type 1638 | | | | | +---- direction? te-link- 1639 direction 1640 | | | | +--:(unnumbered-link-hop) 1641 | | | | | +---- unnumbered-link-hop 1642 | | | | | +---- link-tp-id te-tp-id 1643 | | | | | +---- node-id te-node-id 1644 | | | | | +---- hop-type? te-hop-type 1645 | | | | | +---- direction? te-link- 1646 direction 1647 | | | | +--:(as-number) 1648 | | | | | +---- as-number-hop 1649 | | | | | +---- as-number inet:as-number 1650 | | | | | +---- hop-type? te-hop-type 1651 | | | | +--:(label) 1652 | | | | +---- label-hop 1653 | | | | +---- te-label 1654 | | | | +---- (technology)? 1655 | | | | | +--:(generic) 1656 | | | | | +---- generic? 1657 | | | | | rt- 1658 types:generalized-label 1659 | | | | +---- direction? 1660 | | | | te-label-direction 1661 | | | +---- tiebreakers 1662 | | | +---- tiebreaker* [tiebreaker-type] 1663 | | | +---- tiebreaker-type identityref 1664 | | +--:(objective-function) 1665 | | {path-optimization-objective-function}? 1666 | | +---- objective-function 1667 | | +---- objective-function-type? identityref 1668 | +---- path-in-segment! 1669 | | +---- label-restrictions 1670 | | +---- label-restriction* [index] 1671 | | +---- restriction? enumeration 1672 | | +---- index uint32 1673 | | +---- label-start 1674 | | | +---- te-label 1675 | | | +---- (technology)? 1676 | | | | +--:(generic) 1677 | | | | +---- generic? rt-types:generalized- 1678 label 1679 | | | +---- direction? te-label-direction 1680 | | +---- label-end 1681 | | | +---- te-label 1682 | | | +---- (technology)? 1683 | | | | +--:(generic) 1684 | | | | +---- generic? rt-types:generalized- 1685 label 1686 | | | +---- direction? te-label-direction 1687 | | +---- label-step 1688 | | | +---- (technology)? 1689 | | | +--:(generic) 1690 | | | +---- generic? int32 1691 | | +---- range-bitmap? yang:hex-string 1692 | +---- path-out-segment! 1693 | | +---- label-restrictions 1694 | | +---- label-restriction* [index] 1695 | | +---- restriction? enumeration 1696 | | +---- index uint32 1697 | | +---- label-start 1698 | | | +---- te-label 1699 | | | +---- (technology)? 1700 | | | | +--:(generic) 1701 | | | | +---- generic? rt-types:generalized- 1702 label 1703 | | | +---- direction? 
te-label-direction 1704 | | +---- label-end 1705 | | | +---- te-label 1706 | | | +---- (technology)? 1707 | | | | +--:(generic) 1708 | | | | +---- generic? rt-types:generalized- 1709 label 1710 | | | +---- direction? te-label-direction 1711 | | +---- label-step 1712 | | | +---- (technology)? 1713 | | | +--:(generic) 1714 | | | +---- generic? int32 1715 | | +---- range-bitmap? yang:hex-string 1716 | +---- requested-metrics* [metric-type] 1717 | | +---- metric-type identityref 1718 | +---- return-srlgs? boolean 1719 | +---- return-affinities? boolean 1720 | +---- requested-state! 1721 | +---- timer? uint16 1722 | +---- transaction-id? string 1723 | +---- tunnel-name? string 1724 | +---- (path)? 1725 | +--:(primary) 1726 | | +---- primary-path-name? string 1727 | +--:(secondary) 1728 | +---- secondary-path-name? string 1729 +---- synchronization* [synchronization-id] 1730 +---- synchronization-id uint32 1731 +---- svec 1732 | +---- relaxable? boolean 1733 | +---- disjointness? te-path-disjointness 1734 | +---- request-id-number* uint32 1735 +---- svec-constraints 1736 | +---- path-metric-bound* [metric-type] 1737 | +---- metric-type identityref 1738 | +---- upper-bound? uint64 1739 +---- path-srlgs-lists 1740 | +---- path-srlgs-list* [usage] 1741 | +---- usage identityref 1742 | +---- values* srlg 1743 +---- path-srlgs-names 1744 | +---- path-srlgs-name* [usage] 1745 | +---- usage identityref 1746 | +---- names* string 1747 +---- exclude-objects 1748 | +---- excludes* [index] 1749 | +---- index uint32 1750 | +---- (type)? 1751 | +--:(numbered-node-hop) 1752 | | +---- numbered-node-hop 1753 | | +---- node-id te-node-id 1754 | | +---- hop-type? te-hop-type 1755 | +--:(numbered-link-hop) 1756 | | +---- numbered-link-hop 1757 | | +---- link-tp-id te-tp-id 1758 | | +---- hop-type? te-hop-type 1759 | | +---- direction? te-link-direction 1760 | +--:(unnumbered-link-hop) 1761 | | +---- unnumbered-link-hop 1762 | | +---- link-tp-id te-tp-id 1763 | | +---- node-id te-node-id 1764 | | +---- hop-type? te-hop-type 1765 | | +---- direction? te-link-direction 1766 | +--:(as-number) 1767 | | +---- as-number-hop 1768 | | +---- as-number inet:as-number 1769 | | +---- hop-type? te-hop-type 1770 | +--:(label) 1771 | +---- label-hop 1772 | +---- te-label 1773 | +---- (technology)? 1774 | | +--:(generic) 1775 | | +---- generic? 1776 | | rt-types:generalized-label 1777 | +---- direction? te-label-direction 1778 +---- optimizations 1779 +---- (algorithm)? 1780 +--:(metric) {te-types:path-optimization-metric}? 1781 | +---- optimization-metric* [metric-type] 1782 | +---- metric-type identityref 1783 | +---- weight? uint8 1784 +--:(objective-function) 1785 {te-types:path-optimization-objective- 1786 function}? 1787 +---- objective-function 1788 +---- objective-function-type? identityref 1789 augment /te:tunnels-rpc/te:output/te:result: 1790 +--ro response* [response-id] 1791 +--ro response-id uint32 1792 +--ro (response-type)? 1793 +--:(no-path-case) 1794 | +--ro no-path! 1795 +--:(path-case) 1796 +--ro computed-path 1797 +--ro path-properties 1798 | +--ro path-metric* [metric-type] 1799 | | +--ro metric-type identityref 1800 | | +--ro accumulative-value? uint64 1801 | +--ro path-affinities-values 1802 | | +--ro path-affinities-value* [usage] 1803 | | +--ro usage identityref 1804 | | +--ro value? 
admin-groups 1805 | +--ro path-affinity-names 1806 | | +--ro path-affinity-name* [usage] 1807 | | +--ro usage identityref 1808 | | +--ro affinity-name* [name] 1809 | | +--ro name string 1810 | +--ro path-srlgs-lists 1811 | | +--ro path-srlgs-list* [usage] 1812 | | +--ro usage identityref 1813 | | +--ro values* srlg 1814 | +--ro path-srlgs-names 1815 | | +--ro path-srlgs-name* [usage] 1816 | | +--ro usage identityref 1817 | | +--ro names* string 1818 | +--ro path-route-objects 1819 | +--ro path-route-object* [index] 1820 | +--ro index uint32 1821 | +--ro (type)? 1822 | +--:(numbered-node-hop) 1823 | | +--ro numbered-node-hop 1824 | | +--ro node-id te-node-id 1825 | | +--ro hop-type? te-hop-type 1826 | +--:(numbered-link-hop) 1827 | | +--ro numbered-link-hop 1828 | | +--ro link-tp-id te-tp-id 1829 | | +--ro hop-type? te-hop-type 1830 | | +--ro direction? te-link- 1831 direction 1832 | +--:(unnumbered-link-hop) 1833 | | +--ro unnumbered-link-hop 1834 | | +--ro link-tp-id te-tp-id 1835 | | +--ro node-id te-node-id 1836 | | +--ro hop-type? te-hop-type 1837 | | +--ro direction? te-link- 1838 direction 1839 | +--:(as-number) 1840 | | +--ro as-number-hop 1841 | | +--ro as-number inet:as-number 1842 | | +--ro hop-type? te-hop-type 1843 | +--:(label) 1844 | +--ro label-hop 1845 | +--ro te-label 1846 | +--ro (technology)? 1847 | | +--:(generic) 1848 | | +--ro generic? 1849 | | rt- 1850 types:generalized-label 1851 | +--ro direction? 1852 | te-label-direction 1853 +--ro tunnel-ref? te:tunnel-ref 1854 +--ro (path)? 1855 +--:(primary) 1856 | +--ro primary-path-ref? leafref 1857 +--:(secondary) 1858 +--ro secondary-path-ref? leafref 1859 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1860 +---- deleted-paths-transaction-id* string 1861 augment /te:tunnels-rpc/te:output/te:result: 1862 +---- deleted-paths-transaction-id* string 1864 Figure 11 - TE path computation YANG tree 1866 6.2. YANG Module 1868 file "ietf-te-path-computation@2019-03-11.yang" 1869 module ietf-te-path-computation { 1870 yang-version 1.1; 1871 namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation"; 1872 // replace with IANA namespace when assigned 1874 prefix "tepc"; 1876 import ietf-inet-types { 1877 prefix "inet"; 1878 } 1880 import ietf-te { 1881 prefix "te"; 1882 } 1884 import ietf-te-types { 1885 prefix "te-types"; 1886 } 1888 organization 1889 "Traffic Engineering Architecture and Signaling (TEAS) 1890 Working Group"; 1892 contact 1893 "WG Web: 1894 WG List: 1896 WG Chair: Lou Berger 1897 1899 WG Chair: Vishnu Pavan Beeram 1900 1902 "; 1904 description "YANG model for stateless TE path computation"; 1906 revision "2019-03-11" { 1907 description 1908 "Initial revision"; 1909 reference 1910 "draft-ietf-teas-yang-path-computation"; 1911 } 1913 /* 1914 * Features 1915 */ 1917 feature stateless-path-computation { 1918 description 1919 "This feature indicates that the system supports 1920 stateless path computation."; 1921 } 1923 /* 1924 * Groupings 1925 */ 1927 grouping path-info { 1928 uses te-types:generic-path-properties; 1929 description "Path computation output information"; 1931 } 1933 grouping requested-info { 1934 description 1935 "This grouping defines the information (e.g., metrics) 1936 which must be returned in the response"; 1937 list requested-metrics { 1938 key 'metric-type'; 1939 description 1940 "The list of the requested metrics 1941 The metrics listed here must be returned in the response. 
1942 Returning other metrics in the response is optional."; 1943 leaf metric-type { 1944 type identityref { 1945 base te-types:path-metric-type; 1946 } 1947 description 1948 "The metric that must be returned in the response"; 1949 } 1950 } 1951 leaf return-srlgs { 1952 type boolean; 1953 default false; 1954 description 1955 "If true, path srlgs must be returned in the response. 1956 If false, returning path srlgs in the response optional."; 1957 } 1958 leaf return-affinities { 1959 type boolean; 1960 default false; 1961 description 1962 "If true, path affinities must be returned in the response. 1963 If false, returning path affinities in the response is 1964 optional."; 1965 } 1966 } 1968 grouping requested-state { 1969 description 1970 "Configuration for the transient state used 1971 to report the computed path"; 1972 leaf timer { 1973 type uint16; 1974 units minutes; 1975 default 10; 1976 description 1977 "The timeout after which the transient state reporting 1978 the computed path should be removed."; 1979 } 1980 leaf transaction-id { 1981 type string; 1982 description 1983 " 1984 The transaction-id associated with this path computation 1985 to be used for fast deletion of the transient states 1986 associated with multiple path computations. 1988 This transaction-id can be used to explicitly delete all 1989 the transient states of all the computed paths associated 1990 with the same transaction-id. 1992 When one path associated with a transaction-id is setup, 1993 the transient states of all the other computed paths 1994 with the same transaction-id are automatically removed. 1996 If not specified, the transient state is removed only 1997 when the timer expires (when the timer is specified) 1998 or not created at all (stateless path computation, 1999 when the timer is not specified). 2000 "; 2001 } 2002 leaf tunnel-name { 2003 type string; 2004 description 2005 " 2006 The suggested name to be assigned to the te-tunnel 2007 instance which is created to report the computed path. 2009 In case multiple paths are requested with the same 2010 suggested name, the server will create only one te-tunnel 2011 instance to report all the computed paths 2012 with the same suggested name. 2014 A different name can be assigned by server (e.g., when a 2015 te-tunnel with this name already exists). 2016 "; 2017 } 2018 choice path { 2019 description 2020 "The transient state of the computed path can be reported 2021 as a primary or a secondary path of a te-tunnel"; 2022 case primary { 2023 leaf primary-path-name { 2024 type string; 2025 description 2026 " 2027 The suggested name to be assigned to the 2028 p2p-primary-path instance which is created 2029 to report the computed path. 2031 A different name can be assigned by the server 2032 (e.g., when a p2p-primary-path with this name 2033 already exists). 2034 "; 2035 } 2036 } 2037 case secondary { 2038 leaf secondary-path-name { 2039 type string; 2040 description 2041 " 2042 The suggested name to be assigned to the 2043 p2p-secondary-path instance which is created 2044 to report the computed path. 2046 A different name can be assigned by the server 2047 (e.g., when a p2p-secondary-path with this 2048 name already exists). 2050 If not specified, the a p2p-primary-path is created 2051 by the server. 
2052 "; 2053 } 2054 } 2055 } 2056 } 2058 grouping reported-state { 2059 description 2060 "Information about the transient state created 2061 to report the computed path"; 2063 leaf tunnel-ref { 2064 type te:tunnel-ref; 2065 description 2066 " 2067 Reference to the tunnel that reports the transient state 2068 of the computed path. 2070 If no transient state is created, this attribute is empty. 2071 "; 2072 } 2073 choice path { 2074 description 2075 "The transient state of the computed path can be reported 2076 as a primary or a secondary path of a te-tunnel"; 2077 case primary { 2078 leaf primary-path-ref { 2079 type leafref { 2080 path "/te:te/te:tunnels/" + 2081 "te:tunnel[te:name=current()/../tunnel-ref]/" + 2082 "te:p2p-primary-paths/te:p2p-primary-path/" + 2083 "te:name"; 2084 } 2085 must "../tunnel-ref" { 2086 description 2087 "The primary-path-name can only be reported 2088 if also the tunnel is reported 2089 to provide the complete reference."; 2090 } 2091 description 2092 " 2093 Reference to the p2p-primary-path that reports 2094 the transient state of the computed path. 2096 If no transient state is created, 2097 this attribute is empty. 2098 "; 2099 } 2100 } 2101 case secondary { 2102 leaf secondary-path-ref { 2103 type leafref { 2104 path "/te:te/te:tunnels/" + 2105 "te:tunnel[te:name=current()/../tunnel-ref]/" + 2106 "te:p2p-secondary-paths/te:p2p-secondary-path/" + 2107 "te:name"; 2108 } 2109 must "../tunnel-ref" { 2110 description 2111 "The secondary-path-name can only be reported 2112 if also the tunnel is reported to provide 2113 the complete reference."; 2114 } 2115 description 2116 " 2117 Reference to the p2p-secondary-path that reports 2118 the transient state of the computed path. 2120 If no transient state is created, 2121 this attribute is empty. 
2122 "; 2123 } 2124 } 2125 } 2127 } 2129 identity svec-metric-type { 2130 description 2131 "Base identity for svec metric type"; 2132 } 2134 identity svec-metric-cumul-te { 2135 base svec-metric-type; 2136 description 2137 "TE cumulative path metric"; 2138 } 2140 identity svec-metric-cumul-igp { 2141 base svec-metric-type; 2142 description 2143 "IGP cumulative path metric"; 2144 } 2146 identity svec-metric-cumul-hop { 2147 base svec-metric-type; 2148 description 2149 "Hop cumulative path metric"; 2150 } 2152 identity svec-metric-aggregate-bandwidth-consumption { 2153 base svec-metric-type; 2154 description 2155 "Cumulative bandwith consumption of the set of 2156 synchronized paths"; 2157 } 2159 identity svec-metric-load-of-the-most-loaded-link { 2160 base svec-metric-type; 2161 description 2162 "Load of the most loaded link"; 2163 } 2165 grouping svec-metrics-bounds_config { 2166 description 2167 "TE path metric bounds grouping for computing a set of 2168 synchronized requests"; 2169 leaf metric-type { 2170 type identityref { 2171 base svec-metric-type; 2172 } 2173 description "TE path metric type usable for computing a set of 2174 synchronized requests"; 2175 } 2176 leaf upper-bound { 2177 type uint64; 2178 description "Upper bound on end-to-end svec path metric"; 2179 } 2180 } 2182 grouping svec-metrics-optimization_config { 2183 description 2184 "TE path metric bounds grouping for computing a set of 2185 synchronized requests"; 2187 leaf metric-type { 2188 type identityref { 2189 base svec-metric-type; 2190 } 2191 description "TE path metric type usable for computing a set of 2192 synchronized requests"; 2193 } 2194 leaf weight { 2195 type uint8; 2196 description "Metric normalization weight"; 2197 } 2198 } 2200 grouping svec-exclude { 2201 description "List of resources to be excluded by all the paths 2202 in the SVEC"; 2203 container exclude-objects { 2204 description "resources to be excluded"; 2205 list excludes { 2206 key index; 2207 ordered-by user; 2208 leaf index { 2209 type uint32; 2210 description "XRO subobject index"; 2211 } 2212 description 2213 "List of explicit route objects to always exclude 2214 from synchronized path computation"; 2215 uses te-types:explicit-route-hop; 2216 } 2217 } 2218 } 2220 grouping synchronization-constraints { 2221 description "Global constraints applicable to synchronized 2222 path computation"; 2223 container svec-constraints { 2224 description "global svec constraints"; 2225 list path-metric-bound { 2226 key metric-type; 2227 description "list of bound metrics"; 2228 uses svec-metrics-bounds_config; 2229 } 2230 } 2231 uses te-types:generic-path-srlgs; 2232 uses svec-exclude; 2233 } 2235 grouping synchronization-optimization { 2236 description "Synchronized request optimization"; 2237 container optimizations { 2238 description 2239 "The objective function container that includes attributes 2240 to impose when computing a synchronized set of paths"; 2242 choice algorithm { 2243 description "Optimizations algorithm."; 2244 case metric { 2245 if-feature te-types:path-optimization-metric; 2246 list optimization-metric { 2247 key "metric-type"; 2248 description "svec path metric type"; 2249 uses svec-metrics-optimization_config; 2250 } 2251 } 2252 case objective-function { 2253 if-feature te-types:path-optimization-objective-function; 2254 container objective-function { 2255 description 2256 "The objective function container that includes 2257 attributes to impose when computing a TE path"; 2258 leaf objective-function-type { 2259 type identityref { 
2260 base te-types:objective-function-type; 2261 } 2262 default te-types:of-minimize-cost-path; 2263 description "Objective function entry"; 2264 } 2265 } 2266 } 2267 } 2268 } 2269 } 2271 grouping synchronization-info { 2272 description "Information for sync"; 2273 list synchronization { 2274 key "synchronization-id"; 2275 description "sync list"; 2276 leaf synchronization-id { 2277 type uint32; 2278 description "index"; 2279 } 2280 container svec { 2281 description 2282 "Synchronization VECtor"; 2284 leaf relaxable { 2285 type boolean; 2286 default true; 2287 description 2288 "If this leaf is true, path computation process is 2289 free to ignore svec content. 2290 Otherwise, it must take into account this svec."; 2291 } 2292 uses te-types:generic-path-disjointness; 2293 leaf-list request-id-number { 2294 type uint32; 2295 description 2296 "This list reports the set of path computation 2297 requests that must be synchronized."; 2298 } 2299 } 2300 uses synchronization-constraints; 2301 uses synchronization-optimization; 2302 } 2303 } 2305 grouping no-path-info { 2306 description "no-path-info"; 2307 container no-path { 2308 presence "Response without path information, due to failure 2309 performing the path computation"; 2310 description "if path computation cannot identify a path, 2311 rpc returns no path."; 2312 } 2313 } 2315 /* 2316 * These groupings should be removed when defined in te-types 2317 */ 2319 grouping encoding-and-switching-type { 2320 description 2321 "Common grouping to define the LSP encoding and 2322 switching types"; 2324 leaf encoding { 2325 type identityref { 2326 base te-types:lsp-encoding-types; 2327 } 2328 description "LSP encoding type"; 2329 reference "RFC3945"; 2330 } 2331 leaf switching-type { 2332 type identityref { 2333 base te-types:switching-capabilities; 2334 } 2335 description "LSP switching type"; 2336 reference "RFC3945"; 2337 } 2338 } 2340 grouping tunnel-p2p-common-params { 2341 description 2342 "Common grouping to define the TE tunnel parameters"; 2344 uses encoding-and-switching-type; 2345 leaf source { 2346 type inet:ip-address; 2347 description "TE tunnel source address."; 2348 } 2349 leaf destination { 2350 type inet:ip-address; 2351 description "P2P tunnel destination address"; 2352 } 2353 leaf src-tp-id { 2354 type binary; 2355 description 2356 "TE tunnel source termination point identifier."; 2357 } 2358 leaf dst-tp-id { 2359 type binary; 2360 description 2361 "TE tunnel destination termination point identifier."; 2363 } 2364 leaf bidirectional { 2365 type boolean; 2366 default 'false'; 2367 description "TE tunnel bidirectional"; 2368 } 2369 } 2371 /* 2372 * AUGMENTS TO TE RPC 2373 */ 2375 augment "/te:tunnels-rpc/te:input/te:tunnel-info" { 2376 description "Path Computation RPC input"; 2377 list path-request { 2378 key "request-id"; 2379 description "request-list"; 2380 leaf request-id { 2381 type uint32; 2382 mandatory true; 2383 description 2384 "Each path computation request is uniquely identified 2385 by the request-id-number."; 2386 } 2387 uses tunnel-p2p-common-params; 2388 uses te-types:te-topology-identifier; 2389 uses te-types:path-constraints-route-objects; 2390 uses te-types:generic-path-constraints; 2391 uses te-types:generic-path-optimization; 2392 uses te:path-access-segment-info; 2393 uses requested-info; 2394 container requested-state { 2395 presence 2396 "Request temporary reporting of the computed path state"; 2397 description 2398 "Configures attributes for the temporary reporting of the 2399 computed path state (e.g., 
expiration timer)."; 2400 uses requested-state; 2401 } 2403 } 2404 uses synchronization-info; 2405 } 2407 augment "/te:tunnels-rpc/te:output/te:result" { 2408 description "Path Computation RPC output"; 2409 list response { 2410 key "response-id"; 2411 config false; 2412 description "response"; 2413 leaf response-id { 2414 type uint32; 2415 description 2416 "The response-id has the same value of the corresponding 2417 request-id."; 2418 } 2419 choice response-type { 2420 config false; 2421 description "response-type"; 2422 case no-path-case { 2423 uses no-path-info; 2424 } 2425 case path-case { 2426 container computed-path { 2427 uses path-info; 2428 uses reported-state; 2429 description "Path computation service."; 2430 } 2431 } 2432 } 2433 } 2434 } 2436 augment "/te:tunnels-rpc/te:input/te:tunnel-info" { 2437 description "Path Delete RPC input"; 2438 leaf-list deleted-paths-transaction-id { 2439 type string; 2440 description 2441 "The list of the transaction-id values of the 2442 transient states to be deleted"; 2443 } 2444 } 2446 augment "/te:tunnels-rpc/te:output/te:result" { 2447 description "Path Delete RPC output"; 2448 leaf-list deleted-paths-transaction-id { 2449 type string; 2450 description 2451 "The list of the transaction-id values of the 2452 transient states that have been successfully deleted"; 2453 } 2454 } 2455 } 2456 2458 Figure 12 - TE path computation YANG module 2460 7. Security Considerations 2462 This document describes use cases of requesting Path Computation 2463 using YANG models, which could be used at the ABNO Control Interface 2464 [RFC7491] and/or between controllers in ACTN [RFC8453]. As such, it 2465 does not introduce any new security considerations compared to the 2466 ones related to YANG specification, ABNO specification and ACTN 2467 Framework defined in [RFC7950], [RFC7491] and [RFC8453]. 2469 The YANG module defined in this draft is designed to be accessed via 2470 the NETCONF protocol [RFC6241] or RESTCONF protocol [RFC8040]. The 2471 lowest NETCONF layer is the secure transport layer, and the 2472 mandatory-to-implement secure transport is Secure Shell (SSH) 2473 [RFC6242]. The lowest RESTCONF layer is HTTPS, and the mandatory-to- 2474 implement secure transport is TLS [RFC8446]. 2476 This document also defines common data types using the YANG data 2477 modeling language. The definitions themselves have no security 2478 impact on the Internet, but the usage of these definitions in 2479 concrete YANG modules might have. The security considerations 2480 spelled out in the YANG specification [RFC7950] apply for this 2481 document as well. 2483 The NETCONF access control model [RFC8341] provides the means to 2484 restrict access for particular NETCONF or RESTCONF users to a 2485 preconfigured subset of all available NETCONF or RESTCONF protocol 2486 operations and content. 2488 Note - The security analysis of each leaf is for further study. 2490 8. IANA Considerations 2492 This document registers the following URIs in the IETF XML registry 2493 [RFC3688]. Following the format in [RFC3688], the following 2494 registration is requested to be made. 2496 URI: urn:ietf:params:xml:ns:yang:ietf-te-path-computation 2497 XML: N/A, the requested URI is an XML namespace. 2499 This document registers a YANG module in the YANG Module Names 2500 registry [RFC7950]. 2502 name: ietf-te-path-computation 2503 namespace: urn:ietf:params:xml:ns:yang:ietf-te-path-computation 2504 prefix: tepc 2506 9. References 2508 9.1. 
Normative References
2510 [RFC3688] Mealling, M., "The IETF XML Registry", RFC 3688, January
2511 2004.
2513 [RFC5440] Vasseur, JP., Le Roux, JL. et al., "Path Computation
2514 Element (PCE) Communication Protocol (PCEP)", RFC 5440,
2515 March 2009.
2517 [RFC5441] Vasseur, JP., Ed., Zhang, R., Bitar, N., and JL. Le Roux,
2518 "A Backward-Recursive PCE-Based Computation (BRPC)
2519 Procedure to Compute Shortest Constrained Inter-Domain
2520 Traffic Engineering Label Switched Paths", RFC 5441,
2521 DOI 10.17487/RFC5441, April 2009.
2524 [RFC5541] Le Roux, JL. et al., "Encoding of Objective Functions in
2525 the Path Computation Element Communication Protocol
2526 (PCEP)", RFC 5541, June 2009.
2528 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed.,
2529 and A. Bierman, Ed., "Network Configuration Protocol
2530 (NETCONF)", RFC 6241, June 2011.
2532 [RFC6242] Wasserman, M., "Using the NETCONF Protocol over Secure
2533 Shell (SSH)", RFC 6242, June 2011.
2535 [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
2536 Protocol", RFC 8040, January 2017.
2538 [RFC8341] Bierman, A., and M. Bjorklund, "Network Configuration
2539 Access Control Model", RFC 8341, March 2018.
2541 [RFC7491] Farrel, A., King, D., "A PCE-Based Architecture for
2542 Application-Based Network Operations", RFC 7491, March
2543 2015.
2545 [RFC7926] Farrel, A. et al., "Problem Statement and Architecture for
2546 Information Exchange Between Interconnected Traffic
2547 Engineered Networks", RFC 7926, July 2016.
2549 [RFC7950] Bjorklund, M., "The YANG 1.1 Data Modeling Language", RFC
2550 7950, August 2016.
2555 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol
2556 Version 1.3", RFC 8446, August 2018.
2558 [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for Abstraction
2559 and Control of TE Networks (ACTN)", RFC 8453, August 2018.
2561 [RFC8454] Lee, Y. et al., "Information Model for Abstraction and
2562 Control of TE Networks (ACTN)", RFC 8454, September 2018.
2564 [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
2565 draft-ietf-teas-yang-te-topo, work in progress.
2567 [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
2568 Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
2569 te, work in progress.
2571 9.2. Informative References
2573 [RFC4655] Farrel, A. et al., "A Path Computation Element (PCE)-Based
2574 Architecture", RFC 4655, August 2006.
2576 [RFC6805] King, D., Ed. and A. Farrel, Ed., "The Application of the
2577 Path Computation Element Architecture to the Determination
2578 of a Sequence of Domains in MPLS and GMPLS", RFC 6805, DOI
2579 10.17487/RFC6805, November 2012.
2582 [RFC7139] Zhang, F. et al., "GMPLS Signaling Extensions for Control
2583 of Evolving G.709 Optical Transport Networks", RFC 7139,
2584 March 2014.
2586 [RFC7446] Lee, Y. et al., "Routing and Wavelength Assignment
2587 Information Model for Wavelength Switched Optical
2588 Networks", RFC 7446, February 2015.
2590 [RFC8233] Dhody, D. et al., "Extensions to the Path Computation
2591 Element Communication Protocol (PCEP) to Compute Service-
2592 Aware Label Switched Paths (LSPs)", RFC 8233, September
2593 2017.
2595 [RFC8342] Bjorklund, M. et al., "Network Management Datastore
2596 Architecture (NMDA)", RFC 8342, March 2018.
2598 [OTN-TOPO] Zheng, H.
et al., "A YANG Data Model for Optical
2599 Transport Network Topology", draft-ietf-ccamp-otn-topo-
2600 yang, work in progress.
2602 [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interface
2603 for the optical transport network", June 2016.
2605 10. Acknowledgments
2607 The authors would like to thank Igor Bryskin and Xian Zhang for
2608 participating in the initial discussions that have triggered this
2609 work and providing valuable insights.
2611 The authors would like to thank the authors of the TE Tunnel YANG
2612 model [TE-TUNNEL], in particular Igor Bryskin, Vishnu Pavan Beeram,
2613 Tarek Saad and Xufeng Liu, for their inputs to the discussions and
2614 support in having consistency between the Path Computation and TE
2615 Tunnel YANG models.
2617 The authors would like to thank Adrian Farrel, Dhruv Dhody, Igor
2618 Bryskin, Julien Meuric and Lou Berger for their valuable input to
2619 the discussions that have clarified that the path being setup is not
2620 necessarily the same as the path that has been previously computed
2621 and, in particular, Dhruv Dhody for his suggestion to describe
2622 the need for a path verification phase to check that the actual path
2623 being setup meets the required end-to-end metrics and constraints.
2625 This document was prepared using 2-Word-v2.0.template.dot.
2627 Appendix A. Examples of dimensioning the "detailed connectivity matrix"
2629 In the following table, a list of the possible constraints,
2630 associated with their potential cardinality, is reported.
2632 The maximum number of potential connections to be computed and
2633 reported is, to a first approximation, the product of all of
2634 them.
2636 Constraint Cardinality
2637 ---------- -------------------------------------------------------
2639 End points N(N-1)/2 if connections are bidirectional (OTN and WDM),
2640 N(N-1) for unidirectional connections.
2642 Bandwidth In WDM networks, bandwidth values are expressed in GHz.
2644 On fixed-grid WDM networks, the central frequencies are
2645 on a 50GHz grid and the channel widths of the transmitters
2646 are typically 50GHz, such that each central frequency can
2647 be used, i.e., adjacent channels can be placed next to
2648 each other in terms of central frequencies.
2650 On flex-grid WDM networks, the central frequencies are on
2651 a 6.25GHz grid and the channel width of the transmitters
2652 can be multiples of 12.5GHz.
2654 For fixed-grid WDM networks there is typically only one
2655 possible bandwidth value (i.e., 50GHz), while for flex-
2656 grid WDM networks there are typically 4 possible
2657 bandwidth values (e.g., 37.5GHz, 50GHz, 62.5GHz, 75GHz).
2659 In OTN (ODU) networks, bandwidth values are expressed as
2660 pairs of ODU type and, in case of ODUflex, ODU rate in
2661 bytes/sec, as described in section 5 of [RFC7139].
2663 For "fixed" ODUk types, 6 bandwidth values are
2664 possible (i.e., ODU0, ODU1, ODU2, ODU2e, ODU3, ODU4).
2666 For ODUflex(GFP), up to 80 different bandwidth values can
2667 be specified, as defined in Table 7-8 of [ITU-T G.709-
2668 2016].
2670 For other ODUflex types, like ODUflex(CBR), the number of
2671 possible bandwidth values depends on the rates of the
2672 clients that could be mapped over these ODUflex types, as
2673 shown in Table 7.2 of [ITU-T G.709-2016], which in theory
2674 could be a continuum of values.
However, since different
2675 ODUflex bandwidths that use the same number of TSs on
2676 each link along the path are equivalent for path
2677 computation purposes, up to 120 different bandwidth
2678 ranges can be specified.
2680 Ideas to reduce the number of ODUflex bandwidth values in
2681 the detailed connectivity matrix, to less than 100, are
2682 for further study.
2684 Bandwidth specification for ODUCn is currently for
2685 further study, but it is expected that other bandwidth
2686 values can be specified as integer multiples of 100Gb/s.
2688 In IP networks, bandwidth values are expressed in bytes/sec. In
2689 principle, this is a continuum of values, but in
2690 practice we can identify a set of bandwidth ranges, where
2691 any bandwidth value inside the same range produces the
2692 same path.
2693 The number of such ranges is the cardinality, which
2694 depends on the topology, available bandwidth and status
2695 of the network. Simulations (Note: reference paper
2696 submitted for publication) show that values for medium-
2697 size topologies (around 50-150 nodes) are in the range 4-
2698 7 (5 on average) for each couple of end points.
2700 Metrics IGP, TE and hop number are the basic objective metrics
2701 defined so far. There are also the 2 objective functions
2702 defined in [RFC5541]: Minimum Load Path (MLP) and Maximum
2703 Residual Bandwidth Path (MBP). Assuming that only one
2704 metric or objective function can be optimized at once,
2705 the total cardinality here is 5.
2707 With [RFC8233], a number of additional metrics are
2708 defined, including the Path Delay metric, the Path Delay
2709 Variation metric and the Path Loss metric, both for point-to-
2710 point and point-to-multipoint paths. This increases the
2711 cardinality to 8.
2713 Bounds Each metric can be associated with a bound in order to
2714 find a path having a total value of that metric lower
2715 than the given bound. This has a potentially very high
2716 cardinality (as any value for the bound is allowed). In
2717 practice there is a maximum value of the bound (the one
2718 corresponding to the maximum value of the associated metric) which
2719 always results in the same path, and a range approach
2720 like the one used for bandwidth in IP networks should also produce
2721 the cardinality in this case. Assuming a cardinality similar
2722 to that of the bandwidth (say, 5 on average), we
2723 would have 6 (IGP, TE, hop, path delay, path delay
2724 variation and path loss; we do not consider here the two
2725 objective functions of [RFC5541], as they are conceived
2726 only for optimization) * 5 = 30 as the cardinality.
2728 Technology
2729 constraints For further study
2731 Priority We have 8 values for setup priority, which is used in
2732 path computation to route a path using free resources
2733 and, where no free resources are available, resources
2734 used by LSPs having a lower holding priority.
2736 Local prot It is possible to ask for a locally protected service, where
2737 all the links used by the path are protected with fast
2738 reroute (this is only for IP networks, but line
2739 protection schemes are available in the other
2740 technologies as well). This adds an alternative path
2741 computation, so the cardinality of this constraint is 2.
2743 Administrative
2744 Colors Administrative colors (aka affinities) are typically
2745 assigned to links, but when topology abstraction is used
2746 affinity information can also appear in the detailed
2747 connectivity matrix.
2749 There are 32 bits available for the affinities.
Links can
2750 be tagged with any combination of these bits, and path
2751 computation can be constrained to include or exclude any
2752 or all of them. The relevant cardinality is 3 (include-
2753 any, exclude-any, include-all) times 2^32 possible
2754 values. However, the number of possible values used in
2755 real networks is quite small.
2757 Included Resources
2759 A path computation request can be associated with an
2760 ordered set of network resources (links, nodes) to be
2761 included along the computed path. This constraint would
2762 have a huge cardinality, as in principle any combination
2763 of network resources is possible. However, as long as the
2764 Orchestrator does not know the details of the internal
2765 topology of the domain, it should not include this type of
2766 constraint at all (see more details below).
2768 Excluded Resources
2770 A path computation request can be associated with a set of
2771 network resources (links, nodes, SRLGs) to be excluded
2772 from the computed path. Like for included resources,
2773 this constraint has a potentially very high cardinality,
2774 but, once again, it cannot actually be used by the
2775 Orchestrator if it is not aware of the domain topology
2776 (see more details below).
2777 As discussed above, the Orchestrator can specify include or exclude
2778 resources depending on the abstract topology information that the
2779 domain controller exposes:
2781 o In case the domain controller exposes the entire domain as a
2782 single abstract TE node with its own external terminations and
2783 detailed connectivity matrix (whose size we are estimating), no
2784 other topological details are available; therefore, the size of
2785 the detailed connectivity matrix only depends on the combination
2786 of the constraints that the Orchestrator can use in a path
2787 computation request to the domain controller. These constraints
2788 cannot refer to any details of the internal topology of the
2789 domain, as those details are not known to the Orchestrator, and so
2790 they do not impact the size of the detailed connectivity matrix
2791 exported.
2793 o In case, instead, the domain controller exposes a topology
2794 including more than one abstract TE node and TE link, and their
2795 attributes (e.g., SRLGs and affinities for the links), the
2796 Orchestrator knows these details and therefore could compute a
2797 path across the domain referring to them in the constraints. The
2798 detailed connectivity matrices whose size needs to be estimated
2799 here are the ones relevant to the abstract TE nodes exported to
2800 the Orchestrator. These detailed connectivity matrices, and
2801 therefore their sizes, cannot depend on the other abstract
2802 TE nodes and TE links, which are external to the given abstract
2803 node, but they could depend on SRLGs (and other attributes, like
2804 affinities) which could also be present in the portion of the
2805 topology represented by the abstract nodes, and which therefore
2806 contribute to the size of the related detailed connectivity
2807 matrix.
2809 We also do not consider here the possibility of asking for more than one
2810 path in diversity or for point-to-multipoint paths, which are for
2811 further study.
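To a first approximation, the dimensioning discussed in this appendix is
the product of the per-constraint cardinalities, multiplied by the number
of end point pairs. The following sketch (in Python, with placeholder
cardinality values used purely for illustration; the figures actually used
in this appendix are derived in the following paragraphs) shows how such an
estimate could be computed:

   # Illustrative sketch only: rough estimate of the number of entries
   # of a detailed connectivity matrix, computed as the number of end
   # point pairs times the product of the per-constraint cardinalities.
   # The cardinality values used below are placeholders, not normative.

   def estimate_entries(num_access_points, cardinalities,
                        bidirectional=False):
       n = num_access_points
       # N(N-1)/2 end point pairs for bidirectional connections,
       # N(N-1) for unidirectional connections
       pairs = n * (n - 1) // 2 if bidirectional else n * (n - 1)
       total = pairs
       for cardinality in cardinalities.values():
           total *= cardinality
       return total

   # Example with placeholder per-constraint cardinalities
   print(estimate_entries(4, {"bandwidth": 5,
                              "metrics": 6,
                              "bounds": 20,
                              "priority": 8,
                              "local-protection": 2}))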
2813 Considering, for example, an IP domain and ignoring SRLGs and
2814 affinities, we have an estimated number of paths depending on these
2815 estimated cardinalities:
2817 Endpoints = N*(N-1), Bandwidth = 5, Metrics = 6, Bounds = 20,
2818 Priority = 8, Local prot = 2
2820 The number of paths to be pre-computed by each IP domain is
2821 therefore 24960 * N(N-1), where N is the number of domain access
2822 points.
2824 This means that with just 4 access points we have nearly 300000
2825 paths to compute, advertise and maintain (if a change happens in the
2826 domain, due to a fault or just the deployment of new traffic, a
2827 substantial number of paths need to be recomputed and the relevant
2828 changes advertised to the upper controller).
2830 This seems quite challenging. In fact, if we assume a mean length of
2831 1K for the JSON describing a path (a quite conservative estimate),
2832 reporting 300000 paths means transferring and then parsing more than
2833 300 Mbytes for each domain. If we assume that 20% (to be checked) of
2834 these paths change when a new deployment of traffic occurs, we have
2835 60 Mbytes of transfer for each domain traversed by a new end-to-end
2836 path. If a network has, let's say, 20 domains (we want to estimate the
2837 load for a non-trivial domain setup), a total
2838 initial transfer of 6 Gbytes is needed at the beginning and, eventually,
2839 assuming 4-5 domains are involved on average during a path deployment,
2840 we could have 240-300 Mbytes of changes advertised to the higher-order
2841 controller.
2842 Further bare-bone solutions can be investigated, removing some more
2843 options, if this is considered not acceptable; in conclusion, it
2844 seems that an approach based only on the information provided by the
2845 detailed connectivity matrix is hardly feasible and could be
2846 applicable only to small networks with a limited meshing degree
2847 between domains, giving up a number of path computation
2848 features.
2850 Contributors
2852 Dieter Beller
2853 Nokia
2854 Email: dieter.beller@nokia.com
2856 Gianmarco Bruno
2857 Ericsson
2858 Email: gianmarco.bruno@ericsson.com
2860 Francesco Lazzeri
2861 Ericsson
2862 Email: francesco.lazzeri@ericsson.com
2864 Young Lee
2865 Huawei
2866 Email: leeyoung@huawei.com
2867 Carlo Perocchio
2868 Ericsson
2869 Email: carlo.perocchio@ericsson.com
2871 Olivier Dugeon
2872 Orange Labs
2873 Email: olivier.dugeon@orange.com
2875 Julien Meuric
2876 Orange Labs
2877 Email: julien.meuric@orange.com
2879 Authors' Addresses
2881 Italo Busi (Editor)
2882 Huawei
2883 Email: italo.busi@huawei.com
2885 Sergio Belotti (Editor)
2886 Nokia
2887 Email: sergio.belotti@nokia.com
2889 Victor Lopez
2890 Telefonica
2891 Email: victor.lopezalvarez@telefonica.com
2893 Oscar Gonzalez de Dios
2894 Telefonica
2895 Email: oscar.gonzalezdedios@telefonica.com
2897 Anurag Sharma
2898 Google
2899 Email: ansha@google.com
2901 Yan Shi
2902 China Unicom
2903 Email: shiyan49@chinaunicom.cn
2904 Ricard Vilalta
2905 CTTC
2906 Email: ricard.vilalta@cttc.es
2908 Karthik Sethuraman
2909 NEC
2910 Email: karthik.sethuraman@necam.com
2912 Michael Scharf
2913 Nokia
2914 Email: michael.scharf@gmail.com
2916 Daniele Ceccarelli
2917 Ericsson
2918 Email: daniele.ceccarelli@ericsson.com