TEAS Working Group                                    Italo Busi (Ed.)
Internet Draft                                                  Huawei
Intended status: Informational                    Sergio Belotti (Ed.)
Expires: May 2018                                                Nokia
                                                          Victor Lopez
                                                Oscar Gonzalez de Dios
                                                            Telefonica
                                                         Anurag Sharma
                                                              Infinera
                                                               Yan Shi
                                                          China Unicom
                                                        Ricard Vilalta
                                                                  CTTC
                                                    Karthik Sethuraman
                                                                   NEC

                                                     November 13, 2017

              YANG model for requesting Path Computation
               draft-ietf-teas-yang-path-computation-00.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on May 13, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   There are scenarios, typically in a hierarchical SDN context, in
   which an orchestrator may not have detailed information to be able
   to perform an end-to-end path computation and would need to request
   lower layer/domain controllers to calculate some (partial) feasible
   paths.

   Multiple protocol solutions can be used for communication between
   different controller hierarchical levels. This document assumes that
   the controllers are communicating using YANG-based protocols (e.g.,
   NETCONF or RESTCONF).

   This document describes some use cases where a path computation
   request, via YANG-based protocols (e.g., NETCONF or RESTCONF), can
   be needed.

   This document also proposes a YANG model for a stateless RPC which
   complements the stateful solution defined in [TE-TUNNEL].

Table of Contents

   1. Introduction...................................................3
   2. Use Cases......................................................4
   2.1.
IP-Optical integration....................................5
      2.1.1. Inter-layer path computation.........................6
      2.1.2. Route Diverse IP Services............................8
   2.2. Multi-domain TE Networks..................................8
   2.3. Data center interconnections..............................9
   3. Interactions with TE Topology.................................11
   3.1. TE Topology Aggregation using the "virtual link model"...11
   3.2. TE Topology Abstraction..................................19
   3.3. Complementary use of TE topology and path computation....20
   4. Motivation for a YANG Model...................................22
   4.1. Benefits of common data models...........................22
   4.2. Benefits of a single interface...........................23
   4.3. Extensibility............................................23
   5. Path Computation for multiple LSPs............................24
   6. YANG Model for requesting Path Computation....................25
   6.1. Stateless and Stateful Path Computation..................25
   6.2. YANG model for stateless TE path computation.............26
      6.2.1. YANG Tree...........................................26
      6.2.2. YANG Module.........................................34
   7. Security Considerations.......................................40
   8. IANA Considerations...........................................41
   9. References....................................................41
   9.1. Normative References.....................................41
   9.2. Informative References...................................42
   10. Acknowledgments..............................................42

1.
Introduction

   There are scenarios, typically in a hierarchical SDN context, in
   which an orchestrator may not have detailed information to be able
   to perform an end-to-end path computation and would need to request
   lower layer/domain controllers to calculate some (partial) feasible
   paths.

   When considering this type of scenario, we have in mind specific
   levels of interface at which such a request can be applied.

   One reference is the ABNO Control Interface [RFC7491], on which an
   Application Service Coordinator can request the ABNO controller to
   take charge of path computation (see Figure 1 in that RFC). Another
   is ACTN [ACTN-frame], where a controller hierarchy is defined and
   the need for path computation arises on both the CMI (the interface
   between the Customer Network Controller (CNC) and the Multi Domain
   Service Coordinator (MDSC)) and the MPI (the interface between the
   MDSC and the PNC). [ACTN-Info] describes an information model for
   the Path Computation request.

   Multiple protocol solutions can be used for communication between
   different controller hierarchical levels. This document assumes that
   the controllers are communicating using YANG-based protocols (e.g.,
   NETCONF or RESTCONF).

   Path Computation Elements, Controllers and Orchestrators perform
   their operations based on Traffic Engineering Databases (TED). Such
   TEDs can be described, in a technology-agnostic way, with the YANG
   Data Model for TE Topologies [TE-TOPO]. Furthermore, the
   technology-specific details of the TED are modeled in the augmented
   TE topology models (e.g., [L1-TOPO] for Layer-1 ODU technologies).

   The availability of such topology models allows the TED to be
   provided using YANG-based protocols (e.g., NETCONF or RESTCONF).
   Furthermore, it enables a PCE/Controller to perform the necessary
   abstractions or modifications and to offer this customized topology
   to another PCE/Controller or higher-level orchestrator.
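   As an illustration (not part of the model proposed by this
   document), the following sketch shows how an orchestrator might
   prepare a RESTCONF GET for the topology exported by a controller.
   The controller address is a placeholder, and the exact resource path
   depends on the module revisions deployed on the controller.

```python
# Sketch: building (not sending) a RESTCONF GET for the TE topology.
# The controller URL is hypothetical; the "ietf-network:networks"
# container is where the TE topologies are expected to be mounted,
# but the exact path depends on the deployed model revisions.
import urllib.request

CONTROLLER = "https://controller.example.com"  # hypothetical address

def te_topology_request(base=CONTROLLER):
    """Return a prepared RESTCONF GET request for the networks
    container holding the TE topologies."""
    url = base + "/restconf/data/ietf-network:networks"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Accept", "application/yang-data+json")
    return req

req = te_topology_request()
```

   The request is only constructed here; an actual deployment would
   send it with the appropriate authentication and parse the returned
   JSON-encoded YANG data.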
   The tunnels that can be provided over the networks described with
   the topology models can also be set up, deleted and modified via
   YANG-based protocols (e.g., NETCONF or RESTCONF) using the TE-Tunnel
   YANG model [TE-TUNNEL].

   This document describes some use cases where a path computation
   request, via YANG-based protocols (e.g., NETCONF or RESTCONF), can
   be needed.

   This document also proposes a YANG model for a stateless RPC which
   complements the stateful solution defined in [TE-TUNNEL].

2. Use Cases

   This section presents different use cases, where an orchestrator
   needs to request path computation from underlying SDN controllers.

   The presented use cases have been grouped, depending on the
   different underlying topologies: a) IP-Optical integration; b)
   Multi-domain Traffic Engineered (TE) Networks; and c) Data center
   interconnections.

2.1. IP-Optical integration

   In these use cases, an Optical domain is used to provide
   connectivity between IP routers which are connected with the Optical
   domains using access links (see Figure 1).

   --------------------------------------------------------------------
   I                                                                  I
   I                      IP+Optical Use Cases                        I
   I                                                                  I
   I                     (only in PDF version)                        I
   I                                                                  I
   --------------------------------------------------------------------

                    Figure 1 - IP+Optical Use Cases

   It is assumed that the Optical domain controller provides to the
   orchestrator an abstracted view of the Optical network. A possible
   abstraction would be to represent the optical domain as one "virtual
   node" with "virtual ports" connected to the access links.

   The path computation request helps the orchestrator to know which
   are the real connections that can be provided at the optical domain.
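   To make this concrete, the following is a purely illustrative sketch
   of what the JSON body of such a path computation request could look
   like. All element names below are placeholders invented for this
   example; they do not reflect the actual YANG model defined in
   Section 6.2.

```python
# Hypothetical path computation request body between two virtual
# ports of the abstract node exposed by the Optical domain controller.
# All field names are placeholders, not the actual model of Section 6.2.
import json

def build_request(src, dst, bandwidth_ghz):
    """Assemble an illustrative path computation request payload."""
    return json.dumps({
        "path-request": {
            "source": src,               # virtual port on an access link
            "destination": dst,
            "bandwidth": bandwidth_ghz,  # GHz, for a WDM domain
            "optimization-metric": "cost",
        }
    })

body = build_request("VP1", "VP4", 50)
```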
   --------------------------------------------------------------------
   I                                                                  I
   I               IP+Optical Topology Abstraction                    I
   I                                                                  I
   I                     (only in PDF version)                        I
   I                                                                  I
   --------------------------------------------------------------------

              Figure 2 - IP+Optical Topology Abstraction

2.1.1. Inter-layer path computation

   In this use case, the orchestrator needs to set up an optimal path
   between two IP routers R1 and R2.

   As depicted in Figure 2, the Orchestrator has only an "abstracted
   view" of the physical network, and it does not know the feasibility
   or the cost of the possible optical paths (e.g., VP1-VP4 and
   VP2-VP5), which depend on the current status of the physical
   resources within the optical network and on vendor-specific optical
   attributes.

   The orchestrator can request the underlying Optical domain
   controller to compute a set of potential optimal paths, taking into
   account optical constraints. Then, based on its own constraints,
   policy and knowledge (e.g., cost of the access links), it can choose
   which one of these potential paths to use to set up the optimal
   end-to-end path crossing the optical network.

   --------------------------------------------------------------------
   I                                                                  I
   I             IP+Optical Path Computation Example                  I
   I                                                                  I
   I                     (only in PDF version)                        I
   I                                                                  I
   --------------------------------------------------------------------

            Figure 3 - IP+Optical Path Computation Example

   For example, in Figure 3, the Orchestrator can request the Optical
   domain controller to compute the paths between VP1-VP4 and VP2-VP5
   and then decide to set up the optimal end-to-end path using the
   VP2-VP5 Optical path even if this is not the optimal path from the
   Optical domain perspective.
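   The selection logic just described can be sketched in a few lines.
   The cost values below are invented for illustration only: the
   optical path costs are assumed to be returned by the Optical domain
   controller, while the access-link costs are known only to the
   orchestrator.

```python
# Sketch of the orchestrator's decision: combine optical path costs
# (returned by the Optical domain controller on request) with the
# access-link costs known to the orchestrator. All costs are invented.

# Candidate optical paths and their costs, per the domain controller.
optical_paths = {("VP1", "VP4"): 50, ("VP2", "VP5"): 65}

# Access-link costs known by the orchestrator (router <-> virtual port).
access_cost = {("R1", "VP1"): 10, ("VP4", "R2"): 30,
               ("R1", "VP2"): 5,  ("VP5", "R2"): 10}

def best_e2e_path(src, dst):
    """Pick the optical path that minimizes the end-to-end cost."""
    candidates = {}
    for (vp_in, vp_out), cost in optical_paths.items():
        total = access_cost[(src, vp_in)] + cost + access_cost[(vp_out, dst)]
        candidates[(vp_in, vp_out)] = total
    return min(candidates, key=candidates.get)

# With these numbers, VP2-VP5 wins (5 + 65 + 10 = 80 vs. 10 + 50 + 30
# = 90), even though VP1-VP4 is the cheaper path from the Optical
# domain's own perspective.
```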
   Considering the dynamicity of the connectivity constraints of an
   Optical domain, it is possible that a path computed by the Optical
   domain controller when requested by the Orchestrator is no longer
   valid when the Orchestrator requests it to be set up.

   It is worth noting that with the approach proposed in this document,
   the likelihood of this issue occurring can be quite small, since the
   time window between the path computation request and the path setup
   request should be quite short (especially if compared with the time
   that would be needed to update the information of a very detailed
   abstract connectivity matrix).

   If this risk is still not acceptable, the Orchestrator may also
   optionally request the Optical domain controller not only to compute
   the path but also to keep track of its resources (e.g., these
   resources can be reserved to avoid being used by any other
   connection). In this case, some mechanism (e.g., a timeout) needs to
   be defined to avoid having stranded resources within the Optical
   domain.

   These issues and solutions can be fine-tuned during the design of
   the YANG model for requesting Path Computation.

2.1.2. Route Diverse IP Services

   This is for further study.

2.2. Multi-domain TE Networks

   In this use case there are two TE domains which are interconnected
   by multiple inter-domain links.

   A possible example could be a multi-domain optical network.
   --------------------------------------------------------------------
   I                                                                  I
   I            Multi-domain multi-link interconnection               I
   I                                                                  I
   I                     (only in PDF version)                        I
   I                                                                  I
   --------------------------------------------------------------------

           Figure 4 - Multi-domain multi-link interconnection

   In order to set up an end-to-end multi-domain TE path (e.g., between
   nodes A and H), the orchestrator needs to know the feasibility or
   the cost of the possible TE paths within the two TE domains, which
   depend on the current status of the physical resources within each
   TE network. This is more challenging in the case of optical networks
   because the optimal paths depend also on vendor-specific optical
   attributes (which may be different in the two domains if they are
   provided by different vendors).

   In order to set up a multi-domain TE path (e.g., between nodes A and
   H), the Orchestrator can request the TE domain controllers to
   compute a set of intra-domain optimal paths and take decisions based
   on the information received.
For example:

   o  The Orchestrator asks the TE domain controllers to provide the
      set of paths between A-C, A-D, E-H and F-H

   o  The TE domain controllers return a set of feasible paths with the
      associated costs: the path A-C is not part of this set (in
      optical networks, it is typical to have some paths not being
      feasible due to optical constraints that are known only by the
      optical domain controller)

   o  The Orchestrator will select the path A-D-F-H since it is the
      only feasible multi-domain path and then request the TE domain
      controllers to set up the A-D and F-H intra-domain paths

   o  If there are multiple feasible paths, the Orchestrator can select
      the optimal path knowing the cost of the intra-domain paths
      (provided by the TE domain controllers) and the cost of the
      inter-domain links (known by the Orchestrator)

   This approach may have some scalability issues when the number of TE
   domains is quite big (e.g., 20).

   In this case, it would be worthwhile using the abstract TE topology
   information provided by the domain controllers to limit the number
   of potential optimal end-to-end paths and then request path
   computation from fewer domain controllers in order to decide what
   the optimal path within this limited set is.

   For more details, see section 3.3.

2.3. Data center interconnections

   In this use case, a TE domain is used to provide connectivity
   between data centers which are connected with the TE domain using
   access links.
   --------------------------------------------------------------------
   I                                                                  I
   I            Data Center Interconnection Use Case                  I
   I                                                                  I
   I                     (only in PDF version)                        I
   I                                                                  I
   --------------------------------------------------------------------

          Figure 5 - Data Center Interconnection Use Case

   In this use case, a virtual machine within Data Center 1 (DC1) needs
   to transfer data to another virtual machine that can reside either
   in DC2 or in DC3.

   The optimal decision depends both on the cost of the TE path
   (DC1-DC2 or DC1-DC3) and on the cost of the computing power (data
   center resources) within DC2 or DC3.

   The Cloud Orchestrator may not be able to make this decision because
   it has only an abstract view of the TE network (as in the use case
   in section 2.1).

   The Cloud Orchestrator can request the TE domain controller to
   compute the cost of the possible TE paths (e.g., DC1-DC2 and
   DC1-DC3) and the DC controller to compute the cost of the computing
   power (DC resources) within DC2 and DC3, and then it can take the
   decision about the optimal solution based on this information and
   its policy.

3. Interactions with TE Topology

   The use cases described in section 2 have been described assuming
   that the topology view exported by each underlying SDN controller to
   the orchestrator is aggregated using the "virtual node model",
   defined in [RFC7926].

   TE Topology information, e.g., as provided by [TE-TOPO], could in
   theory be used by the underlying SDN controllers to provide TE
   information to the orchestrator, thus allowing the Path Computation
   Element (PCE) within the Orchestrator to perform multi-domain path
   computation on its own, without requesting path computations from
   the underlying SDN controllers.
   This section analyzes the need for an orchestrator to request path
   computation from the underlying SDN controllers even in these
   scenarios, as well as how the TE Topology information and the path
   computation can be complementary.

   In a nutshell, there is a scalability trade-off between providing
   all the TE information needed by the Orchestrator's PCE to take
   optimal path computation decisions on its own versus having the
   Orchestrator ask too many underlying SDN Domain Controllers for a
   set of feasible optimal intra-domain TE paths.

3.1. TE Topology Aggregation using the "virtual link model"

   Using the TE Topology model, as defined in [TE-TOPO], the underlying
   SDN controller can export the whole TE domain as a single abstract
   TE node with a "detailed connectivity matrix", which extends the
   "connectivity matrix", defined in [RFC7446], with specific TE
   attributes (e.g., delay, SRLGs and summary TE metrics).

   The information provided by the "detailed abstract connectivity
   matrix" would be equivalent to the information that should be
   provided by the "virtual link model" as defined in [RFC7926].

   For example, in the IP-Optical integration use case, described in
   section 2.1, the Optical domain controller can make the information
   shown in Figure 3 available to the Orchestrator as part of the TE
   Topology information, and the Orchestrator could use this
   information to calculate on its own the optimal path between routers
   R1 and R2, without requesting any additional information from the
   Optical Domain Controller.

   However, there is a trade-off between the accuracy (i.e., providing
   "all" the information that might be needed by the Orchestrator's
   PCE) and scalability to be considered when designing the amount of
   information to provide within the "detailed abstract connectivity
   matrix".
   Figure 6 below shows another example, similar to Figure 3, where
   there are two possible Optical paths between VP1 and VP4 with
   different properties (e.g., available bandwidth and cost).

   --------------------------------------------------------------------
   I                                                                  I
   I             IP+Optical Path Computation Example                  I
   I                   with multiple choices                          I
   I                                                                  I
   I                     (only in PDF version)                        I
   I                                                                  I
   --------------------------------------------------------------------

   Figure 6 - IP+Optical Path Computation Example with multiple choices

   Reporting all the information, as in Figure 6, using the "detailed
   abstract connectivity matrix", is quite challenging from a
   scalability perspective. The amount of this information is not just
   based on the number of end points (which would scale as N-square),
   but also on many other parameters, including client rate, user
   constraints / policies for the service (e.g., max latency < N ms,
   max cost), exclusion policies to route around busy links, min OSNR
   margin, max pre-FEC BER, etc. All these constraints could be
   different based on connectivity requirements.

   In the following table, a list of the possible constraints,
   associated with their potential cardinality, is reported.

   The maximum number of potential connections to be computed and
   reported is, to a first approximation, the multiplication of all of
   them.

   Constraint Cardinality
   ---------- -------------------------------------------------------

   End points N(N-1)/2 if connections are bidirectional (OTN and WDM),
              N(N-1) for unidirectional connections.

   Bandwidth  In WDM networks, bandwidth values are expressed in GHz.
              On fixed-grid WDM networks, the central frequencies are
              on a 50GHz grid and the channel width of the transmitters
              is typically 50GHz, such that each central frequency can
              be used, i.e., adjacent channels can be placed next to
              each other in terms of central frequencies.

              On flex-grid WDM networks, the central frequencies are on
              a 6.25GHz grid and the channel width of the transmitters
              can be multiples of 12.5GHz.

              For fixed-grid WDM networks there is typically only one
              possible bandwidth value (i.e., 50GHz), while for
              flex-grid WDM networks there are typically 4 possible
              bandwidth values (e.g., 37.5GHz, 50GHz, 62.5GHz, 75GHz).

              In OTN (ODU) networks, bandwidth values are expressed as
              pairs of ODU type and, in the case of ODUflex, ODU rate
              in bytes/sec as described in section 5 of [RFC7139].

              For "fixed" ODUk types, there are 6 possible bandwidth
              values (i.e., ODU0, ODU1, ODU2, ODU2e, ODU3, ODU4).

              For ODUflex(GFP), up to 80 different bandwidth values can
              be specified, as defined in Table 7-8 of [ITU-T
              G.709-2016].

              For other ODUflex types, like ODUflex(CBR), the number of
              possible bandwidth values depends on the rates of the
              clients that could be mapped over these ODUflex types, as
              shown in Table 7.2 of [ITU-T G.709-2016], which in theory
              could be a continuum of values. However, since different
              ODUflex bandwidths that use the same number of TSs on
              each link along the path are equivalent for path
              computation purposes, up to 120 different bandwidth
              ranges can be specified.

              Ideas to reduce the number of ODUflex bandwidth values in
              the detailed connectivity matrix, to less than 100, are
              for further study.

              Bandwidth specification for ODUCn is currently for
              further study, but it is expected that other bandwidth
              values can be specified as integer multiples of 100Gb/s.

              In IP networks, bandwidth values are expressed in
              bytes/sec.
              In principle, this is a continuum of values, but in
              practice we can identify a set of bandwidth ranges, where
              any bandwidth value inside the same range produces the
              same path. The number of such ranges is the cardinality,
              which depends on the topology, available bandwidth and
              status of the network. Simulations (Note: reference paper
              submitted for publication) show that values for
              medium-size topologies (around 50-150 nodes) are in the
              range 4-7 (5 on average) for each pair of end points.

   Metrics    IGP, TE and hop number are the basic objective metrics
              defined so far. There are also the 2 objective functions
              defined in [RFC5541]: Minimum Load Path (MLP) and Maximum
              Residual Bandwidth Path (MBP). Assuming that only one
              metric or objective function can be optimized at a time,
              the total cardinality here is 5.

              With [PCEP-Service-Aware], a number of additional metrics
              are defined, including the Path Delay metric, the Path
              Delay Variation metric and the Path Loss metric, both for
              point-to-point and point-to-multipoint paths. This
              increases the cardinality to 8.

   Bounds     Each metric can be associated with a bound in order to
              find a path having a total value of that metric lower
              than the given bound. This has a potentially very high
              cardinality (as any value for the bound is allowed). In
              practice there is a maximum value of the bound (the one
              corresponding to the maximum value of the associated
              metric) which always results in the same path, and a
              range approach like the one used for bandwidth in IP
              should also bound the cardinality in this case. Assuming
              a cardinality similar to that of the bandwidth (say 5 on
              average), we have 6 bounded metrics (IGP, TE, hop, path
              delay, path delay variation and path loss; we don't
              consider here the two objective functions of [RFC5541] as
              they are conceived only for optimization), giving a
              cardinality of 6 * 5 = 30.
   Priority   We have 8 values for setup priority, which is used in
              path computation to route a path using free resources
              and, where no free resources are available, resources
              used by LSPs having a lower holding priority.

   Local prot It is possible to ask for a locally protected service,
              where all the links used by the path are protected with
              fast reroute (this is only for IP networks, but line
              protection schemes are available in the other
              technologies as well). This adds an alternative path
              computation, so the cardinality of this constraint is 2.

   Administrative
   Colors     Administrative colors (aka affinities) are typically
              assigned to links, but when topology abstraction is used
              affinity information can also appear in the detailed
              connectivity matrix.

              There are 32 bits available for the affinities. Links can
              be tagged with any combination of these bits, and path
              computation can be constrained to include or exclude any
              or all of them. The relevant cardinality is 3
              (include-any, exclude-any, include-all) times 2^32
              possible values. However, the number of possible values
              used in real networks is quite small.

   Included Resources

              A path computation request can be associated with an
              ordered set of network resources (links, nodes) to be
              included along the computed path. This constraint would
              have a huge cardinality, as in principle any combination
              of network resources is possible. However, as long as the
              Orchestrator doesn't know the details of the internal
              topology of the domain, it shouldn't include this type of
              constraint at all (see more details below).

   Excluded Resources

              A path computation request can be associated with a set
              of network resources (links, nodes, SRLGs) to be excluded
              from the computed path.
              Like for included resources, this constraint has a
              potentially very high cardinality, but, once again, it
              can't actually be used by the Orchestrator if it's not
              aware of the domain topology (see more details below).

   As discussed above, the Orchestrator can specify include or exclude
   resources depending on the abstract topology information that the
   domain controller exposes:

   o  In case the domain controller exposes the entire domain as a
      single abstract TE node with its own external terminations and
      connectivity matrix (whose size we are estimating), no other
      topological details are available. Therefore the size of the
      connectivity matrix only depends on the combination of the
      constraints that the Orchestrator can use in a path computation
      request to the domain controller. These constraints cannot refer
      to any details of the internal topology of the domain, as those
      details are not known to the Orchestrator, and so they do not
      impact the size of the exported connectivity matrix.

   o  In case the domain controller instead exposes a topology
      including more than one abstract TE node and TE link, and their
      attributes (e.g., SRLGs and affinities for the links), the
      Orchestrator knows these details and therefore could compute a
      path across the domain referring to them in the constraints. The
      connectivity matrixes to be estimated here are the ones relevant
      to the abstract TE nodes exported to the Orchestrator. These
      connectivity matrixes, and therefore their sizes, cannot depend
      on the other abstract TE nodes and TE links, which are external
      to the given abstract node, but they could depend on SRLGs (and
      other attributes, like affinities) which could also be present in
      the portion of the topology represented by the abstract nodes,
      and which therefore contribute to the size of the related
      connectivity matrix.
   We also don't consider here the possibility to ask for more than one
   path in diversity, or for point-to-multipoint paths, which are for
   further study.

   Considering for example an IP domain, and not considering SRLGs and
   affinities, we have an estimated number of paths depending on these
   estimated cardinalities:

      Endpoints = N*(N-1), Bandwidth = 5, Metrics = 6, Bounds = 20,
      Priority = 8, Local prot = 2

   The number of paths to be pre-computed by each IP domain is
   therefore 5 * 6 * 20 * 8 * 2 = 9600 * N(N-1), where N is the number
   of domain access points.

   This means that with just 4 access points we have more than 100000
   paths to compute, advertise and maintain (if a change happens in the
   domain, due to a fault, or just the deployment of new traffic, a
   substantial number of paths need to be recomputed and the relevant
   changes advertised to the upper controller).

   This seems quite challenging. In fact, if we assume a mean length of
   1 KB for the JSON describing a path (a quite conservative estimate),
   reporting more than 100000 paths means transferring and then parsing
   more than 100 Mbytes for each domain. If we assume that 20% (to be
   checked) of these paths change when a new deployment of traffic
   occurs, we have more than 20 Mbytes of transfer for each domain
   traversed by a new end-to-end path. If a network has, let's say, 20
   domains (we want to estimate the load for a non-trivial domain
   setup), in the beginning a total initial transfer of more than 2
   Gbytes is needed, and eventually, assuming 4-5 domains are involved
   on average during a path deployment, we could have around 100 Mbytes
   of changes advertised to the higher-order controller.
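   As a quick check of the arithmetic, the estimate above can be
   reproduced by multiplying the example cardinalities as listed, under
   the same (assumed) 1 KB-per-path JSON size:

```python
from math import prod

# Recompute the scalability estimate by multiplying the example
# constraint cardinalities listed above (IP domain, no SRLGs or
# affinities considered).
cardinality = {"bandwidth": 5, "metrics": 6, "bounds": 20,
               "priority": 8, "local-prot": 2}

def paths_to_precompute(n_access_points):
    """Paths to pre-compute per domain: the product of the constraint
    cardinalities times the N(N-1) ordered endpoint pairs."""
    per_pair = prod(cardinality.values())          # 9600
    return per_pair * n_access_points * (n_access_points - 1)

total = paths_to_precompute(4)   # 9600 * 4 * 3 = 115200 paths
size_mb = total * 1 / 1024       # at ~1 KB of JSON per path
```

   With 4 access points this gives 115200 paths, i.e., on the order of
   a hundred megabytes of JSON per domain under the 1 KB assumption.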
   If this is considered not acceptable, further bare-bone solutions
   can be investigated, removing some more options. In conclusion, it
   seems that an approach based only on the connectivity matrix is
   hardly feasible, and could be applicable only to small networks with
   a limited meshing degree between domains, renouncing a number of
   path computation features.

   It is also worth noting that the "connectivity matrix" was
   originally defined in WSON [RFC7446] to report the connectivity
   constraints of a physical node within the WDM network: the
   information it contains is pretty "static" and therefore, once taken
   and stored in the TE database, it can always be considered valid and
   up-to-date in a path computation request.

   Using the "connectivity matrix" with an abstract node to abstract
   the information regarding the connectivity constraints of an Optical
   domain would make this information more "dynamic", since the
   connectivity constraints of an Optical domain can change over time
   because some optical paths that are feasible at a given time may
   become unfeasible at a later time when, e.g., another optical path
   is established. The information in the "detailed abstract
   connectivity matrix" is even more dynamic, since the establishment
   of another optical path may change some of the parameters (e.g.,
   delay or available bandwidth) in the "detailed abstract connectivity
   matrix" while not changing the feasibility of the path.

   The "connectivity matrix" is sometimes confused with optical reach
   tables that contain multiple (e.g., k-shortest) regen-free reachable
   paths for every A-Z node combination in the network. Optical reach
   tables can be calculated offline, utilizing vendor optical design
   and planning tools, and periodically uploaded to the Controller:
   these optical path reach tables are fairly static.
However, to build the 732 connectivity matrix between any two sites, either a regen-free path 733 can be used, if one is available, or multiple regen-free paths are 734 concatenated to get from source to destination, which can result in a 735 very large number of combinations. Additionally, when the optical path 736 within an optical domain needs to be computed, it can result in 737 different paths based on the input objective, constraints, and network 738 conditions. In summary, even though the "optical reachability table" is 739 fairly static, the choice of which regen-free paths to use to build the 740 connectivity matrix between any source 741 and destination is very dynamic, and is made using very 742 sophisticated routing algorithms. 743 There is therefore the need to keep the information in the 744 "connectivity matrix" updated, which means that there is another 745 tradeoff between accuracy (i.e., providing "all" the information 746 that might be needed by the Orchestrator's PCE) and having up-to- 747 date information: the more information is provided, the longer it 748 takes to keep it up-to-date, which increases the likelihood 749 that the Orchestrator's PCE computes paths using outdated 750 information. 752 It seems therefore quite challenging to have a "detailed abstract 753 connectivity matrix" that provides accurate, scalable and updated 754 information allowing the Orchestrator's PCE to take optimal 755 decisions on its own.
757 If the information in the "detailed abstract connectivity matrix" is 758 not complete/accurate, we can have the following drawbacks, 759 considering for example the case in Figure 6: 761 o If only the VP1-VP4 path with available bandwidth of 2 Gb/s and 762 cost 50 is reported, the Orchestrator's PCE will fail to compute 763 a 5 Gb/s path between routers R1 and R2, although this would be 764 feasible; 766 o If only the VP1-VP4 path with available bandwidth of 10 Gb/s and 767 cost 60 is reported, the Orchestrator's PCE will compute, as 768 optimal, the 1 Gb/s path between R1 and R2 going through the VP2- 769 VP5 path within the Optical domain, while the optimal path would 770 actually be the one going through the VP1-VP4 sub-path (with cost 771 50) within the Optical domain. 773 Instead, using the approach proposed in this document, the 774 Orchestrator, when it needs to set up an end-to-end path, can 775 request the Optical domain controller to compute a set of optimal 776 paths (e.g., for VP1-VP4 and VP2-VP5) and take decisions based on 777 the information received: 779 o When setting up a 5 Gb/s path between routers R1 and R2, the 780 Optical domain controller may report the VP1-VP4 path as the 781 only feasible path: the Orchestrator can successfully set up the 782 end-to-end path passing through this Optical path; 784 o When setting up a 1 Gb/s path between routers R1 and R2, the 785 Optical domain controller (knowing that the path requires only 1 786 Gb/s) can report both the VP1-VP4 path, with cost 50, and the 787 VP2-VP5 path, with cost 65. The Orchestrator can then compute the 788 optimal path, which passes through the VP1-VP4 sub-path (with 789 cost 50) within the Optical domain. 791 3.2.
TE Topology Abstraction 793 Using the TE Topology model, as defined in [TE-TOPO], the underlying 794 SDN controller can export an abstract TE Topology, composed of a set 795 of TE nodes and TE links, which abstracts the topology 796 controlled by each domain controller. 798 Considering the example in Figure 4, TE domain controller 1 can 799 export a TE Topology encompassing the TE nodes A, B, C and D and the 800 TE Links interconnecting them. In a similar way, TE domain controller 801 2 can export a TE Topology encompassing the TE nodes E, F, G and H 802 and the TE Links interconnecting them. 804 In this example, for simplicity reasons, each abstract TE node maps 805 to one physical node, but this is not required. 807 In order to set up a multi-domain TE path (e.g., between nodes A and 808 H), the Orchestrator can compute on its own an optimal end-to-end 809 path based on the abstract TE topology information provided by the 810 domain controllers. For example: 812 o The Orchestrator's PCE, based on its own information, can compute 813 the optimal multi-domain path as A-B-C-E-G-H, and then request the 814 TE domain controllers to setup the A-B-C and E-G-H intra-domain 815 paths 817 o But, during path setup, the domain controller may find out that 818 the A-B-C intra-domain path is not feasible (as discussed in section 819 2.2, in optical networks it is typical that some paths are not 820 feasible due to optical constraints that are known only by 821 the optical domain controller), while only the path A-B-D is 822 feasible 824 o In this case, the path computed by the hierarchical controller is 825 not valid and path computation needs to be re-started from scratch 827 As discussed in section 3.1, providing more extensive abstract 828 information from the TE domain controllers to the multi-domain 829 Orchestrator may lead to scalability problems. 831 In a sense, this is similar to the problem of routing and wavelength 832 assignment within an Optical domain.
It is possible to do 833 routing first (step 1) and then wavelength assignment (step 2), but the 834 chances of ending up with a good path are low. Alternatively, it is 835 possible to do combined routing and wavelength assignment, which is 836 known to be a more effective way for Optical path setup. 837 Similarly, it is possible to first compute an abstract end-to-end 838 path within the multi-domain Orchestrator (step 1) and then compute 839 an intra-domain path within each Optical domain (step 2), but there 840 is a higher chance of not finding a path, or of getting a suboptimal 841 path, than when performing per-domain path computation and then stitching them. 843 3.3. Complementary use of TE topology and path computation 845 As discussed in section 2.2, there are some scalability issues with 846 path computation requests in a multi-domain TE network with many TE 847 domains, in terms of the number of requests to send to the TE domain 848 controllers. It would therefore be worthwhile using the TE topology 849 information provided by the domain controllers to limit the number 850 of requests. 852 An example can be described considering the multi-domain abstract 853 topology shown in Figure 7. In this example, an end-to-end TE path 854 between domains A and F needs to be set up. The transit domain should 855 be selected among domains B, C, D and E. 857 -------------------------------------------------------------------- 858 I I 859 I I 860 I I 861 I Multi-domain with many domains I 862 I (Topology information) I 863 I I 864 I I 865 I (only in PDF version) I 866 I I 867 I I 868 I I 869 -------------------------------------------------------------------- 871 Figure 7 - Multi-domain with many domains (Topology information) 873 The actual cost of each intra-domain path is not known a priori from 874 the abstract topology information.
The Orchestrator only knows, from 875 the TE topology provided by the underlying domain controllers, the 876 feasibility of some intra-domain paths and some upper-bound and/or 877 lower-bound cost information. With this information, together with 878 the cost of the inter-domain links, the Orchestrator can understand on 879 its own that: 881 o Domain B cannot be selected, as the path connecting domains A and 882 E is not feasible; 884 o Domain E cannot be selected as a transit domain since it is known 885 from the abstract topology information provided by the domain 886 controllers that the cost of the multi-domain path A-E-F (which 887 is 100, in the best case) will always be higher than the cost 888 of the multi-domain paths A-D-F (which is 90, in the worst case) 889 and A-C-F (which is 80, in the worst case) 891 Therefore, the Orchestrator can understand on its own that the 892 optimal multi-domain path could be either A-D-F or A-C-F, but it 893 cannot know which of the two options actually provides 894 the optimal end-to-end path. 896 The Orchestrator can therefore request path computation only from the 897 TE domain controllers A, C, D and F (and not from all the possible TE 898 domain controllers). 900 -------------------------------------------------------------------- 901 I I 902 I I 903 I I 904 I Multi-domain with many domains I 905 I (Path Computation information) I 906 I I 907 I I 908 I I 909 I I 910 I (only in PDF version) I 911 I I 912 I I 913 I I 914 -------------------------------------------------------------------- 916 Figure 8 - Multi-domain with many domains (Path Computation 917 information) 919 Based on these requests, the Orchestrator can learn the actual cost 920 of each intra-domain path which belongs to a potential optimal end- 921 to-end path, as shown in Figure 8, and then compute the optimal 922 end-to-end path (e.g., A-D-F, having a total cost of 50, instead of A- 923 C-F, having a total cost of 70). 925 4.
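The pruning reasoning above can be sketched as a simple rule over the cost bounds known from the abstract topology (a minimal illustration in Python; only E's best case of 100 and the worst cases of 90 and 80 come from the text, while the remaining bound values and B's infeasibility encoding are made-up placeholders):

```python
# Prune transit-domain candidates using only the bounds known from
# the abstract topology: a candidate can be excluded when even its
# best-case end-to-end cost exceeds another candidate's worst case.
def select_candidates(bounds, feasible):
    """bounds: {domain: (best_case_cost, worst_case_cost)} for the
    multi-domain path through that transit domain.
    feasible: set of transit domains with any feasible path at all."""
    usable = {d: b for d, b in bounds.items() if d in feasible}
    kept = set()
    for d, (best, _worst) in usable.items():
        others_worst = min(w for o, (_b, w) in usable.items() if o != d)
        if best <= others_worst:
            kept.add(d)  # still a potential optimum: query its controller
    return kept

# Bounds for the multi-domain paths A-<transit>-F in the example:
bounds = {'B': (40, 60),    # excluded anyway: no feasible path via B
          'C': (60, 80),    # worst case 80 (best case is made-up)
          'D': (55, 90),    # worst case 90 (best case is made-up)
          'E': (100, 120)}  # best case 100 (worst case is made-up)
print(sorted(select_candidates(bounds, feasible={'C', 'D', 'E'})))
# ['C', 'D']: only these transit options need path computation requests
```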
Motivation for a YANG Model 927 4.1. Benefits of common data models 929 Path computation requests should be closely aligned with the YANG 930 data models that provide (abstract) TE topology information, i.e., 931 [TE-TOPO], as well as with those used to configure and manage TE 932 Tunnels, i.e., [TE-TUNNEL]. Otherwise, an error-prone mapping or 933 correlation of information would be required. For instance, there is 934 benefit in using the same endpoint identifiers in path computation 935 requests and in the topology modeling. Also, the attributes used in 936 path computation constraints could use the same or similar data 937 models. As a result, there are many benefits in aligning path 938 computation requests with the YANG models for TE topology information 939 and TE Tunnel configuration and management. 941 4.2. Benefits of a single interface 943 A typical use case for path computation requests is the interface 944 between an orchestrator and a domain controller. The system 945 integration effort is typically lower if a single, consistent 946 interface is used between such systems, i.e., one data modeling 947 language (YANG) and a common protocol (e.g., NETCONF or 948 RESTCONF). 950 Practical benefits of using a single, consistent interface include: 952 1. Simple authentication and authorization: The interface between 953 different components has to be secured. If different protocols 954 have different security mechanisms, ensuring a common access 955 control model may result in overhead. For instance, there may 956 be a need to deal with different security mechanisms, e.g., 957 different credentials or keys. This can result in increased 958 integration effort. 959 2. Consistency: Keeping data consistent over multiple different 960 interfaces or protocols is not trivial. For instance, the 961 sequence of actions can matter in certain use cases, or 962 transaction semantics could be desired.
While ensuring 963 consistency within one protocol can already be challenging, it 964 is typically cumbersome to achieve that across different 965 protocols. 966 3. Testing: System integration requires comprehensive testing, 967 including corner cases. The more different technologies are 968 involved, the more difficult it is to run comprehensive test 969 cases and ensure proper integration. 970 4. Middle-box friendliness: Provider and consumer of path 971 computation requests may be located in different networks, and 972 middle-boxes such as firewalls, NATs, or load balancers may be 973 deployed. In such environments it is simpler to deploy a single 974 protocol. Also, it may be easier to debug connectivity 975 problems. 976 5. Tooling reuse: Implementers may want to implement path 977 computation requests with tools and libraries that already 978 exist in controllers and/or orchestrators, e.g., leveraging the 979 rapidly growing eco-system for YANG tooling. 981 4.3. Extensibility 983 Path computation is only a subset of the typical functionality of a 984 controller. In many use cases, issuing path computation requests 985 comes along with the need to access other functionality on the same 986 system. In addition to obtaining the TE topology, for instance, 987 configuration of services (setup/modification/deletion) may also be 988 required, as well as: 990 1. Receiving notifications for topology changes as well as 991 integration with fault management 992 2. Performance management, such as retrieving monitoring and 993 telemetry data 994 3. Service assurance, e.g., by triggering OAM functionality 995 4. Other fulfilment and provisioning actions beyond tunnels and 996 services, such as changing QoS configurations 998 YANG is a very extensible and flexible data modeling language that 999 can be used for all these use cases.
1001 Adding support for path computation requests to YANG models would 1002 seamlessly complement [TE-TOPO] and [TE-TUNNEL] in the use 1003 cases where YANG-based protocols (e.g., NETCONF or RESTCONF) are 1004 used. 1006 5. Path Computation for multiple LSPs 1008 There are use cases where path computation is required for multiple 1009 Traffic Engineering Label Switched Paths (TE LSPs) through a network 1010 or through a network domain. It may be advantageous to request the 1011 new paths for a set of LSPs in one single path computation request 1012 [RFC5440] that also includes information regarding the desired 1013 objective function, see [RFC5541]. 1015 In the context of abstraction and control of TE networks (ACTN), as 1016 defined in [ACTN-Frame], when an MDSC receives a virtual network (VN) 1017 request from a CNC, the MDSC needs to perform path computation for 1018 multiple LSPs, as a typical VN is constructed from a set of multiple 1019 paths, also called end-to-end tunnels. The MDSC may send a single 1020 path computation request to the PNC for multiple LSPs, i.e., between 1021 the VN end points (access points in ACTN terminology). 1023 In a more general context, when an MDSC needs to send multiple path 1024 provisioning requests to the PNC, the MDSC may also group these path 1025 provisioning requests together and send them in a single message to 1026 the PNC instead of sending separate requests for each path. 1028 6. YANG Model for requesting Path Computation 1030 The TE Tunnel YANG model has been extended to support the need to 1031 request path computation. 1033 It is possible to request path computation by configuring a 1034 "compute-only" TE tunnel and retrieving the computed path(s) in the 1035 LSP(s) Record-Route Object (RRO) list, as described in section 3.3.1 1036 of [TE-TUNNEL].
1038 This is a stateful solution, since the state of each created 1039 "compute-only" TE tunnel needs to be maintained and updated when the 1040 underlying network conditions change. 1042 The need for a stateless solution, based on an RPC, has also been 1043 recognized, as outlined in section 6.1. 1045 A proposal for a stateless RPC to request path computation is 1046 provided in section 6.2. 1048 6.1. Stateless and Stateful Path Computation 1050 It is very useful to provide options for both stateless and stateful 1051 path computation mechanisms. It is suggested to use stateless 1052 mechanisms as much as possible and to rely on stateful path 1053 computation only when really needed. 1055 A stateless RPC allows requesting path computation using a simple 1056 atomic operation and is the natural choice, especially 1057 with a stateless PCE. 1059 Since the operation is stateless, there is no guarantee that the 1060 returned path will still be available when path setup is requested: 1061 this is not a major issue in case the time between path computation 1062 and path setup is short. 1064 The RPC response must be provided synchronously and, if 1065 collaborative computations are time consuming, it may not be 1066 possible to reply immediately to the client. 1068 In this case, the client can define a maximum time it can wait for 1069 the reply, such that if the computation does not complete in time, 1070 the server will abort the path computation and reply to the client 1071 with an error. It may be possible that the server has tighter timing 1072 constraints than the client: in this case the path computation is 1073 aborted earlier than the time specified by the client. 1075 Note - The RPC response issue (slow RPC server) is not specific to 1076 the path computation RPC case, so it may be worthwhile evaluating 1077 whether a more generic solution applicable to any YANG RPC can be 1078 used instead.
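As an illustration, a stateless request for a single path could look like the JSON fragment below when carried over RESTCONF. This is only a sketch derived from the tree in section 6.2.1: the enclosing RPC name and the exact namespace prefixes depend on [TE-TUNNEL] and may differ, and the addresses, request-id-number and metric bound are made-up example values:

```json
{
  "ietf-te:tunnel-info": {
    "ietf-te-path-computation:request-list": [
      {
        "request-id-number": 1,
        "servicePort": [
          {
            "source": "10.0.0.1",
            "destination": "10.0.0.2"
          }
        ],
        "path-constraints": {
          "path-metric-bound": [
            {
              "metric-type": "te-types:path-metric-te",
              "upper-bound": "100"
            }
          ]
        }
      }
    ]
  }
}
```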
1080 In case the stateless solution is not sufficient, a stateful 1081 solution, based on "compute-only" TE tunnels, could be used to 1082 support asynchronous operations and/or to get notifications in case 1083 the computed path has been changed. 1085 It is worth noting that even the stateful solution, although 1086 it increases the likelihood that the computed path is available at 1087 path setup, does not guarantee it, because notifications may 1088 not be reliable or delivered on time. 1090 Stateful path computation also has the following drawbacks: 1092 o Several messages are required for each path computation 1094 o It requires persistent storage in the provider controller 1096 o It needs garbage collection for stranded paths 1098 o There is a processing burden to detect changes on the computed 1099 paths in order to provide notification updates 1101 6.2. YANG model for stateless TE path computation 1103 6.2.1. YANG Tree 1105 Figure 9 below shows the tree diagram of the YANG model defined in 1106 module ietf-te-path-computation.yang. 1108 module: ietf-te-path-computation 1109 +--rw paths 1110 | +--ro path* [path-id] 1111 | +--ro _telink* [link-ref] 1112 | | +--ro link-ref -> 1113 /nd:networks/network[nd:network-id=current()/../network- 1114 ref]/lnk:link/link-id 1115 | | +--ro network-ref? -> /nd:networks/network/network-id 1116 | +--ro path-constraints 1117 | | +--ro path-metric-bound* [metric-type] 1118 | | | +--ro metric-type identityref 1119 | | | +--ro upper-bound? uint64 1120 | | +--ro topology-id? te-types:te-topology-id 1121 | | +--ro ignore-overload? boolean 1122 | | +--ro bandwidth-generic 1123 | | | +--ro te-bandwidth 1124 | | | +--ro (technology)? 1125 | | | +--:(psc) 1126 | | | | +--ro psc? rt-types:bandwidth-ieee- 1127 float32 1128 | | | +--:(otn) 1129 | | | | +--ro otn* [rate-type] 1130 | | | | +--ro rate-type identityref 1131 | | | | +--ro counter?
uint16 1132 | | | +--:(lsc) 1133 | | | | +--ro wdm* [spectrum slot] 1134 | | | | +--ro spectrum identityref 1135 | | | | +--ro slot int16 1136 | | | | +--ro width? uint16 1137 | | | +--:(generic) 1138 | | | +--ro generic? te-bandwidth 1139 | | +--ro disjointness? te-types:te-path- 1140 disjointness 1141 | | +--ro setup-priority? uint8 1142 | | +--ro hold-priority? uint8 1143 | | +--ro signaling-type? identityref 1144 | | +--ro path-affinities 1145 | | | +--ro constraint* [usage] 1146 | | | +--ro usage identityref 1147 | | | +--ro value? admin-groups 1148 | | +--ro path-srlgs 1149 | | +--ro usage? identityref 1150 | | +--ro values* srlg 1151 | +--ro path-id yang-types:uuid 1152 +--ro pathComputationService 1153 +--ro _path-ref* -> /paths/path/path-id 1154 +--ro _servicePort 1155 | +--ro source? inet:ip-address 1156 | +--ro destination? inet:ip-address 1157 | +--ro src-tp-id? binary 1158 | +--ro dst-tp-id? binary 1159 | +--ro bidirectional 1160 | +--ro association 1161 | +--ro id? uint16 1162 | +--ro source? inet:ip-address 1163 | +--ro global-source? inet:ip-address 1164 | +--ro type? identityref 1165 | +--ro provisioing? identityref 1166 +--ro path-constraints 1167 | +--ro path-metric-bound* [metric-type] 1168 | | +--ro metric-type identityref 1169 | | +--ro upper-bound? uint64 1170 | +--ro topology-id? te-types:te-topology-id 1171 | +--ro ignore-overload? boolean 1172 | +--ro bandwidth-generic 1173 | | +--ro te-bandwidth 1174 | | +--ro (technology)? 1175 | | +--:(psc) 1176 | | | +--ro psc? rt-types:bandwidth-ieee- 1177 float32 1178 | | +--:(otn) 1179 | | | +--ro otn* [rate-type] 1180 | | | +--ro rate-type identityref 1181 | | | +--ro counter? uint16 1182 | | +--:(lsc) 1183 | | | +--ro wdm* [spectrum slot] 1184 | | | +--ro spectrum identityref 1185 | | | +--ro slot int16 1186 | | | +--ro width? uint16 1187 | | +--:(generic) 1188 | | +--ro generic? te-bandwidth 1189 | +--ro disjointness? te-types:te-path-disjointness 1190 | +--ro setup-priority? 
uint8 1191 | +--ro hold-priority? uint8 1192 | +--ro signaling-type? identityref 1193 | +--ro path-affinities 1194 | | +--ro constraint* [usage] 1195 | | +--ro usage identityref 1196 | | +--ro value? admin-groups 1197 | +--ro path-srlgs 1198 | +--ro usage? identityref 1199 | +--ro values* srlg 1200 +--ro optimizations 1201 +--ro (algorithm)? 1202 +--:(metric) {path-optimization-metric}? 1203 | +--ro optimization-metric* [metric-type] 1204 | | +--ro metric-type identityref 1205 | | +--ro weight? uint8 1206 | +--ro tiebreakers 1207 | +--ro tiebreaker* [tiebreaker-type] 1208 | +--ro tiebreaker-type identityref 1209 +--:(objective-function) {path-optimization-objective- 1210 function}? 1211 +--ro objective-function 1212 +--ro objective-function-type? identityref 1213 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1214 +---- request-list* [request-id-number] 1215 | +---- request-id-number uint32 1216 | +---- servicePort* 1217 | | +---- source? inet:ip-address 1218 | | +---- destination? inet:ip-address 1219 | | +---- src-tp-id? binary 1220 | | +---- dst-tp-id? binary 1221 | | +---- bidirectional 1222 | | +---- association 1223 | | +---- id? uint16 1224 | | +---- source? inet:ip-address 1225 | | +---- global-source? inet:ip-address 1226 | | +---- type? identityref 1227 | | +---- provisioing? identityref 1228 | +---- path-constraints 1229 | | +---- path-metric-bound* [metric-type] 1230 | | | +---- metric-type identityref 1231 | | | +---- upper-bound? uint64 1232 | | +---- topology-id? te-types:te-topology-id 1233 | | +---- ignore-overload? boolean 1234 | | +---- bandwidth-generic 1235 | | | +---- te-bandwidth 1236 | | | +---- (technology)? 1237 | | | +--:(psc) 1238 | | | | +---- psc? rt-types:bandwidth-ieee- 1239 float32 1240 | | | +--:(otn) 1241 | | | | +---- otn* [rate-type] 1242 | | | | +---- rate-type identityref 1243 | | | | +---- counter? 
uint16 1244 | | | +--:(lsc) 1245 | | | | +---- wdm* [spectrum slot] 1246 | | | | +---- spectrum identityref 1247 | | | | +---- slot int16 1248 | | | | +---- width? uint16 1249 | | | +--:(generic) 1250 | | | +---- generic? te-bandwidth 1251 | | +---- disjointness? te-types:te-path-disjointness 1252 | | +---- setup-priority? uint8 1253 | | +---- hold-priority? uint8 1254 | | +---- signaling-type? identityref 1255 | | +---- path-affinities 1256 | | | +---- constraint* [usage] 1257 | | | +---- usage identityref 1258 | | | +---- value? admin-groups 1259 | | +---- path-srlgs 1260 | | +---- usage? identityref 1261 | | +---- values* srlg 1262 | +---- optimizations 1263 | +---- (algorithm)? 1264 | +--:(metric) {path-optimization-metric}? 1265 | | +---- optimization-metric* [metric-type] 1266 | | | +---- metric-type identityref 1267 | | | +---- weight? uint8 1268 | | +---- tiebreakers 1269 | | +---- tiebreaker* [tiebreaker-type] 1270 | | +---- tiebreaker-type identityref 1271 | +--:(objective-function) {path-optimization-objective- 1272 function}? 1273 | +---- objective-function 1274 | +---- objective-function-type? identityref 1275 +---- synchronization* [synchronization-index] 1276 +---- synchronization-index uint32 1277 +---- svec 1278 | +---- relaxable? boolean 1279 | +---- link-diverse? boolean 1280 | +---- node-diverse? boolean 1281 | +---- srlg-diverse? boolean 1282 | +---- request-id-number* uint32 1283 +---- path-constraints 1284 +---- path-metric-bound* [metric-type] 1285 | +---- metric-type identityref 1286 | +---- upper-bound? uint64 1287 +---- topology-id? te-types:te-topology-id 1288 +---- ignore-overload? boolean 1289 +---- bandwidth-generic 1290 | +---- te-bandwidth 1291 | +---- (technology)? 1292 | +--:(psc) 1293 | | +---- psc? rt-types:bandwidth-ieee- 1294 float32 1295 | +--:(otn) 1296 | | +---- otn* [rate-type] 1297 | | +---- rate-type identityref 1298 | | +---- counter? 
uint16 1299 | +--:(lsc) 1300 | | +---- wdm* [spectrum slot] 1301 | | +---- spectrum identityref 1302 | | +---- slot int16 1303 | | +---- width? uint16 1304 | +--:(generic) 1305 | +---- generic? te-bandwidth 1306 +---- disjointness? te-types:te-path-disjointness 1307 +---- setup-priority? uint8 1308 +---- hold-priority? uint8 1309 +---- signaling-type? identityref 1310 +---- path-affinities 1311 | +---- constraint* [usage] 1312 | +---- usage identityref 1313 | +---- value? admin-groups 1314 +---- path-srlgs 1315 +---- usage? identityref 1316 +---- values* srlg 1317 augment /te:tunnels-rpc/te:output/te:result: 1318 +--ro response* [response-index] 1319 +--ro response-index uint32 1320 +--ro (response-type)? 1321 +--:(no-path-case) 1322 | +--ro no-path 1323 +--:(path-case) 1324 +--ro pathCompService 1325 +--ro _path-ref* -> /paths/path/path-id 1326 +--ro _servicePort 1327 | +--ro source? inet:ip-address 1328 | +--ro destination? inet:ip-address 1329 | +--ro src-tp-id? binary 1330 | +--ro dst-tp-id? binary 1331 | +--ro bidirectional 1332 | +--ro association 1333 | +--ro id? uint16 1334 | +--ro source? inet:ip-address 1335 | +--ro global-source? inet:ip-address 1336 | +--ro type? identityref 1337 | +--ro provisioing? identityref 1338 +--ro path-constraints 1339 | +--ro path-metric-bound* [metric-type] 1340 | | +--ro metric-type identityref 1341 | | +--ro upper-bound? uint64 1342 | +--ro topology-id? te-types:te-topology- 1343 id 1344 | +--ro ignore-overload? boolean 1345 | +--ro bandwidth-generic 1346 | | +--ro te-bandwidth 1347 | | +--ro (technology)? 1348 | | +--:(psc) 1349 | | | +--ro psc? rt-types:bandwidth- 1350 ieee-float32 1351 | | +--:(otn) 1352 | | | +--ro otn* [rate-type] 1353 | | | +--ro rate-type identityref 1354 | | | +--ro counter? uint16 1355 | | +--:(lsc) 1356 | | | +--ro wdm* [spectrum slot] 1357 | | | +--ro spectrum identityref 1358 | | | +--ro slot int16 1359 | | | +--ro width? uint16 1360 | | +--:(generic) 1361 | | +--ro generic? 
te-bandwidth 1362 | +--ro disjointness? te-types:te-path- 1363 disjointness 1364 | +--ro setup-priority? uint8 1365 | +--ro hold-priority? uint8 1366 | +--ro signaling-type? identityref 1367 | +--ro path-affinities 1368 | | +--ro constraint* [usage] 1369 | | +--ro usage identityref 1370 | | +--ro value? admin-groups 1371 | +--ro path-srlgs 1372 | +--ro usage? identityref 1373 | +--ro values* srlg 1374 +--ro optimizations 1375 +--ro (algorithm)? 1376 +--:(metric) {path-optimization-metric}? 1377 | +--ro optimization-metric* [metric-type] 1378 | | +--ro metric-type identityref 1379 | | +--ro weight? uint8 1380 | +--ro tiebreakers 1381 | +--ro tiebreaker* [tiebreaker-type] 1382 | +--ro tiebreaker-type identityref 1383 +--:(objective-function) {path-optimization- 1384 objective-function}? 1385 +--ro objective-function 1386 +--ro objective-function-type? 1387 identityref 1388 Figure 9 - TE path computation tree 1390 6.2.2. YANG Module 1392 file " ietf-te-path-computation.yang " 1393 module ietf-te-path-computation { 1394 yang-version 1.1; 1395 namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation"; 1396 // replace with IANA namespace when assigned 1398 prefix "tepc"; 1400 import ietf-inet-types { 1401 prefix "inet"; 1402 } 1404 import ietf-yang-types { 1405 prefix "yang-types"; 1406 } 1408 import ietf-network-topology { 1409 prefix "nt"; 1410 } 1412 import ietf-te { 1413 prefix "te"; 1414 } 1416 import ietf-te-types { 1417 prefix "te-types"; 1418 } 1420 organization 1421 "Traffic Engineering Architecture and Signaling (TEAS) 1422 Working Group"; 1424 contact 1425 "WG Web: 1426 WG List: 1427 WG Chair: Lou Berger 1428 1430 WG Chair: Vishnu Pavan Beeram 1431 1433 "; 1435 description "YANG model for stateless TE path computation"; 1437 revision "2016-10-10" { 1438 description "Initial revision"; 1439 reference "YANG model for stateless TE path computation"; 1440 } 1442 /* 1443 * Features 1444 */ 1446 feature stateless-path-computation { 1447 description 1448 "This 
feature indicates that the system supports 1449 stateless path computation."; 1450 } 1452 /* 1453 * Groupings 1454 */ 1456 grouping Path { 1457 list _telink { 1458 key 'link-ref'; 1459 config false; 1460 uses nt:link-ref; 1461 description "List of telink refs."; 1462 } 1463 uses te-types:generic-path-constraints; 1464 leaf path-id { 1465 type yang-types:uuid; 1466 config false; 1467 description "path-id ref."; 1468 } 1469 description "Path is described by an ordered list of TE Links."; 1470 } 1472 grouping PathCompServicePort { 1473 leaf source { 1474 type inet:ip-address; 1475 description "TE tunnel source address."; 1476 } 1477 leaf destination { 1478 type inet:ip-address; 1479 description "P2P tunnel destination address"; 1480 } 1481 leaf src-tp-id { 1482 type binary; 1483 description "TE tunnel source termination point identifier."; 1484 } 1485 leaf dst-tp-id { 1486 type binary; 1487 description "TE tunnel destination termination point 1488 identifier."; 1489 } 1490 uses te:bidir-assoc-properties; 1491 description "Path Computation Service Port grouping."; 1492 } 1494 grouping PathComputationService { 1495 leaf-list _path-ref { 1496 type leafref { 1497 path '/paths/path/path-id'; 1498 } 1499 config false; 1500 description "List of previously computed path references."; 1501 } 1502 container _servicePort { 1503 uses PathCompServicePort; 1504 description "Path Computation Service Port."; 1505 } 1506 uses te-types:generic-path-constraints; 1507 uses te-types:generic-path-optimization; 1509 description "Path computation service."; 1510 } 1512 grouping synchronization-info { 1513 description "Information for sync"; 1514 list synchronization { 1515 key "synchronization-index"; 1516 description "sync list"; 1517 leaf synchronization-index { 1518 type uint32; 1519 description "index"; 1520 } 1521 container svec { 1522 description 1523 "Synchronization VECtor"; 1524 leaf relaxable { 1525 type boolean; 1526 default true; 1527 description 1528 "If this leaf is true, path 
computation process is free 1529 to ignore svec content. 1530 otherwise it must take into account this svec."; 1531 } 1532 leaf link-diverse { 1533 type boolean; 1534 default false; 1535 description "link-diverse"; 1536 } 1537 leaf node-diverse { 1538 type boolean; 1539 default false; 1540 description "node-diverse"; 1541 } 1542 leaf srlg-diverse { 1543 type boolean; 1544 default false; 1545 description "srlg-diverse"; 1546 } 1547 leaf-list request-id-number { 1548 type uint32; 1549 description 1550 "This list reports the set of M path computation requests 1551 that must be synchronized."; 1552 } 1553 } 1554 uses te-types:generic-path-constraints; 1555 } 1556 } 1558 grouping no-path-info { 1559 description "no-path-info"; 1560 container no-path { 1561 description "no-path container"; 1562 } 1563 } 1565 /* 1566 * Root container 1567 */ 1568 container paths { 1569 list path { 1570 key "path-id"; 1571 config false; 1572 uses Path; 1574 description "List of previous computed paths."; 1575 } 1576 description "Root container for path-computation"; 1578 } 1580 container pathComputationService { 1581 config false; 1582 uses PathComputationService; 1583 description "Service for computing paths."; 1584 } 1586 /** 1587 * AUGMENTS TO TE RPC 1588 */ 1590 augment "/te:tunnels-rpc/te:input/te:tunnel-info" { 1591 description "statelessComputeP2PPath input"; 1592 list request-list { 1593 key "request-id-number"; 1594 description "request-list"; 1595 leaf request-id-number { 1596 type uint32; 1597 mandatory true; 1598 description "Each path computation request is uniquely 1599 identified by the request-id-number. 
1600 It must also be present in rpcs."; 1601 } 1602 list servicePort { 1603 min-elements 1; 1604 uses PathCompServicePort; 1605 description "List of service ports."; 1606 } 1607 uses te-types:generic-path-constraints; 1608 uses te-types:generic-path-optimization; 1610 } 1611 uses synchronization-info; 1612 } 1614 augment "/te:tunnels-rpc/te:output/te:result" { 1615 description "statelessComputeP2PPath output"; 1616 list response { 1617 key response-index; 1618 config false; 1619 description "response"; 1620 leaf response-index { 1621 type uint32; 1622 description 1623 "The list key that has to reuse request-id-number."; 1624 } 1625 choice response-type { 1626 config false; 1627 description "response-type"; 1628 case no-path-case { 1629 uses no-path-info; 1630 } 1631 case path-case { 1632 container pathCompService { 1633 uses PathComputationService; 1634 description "Path computation service."; 1635 } 1636 } 1637 } 1638 } 1639 } 1640 } 1641 1643 Figure 10 - TE path computation YANG module 1645 7. Security Considerations 1647 This document describes use cases of requesting Path Computation 1648 using YANG models, which could be used at the ABNO Control Interface 1649 [RFC7491] and/or between controllers in ACTN [ACTN-Frame]. As such, 1650 it does not introduce any new security considerations compared to 1651 the ones related to the YANG specification, the ABNO specification and 1652 the ACTN Framework, defined in [RFC6020], [RFC7950], [RFC7491] and [ACTN- 1653 Frame]. 1655 This document also defines common data types using the YANG data 1656 modeling language. The definitions themselves have no security 1657 impact on the Internet, but the usage of these definitions in 1658 concrete YANG modules might have. The security considerations 1659 spelled out in the YANG specification [RFC6020] apply for this 1660 document as well. 1662 8. IANA Considerations 1664 This section is for further study: to be completed when the YANG 1665 model is more stable. 1667 9. References 1669 9.1.
Normative References 1671 [RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the 1672 Network Configuration Protocol (NETCONF)", RFC 6020, 1673 October 2010. 1675 [RFC7139] Zhang, F. et al., "GMPLS Signaling Extensions for Control 1676 of Evolving G.709 Optical Transport Networks", RFC 7139, 1677 March 2014. 1679 [RFC7491] Farrel, A., King, D., "A PCE-Based Architecture for 1680 Application-Based Network Operations", RFC 7491, March 2015. 1682 [RFC7926] Farrel, A. et al., "Problem Statement and Architecture for 1683 Information Exchange Between Interconnected Traffic 1684 Engineered Networks", RFC 7926, July 2016. 1686 [RFC7950] Bjorklund, M., "The YANG 1.1 Data Modeling Language", RFC 1687 7950, August 2016. 1689 [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies", 1690 draft-ietf-teas-yang-te-topo, work in progress. 1692 [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic 1693 Engineering Tunnels and Interfaces", draft-ietf-teas-yang- 1694 te, work in progress. 1696 [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for 1697 Abstraction and Control of Traffic Engineered Networks" 1698 draft-ietf-actn-framework, work in progress. 1700 [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interface 1701 for the optical transport network", June 2016 1703 9.2. Informative References 1705 [RFC5541] Le Roux, JL. et al., " Encoding of Objective Functions in 1706 the Path Computation Element Communication Protocol 1707 (PCEP)", RFC 5541, June 2009. 1709 [RFC7446] Lee, Y. et al., "Routing and Wavelength Assignment 1710 Information Model for Wavelength Switched Optical 1711 Networks", RFC 7446, February 2015. 1713 [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical 1714 Transport Network Topology", draft-ietf-ccamp-otn-topo- 1715 yang, work in progress. 
1717    [ACTN-Info] Lee, Y., Belotti, S., Dhody, D., Ceccarelli, D.,
1718              "Information Model for Abstraction and Control of
1719              Transport Networks", draft-leebelotti-actn-info, work in
1720              progress.

1722    [PCEP-Service-Aware] Dhody, D. et al., "Extensions to the Path
1723              Computation Element Communication Protocol (PCEP) to
1724              compute service aware Label Switched Path (LSP)", draft-
1725              ietf-pce-pcep-service-aware, work in progress.

1727 10. Acknowledgments

1729    The authors would like to thank Igor Bryskin and Xian Zhang for
1730    participating in discussions and providing valuable insights.

1732    The authors would like to thank the authors of the TE Tunnel YANG
1733    model [TE-TUNNEL], in particular Igor Bryskin, Vishnu Pavan Beeram,
1734    Tarek Saad and Xufeng Liu, for their input to the discussions and
1735    for their support in keeping the Path Computation and TE Tunnel
1736    YANG models consistent.

1738    This document was prepared using 2-Word-v2.0.template.dot.

1740 Contributors

1742    Dieter Beller
1743    Nokia
1744    Email: dieter.beller@nokia.com

1746    Gianmarco Bruno
1747    Ericsson
1748    Email: gianmarco.bruno@ericsson.com

1750    Francesco Lazzeri
1751    Ericsson
1752    Email: francesco.lazzeri@ericsson.com

1754    Young Lee
1755    Huawei
1756    Email: leeyoung@huawei.com

1758    Carlo Perocchio
1759    Ericsson
1760    Email: carlo.perocchio@ericsson.com

1762 Authors' Addresses

1764    Italo Busi (Editor)
1765    Huawei
1766    Email: italo.busi@huawei.com

1768    Sergio Belotti (Editor)
1769    Nokia
1770    Email: sergio.belotti@nokia.com

1772    Victor Lopez
1773    Telefonica
1774    Email: victor.lopezalvarez@telefonica.com
1775    Oscar Gonzalez de Dios
1776    Telefonica
1777    Email: oscar.gonzalezdedios@telefonica.com

1779    Anurag Sharma
1780    Infinera
1781    Email: AnSharma@infinera.com

1783    Yan Shi
1784    China Unicom
1785    Email: shiyan49@chinaunicom.cn

1787    Ricard Vilalta
1788    CTTC
1789    Email: ricard.vilalta@cttc.es

1791    Karthik Sethuraman
1792    NEC
1793    Email: karthik.sethuraman@necam.com

1795    Michael Scharf
1796    Nokia
1797    Email: michael.scharf@nokia.com

1799    Daniele Ceccarelli
1800    Ericsson
1801    Email: daniele.ceccarelli@ericsson.com
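As an informal illustration of the synchronized request structure defined by the request-list and synchronization-info groupings in Figure 10, the following Python sketch builds a JSON body of the kind a client might send when invoking the stateless path computation RPC. It is a sketch only: the member names mirror the YANG node names shown above, but the enclosing module name, namespace prefixes, and exact RESTCONF encoding are assumptions, not normative.

```python
import json

def build_sync_request(request_ids, link_diverse=False,
                       node_diverse=False, srlg_diverse=False):
    """Build a hypothetical JSON body with M synchronized path
    computation requests and one svec constraining them.
    Member names follow the YANG model in Figure 10 but are
    illustrative only."""
    return {
        "request-list": [
            # Each request is uniquely identified by request-id-number.
            {"request-id-number": rid} for rid in request_ids
        ],
        "synchronization-info": [
            {
                # Diversity flags from the svec grouping: when true,
                # the computed paths must be mutually disjoint at the
                # corresponding level (link, node, SRLG).
                "link-diverse": link_diverse,
                "node-diverse": node_diverse,
                "srlg-diverse": srlg_diverse,
                # The set of requests that must be synchronized.
                "request-id-number": list(request_ids),
            }
        ],
    }

# Two requests whose resulting paths must not share any link or SRLG.
body = build_sync_request([1, 2], link_diverse=True, srlg_diverse=True)
print(json.dumps(body, indent=2))
```

Note that the per-request servicePort entries and path constraints are omitted for brevity; a real request would also populate them per the augment of /te:tunnels-rpc/te:input/te:tunnel-info.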