TEAS Working Group                                     Italo Busi (Ed.)
Internet Draft                                                   Huawei
Intended status: Informational                     Sergio Belotti (Ed.)
Expires: September 2017                                           Nokia
                                                            Victor Lopez
                                                  Oscar Gonzalez de Dios
                                                              Telefonica
                                                           Anurag Sharma
                                                                Infinera
                                                                 Yan Shi
                                                            China Unicom
                                                          Ricard Vilalta
                                                                    CTTC
                                                      Karthik Sethuraman
                                                                     NEC

                                                           March 3, 2017

           Yang model for requesting Path Computation
           draft-busibel-teas-yang-path-computation-02.txt

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time.  It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on September 3, 2017.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the
document authors.  All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document.  Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.

Abstract

There are scenarios, typically in a hierarchical SDN context, in
which an orchestrator may not have detailed information to be able
to perform an end-to-end path computation and would need to request
lower layer/domain controllers to calculate some (partial) feasible
paths.

Multiple protocol solutions can be used for communication between
different controller hierarchical levels.  This document assumes
that the controllers communicate using YANG-based protocols (e.g.,
NETCONF or RESTCONF).

This document describes some use cases where a path computation
request, via YANG-based protocols (e.g., NETCONF or RESTCONF), can
be needed.

This document also proposes a YANG model for a stateless RPC which
complements the stateful solution defined in the TE Tunnel YANG
model.

Table of Contents

   1. Introduction
   2. Use Cases
      2.1. IP-Optical integration
         2.1.1. Inter-layer path computation
         2.1.2. Route Diverse IP Services
      2.2. Multi-domain TE Networks
      2.3. Data center interconnections
   3. Interactions with TE Topology
      3.1. TE Topology Aggregation using the "virtual link model"
      3.2. TE Topology Abstraction
      3.3. Complementary use of TE topology and path computation
   4. Motivation for a YANG Model
      4.1. Benefits of common data models
      4.2. Benefits of a single interface
      4.3. Extensibility
   5. Path Computation for multiple LSPs
   6. YANG Model for requesting Path Computation
      6.1. Modeling Considerations
         6.1.1. Stateless and Stateful Path Computation
         6.1.2. Reduction of Path Computation Requests
      6.2. YANG model for stateless TE path computation
         6.2.1. YANG Tree
         6.2.2. YANG Module
   7. Security Considerations
   8. IANA Considerations
   9. References
      9.1. Normative References
      9.2. Informative References
   10. Acknowledgments
1. Introduction

There are scenarios, typically in a hierarchical SDN context, in
which an orchestrator may not have detailed information to be able
to perform an end-to-end path computation and would need to request
lower layer/domain controllers to calculate some (partial) feasible
paths.

When considering this type of scenario, we have in mind specific
levels of interface at which such a request can be applied.

We can reference the ABNO Control Interface [RFC7491], on which an
Application Service Coordinator can request the ABNO controller to
take charge of the path calculation (see Figure 1 in that RFC),
and/or ACTN [ACTN-Frame], where a controller hierarchy is defined
and the need for path computation arises on both the CMI (the
interface between the Customer Network Controller (CNC) and the
Multi Domain Service Coordinator (MDSC)) and the MPI (the interface
between the MDSC and the PNC).  [ACTN-Info] describes an information
model for the path computation request.

Multiple protocol solutions can be used for communication between
different controller hierarchical levels.  This document assumes
that the controllers communicate using YANG-based protocols (e.g.,
NETCONF or RESTCONF).

Path Computation Elements, Controllers and Orchestrators perform
their operations based on Traffic Engineering Databases (TED).  Such
TEDs can be described, in a technology agnostic way, with the YANG
Data Model for TE Topologies [TE-TOPO].  Furthermore, the
technology-specific details of the TED are modeled in the augmented
TE topology models (e.g., [L1-TOPO] for Layer 1 ODU technologies).

The availability of such topology models allows the TED to be
provided using YANG-based protocols (e.g., NETCONF or RESTCONF).
Furthermore, it enables a PCE/Controller to perform the necessary
abstractions or modifications and to offer this customized topology
to another PCE/Controller or higher-level orchestrator.
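For illustration only, the following minimal sketch shows how such a
TED, exposed through the [TE-TOPO] model, might be retrieved with a
YANG-based protocol (RESTCONF in this example).  The URL, the
credentials and the use of the Python "requests" library are
assumptions made for the purpose of the example, not part of any
model discussed in this document.

   import requests  # generic HTTP library used as a RESTCONF client

   BASE = "https://controller.example.com/restconf/data"

   # Retrieve the TE topologies exported by the controller; the
   # resource follows the ietf-network structure that [TE-TOPO]
   # augments.  Exact paths depend on the server implementation.
   resp = requests.get(BASE + "/ietf-network:networks",
                       headers={"Accept": "application/yang-data+json"},
                       auth=("user", "secret"), timeout=10)
   resp.raise_for_status()
   ted = resp.json()  # technology-agnostic TED, possibly augmented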
The tunnels that can be provided over the networks described with
the topology models can also be set up, deleted and modified via
YANG-based protocols (e.g., NETCONF or RESTCONF) using the TE Tunnel
YANG model [TE-TUNNEL].

This document describes some use cases where a path computation
request, via YANG-based protocols (e.g., NETCONF or RESTCONF), can
be needed.

This document also proposes a YANG model for a stateless RPC which
complements the stateful solution defined in [TE-TUNNEL].

2. Use Cases

This section presents different use cases where an orchestrator
needs to request path computation from the underlying SDN
controllers.

The presented use cases have been grouped depending on the different
underlying topologies: a) IP-Optical integration; b) Multi-domain
Traffic Engineered (TE) Networks; and c) Data center
interconnections.

2.1. IP-Optical integration

In these use cases, an Optical domain is used to provide
connectivity between IP routers, which are connected to the Optical
domain using access links (see Figure 1).

   --------------------------------------------------------------
   I                                                            I
   I                  IP+Optical Use Cases                      I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

                  Figure 1 - IP+Optical Use Cases

It is assumed that the Optical domain controller provides the
orchestrator with an abstracted view of the Optical network.  A
possible abstraction would be to represent the whole optical domain
as one "virtual node" with "virtual ports" connected to the access
links.

The path computation request helps the orchestrator to know which
connections can actually be provided within the optical domain.

   --------------------------------------------------------------
   I                                                            I
   I            IP+Optical Topology Abstraction                 I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

            Figure 2 - IP+Optical Topology Abstraction

2.1.1. Inter-layer path computation

In this use case, the orchestrator needs to set up an optimal path
between two IP routers, R1 and R2.

As depicted in Figure 2, the Orchestrator has only an "abstracted
view" of the physical network, and it does not know the feasibility
or the cost of the possible optical paths (e.g., VP1-VP4 and
VP2-VP5), which depend on the current status of the physical
resources within the optical network and on vendor-specific optical
attributes.

The orchestrator can request the underlying Optical domain
controller to compute a set of potential optimal paths, taking into
account optical constraints.  Then, based on its own constraints,
policy and knowledge (e.g., the cost of the access links), it can
choose which one of these potential paths to use to set up the
optimal end-to-end path crossing the optical network.

   --------------------------------------------------------------
   I                                                            I
   I          IP+Optical Path Computation Example               I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

          Figure 3 - IP+Optical Path Computation Example

For example, in Figure 3, the Orchestrator can request the Optical
domain controller to compute the paths between VP1-VP4 and VP2-VP5
and then decide to set up the optimal end-to-end path using the
VP2-VP5 Optical path, even if this is not the optimal path from the
Optical domain perspective.

Considering the dynamicity of the connectivity constraints of an
Optical domain, it is possible that a path computed by the Optical
domain controller when requested by the Orchestrator is no longer
valid when the Orchestrator requests it to be set up.

It is worth noting that, with the approach proposed in this
document, the likelihood of this issue occurring can be quite small,
since the time window between the path computation request and the
path setup request should be quite short (especially if compared
with the time that would be needed to update the information of a
very detailed abstract connectivity matrix).
If this risk is still not acceptable, the Orchestrator may also
optionally request the Optical domain controller not only to compute
the path but also to keep track of its resources (e.g., these
resources can be reserved to avoid being used by any other
connection).  In this case, some mechanism (e.g., a timeout) needs
to be defined to avoid having stranded resources within the Optical
domain.

These issues and solutions can be fine-tuned during the design of
the YANG model for requesting Path Computation.

2.1.2. Route Diverse IP Services

This is for further study.

2.2. Multi-domain TE Networks

In this use case there are two TE domains which are interconnected
by multiple inter-domain links.

A possible example could be a multi-domain optical network.

   --------------------------------------------------------------
   I                                                            I
   I        Multi-domain multi-link interconnection             I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

        Figure 4 - Multi-domain multi-link interconnection

In order to set up an end-to-end multi-domain TE path (e.g., between
nodes A and H), the orchestrator needs to know the feasibility or
the cost of the possible TE paths within the two TE domains, which
depend on the current status of the physical resources within each
TE network.  This is more challenging in the case of optical
networks, because the optimal paths also depend on vendor-specific
optical attributes (which may be different in the two domains if
they are provided by different vendors).

In order to set up a multi-domain TE path (e.g., between nodes A and
H), the Orchestrator can request the TE domain controllers to
compute a set of intra-domain optimal paths and take decisions based
on the information received.  For example:

o  The Orchestrator asks the TE domain controllers to provide the
   set of paths between A-C, A-D, E-H and F-H

o  The TE domain controllers return a set of feasible paths with the
   associated costs: the path A-C is not part of this set (in
   optical networks, it is typical to have some paths not being
   feasible due to optical constraints that are known only by the
   optical domain controller)

o  The Orchestrator will select the path A-D-F-H, since it is the
   only feasible multi-domain path, and then request the TE domain
   controllers to set up the A-D and F-H intra-domain paths

o  If there are multiple feasible paths, the Orchestrator can select
   the optimal path knowing the cost of the intra-domain paths
   (provided by the TE domain controllers) and the cost of the
   inter-domain links (known by the Orchestrator), as illustrated by
   the sketch below
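The decision logic in the last bullet can be illustrated with a
short sketch.  The names and costs below are hypothetical, and
Python is used here purely as pseudo-code for the Orchestrator's
decision; it is not part of any model defined in this document.

   # Feasible intra-domain paths and costs returned by the TE domain
   # controllers; A-C is absent because it was reported unfeasible.
   intra = {("A", "D"): 10, ("E", "H"): 12, ("F", "H"): 8}

   # Inter-domain link costs, known by the Orchestrator.
   inter = {("C", "E"): 5, ("D", "F"): 6}

   # Candidate end-to-end routes as sequences of intra-domain path
   # segments and inter-domain links.
   candidates = {
       "A-C-E-H": [(intra, ("A", "C")), (inter, ("C", "E")),
                   (intra, ("E", "H"))],
       "A-D-F-H": [(intra, ("A", "D")), (inter, ("D", "F")),
                   (intra, ("F", "H"))],
   }

   def route_cost(segments):
       total = 0
       for table, endpoints in segments:
           if endpoints not in table:
               return None  # one unfeasible segment rules the route out
           total += table[endpoints]
       return total

   costs = {name: route_cost(segs) for name, segs in candidates.items()}
   feasible = {name: c for name, c in costs.items() if c is not None}
   best = min(feasible, key=feasible.get)  # "A-D-F-H" (cost 24) here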
This approach may have some scalability issues when the number of TE
domains is quite big (e.g., 20).

In this case, it would be worthwhile using the abstract TE topology
information provided by the domain controllers to limit the number
of potential optimal end-to-end paths and then request path
computation from fewer domain controllers in order to decide what
the optimal path within this limited set is.

For more details, see section 3.3.

2.3. Data center interconnections

In this use case, a TE domain is used to provide connectivity
between data centers, which are connected to the TE domain using
access links.

   --------------------------------------------------------------
   I                                                            I
   I         Data Center Interconnection Use Case               I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

        Figure 5 - Data Center Interconnection Use Case

In this use case, a virtual machine within Data Center 1 (DC1) needs
to transfer data to another virtual machine that can reside either
in DC2 or in DC3.

The optimal decision depends both on the cost of the TE path
(DC1-DC2 or DC1-DC3) and on the cost of the computing power (data
center resources) within DC2 or DC3.

The Cloud Orchestrator may not be able to make this decision,
because it has only an abstract view of the TE network (as in the
use case in section 2.1).

The cloud orchestrator can request the TE domain controller to
compute the cost of the possible TE paths (e.g., DC1-DC2 and
DC1-DC3) and the DC controller to compute the cost of the computing
power (DC resources) within DC2 and DC3, and then it can take the
decision about the optimal solution based on this information and
its policy.

3. Interactions with TE Topology

The use cases described in section 2 assume that the topology view
exported by each underlying SDN controller to the orchestrator is
aggregated using the "virtual node model", defined in [RFC7926].

TE Topology information, e.g., as provided by [TE-TOPO], could in
theory be used by an underlying SDN controller to provide TE
information to the orchestrator, thus allowing the Path Computation
Element (PCE) within the Orchestrator to perform multi-domain path
computation on its own, without requesting path computations from
the underlying SDN controllers.

This section analyzes the need for an orchestrator to request path
computation from the underlying SDN controllers even in these
scenarios, as well as how the TE Topology information and the path
computation can be complementary.

In a nutshell, there is a scalability trade-off between providing
all the TE information needed by the Orchestrator's PCE to take
optimal path computation decisions on its own, and having the
Orchestrator request a set of feasible optimal intra-domain TE paths
from too many underlying SDN Domain Controllers.

3.1. TE Topology Aggregation using the "virtual link model"

Using the TE Topology model, as defined in [TE-TOPO], the underlying
SDN controller can export the whole TE domain as a single abstract
TE node with a "detailed connectivity matrix", which extends the
"connectivity matrix", defined in [RFC7446], with specific TE
attributes (e.g., delay, SRLGs and summary TE metrics).

The information provided by the "detailed abstract connectivity
matrix" would be equivalent to the information that should be
provided by the "virtual link model" as defined in [RFC7926].

For example, in the IP-Optical integration use case described in
section 2.1, the Optical domain controller can make the information
shown in Figure 3 available to the Orchestrator as part of the TE
Topology information, and the Orchestrator could use this
information to calculate on its own the optimal path between routers
R1 and R2, without requesting any additional information from the
Optical Domain Controller.
However, there is a tradeoff between the accuracy (i.e., providing
"all" the information that might be needed by the Orchestrator's
PCE) and scalability, to be considered when designing the amount of
information to provide within the "detailed abstract connectivity
matrix".

Figure 6 below shows another example, similar to Figure 3, where
there are two possible Optical paths between VP1 and VP4 with
different properties (e.g., available bandwidth and cost).

   --------------------------------------------------------------
   I                                                            I
   I          IP+Optical Path Computation Example               I
   I                with multiple choices                       I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

 Figure 6 - IP+Optical Path Computation Example with multiple choices

Reporting all the information, as in Figure 6, using the "detailed
abstract connectivity matrix" is quite challenging from a
scalability perspective.  The amount of this information is not just
based on the number of end points (which would scale as N-square),
but also on many other parameters, including client rate, user
constraints / policies for the service (e.g., max latency < N ms,
max cost), exclusion policies to route around busy links, min OSNR
margin, max pre-FEC BER, etc.  All these constraints could be
different based on connectivity requirements.

The following table reports a list of the possible constraints,
together with their potential cardinality.

The maximum number of potential connections to be computed and
reported is, in first approximation, the product of all of them.

Constraint   Cardinality
----------   --------------------------------------------------------

End points   N(N-1)/2 if connections are bidirectional (OTN and WDM),
             N(N-1) for unidirectional connections.

Bandwidth    In WDM networks, bandwidth values are expressed in GHz.

             On fixed-grid WDM networks, the central frequencies are
             on a 50GHz grid and the channel width of the
             transmitters is typically 50GHz, such that each central
             frequency can be used, i.e., adjacent channels can be
             placed next to each other in terms of central
             frequencies.

             On flex-grid WDM networks, the central frequencies are
             on a 6.25GHz grid and the channel width of the
             transmitters can be multiples of 12.5GHz.

             For fixed-grid WDM networks there is typically only one
             possible bandwidth value (i.e., 50GHz), while for
             flex-grid WDM networks there are typically 4 possible
             bandwidth values (e.g., 37.5GHz, 50GHz, 62.5GHz, 75GHz).

             In OTN (ODU) networks, bandwidth values are expressed as
             pairs of ODU type and, in case of ODUflex, ODU rate in
             bytes/sec, as described in section 5 of [RFC7139].

             For "fixed" ODUk types, there are 6 possible bandwidth
             values (i.e., ODU0, ODU1, ODU2, ODU2e, ODU3, ODU4).

             For ODUflex(GFP), up to 80 different bandwidth values
             can be specified, as defined in Table 7-8 of
             [ITU-T G.709-2016].

             For other ODUflex types, like ODUflex(CBR), the number
             of possible bandwidth values depends on the rates of the
             clients that could be mapped over these ODUflex types,
             as shown in Table 7.2 of [ITU-T G.709-2016], which in
             theory could be a continuum of values.
             However, since different ODUflex bandwidths that use the
             same number of TSs on each link along the path are
             equivalent for path computation purposes, up to 120
             different bandwidth ranges can be specified.

             Ideas to reduce the number of ODUflex bandwidth values
             in the detailed connectivity matrix, to less than 100,
             are for further study.

             The bandwidth specification for ODUCn is currently for
             further study, but it is expected that other bandwidth
             values can be specified as integer multiples of 100Gb/s.

             In IP, bandwidth values are expressed in bytes/sec.  In
             principle, this is a continuum of values, but in
             practice we can identify a set of bandwidth ranges,
             where any bandwidth value inside the same range produces
             the same path.  The number of such ranges is the
             cardinality, which depends on the topology, the
             available bandwidth and the status of the network.
             Simulations (Note: reference paper submitted for
             publication) show that values for medium size topologies
             (around 50-150 nodes) are in the range 4-7 (5 on
             average) for each couple of end points.

Metrics      IGP, TE and hop number are the basic objective metrics
             defined so far.  There are also the 2 objective
             functions defined in [RFC5541]: Minimum Load Path (MLP)
             and Maximum Residual Bandwidth Path (MBP).  Assuming
             that only one metric or objective function can be
             optimized at once, the total cardinality here is 5.

             With [PCEP-Service-Aware], a number of additional
             metrics are defined, including the Path Delay metric,
             the Path Delay Variation metric and the Path Loss
             metric, both for point-to-point and point-to-multipoint
             paths.  This increases the cardinality to 8.

Bounds       Each metric can be associated with a bound in order to
             find a path having a total value of that metric lower
             than the given bound.  This has a potentially very high
             cardinality (as any value for the bound is allowed).  In
             practice there is a maximum value of the bound (the one
             with the maximum value of the associated metric) which
             always results in the same path, and a range approach
             like the one used for bandwidth in IP should also
             produce a limited cardinality in this case.  Assuming a
             cardinality similar to the one of the bandwidth (say 5
             on average), we should have a cardinality of 6 (IGP, TE,
             hop, path delay, path delay variation and path loss; we
             don't consider here the two objective functions of
             [RFC5541], as they are conceived only for optimization)
             * 5 = 30.

Priority     We have 8 values for the setup priority, which is used
             in path computation to route a path using free resources
             and, where no free resources are available, resources
             used by LSPs having a lower holding priority.

Local prot   It is possible to ask for a locally protected service,
             where all the links used by the path are protected with
             fast reroute (this is only for IP networks, but line
             protection schemes are available in the other
             technologies as well).  This adds an alternative path
             computation, so the cardinality of this constraint is 2.

Administrative
Colors       Administrative colors (aka affinities) are typically
             assigned to links, but when topology abstraction is used
             affinity information can also appear in the detailed
             connectivity matrix.

             There are 32 bits available for the affinities.
             Links can be tagged with any combination of these bits,
             and path computation can be constrained to include or
             exclude any or all of them.  The relevant cardinality is
             3 (include-any, exclude-any, include-all) times 2^32
             possible values.  However, the number of possible values
             used in real networks is quite small.

Included Resources

             A path computation request can be associated with an
             ordered set of network resources (links, nodes) to be
             included along the computed path.  This constraint would
             have a huge cardinality, as in principle any combination
             of network resources is possible.  However, as long as
             the Orchestrator doesn't know the details of the
             internal topology of the domain, it shouldn't include
             this type of constraint at all (see more details below).

Excluded Resources

             A path computation request can be associated with a set
             of network resources (links, nodes, SRLGs) to be
             excluded from the computed path.  Like included
             resources, this constraint has a potentially very high
             cardinality but, once again, it cannot actually be used
             by the Orchestrator if it is not aware of the domain
             topology (see more details below).

As discussed above, the Orchestrator can specify include or exclude
resources depending on the abstract topology information that the
domain controller exposes:

o  In case the domain controller exposes the entire domain as a
   single abstract TE node with its own external terminations and
   connectivity matrix (whose size we are estimating), no other
   topological details are available.  Therefore, the size of the
   connectivity matrix only depends on the combination of the
   constraints that the Orchestrator can use in a path computation
   request to the domain controller.  These constraints cannot refer
   to any details of the internal topology of the domain, as those
   details are not known to the Orchestrator, and so they do not
   impact the size of the exported connectivity matrix.

o  If, instead, the domain controller exposes a topology including
   more than one abstract TE node and TE links, and their attributes
   (e.g., SRLGs, affinities for the links), the Orchestrator knows
   these details and could therefore compute a path across the
   domain referring to them in the constraints.  The connectivity
   matrixes to be estimated here are the ones relevant to the
   abstract TE nodes exported to the Orchestrator.  These
   connectivity matrixes, and therefore their sizes, cannot depend on
   the other abstract TE nodes and TE links, which are external to
   the given abstract node, but they could depend on SRLGs (and other
   attributes, like affinities) which could also be present in the
   portion of the topology represented by the abstract nodes, and
   therefore contribute to the size of the related connectivity
   matrix.

We also don't consider here the possibility to ask for more than one
path in diversity, or for point-to-multipoint paths, which are for
further study.

Considering, for example, an IP domain, and ignoring SRLGs and
affinities, we have an estimated number of paths depending on these
estimated cardinalities:

   Endpoints = N*(N-1), Bandwidth = 5, Metrics = 6, Bounds = 20,
   Priority = 8, Local prot = 2

The number of paths to be pre-computed by each IP domain is
therefore 24960 * N(N-1), where N is the number of domain access
points.
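As a back-of-the-envelope check, the sketch below reproduces, from
the per-endpoint-pair estimate above and the per-path JSON size
assumed in the next paragraph, the figures used in the rest of this
section (all values are this document's estimates, not
measurements):

   per_pair = 24960    # pre-computed combinations per endpoint pair,
                       # from the cardinalities estimated above
   n = 4               # domain access points
   paths = per_pair * n * (n - 1)        # 299520, i.e. ~300000 paths
   json_bytes = 1024                     # ~1 KByte of JSON per path
   total_mbytes = paths * json_bytes / 1e6   # ~307 MBytes per domain
   churn_mbytes = 0.20 * total_mbytes        # ~61 MBytes per change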
This means that with just 4 access points we have nearly 300000
paths to compute, advertise and maintain (if a change happens in the
domain, due to a fault or just the deployment of new traffic, a
substantial number of paths need to be recomputed and the relevant
changes advertised to the upper controller).

This seems quite challenging.  In fact, if we assume a mean length
of 1 KByte for the JSON describing a path (a quite conservative
estimate), reporting 300000 paths means transferring and then
parsing more than 300 MBytes for each domain.  If we assume that 20%
(to be checked) of these paths change when a new deployment of
traffic occurs, we have 60 MBytes of transfer for each domain
traversed by a new end-to-end path.  If a network has, say, 20
domains (we want to estimate the load for a non-trivial domain
setup), a total initial transfer of 6 GBytes is needed in the
beginning and, assuming that 4-5 domains are involved on average
during a path deployment, we could eventually have 240-300 MBytes of
changes advertised to the higher order controller.

Further bare-bone solutions can be investigated, removing some more
options, if this is considered not acceptable.  In conclusion, it
seems that an approach based only on the connectivity matrix is
hardly feasible, and could be applicable only to small networks with
a limited meshing degree between domains, renouncing a number of
path computation features.

It is also worth noting that the "connectivity matrix" was
originally defined in WSON [RFC7446] to report the connectivity
constraints of a physical node within the WDM network: the
information it contains is pretty "static" and therefore, once taken
and stored in the TE database, it can always be considered valid and
up-to-date for path computation requests.

Using the "connectivity matrix" with an abstract node to abstract
the information regarding the connectivity constraints of an Optical
domain would make this information more "dynamic", since the
connectivity constraints of an Optical domain can change over time,
because some optical paths that are feasible at a given time may
become unfeasible at a later time when, e.g., another optical path
is established.  The information in the "detailed abstract
connectivity matrix" is even more dynamic, since the establishment
of another optical path may change some of the parameters (e.g.,
delay or available bandwidth) in the "detailed abstract connectivity
matrix" while not changing the feasibility of the path.

The "connectivity matrix" is sometimes confused with optical reach
tables that contain multiple (e.g., k-shortest) regen-free reachable
paths for every A-Z node combination in the network.  Optical reach
tables can be calculated offline, utilizing vendor optical design
and planning tools, and periodically uploaded to the Controller:
these optical path reach tables are fairly static.  However, to get
the connectivity matrix between any two sites, either a regen-free
path can be used, if one is available, or multiple regen-free paths
are concatenated to get from source to destination, which can be a
very large combination.  Additionally, when the optical path within
the optical domain needs to be computed, it can result in different
paths based on the input objective, constraints, and network
conditions.
In summary, even though the "optical reachability table" is fairly
static, the choice of which regen-free paths to use to build the
connectivity matrix between any source and destination is very
dynamic, and is made using very sophisticated routing algorithms.

There is therefore the need to keep the information in the
"connectivity matrix" updated, which means that there is another
tradeoff between the accuracy (i.e., providing "all" the information
that might be needed by the Orchestrator's PCE) and having
up-to-date information.  The more information is provided, the
longer it takes to keep it up-to-date, which increases the
likelihood that the Orchestrator's PCE computes paths using outdated
information.

It therefore seems quite challenging to have a "detailed abstract
connectivity matrix" that provides accurate, scalable and updated
information allowing the Orchestrator's PCE to take optimal
decisions on its own.

If the information in the "detailed abstract connectivity matrix" is
not complete/accurate, we can have the following drawbacks,
considering for example the case in Figure 6:

o  If only the VP1-VP4 path with available bandwidth of 2 Gb/s and
   cost 50 is reported, the Orchestrator's PCE will fail to compute
   a 5 Gb/s path between routers R1 and R2, although this would be
   feasible;

o  If only the VP1-VP4 path with available bandwidth of 10 Gb/s and
   cost 60 is reported, the Orchestrator's PCE will compute, as
   optimal, the 1 Gb/s path between R1 and R2 going through the
   VP2-VP5 path within the Optical domain, while the optimal path
   would actually be the one going through the VP1-VP4 sub-path
   (with cost 50) within the Optical domain.

Instead, using the approach proposed in this document, the
Orchestrator, when it needs to set up an end-to-end path, can
request the Optical domain controller to compute a set of optimal
paths (e.g., for VP1-VP4 and VP2-VP5) and take decisions based on
the information received:

o  When setting up a 5 Gb/s path between routers R1 and R2, the
   Optical domain controller may report only the VP1-VP4 path as the
   only feasible path: the Orchestrator can successfully set up the
   end-to-end path passing through this Optical path;

o  When setting up a 1 Gb/s path between routers R1 and R2, the
   Optical domain controller (knowing that the path requires only 1
   Gb/s) can report both the VP1-VP4 path, with cost 50, and the
   VP2-VP5 path, with cost 65.  The Orchestrator can then compute
   the optimal path, which passes through the VP1-VP4 sub-path (with
   cost 50) within the Optical domain.

3.2. TE Topology Abstraction

Using the TE Topology model, as defined in [TE-TOPO], the underlying
SDN controller can export an abstract TE Topology, composed of a set
of TE nodes and TE links, which abstracts the topology controlled by
each domain controller.

Considering the example in Figure 4, TE domain controller 1 can
export a TE Topology encompassing the TE nodes A, B, C and D and the
TE links interconnecting them.  In a similar way, TE domain
controller 2 can export a TE Topology encompassing the TE nodes E,
F, G and H and the TE links interconnecting them.

In this example, for simplicity reasons, each abstract TE node maps
to one physical node, but this is not necessary.
In order to set up a multi-domain TE path (e.g., between nodes A and
H), the Orchestrator can compute on its own an optimal end-to-end
path based on the abstract TE topology information provided by the
domain controllers.  For example:

o  The Orchestrator's PCE, based on its own information, can compute
   the optimal multi-domain path as being A-B-C-E-G-H, and then
   request the TE domain controllers to set up the A-B-C and E-G-H
   intra-domain paths

o  But, during path setup, the domain controller may find out that
   the A-B-C intra-domain path is not feasible (as discussed in
   section 2.2, in optical networks it is typical to have some paths
   not being feasible due to optical constraints that are known only
   by the optical domain controller), while only the path A-B-D is
   feasible

o  So what the hierarchical controller computed is not valid, and
   the path computation needs to be restarted from scratch

As discussed in section 3.1, providing more extensive abstract
information from the TE domain controllers to the multi-domain
Orchestrator may lead to scalability problems.

In a sense, this is similar to the problem of routing and wavelength
assignment within an Optical domain.  It is possible to do first
routing (step 1) and then wavelength assignment (step 2), but the
chances of ending up with a good path are low.  Alternatively, it is
possible to do combined routing and wavelength assignment, which is
known to be a more optimal and effective way for Optical path setup.
Similarly, it is possible to first compute an abstract end-to-end
path within the multi-domain Orchestrator (step 1) and then compute
an intra-domain path within each Optical domain (step 2), but there
are more chances of not finding a path, or of getting a suboptimal
path, than with performing per-domain path computations and then
stitching them together.

3.3. Complementary use of TE topology and path computation

As discussed in section 2.2, there are some scalability issues with
path computation requests in a multi-domain TE network with many TE
domains, in terms of the number of requests to send to the TE domain
controllers.  It would therefore be worthwhile using the TE topology
information provided by the domain controllers to limit the number
of requests.

An example can be described considering the multi-domain abstract
topology shown in Figure 7.  In this example, an end-to-end TE path
between domains A and F needs to be set up.  The transit domain
should be selected among domains B, C, D and E.

   --------------------------------------------------------------
   I                                                            I
   I             Multi-domain with many domains                 I
   I               (Topology information)                       I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

   Figure 7 - Multi-domain with many domains (Topology information)

The actual cost of each intra-domain path is not known a priori from
the abstract topology information.  The Orchestrator only knows,
from the TE topology provided by the underlying domain controllers,
the feasibility of some intra-domain paths and some upper-bound
and/or lower-bound cost information.
With this information, together with the cost of the inter-domain
links, the Orchestrator can understand on its own that:

o  Domain B cannot be selected, as the path crossing domain B
   between domains A and F is not feasible;

o  Domain E cannot be selected as a transit domain, since it is
   known from the abstract topology information provided by the
   domain controllers that the cost of the multi-domain path A-E-F
   (which is 100 in the best case) will always be higher than the
   costs of the multi-domain paths A-D-F (which is 90 in the worst
   case) and A-C-F (which is 80 in the worst case)

Therefore, the Orchestrator can understand on its own that the
optimal multi-domain path could be either A-D-F or A-C-F, but it
cannot know which of the two possible options actually provides the
optimal end-to-end path.

The Orchestrator can therefore request path computation only from
the TE domain controllers A, C, D and F (and not from all the
possible TE domain controllers).

   --------------------------------------------------------------
   I                                                            I
   I             Multi-domain with many domains                 I
   I            (Path Computation information)                  I
   I                                                            I
   I                  (only in PDF version)                     I
   I                                                            I
   --------------------------------------------------------------

        Figure 8 - Multi-domain with many domains
             (Path Computation information)

Based on these requests, the Orchestrator can know the actual cost
of each intra-domain path which belongs to a potential optimal
end-to-end path, as shown in Figure 8, and then compute the optimal
end-to-end path (e.g., A-D-F, having a total cost of 50, instead of
A-C-F, having a total cost of 70).
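The bound-based pruning described above can be sketched as follows.
The per-domain cost bounds are hypothetical values chosen to be
consistent with the example (domain B has no feasible path, and the
actual costs learned later via path computation fall within these
bounds); the sketch is illustrative only.

   # (best-case, worst-case) end-to-end cost bounds per transit
   # domain, derived from the abstract topologies plus the
   # inter-domain link costs; None means unfeasible.
   bounds = {"B": None, "C": (55, 80), "D": (40, 90), "E": (100, 120)}

   feasible = {d: b for d, b in bounds.items() if b is not None}
   best_worst_case = min(worst for (_, worst) in feasible.values())  # 80

   # Keep a transit domain only if, in the best case, it can still
   # beat the worst case of the best alternative.
   shortlist = [d for d, (best, _) in feasible.items()
                if best <= best_worst_case]   # ['C', 'D']; E is pruned

Path computation is then requested only from the domain controllers
involved in the shortlisted routes.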
4. Motivation for a YANG Model

4.1. Benefits of common data models

Path computation requests should be closely aligned with the YANG
data models that provide (abstract) TE topology information, i.e.,
[TE-TOPO], as well as with those used to configure and manage TE
Tunnels, i.e., [TE-TUNNEL].  Otherwise, an error-prone mapping or
correlation of information would be required.  For instance, there
is benefit in using the same endpoint identifiers in path
computation requests and in the topology modeling.  Also, the
attributes used in path computation constraints could use the same
or similar data models.  As a result, there are many benefits in
aligning path computation requests with the YANG models for TE
topology information and for TE Tunnels configuration and
management.

4.2. Benefits of a single interface

A typical use case for path computation requests is the interface
between an orchestrator and a domain controller.  The system
integration effort is typically lower if a single, consistent
interface is used between such systems, i.e., one data modeling
language (i.e., YANG) and a common protocol (e.g., NETCONF or
RESTCONF).

Practical benefits of using a single, consistent interface include:

1. Simple authentication and authorization: The interface between
   different components has to be secured.  If different protocols
   have different security mechanisms, ensuring a common access
   control model may result in overhead.  For instance, there may be
   a need to deal with different security mechanisms, e.g.,
   different credentials or keys.  This can result in increased
   integration effort.

2. Consistency: Keeping data consistent over multiple different
   interfaces or protocols is not trivial.  For instance, the
   sequence of actions can matter in certain use cases, or
   transaction semantics could be desired.  While ensuring
   consistency within one protocol can already be challenging, it is
   typically cumbersome to achieve that across different protocols.

3. Testing: System integration requires comprehensive testing,
   including corner cases.  The more different technologies are
   involved, the more difficult it is to run comprehensive test
   cases and ensure proper integration.

4. Middle-box friendliness: Provider and consumer of path
   computation requests may be located in different networks, and
   middle-boxes such as firewalls, NATs, or load balancers may be
   deployed.  In such environments it is simpler to deploy a single
   protocol.  Also, it may be easier to debug connectivity problems.

5. Tooling reuse: Implementers may want to implement path
   computation requests with tools and libraries that already exist
   in controllers and/or orchestrators, e.g., leveraging the rapidly
   growing eco-system for YANG tooling.

4.3. Extensibility

Path computation is only a subset of the typical functionality of a
controller.  In many use cases, issuing path computation requests
comes along with the need to access other functionality on the same
system.  In addition to obtaining TE topology, for instance,
configuration of services (setup/modification/deletion) may also be
required, as well as:

1. Receiving notifications for topology changes as well as
   integration with fault management

2. Performance management, such as retrieving monitoring and
   telemetry data

3. Service assurance, e.g., by triggering OAM functionality

4. Other fulfilment and provisioning actions beyond tunnels and
   services, such as changing QoS configurations

YANG is a very extensible and flexible data modeling language that
can be used for all these use cases.

Adding support for path computation requests to YANG models would
seamlessly complement [TE-TOPO] and [TE-TUNNEL] in the use cases
where YANG-based protocols (e.g., NETCONF or RESTCONF) are used.

5. Path Computation for multiple LSPs

There are use cases where path computation is required for multiple
Traffic Engineering Label Switched Paths (TE LSPs) through a network
or through a network domain.  It may be advantageous to request the
new paths for a set of LSPs in one single path computation request
[RFC5440] that also includes information regarding the desired
objective function, see [RFC5541].

In the context of abstraction and control of TE networks (ACTN), as
defined in [ACTN-Frame], when an MDSC receives a virtual network
(VN) request from a CNC, the MDSC needs to perform path computation
for multiple LSPs, as a typical VN is constructed by a set of
multiple paths, also called end-to-end tunnels.  The MDSC may send a
single path computation request to the PNC for multiple LSPs, i.e.,
between the VN end points (access points in ACTN terminology).

In a more general context, when an MDSC needs to send multiple path
provisioning requests to the PNC, the MDSC may also group these path
provisioning requests together and send them in a single message to
the PNC, instead of sending separate requests for each path.
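As an illustration of such grouping, the sketch below shows a
hypothetical RESTCONF invocation of the stateless RPC defined in
section 6.2, carrying two LSP requests in a single message (the
"servicePort" list in the RPC input allows more than one entry).
The addresses, URL and credentials are placeholders, and the JSON
encoding is only indicative of the YANG tree in section 6.2.1:

   import json
   import requests  # generic HTTP library used as a RESTCONF client

   rpc_input = {"ietf-te-path-computation:input": {
       "servicePort": [
           {"source": "10.0.0.1", "destination": "10.0.0.2"},
           {"source": "10.0.0.1", "destination": "10.0.0.3"},
       ]
   }}

   resp = requests.post(
       "https://pnc.example.com/restconf/operations/"
       "ietf-te-path-computation:statelessComputeP2PPath",
       data=json.dumps(rpc_input),
       headers={"Content-Type": "application/yang-data+json",
                "Accept": "application/yang-data+json"},
       auth=("user", "secret"), timeout=30)
   resp.raise_for_status()
   output = resp.json()  # pathCompService with the computed path-refs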
6. YANG Model for requesting Path Computation

The TE Tunnel YANG model has been extended to support the need to
request path computation.

It is possible to request path computation by configuring a
"compute-only" TE tunnel and retrieving the computed path(s) in the
LSP(s) Record-Route Object (RRO) list, as described in section 3.3.1
of [TE-TUNNEL].

This is a stateful solution, since the state of each created
"compute-only" TE tunnel needs to be maintained and updated when the
underlying network conditions change.

The need for a stateless solution, based on an RPC, has also been
recognized, as outlined in section 6.1.1.

A proposal for a stateless RPC to request path computation is
provided in section 6.2.

This is intended as an input for further evaluation and discussion
with the authors of the [TE-TUNNEL] Internet-Draft and the TEAS WG
participants, about the technical solution as well as about whether
this RPC should be merged with the YANG model defined in
[TE-TUNNEL].

6.1. Modeling Considerations

6.1.1. Stateless and Stateful Path Computation

For further study.

6.1.2. Reduction of Path Computation Requests

For further study.

6.2. YANG model for stateless TE path computation

6.2.1. YANG Tree

Figure 9 below shows the tree diagram of the YANG model defined in
module ietf-te-path-computation.yang.

module: ietf-te-path-computation
  +--rw paths
  |  +--ro path* [path-id]
  |     +--ro _telink* [link-ref]
  |     |  +--ro link-ref       -> /nd:networks/network[nd:network-id=current()/../network-ref]/lnk:link/link-id
  |     |  +--ro network-ref?   -> /nd:networks/network/network-id
  |     +--ro _routingConstraint
  |     |  +--ro requestedCapacity?   tet:te-bandwidth
  |     |  +--ro pathConstraints
  |     |  |  +--ro path-constraints
  |     |  |     +--ro topology-id?       te-types:te-topology-id
  |     |  |     +--ro cost-limit?        uint32
  |     |  |     +--ro hop-limit?         uint8
  |     |  |     +--ro metric-type?       identityref
  |     |  |     +--ro tiebreaker-type?   identityref
  |     |  |     +--ro ignore-overload?   boolean
  |     |  |     +--ro path-affinities {named-path-affinities}?
  |     |  |     |  +--ro (style)?
  |     |  |     |     +--:(values)
  |     |  |     |     |  +--ro value?   uint32
  |     |  |     |     |  +--ro mask?    uint32
  |     |  |     |     +--:(named)
  |     |  |     |        +--ro constraints* [usage]
  |     |  |     |           +--ro usage        identityref
  |     |  |     |           +--ro constraint
  |     |  |     |              +--ro affinity-names* [name]
  |     |  |     |                 +--ro name    string
  |     |  |     +--ro path-srlgs
  |     |  |        +--ro (style)?
  |     |  |           +--:(values)
  |     |  |           |  +--ro usage?    identityref
  |     |  |           |  +--ro values*   te-types:srlg
  |     |  |           +--:(named)
  |     |  |              +--ro constraints* [usage]
  |     |  |                 +--ro usage        identityref
  |     |  |                 +--ro constraint
  |     |  |                    +--ro srlg-names* [name]
  |     |  |                       +--ro name    string
  |     |  +--ro bidirectional
  |     |  |  +--ro association
  |     |  |     +--ro id?              uint16
  |     |  |     +--ro source?          inet:ip-address
  |     |  |     +--ro global-source?   inet:ip-address
  |     |  |     +--ro type?            identityref
  |     |  |     +--ro provisioing?     identityref
  |     |  +--ro _avoidTopology
  |     |     +--ro provider-ref?      -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:provider-id
  |     |     +--ro client-ref?        -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:client-id
  |     |     +--ro te-topology-ref?   -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:te-topology-id
  |     |     +--ro network-ref?       -> /nw:networks/network/network-id
  |     +--ro path-id    yang-types:uuid
  +--rw pathComputationService
     +--ro _path-ref*   -> /paths/path/path-id
     +--rw _servicePort
     |  +--rw source?        inet:ip-address
     |  +--rw destination?   inet:ip-address
     |  +--rw src-tp-id?     binary
     |  +--rw dst-tp-id?     binary
     |  +--rw bidirectional
     |     +--rw association
     |        +--rw id?              uint16
     |        +--rw source?          inet:ip-address
     |        +--rw global-source?   inet:ip-address
     |        +--rw type?            identityref
     |        +--rw provisioing?     identityref
     +--rw _routingConstraint
     |  +--ro requestedCapacity?   tet:te-bandwidth
     |  +--ro pathConstraints
     |  |  +--ro path-constraints
     |  |     +--ro topology-id?       te-types:te-topology-id
     |  |     +--ro cost-limit?        uint32
     |  |     +--ro hop-limit?         uint8
     |  |     +--ro metric-type?       identityref
     |  |     +--ro tiebreaker-type?   identityref
     |  |     +--ro ignore-overload?   boolean
     |  |     +--ro path-affinities {named-path-affinities}?
     |  |     |  +--ro (style)?
     |  |     |     +--:(values)
     |  |     |     |  +--ro value?   uint32
     |  |     |     |  +--ro mask?    uint32
     |  |     |     +--:(named)
     |  |     |        +--ro constraints* [usage]
     |  |     |           +--ro usage        identityref
     |  |     |           +--ro constraint
     |  |     |              +--ro affinity-names* [name]
     |  |     |                 +--ro name    string
     |  |     +--ro path-srlgs
     |  |        +--ro (style)?
     |  |           +--:(values)
     |  |           |  +--ro usage?    identityref
     |  |           |  +--ro values*   te-types:srlg
     |  |           +--:(named)
     |  |              +--ro constraints* [usage]
     |  |                 +--ro usage        identityref
     |  |                 +--ro constraint
     |  |                    +--ro srlg-names* [name]
     |  |                       +--ro name    string
     |  +--rw bidirectional
     |  |  +--rw association
     |  |     +--rw id?              uint16
     |  |     +--rw source?          inet:ip-address
     |  |     +--rw global-source?   inet:ip-address
     |  |     +--rw type?            identityref
     |  |     +--rw provisioing?     identityref
     |  +--ro _avoidTopology
     |     +--ro provider-ref?      -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:provider-id
     |     +--ro client-ref?        -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:client-id
     |     +--ro te-topology-ref?   -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:te-topology-id
     |     +--ro network-ref?       -> /nw:networks/network/network-id
     +--rw _objectiveFunction
     |  +--ro objectiveFunction?   ObjectiveFunction
     +--rw _optimizationConstraint
        +--ro trafficInterruption?   DirectiveValue

rpcs:
  +---x statelessComputeP2PPath
  |  +---w input
  |  |  +---w servicePort*
  |  |  |  +---w source?        inet:ip-address
  |  |  |  +---w destination?   inet:ip-address
  |  |  |  +---w src-tp-id?     binary
  |  |  |  +---w dst-tp-id?     binary
  |  |  |  +---w bidirectional
  |  |  |     +---w association
  |  |  |        +---w id?              uint16
  |  |  |        +---w source?          inet:ip-address
  |  |  |        +---w global-source?   inet:ip-address
  |  |  |        +---w type?            identityref
  |  |  |        +---w provisioing?     identityref
  |  |  +---w routingConstraint
  |  |  |  +---w requestedCapacity?   tet:te-bandwidth
  |  |  |  +---w pathConstraints
  |  |  |  |  +---w path-constraints
  |  |  |  |     +---w topology-id?       te-types:te-topology-id
  |  |  |  |     +---w cost-limit?        uint32
  |  |  |  |     +---w hop-limit?         uint8
  |  |  |  |     +---w metric-type?       identityref
  |  |  |  |     +---w tiebreaker-type?   identityref
  |  |  |  |     +---w ignore-overload?   boolean
  |  |  |  |     +---w path-affinities {named-path-affinities}?
  |  |  |  |     |  +---w (style)?
  |  |  |  |     |     +--:(values)
  |  |  |  |     |     |  +---w value?   uint32
  |  |  |  |     |     |  +---w mask?    uint32
  |  |  |  |     |     +--:(named)
  |  |  |  |     |        +---w constraints* [usage]
  |  |  |  |     |           +---w usage        identityref
  |  |  |  |     |           +---w constraint
  |  |  |  |     |              +---w affinity-names* [name]
  |  |  |  |     |                 +---w name    string
  |  |  |  |     +---w path-srlgs
  |  |  |  |        +---w (style)?
  |  |  |  |           +--:(values)
  |  |  |  |           |  +---w usage?    identityref
  |  |  |  |           |  +---w values*   te-types:srlg
  |  |  |  |           +--:(named)
  |  |  |  |              +---w constraints* [usage]
  |  |  |  |                 +---w usage        identityref
  |  |  |  |                 +---w constraint
  |  |  |  |                    +---w srlg-names* [name]
  |  |  |  |                       +---w name    string
  |  |  |  +---w bidirectional
  |  |  |  |  +---w association
  |  |  |  |     +---w id?              uint16
  |  |  |  |     +---w source?          inet:ip-address
  |  |  |  |     +---w global-source?   inet:ip-address
  |  |  |  |     +---w type?            identityref
  |  |  |  |     +---w provisioing?     identityref
  |  |  |  +---w _avoidTopology
  |  |  |     +---w provider-ref?      -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:provider-id
  |  |  |     +---w client-ref?        -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:client-id
  |  |  |     +---w te-topology-ref?   -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:te-topology-id
  |  |  |     +---w network-ref?       -> /nw:networks/network/network-id
  |  |  +---w objectiveFunction
  |  |     +---w objectiveFunction?   ObjectiveFunction
  |  +--ro output
  |     +--ro pathCompService
  |        +--ro _path-ref*   -> /paths/path/path-id
  |        +--ro _servicePort
  |        |  +--ro source?        inet:ip-address
  |        |  +--ro destination?   inet:ip-address
  |        |  +--ro src-tp-id?     binary
  |        |  +--ro dst-tp-id?     binary
  |        |  +--ro bidirectional
  |        |     +--ro association
  |        |        +--ro id?              uint16
  |        |        +--ro source?          inet:ip-address
  |        |        +--ro global-source?   inet:ip-address
  |        |        +--ro type?            identityref
  |        |        +--ro provisioing?     identityref
  |        +--ro _routingConstraint
  |        |  +--ro requestedCapacity?   tet:te-bandwidth
  |        |  +--ro pathConstraints
  |        |  |  +--ro path-constraints
  |        |  |     +--ro topology-id?       te-types:te-topology-id
  |        |  |     +--ro cost-limit?        uint32
  |        |  |     +--ro hop-limit?         uint8
  |        |  |     +--ro metric-type?       identityref
  |        |  |     +--ro tiebreaker-type?   identityref
  |        |  |     +--ro ignore-overload?   boolean
  |        |  |     +--ro path-affinities {named-path-affinities}?
  |        |  |     |  +--ro (style)?
  |        |  |     |     +--:(values)
  |        |  |     |     |  +--ro value?   uint32
  |        |  |     |     |  +--ro mask?    uint32
  |        |  |     |     +--:(named)
  |        |  |     |        +--ro constraints* [usage]
  |        |  |     |           +--ro usage        identityref
  |        |  |     |           +--ro constraint
  |        |  |     |              +--ro affinity-names* [name]
  |        |  |     |                 +--ro name    string
  |        |  |     +--ro path-srlgs
  |        |  |        +--ro (style)?
  |        |  |           +--:(values)
  |        |  |           |  +--ro usage?    identityref
  |        |  |           |  +--ro values*   te-types:srlg
  |        |  |           +--:(named)
  |        |  |              +--ro constraints* [usage]
  |        |  |                 +--ro usage        identityref
  |        |  |                 +--ro constraint
  |        |  |                    +--ro srlg-names* [name]
  |        |  |                       +--ro name    string
  |        |  +--ro bidirectional
  |        |  |  +--ro association
  |        |  |     +--ro id?              uint16
  |        |  |     +--ro source?          inet:ip-address
  |        |  |     +--ro global-source?   inet:ip-address
  |        |  |     +--ro type?            identityref
  |        |  |     +--ro provisioing?     identityref
  |        |  +--ro _avoidTopology
  |        |     +--ro provider-ref?      -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:provider-id
  |        |     +--ro client-ref?        -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:client-id
  |        |     +--ro te-topology-ref?   -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:te-topology-id
  |        |     +--ro network-ref?       -> /nw:networks/network/network-id
  |        +--ro _objectiveFunction
  |        |  +--ro objectiveFunction?   ObjectiveFunction
  |        +--ro _optimizationConstraint
  |           +--ro trafficInterruption?   DirectiveValue
  +---x optimizeP2PPath
     +---w input
     |  +---w pathIdOrName?   string
     |  +---w routingConstraint
     |  |  +---w requestedCapacity?   tet:te-bandwidth
     |  |  +---w pathConstraints
     |  |  |  +---w path-constraints
     |  |  |     +---w topology-id?       te-types:te-topology-id
     |  |  |     +---w cost-limit?        uint32
     |  |  |     +---w hop-limit?         uint8
     |  |  |     +---w metric-type?       identityref
     |  |  |     +---w tiebreaker-type?   identityref
     |  |  |     +---w ignore-overload?   boolean
     |  |  |     +---w path-affinities {named-path-affinities}?
     |  |  |     |  +---w (style)?
     |  |  |     |     +--:(values)
     |  |  |     |     |  +---w value?   uint32
     |  |  |     |     |  +---w mask?    uint32
     |  |  |     |     +--:(named)
     |  |  |     |        +---w constraints* [usage]
     |  |  |     |           +---w usage        identityref
     |  |  |     |           +---w constraint
     |  |  |     |              +---w affinity-names* [name]
     |  |  |     |                 +---w name    string
     |  |  |     +---w path-srlgs
     |  |  |        +---w (style)?
     |  |  |           +--:(values)
     |  |  |           |  +---w usage?    identityref
     |  |  |           |  +---w values*   te-types:srlg
     |  |  |           +--:(named)
     |  |  |              +---w constraints* [usage]
     |  |  |                 +---w usage        identityref
     |  |  |                 +---w constraint
     |  |  |                    +---w srlg-names* [name]
     |  |  |                       +---w name    string
     |  |  +---w bidirectional
     |  |  |  +---w association
     |  |  |     +---w id?              uint16
     |  |  |     +---w source?          inet:ip-address
     |  |  |     +---w global-source?   inet:ip-address
     |  |  |     +---w type?            identityref
     |  |  |     +---w provisioing?     identityref
     |  |  +---w _avoidTopology
     |  |     +---w provider-ref?      -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:provider-id
     |  |     +---w client-ref?        -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:client-id
     |  |     +---w te-topology-ref?   -> /nw:networks/network[nw:network-id = current()/../network-ref]/tet:te-topology-id
     |  |     +---w network-ref?       -> /nw:networks/network/network-id
     |  +---w optimizationConstraint
     |  |  +---w trafficInterruption?   DirectiveValue
     |  +---w objectiveFunction
     |     +---w objectiveFunction?   ObjectiveFunction
     +--ro output
        +--ro pathCompService
           +--ro _path-ref*   -> /paths/path/path-id
           +--ro _servicePort
           |  +--ro source?        inet:ip-address
           |  +--ro destination?   inet:ip-address
           |  +--ro src-tp-id?     binary
           |  +--ro dst-tp-id?     binary
           |  +--ro bidirectional
           |     +--ro association
           |        +--ro id?              uint16
           |        +--ro source?          inet:ip-address
           |        +--ro global-source?   inet:ip-address
           |        +--ro type?            identityref
           |        +--ro provisioing?     identityref
           +--ro _routingConstraint
           |  +--ro requestedCapacity?   tet:te-bandwidth
           |  +--ro pathConstraints
           |  |  +--ro path-constraints
           |  |     +--ro topology-id?       te-types:te-topology-id
           |  |     +--ro cost-limit?        uint32
           |  |     +--ro hop-limit?         uint8
           |  |     +--ro metric-type?       identityref
           |  |     +--ro tiebreaker-type?   identityref
           |  |     +--ro ignore-overload?   boolean
           |  |     +--ro path-affinities {named-path-affinities}?
           |  |     |  +--ro (style)?
           |  |     |     +--:(values)
           |  |     |     |  +--ro value?
uint32 1429 | | | | +--ro mask? uint32 1430 | | | +--:(named) 1431 | | | +--ro constraints* [usage] 1432 | | | +--ro usage identityref 1433 | | | +--ro constraint 1434 | | | +--ro affinity-names* [name] 1435 | | | +--ro name string 1436 | | +--ro path-srlgs 1437 | | +--ro (style)? 1438 | | +--:(values) 1439 | | | +--ro usage? identityref 1440 | | | +--ro values* te-types:srlg 1441 | | +--:(named) 1442 | | +--ro constraints* [usage] 1443 | | +--ro usage identityref 1444 | | +--ro constraint 1445 | | +--ro srlg-names* [name] 1446 | | +--ro name string 1447 | +--ro bidirectional 1448 | | +--ro association 1449 | | +--ro id? uint16 1450 | | +--ro source? inet:ip-address 1451 | | +--ro global-source? inet:ip-address 1452 | | +--ro type? identityref 1453 | | +--ro provisioing? identityref 1454 | +--ro _avoidTopology 1455 | +--ro provider-ref? -> 1456 /nw:networks/network[nw:network-id = current()/../network- 1457 ref]/tet:provider-id 1458 | +--ro client-ref? -> 1459 /nw:networks/network[nw:network-id = current()/../network- 1460 ref]/tet:client-id 1461 | +--ro te-topology-ref? -> 1462 /nw:networks/network[nw:network-id = current()/../network- 1463 ref]/tet:te-topology-id 1464 | +--ro network-ref? -> 1465 /nw:networks/network/network-id 1466 +--ro _objectiveFunction 1467 | +--ro objectiveFunction? ObjectiveFunction 1468 +--ro _optimizationConstraint 1469 +--ro trafficInterruption? DirectiveValue 1471 Figure 9 - TE path computation tree 1473 6.2.2. YANG Module 1475 file " ietf-te-path-computation.yang " 1476 module ietf-te-path-computation { 1477 yang-version 1.1; 1478 namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation"; 1479 // replace with IANA namespace when assigned 1481 prefix "tepc"; 1483 import ietf-inet-types { 1484 prefix "inet"; 1485 } 1487 import ietf-yang-types { 1488 prefix "yang-types"; 1489 } 1491 import ietf-te-types { 1492 prefix "te-types"; 1493 } 1495 import ietf-te-topology { 1496 prefix "tet"; 1497 } 1499 import ietf-network-topology { 1500 prefix "nt"; 1502 } 1504 organization 1505 "Traffic Engineering Architecture and Signaling (TEAS) 1506 Working Group"; 1508 contact 1509 "WG Web: 1510 WG List: 1512 WG Chair: Lou Berger 1513 1515 WG Chair: Vishnu Pavan Beeram 1516 1518 "; 1520 description "YANG model for stateless TE path computation"; 1522 revision "2016-10-10" { 1523 description "Initial revision"; 1524 reference "YANG model for stateless TE path computation"; 1525 } 1527 /* 1528 * Features 1529 */ 1531 feature stateless-path-computation { 1532 description 1533 "This feature indicates that the system supports 1534 stateless path computation."; 1535 } 1537 /* 1538 * Typedefs 1539 */ 1541 typedef DirectiveValue { 1542 type enumeration { 1543 enum MINIMIZE { 1544 description "Minimize directive."; 1545 } 1546 enum MAXIMIZE { 1547 description "Maximize directive."; 1548 } 1549 enum ALLOW { 1550 description "Allow directive."; 1551 } 1552 enum DISALLOW { 1553 description "Disallow directive."; 1554 } 1555 enum DONT_CARE { 1556 description "Don't care directive."; 1557 } 1558 } 1559 description "Value to determine optimization type."; 1560 } 1562 typedef ObjectiveFunction { 1563 type enumeration { 1564 enum MCP { 1565 description "MCP."; 1566 } 1567 enum MLP { 1568 description "MLP."; 1569 } 1570 enum MBP { 1571 description "MBP."; 1572 } 1573 enum MBC { 1574 description "MBC."; 1575 } 1576 enum MLL { 1577 description "MLL."; 1578 } 1579 enum MCC { 1580 description "MCC."; 1581 } 1582 } 1583 description "RFC 5541 - Encoding of Objective Functions in the 1584 Path 
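   The following example illustrates how a client could invoke the
   statelessComputeP2PPath RPC via RESTCONF, using the JSON encoding
   of YANG data. It is only a sketch: the URL conventions, the use of
   the module name as the JSON namespace (pending IANA assignment)
   and the addresses, taken from the documentation range, are
   illustrative assumptions, and only a minimal subset of the input
   parameters is shown.

   POST /restconf/operations/ietf-te-path-computation:statelessComputeP2PPath

   {
     "ietf-te-path-computation:input": {
       "servicePort": [
         {
           "source": "192.0.2.1",
           "destination": "192.0.2.2"
         }
       ],
       "objectiveFunction": {
         "objectiveFunction": "MCP"
       }
     }
   }

   The server would return the computed path in the RPC output,
   without creating any state in its datastores.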
6.2.2. YANG Module

   <CODE BEGINS> file "ietf-te-path-computation.yang"

   module ietf-te-path-computation {
     yang-version 1.1;
     namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation";
     // replace with IANA namespace when assigned

     prefix "tepc";

     import ietf-inet-types {
       prefix "inet";
     }

     import ietf-yang-types {
       prefix "yang-types";
     }

     import ietf-te-types {
       prefix "te-types";
     }

     import ietf-te-topology {
       prefix "tet";
     }

     import ietf-network-topology {
       prefix "nt";
     }

     organization
       "Traffic Engineering Architecture and Signaling (TEAS)
        Working Group";

     contact
       "WG Web:   <http://tools.ietf.org/wg/teas/>
        WG List:  <mailto:teas@ietf.org>

        WG Chair: Lou Berger
                  <mailto:lberger@labn.net>

        WG Chair: Vishnu Pavan Beeram
                  <mailto:vbeeram@juniper.net>";

     description "YANG model for stateless TE path computation.";

     revision "2016-10-10" {
       description "Initial revision.";
       reference "YANG model for stateless TE path computation";
     }

     /*
      * Features
      */

     feature stateless-path-computation {
       description
         "This feature indicates that the system supports
          stateless path computation.";
     }

     /*
      * Typedefs
      */

     typedef DirectiveValue {
       type enumeration {
         enum MINIMIZE {
           description "Minimize directive.";
         }
         enum MAXIMIZE {
           description "Maximize directive.";
         }
         enum ALLOW {
           description "Allow directive.";
         }
         enum DISALLOW {
           description "Disallow directive.";
         }
         enum DONT_CARE {
           description "Don't care directive.";
         }
       }
       description "Value used to qualify an optimization directive.";
     }

     typedef ObjectiveFunction {
       type enumeration {
         enum MCP {
           description "Minimum Cost Path.";
         }
         enum MLP {
           description "Minimum Load Path.";
         }
         enum MBP {
           description "Maximum residual Bandwidth Path.";
         }
         enum MBC {
           description "Minimize aggregate Bandwidth Consumption.";
         }
         enum MLL {
           description "Minimize the Load of the most loaded Link.";
         }
         enum MCC {
           description "Minimize the Cumulative Cost of a set of
              paths.";
         }
       }
       description "Objective functions defined in RFC 5541 -
          Encoding of Objective Functions in the Path Computation
          Element Communication Protocol (PCEP).";
     }

     /*
      * Groupings
      */

     grouping Path {
       list _telink {
         key 'link-ref';
         config false;
         uses nt:link-ref;
         description "List of TE link references.";
       }
       container _routingConstraint {
         config false;
         uses RoutingConstraint;
         description "Extended routing constraints.";
       }
       leaf path-id {
         type yang-types:uuid;
         config false;
         description "Identifier of the computed path.";
       }
       description "A Path is described by an ordered list of
          TE links.";
     }

     grouping PathCompServicePort {
       leaf source {
         type inet:ip-address;
         description "TE tunnel source address.";
       }
       leaf destination {
         type inet:ip-address;
         description "TE tunnel destination address.";
       }
       leaf src-tp-id {
         type binary;
         description "TE tunnel source termination point
            identifier.";
       }
       leaf dst-tp-id {
         type binary;
         description "TE tunnel destination termination point
            identifier.";
       }
       uses te-types:bidir-assoc-properties;
       description "Path Computation Service Port grouping.";
     }

     grouping PathComputationService {
       leaf-list _path-ref {
         type leafref {
           path '/paths/path/path-id';
         }
         config false;
         description "List of previously computed path references.";
       }
       container _servicePort {
         uses PathCompServicePort;
         description "Path Computation Service Port.";
       }
       container _routingConstraint {
         uses RoutingConstraint;
         description "Routing constraints.";
       }
       container _objectiveFunction {
         uses PathObjectiveFunction;
         description "Path Objective Function.";
       }
       container _optimizationConstraint {
         uses PathOptimizationConstraint;
         description "Path Optimization Constraint.";
       }
       description "Path computation service.";
     }

     grouping PathObjectiveFunction {
       leaf objectiveFunction {
         type ObjectiveFunction;
         config false;
         description "Objective Function.";
       }
       description "Path Objective Function.";
     }

     grouping PathOptimizationConstraint {
       leaf trafficInterruption {
         type DirectiveValue;
         config false;
         description "Traffic Interruption.";
       }
       description "Path Optimization Constraint.";
     }

     grouping RoutingConstraint {
       leaf requestedCapacity {
         type tet:te-bandwidth;
         config false;
         description "Capacity required for the connectivity
            service.";
       }
       container pathConstraints {
         config false;
         uses te-types:path-constraints;
         description "Service connectivity path selection
            properties.";
       }
       uses te-types:bidir-assoc-properties;
       // include topology is already covered by path-constraints
       /*leaf _includeTopology {
         uses te-types:te-topology-ref;
         config false;
       }*/
       container _avoidTopology {
         uses tet:te-topology-ref;
         config false;
         description "Topology to be avoided.";
       }
       // include/exclude path is already covered by path-constraints
       /*list _includePath {
         key 'link-ref';
         config false;
         uses nt:link-ref;
       }*/
       /*list _excludePath {
         key 'link-ref';
         config false;
         uses nt:link-ref;
       }*/
       description "Extended routing constraints. Created to align
          with path-constraints.";
     }
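     /*
      * Composition note: the PathComputationService grouping is
      * reused both by the pathComputationService container below
      * and by the outputs of the statelessComputeP2PPath and
      * optimizeP2PPath RPCs, while RoutingConstraint,
      * PathObjectiveFunction and PathOptimizationConstraint are
      * also reused in the RPC inputs, so that requests and
      * reported results share the same structure.
      */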
     /*
      * Root container
      */
     container paths {
       list path {
         key "path-id";
         uses Path;
         config false;
         description "List of previously computed paths.";
       }
       description "Root container for path computation.";
     }

     container pathComputationService {
       uses PathComputationService;
       description "Service for computing paths.";
     }

     /***********************
      * package Interfaces
      **********************/

     rpc statelessComputeP2PPath {
       description
         "Stateless computation of a point-to-point path: the
          computed path is returned in the RPC output without
          creating any state on the server.";
       input {
         list servicePort {
           min-elements 1;
           uses PathCompServicePort;
           description "List of service ports.";
         }
         container routingConstraint {
           uses RoutingConstraint;
           description "Routing constraint.";
         }
         container objectiveFunction {
           uses PathObjectiveFunction;
           description "Objective function.";
         }
       }
       output {
         container pathCompService {
           uses PathComputationService;
           description "Path computation service.";
         }
       }
     }

     /*rpc computeP2PPath {
       input {
         list servicePort {
           min-elements 2;
           max-elements 2;
           uses PathCompServicePort;
         }
         container routingConstraint {
           uses RoutingConstraint;
         }
         container objectiveFunction {
           uses PathObjectiveFunction;
         }
       }
       output {
         container pathCompService {
           uses PathComputationService;
         }
       }
     }*/

     rpc optimizeP2PPath {
       description
         "Re-optimization of an existing point-to-point path,
          identified by its path id or name.";
       input {
         leaf pathIdOrName {
           type string;
           description "Path id or path name.";
         }
         container routingConstraint {
           uses RoutingConstraint;
           description "Routing constraint.";
         }
         container optimizationConstraint {
           uses PathOptimizationConstraint;
           description "Optimization constraint.";
         }
         container objectiveFunction {
           uses PathObjectiveFunction;
           description "Objective function.";
         }
       }
       output {
         container pathCompService {
           uses PathComputationService;
           description "Path computation service.";
         }
       }
     }

     /*rpc deleteP2PPath {
       input {
         leaf pathIdOrName {
           type string;
         }
       }
       output {
         container pathCompService {
           uses PathComputationService;
         }
       }
     }*/

   }

   <CODE ENDS>

              Figure 10 - TE path computation YANG module
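   As an illustration of how re-optimization could be requested, the
   following sketch invokes the optimizeP2PPath RPC via RESTCONF with
   the JSON encoding, asking to re-optimize a previously computed
   path without interrupting traffic. The path identifier and the
   URL conventions are illustrative assumptions.

   POST /restconf/operations/ietf-te-path-computation:optimizeP2PPath

   {
     "ietf-te-path-computation:input": {
       "pathIdOrName": "path-1",
       "optimizationConstraint": {
         "trafficInterruption": "DISALLOW"
       },
       "objectiveFunction": {
         "objectiveFunction": "MCP"
       }
     }
   }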
7. Security Considerations

   This document describes use cases for requesting path computation
   using YANG models, which could be applied at the ABNO Control
   Interface [RFC7491] and/or between controllers in ACTN
   [ACTN-Frame]. As such, it does not introduce any new security
   considerations beyond those of the YANG specifications [RFC6020]
   [RFC7950], the ABNO architecture [RFC7491] and the ACTN framework
   [ACTN-Frame].

   This document also defines common data types using the YANG data
   modeling language. The definitions themselves have no security
   impact on the Internet, but their usage in concrete YANG modules
   might have. The security considerations spelled out in the YANG
   specification [RFC6020] apply to this document as well.

8. IANA Considerations

   This section is for further study: it will be completed when the
   YANG model is more stable.

9. References

9.1. Normative References

   [RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the
             Network Configuration Protocol (NETCONF)", RFC 6020,
             October 2010.

   [RFC7139] Zhang, F. et al., "GMPLS Signaling Extensions for
             Control of Evolving G.709 Optical Transport Networks",
             RFC 7139, March 2014.

   [RFC7491] King, D. and Farrel, A., "A PCE-Based Architecture for
             Application-Based Network Operations", RFC 7491, March
             2015.

   [RFC7926] Farrel, A. et al., "Problem Statement and Architecture
             for Information Exchange Between Interconnected Traffic-
             Engineered Networks", RFC 7926, July 2016.

   [RFC7950] Bjorklund, M., "The YANG 1.1 Data Modeling Language",
             RFC 7950, August 2016.

   [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
             draft-ietf-teas-yang-te-topo, work in progress.

   [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces",
             draft-ietf-teas-yang-te, work in progress.

   [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for
             Abstraction and Control of Traffic Engineered
             Networks", draft-ietf-teas-actn-framework, work in
             progress.

   [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16),
             "Interfaces for the optical transport network", June
             2016.

9.2. Informative References

   [RFC5440] Vasseur, JP., Le Roux, JL., "Path Computation Element
             (PCE) Communication Protocol (PCEP)", RFC 5440, March
             2009.

   [RFC5541] Le Roux, JL. et al., "Encoding of Objective Functions
             in the Path Computation Element Communication Protocol
             (PCEP)", RFC 5541, June 2009.

   [RFC7446] Lee, Y. et al., "Routing and Wavelength Assignment
             Information Model for Wavelength Switched Optical
             Networks", RFC 7446, February 2015.

   [L1-TOPO] Zhang, X. et al., "A YANG Data Model for Layer 1 (ODU)
             Network Topology", draft-zhang-ccamp-l1-topo-yang, work
             in progress.

   [ACTN-Info] Lee, Y., Belotti, S., Dhody, D., Ceccarelli, D.,
             "Information Model for Abstraction and Control of
             Transport Networks", draft-leebelotti-actn-info, work
             in progress.

   [PCEP-Service-Aware] Dhody, D. et al., "Extensions to the Path
             Computation Element Communication Protocol (PCEP) to
             compute service-aware Label Switched Paths (LSPs)",
             draft-ietf-pce-pcep-service-aware, work in progress.

10. Acknowledgments

   The authors would like to thank Igor Bryskin and Xian Zhang for
   participating in discussions and providing valuable insights.

   The authors would like to thank the authors of the TE Tunnel YANG
   model [TE-TUNNEL], in particular Igor Bryskin, Tarek Saad and
   Xufeng Liu, for their input to the discussions and their support
   in keeping the Path Computation and TE Tunnel YANG models
   consistent.

   This document was prepared using 2-Word-v2.0.template.dot.
Contributors

   Dieter Beller
   Nokia
   Email: dieter.beller@nokia.com

   Gianmarco Bruno
   Ericsson
   Email: gianmarco.bruno@ericsson.com

   Francesco Lazzeri
   Ericsson
   Email: francesco.lazzeri@ericsson.com

   Young Lee
   Huawei
   Email: leeyoung@huawei.com

   Carlo Perocchio
   Ericsson
   Email: carlo.perocchio@ericsson.com

Authors' Addresses

   Italo Busi (Editor)
   Huawei
   Email: italo.busi@huawei.com

   Sergio Belotti (Editor)
   Nokia
   Email: sergio.belotti@nokia.com

   Victor Lopez
   Telefonica
   Email: victor.lopezalvarez@telefonica.com

   Oscar Gonzalez de Dios
   Telefonica
   Email: oscar.gonzalezdedios@telefonica.com

   Anurag Sharma
   Infinera
   Email: AnSharma@infinera.com

   Yan Shi
   China Unicom
   Email: shiyan49@chinaunicom.cn

   Ricard Vilalta
   CTTC
   Email: ricard.vilalta@cttc.es

   Karthik Sethuraman
   NEC
   Email: karthik.sethuraman@necam.com

   Michael Scharf
   Nokia
   Email: michael.scharf@nokia.com

   Daniele Ceccarelli
   Ericsson
   Email: daniele.ceccarelli@ericsson.com