1 TEAS Working Group Italo Busi (Ed.) 2 Internet Draft Huawei 3 Intended status: Standards Track Sergio Belotti (Ed.) 4 Expires: September 2018 Nokia 5 Victor Lopez 6 Oscar Gonzalez de Dios 7 Telefonica 8 Anurag Sharma 9 Google 10 Yan Shi 11 China Unicom 12 Ricard Vilalta 13 CTTC 14 Karthik Sethuraman 15 NEC 17 March 5, 2018 19 Yang model for requesting Path Computation 20 draft-ietf-teas-yang-path-computation-01.txt 22 Status of this Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF), its areas, and its working groups. Note that 29 other groups may also distribute working documents as Internet-Drafts.
32 Internet-Drafts are draft documents valid for a maximum of six 33 months and may be updated, replaced, or obsoleted by other documents 34 at any time. It is inappropriate to use Internet-Drafts as 35 reference material or to cite them other than as "work in progress." 37 The list of current Internet-Drafts can be accessed at 38 http://www.ietf.org/ietf/1id-abstracts.txt 39 The list of Internet-Draft Shadow Directories can be accessed at 40 http://www.ietf.org/shadow.html 42 This Internet-Draft will expire on September 5, 2018. 44 Copyright Notice 46 Copyright (c) 2018 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with 54 respect to this document. Code Components extracted from this 55 document must include Simplified BSD License text as described in 56 Section 4.e of the Trust Legal Provisions and are provided without 57 warranty as described in the Simplified BSD License. 59 Abstract 61 There are scenarios, typically in a hierarchical SDN context, in 62 which an orchestrator may not have detailed information to be able 63 to perform an end-to-end path computation and would need to request 64 lower layer/domain controllers to calculate some (partial) feasible 65 paths. 67 Multiple protocol solutions can be used for communication between 68 different controller hierarchical levels. This document assumes that 69 the controllers are communicating using YANG-based protocols (e.g., 70 NETCONF or RESTCONF).
72 Based on this assumption, this document proposes a YANG model for a 73 path computation request that a higher-level controller can exploit to 74 retrieve the needed information, complementing its topology 75 knowledge, to make its end-to-end (E2E) path computation feasible. 77 The draft proposes a stateless RPC which complements the stateful 78 solution defined in the TE Tunnel YANG data model. 80 Moreover, this document describes some use cases where a path 81 computation request, via YANG-based protocols (e.g., NETCONF or 82 RESTCONF), can be needed. 84 Table of Contents 86 1. Introduction...................................................3 87 1.1. Terminology...............................................5 88 2. Use Cases......................................................5 89 2.1. Packet/Optical Integration................................5 90 2.2. Multi-domain TE Networks..................................8 91 2.3. Data center interconnections.............................10 92 3. Motivations...................................................12 93 3.1. Motivation for a YANG Model..............................12 94 3.1.1. Benefits of common data models......................12 95 3.1.2. Benefits of a single interface......................12 96 3.1.3. Extensibility.......................................13 97 3.2. Interactions with TE Topology............................14 98 3.2.1. TE Topology Aggregation.............................14 99 3.2.2. TE Topology Abstraction.............................18 100 3.2.3. Complementary use of TE topology and path computation19 101 3.3. Stateless and Stateful Path Computation..................21 102 4. Path Computation and Optimization for multiple paths..........22 103 5. YANG Model for requesting Path Computation....................23 104 5.1. Synchronization of multiple path computation requests....24 105 5.2. Returned metric values...................................25 106 6. YANG model for stateless TE path computation..................27 107 6.1.
YANG Tree................................................27 108 6.2. YANG Module..............................................35 109 7. Security Considerations.......................................44 110 8. IANA Considerations...........................................45 111 9. References....................................................45 112 9.1. Normative References.....................................45 113 9.2. Informative References...................................46 114 10. Acknowledgments..............................................46 115 Appendix A. Examples of dimensioning the "detailed connectivity 116 matrix"..........................................................47 118 1. Introduction 120 There are scenarios, typically in a hierarchical SDN context, in 121 which an orchestrator may not have detailed information to be able 122 to perform an end-to-end path computation and would need to request 123 lower layer/domain controllers to calculate some (partial) feasible 124 paths. 126 When considering this type of scenario, we have in mind a specific level of interfaces at which this request can be applied. 129 We can reference the ABNO Control Interface [RFC7491], in which an Application Service Coordinator can request the ABNO controller to take charge of path calculation (see Figure 1 in that RFC), and/or ACTN [ACTN-frame], where a controller hierarchy is defined and the need for path computation arises on both the CMI (the interface between the Customer Network Controller (CNC) and the Multi Domain Service Coordinator (MDSC)) and the MPI (the interface between the MDSC and the PNC). [ACTN-Info] describes an information model for the Path Computation request. 139 Multiple protocol solutions can be used for communication between 140 different controller hierarchical levels. This document assumes that 141 the controllers are communicating using YANG-based protocols (e.g., 142 NETCONF or RESTCONF).
144 Path Computation Elements, Controllers and Orchestrators perform 145 their operations based on Traffic Engineering Databases (TED). Such 146 TEDs can be described, in a technology agnostic way, with the YANG 147 Data Model for TE Topologies [TE-TOPO]. Furthermore, the technology 148 specific details of the TED are modeled in the augmented TE topology 149 models (e.g. [OTN-TOPO] for OTN ODU technologies). 151 The availability of such topology models allows providing the TED 152 using YANG-based protocols (e.g., NETCONF or RESTCONF). Furthermore, it enables a PCE/Controller to perform the necessary abstractions or modifications and to offer this customized topology to another PCE/Controller or higher-level orchestrator. 157 Note: This document does not assume that an orchestrator/coordinator 158 always implements a "PCE" functionality, as defined in [RFC4655]. 160 The tunnels that can be provided over the networks described with the topology models can also be set up, deleted and modified via YANG-based protocols (e.g., NETCONF or RESTCONF) using the TE-Tunnel YANG model [TE-TUNNEL]. 165 This document proposes a YANG model for a path computation request 166 defined as a stateless RPC, which complements the stateful solution 167 defined in [TE-TUNNEL]. 169 Moreover, this document describes some use cases where a path 170 computation request, via YANG-based protocols (e.g., NETCONF or 171 RESTCONF), can be needed. 173 1.1. Terminology 175 TED: The traffic engineering database is a collection of all TE 176 information about all TE nodes and TE links in a given network. 178 PCE: A Path Computation Element (PCE) is an entity that is capable 179 of computing a network path or route based on a network graph, and 180 of applying computational constraints during the computation. The 181 PCE entity is an application that can be located within a network 182 node or component, on an out-of-network server, etc.
For example, a 183 PCE would be able to compute the path of a TE LSP by operating on 184 the TED and considering bandwidth and other constraints applicable 185 to the TE LSP service request. [RFC4655] 187 2. Use Cases 189 This section presents different use cases, where an orchestrator needs to request path computation from the underlying SDN controllers. 192 The presented use cases have been grouped, depending on the 193 different underlying topologies: a) IP-Optical integration; b) 194 Multi-domain Traffic Engineered (TE) Networks; and c) Data center 195 interconnections. 197 2.1. Packet/Optical Integration 199 In this use case, an Optical network is used to provide connectivity 200 to some nodes of a Packet network (see Figure 1). 202 A possible example could be the case where an Optical network provides connectivity to some IP routers of an IP network. 205 -------------------------------------------------------------------- 206 I I 207 I I 208 I I 209 I Packet/Optical Integration Use Case I 210 I I 211 I I 212 I I 213 I I 214 I (only in PDF version) I 215 I I 216 I I 217 I I 218 I I 219 I I 220 I I 221 I I 222 -------------------------------------------------------------------- 224 Figure 1 - Packet/Optical Integration Use Case 226 Figure 1 as well as Figure 2 below only show a partial view of the 227 packet network connectivity, before additional packet connectivity 228 is provided by the Optical network. 230 It is assumed that the Optical network controller provides to the 231 packet/optical coordinator an abstracted view of the Optical 232 network. A possible abstraction could be to represent the optical network as one "virtual node" with "virtual ports" connected to the access links. 236 It is also assumed that the Packet network controller can provide the 237 packet/optical coordinator the information it needs to set up 238 connectivity between packet nodes through the Optical network (e.g., 239 the access links).
241 The path computation request helps the coordinator to know the real 242 connections that can be provided by the optical network. 244 -------------------------------------------------------------------- 245 I I 246 I I 247 I I 248 I I 249 I I 250 I I 251 I I 252 I I 253 I Packet and Optical Topology Abstractions I 254 I I 255 I I 256 I I 257 I I 258 I (only in PDF version) I 259 I I 260 I I 261 I I 262 I I 263 I I 264 I I 265 I I 266 I I 267 -------------------------------------------------------------------- 269 Figure 2 - Packet and Optical Topology Abstractions 271 In this use case, the coordinator needs to set up an optimal 272 underlying path for an IP link between R1 and R2. 274 As depicted in Figure 2, the coordinator has only an "abstracted 275 view" of the physical network, and it does not know the feasibility 276 or the cost of the possible optical paths (e.g., VP1-VP4 and VP2-VP5), which depend on the current status of the physical resources within the optical network and on vendor-specific optical attributes. 281 The coordinator can request the underlying Optical domain controller 282 to compute a set of potential optimal paths, taking into account 283 optical constraints. Then, based on its own constraints, policy and 284 knowledge (e.g. cost of the access links), it can choose which one 285 of these potential paths to use to set up the optimal end-to-end path crossing the optical network.
288 -------------------------------------------------------------------- 289 I I 290 I Packet/Optical Path Computation Example I 291 I I 292 I I 293 I I 294 I I 295 I (only in PDF version) I 296 I I 297 -------------------------------------------------------------------- 299 Figure 3 - Packet/Optical Path Computation Example 301 For example, in Figure 3, the Coordinator can request the Optical 302 network controller to compute the paths between VP1-VP4 and VP2-VP5 303 and then decide to set up the optimal end-to-end path using the VP2-VP5 Optical path, even if this is not the optimal path from the Optical domain perspective. 307 Considering the dynamicity of the connectivity constraints of an 308 Optical domain, it is possible that a path computed by the Optical 309 network controller when requested by the Coordinator is no longer 310 valid/available when the Coordinator requests it to be set up. 312 It is worth noting that with the approach proposed in this document, 313 the likelihood for this issue to happen can be quite small since the 314 time window between the path computation request and the path setup 315 request should be quite short (especially if compared with the time 316 that would be needed to update the information of a very detailed 317 abstract connectivity matrix). 319 If this risk is still not acceptable, the Orchestrator may also 320 optionally request the Optical domain controller not only to compute 321 the path but also to keep track of its resources (e.g., these 322 resources can be reserved to avoid being used by any other 323 connection). In this case, some mechanism (e.g., a timeout) needs to 324 be defined to avoid having stranded resources within the Optical 325 domain. 327 2.2. Multi-domain TE Networks 329 In this use case there are two TE domains which are interconnected by multiple inter-domain links. 332 A possible example could be a multi-domain optical network.
334 -------------------------------------------------------------------- 335 I I 336 I I 337 I I 338 I I 339 I I 340 I I 341 I Multi-domain multi-link interconnection I 342 I I 343 I I 344 I I 345 I I 346 I (only in PDF version) I 347 I I 348 I I 349 I I 350 I I 351 I I 352 I I 353 -------------------------------------------------------------------- 355 Figure 4 - Multi-domain multi-link interconnection 357 In order to set up an end-to-end multi-domain TE path (e.g., between 358 nodes A and H), the orchestrator needs to know the feasibility or 359 the cost of the possible TE paths within the two TE domains, which depend on the current status of the physical resources within each TE network. This is more challenging in the case of optical networks 362 because the optimal paths depend also on vendor-specific optical 363 attributes (which may be different in the two domains if they are 364 provided by different vendors). 366 In order to set up a multi-domain TE path (e.g., between nodes A and 367 H), the Orchestrator can request the TE domain controllers to compute a 368 set of intra-domain optimal paths and take decisions based on the 369 information received.
For example: 371 o The Orchestrator asks TE domain controllers to provide a set of 372 paths between A-C, A-D, E-H and F-H 374 o TE domain controllers return a set of feasible paths with the 375 associated costs: the path A-C is not part of this set (in optical 376 networks, it is typical to have some paths not being feasible due 377 to optical constraints that are known only by the optical domain 378 controller) 380 o The Orchestrator will select the path A-D-F-H since it is the 381 only feasible multi-domain path and then request the TE domain 382 controllers to set up the A-D and F-H intra-domain paths 384 o If there are multiple feasible paths, the Orchestrator can select 385 the optimal path knowing the cost of the intra-domain paths 386 (provided by the TE domain controllers) and the cost of the 387 inter-domain links (known by the Orchestrator) 389 This approach may have some scalability issues when the number of TE 390 domains is quite big (e.g., 20). 392 In this case, it would be worthwhile using the abstract TE topology 393 information provided by the domain controllers to limit the number of 394 potential optimal end-to-end paths and then request path computation 395 from fewer domain controllers in order to decide what the optimal path 396 within this limited set is. 398 For more details, see section 3.2.3. 400 2.3. Data center interconnections 402 In this use case, there is a TE domain that is used to provide 403 connectivity between data centers, which are connected to the TE 404 domain using access links.
406 -------------------------------------------------------------------- 407 I I 408 I I 409 I I 410 I I 411 I I 412 I I 413 I Data Center Interconnection Use Case I 414 I I 415 I I 416 I I 417 I I 418 I (only in PDF version) I 419 I I 420 I I 421 I I 422 I I 423 I I 424 I I 425 -------------------------------------------------------------------- 427 Figure 5 - Data Center Interconnection Use Case 429 In this use case, there is a need to transfer data from Data Center 1 430 (DC1) to either DC2 or DC3 (e.g. workload migration). 432 The optimal decision depends both on the cost of the TE path (DC1-DC2 or DC1-DC3) and on the data center resources available within DC2 or DC3. 435 The Cloud Orchestrator needs to make a decision about the optimal 436 connection based on TE Network constraints and data center 437 resources. It may not be able to make this decision because it has 438 only an abstract view of the TE network (as in the use case in section 2.1). 440 The cloud orchestrator can request the TE domain controller to 441 compute the cost of the possible TE paths (e.g., DC1-DC2 and DC1-DC3) and the DC controller to provide the information it needs 443 about the required data center resources within DC2 and DC3; it can then take the decision about the optimal solution based on 445 this information and its policy. 447 3. Motivations 449 This section provides the motivation for the YANG model defined in 450 this document. 452 Section 3.1 describes the motivation for a YANG model to request 453 path computation. 455 Section 3.2 describes the motivation for a YANG model which 456 complements the TE Topology YANG model defined in [TE-TOPO]. 458 Section 3.3 describes the motivation for a stateless YANG RPC which 459 complements the TE Tunnel YANG model defined in [TE-TUNNEL]. 461 3.1. Motivation for a YANG Model 463 3.1.1.
Benefits of common data models 465 Path computation requests are closely aligned with the YANG data 466 models that provide (abstract) TE topology information, i.e., [TE-TOPO], as well as those that are used to configure and manage TE Tunnels, i.e., [TE-TUNNEL]. Therefore, there is no need for an error-prone 469 mapping or correlation of information. For instance, there is 470 benefit in using the same endpoint identifiers in path computation 471 requests and in the topology modeling. Also, the attributes used in 472 path computation constraints use the same data models. As a result, 473 there are many benefits in aligning path computation requests with 474 YANG models for TE topology information and TE Tunnels configuration 475 and management. 477 3.1.2. Benefits of a single interface 479 A typical use case for path computation requests is the interface 480 between an orchestrator and a domain controller. The system 481 integration effort is typically lower if a single, consistent 482 interface is used between such systems, i.e., one data modeling 483 language (i.e., YANG) and a common protocol (e.g., NETCONF or 484 RESTCONF). 486 Practical benefits of using a single, consistent interface include: 488 1. Simple authentication and authorization: The interface between 489 different components has to be secured. If different protocols 490 have different security mechanisms, ensuring a common access 491 control model may result in overhead. For instance, there may 492 be a need to deal with different security mechanisms, e.g., 493 different credentials or keys. This can result in increased 494 integration effort. 495 2. Consistency: Keeping data consistent over multiple different 496 interfaces or protocols is not trivial. For instance, the 497 sequence of actions can matter in certain use cases, or 498 transaction semantics could be desired.
While ensuring 499 consistency within one protocol can already be challenging, it 500 is typically cumbersome to achieve that across different 501 protocols. 502 3. Testing: System integration requires comprehensive testing, 503 including corner cases. The more different technologies are 504 involved, the more difficult it is to run comprehensive test 505 cases and ensure proper integration. 506 4. Middle-box friendliness: Provider and consumer of path 507 computation requests may be located in different networks, and 508 middle-boxes such as firewalls, NATs, or load balancers may be 509 deployed. In such environments it is simpler to deploy a single 510 protocol. Also, it may be easier to debug connectivity 511 problems. 512 5. Tooling reuse: Implementers may want to implement path 513 computation requests with tools and libraries that already 514 exist in controllers and/or orchestrators, e.g., leveraging the 515 rapidly growing eco-system for YANG tooling. 517 3.1.3. Extensibility 519 Path computation is only a subset of the typical functionality of a 520 controller. In many use cases, issuing path computation requests 521 comes along with the need to access other functionality on the same 522 system. In addition to obtaining the TE topology, for instance, the configuration of services (setup/modification/deletion) may also be required, as well as: 526 1. Receiving notifications for topology changes as well as 527 integration with fault management 528 2. Performance management such as retrieving monitoring and 529 telemetry data 530 3. Service assurance, e.g., by triggering OAM functionality 531 4. Other fulfilment and provisioning actions beyond tunnels and 532 services, such as changing QoS configurations 534 YANG is a very extensible and flexible data modeling language that 535 can be used for all these use cases.
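As an illustration of the "single interface" argument, the sketch below builds, in Python, the JSON body of a path computation RPC request as it could be sent over RESTCONF. The operation URL, RPC name and leaf names used here are purely illustrative assumptions; the normative names are those defined by the YANG module in section 6.

```python
import json

# Hypothetical RESTCONF operation URL; the actual module and RPC names
# are those defined by the YANG module in section 6 of this document.
RPC_URL = "/restconf/operations/te-path-computation:path-computation"

def build_request(source, destination, bandwidth_gbps):
    """Build the JSON body of a path computation RPC request.

    The leaf names below are illustrative assumptions, not the
    normative names from the YANG module.
    """
    return {
        "input": {
            "path-request": [{
                "source": source,
                "destination": destination,
                "bandwidth": bandwidth_gbps,
            }]
        }
    }

# e.g., ask for a 5 Gb/s path between the "virtual ports" of Figure 3
body = json.dumps(build_request("VP1", "VP4", 5))
# The same session, credentials and YANG tooling already used for
# topology retrieval and tunnel setup can then POST this body to RPC_URL.
```

The same payload could equally be carried in a NETCONF <rpc> element; the point is that no second protocol stack, security model or toolchain is needed.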
537 The YANG model for path computation requests seamlessly complements 538 [TE-TOPO] and [TE-TUNNEL] in the use cases where YANG-based 539 protocols (e.g., NETCONF or RESTCONF) are used. 541 3.2. Interactions with TE Topology 543 The use cases described in section 2 have been described assuming 544 that the topology view exported by each underlying SDN controller to 545 the orchestrator is aggregated using the "virtual node model", 546 defined in [RFC7926]. 548 TE Topology information, e.g., as provided by [TE-TOPO], could in 549 theory be used by the underlying SDN controllers to provide TE 550 information to the orchestrator, thus allowing a PCE available within 551 the Orchestrator to perform multi-domain path computation on its 552 own, without requesting path computations from the underlying SDN 553 controllers. 555 In case the Orchestrator does not implement a PCE function, as 556 discussed in section 1, it could not perform path computation based 557 on TE Topology information and would instead need to request path 558 computation from the underlying controllers to get the information it 559 needs to compute the optimal end-to-end path. 561 This section analyzes the need for an orchestrator to request path computation from the underlying SDN controllers even in the case where the Orchestrator implements a PCE functionality, as well as how the TE 564 Topology information and the path computation can be complementary. 566 In a nutshell, there is a scalability trade-off between providing all 567 the TE information needed by the PCE, when implemented by the 568 Orchestrator, to take optimal path computation decisions on its own, versus having the Orchestrator ask too many underlying SDN Domain Controllers for a set of feasible optimal intra-domain TE paths. 572 3.2.1.
TE Topology Aggregation 574 Using the TE Topology model, as defined in [TE-TOPO], the underlying 575 SDN controller can export the whole TE domain as a single abstract 576 TE node with a "detailed connectivity matrix", which extends the 577 "connectivity matrix", defined in [RFC7446], with specific TE 578 attributes (e.g., delay, SRLGs and summary TE metrics). 580 The information provided by the "detailed abstract connectivity 581 matrix" would be equivalent to the information that should be 582 provided by the "virtual link model" as defined in [RFC7926]. 584 For example, in the Packet/Optical integration use case, described 585 in section 2.1, the Optical network controller can make the 586 information shown in Figure 3 available to the Coordinator as part 587 of the TE Topology information and the Coordinator could use this 588 information to calculate on its own the optimal path between R1 and 589 R2, without requesting any additional information from the Optical 590 network Controller. 592 However, there is a trade-off between accuracy (i.e., providing "all" 593 the information that might be needed by the PCE available to the Orchestrator) and scalability, to be considered when designing the 595 amount of information to provide within the "detailed abstract 596 connectivity matrix". 598 Figure 6 below shows another example, similar to Figure 3, where 599 there are two possible Optical paths between VP1 and VP4 with 600 different properties (e.g., available bandwidth and cost).
602 -------------------------------------------------------------------- 603 I I 604 I IP+Optical Path Computation Example I 605 I with multiple choices I 606 I I 607 I I 608 I I 609 I (only in PDF version) I 610 I I 611 -------------------------------------------------------------------- 613 Figure 6 - Packet/Optical Path Computation Example with multiple 614 choices 616 Reporting all the information, as in Figure 6, using the "detailed 617 abstract connectivity matrix", is quite challenging from a 618 scalability perspective. The amount of this information is not just 619 based on the number of end points (which would scale as N-square), but 620 also on many other parameters, including client rate, user 621 constraints / policies for the service, e.g. max latency < N ms, max 622 cost, etc., exclusion policies to route around busy links, min OSNR 623 margin, max preFEC BER etc. All these constraints could be different 624 based on connectivity requirements. 626 Examples of how the "detailed connectivity matrix" can be 627 dimensioned are described in Appendix A. 629 It is also worth noting that the "connectivity matrix" was 630 originally defined in WSON [RFC7446] to report the connectivity 631 constraints of a physical node within the WDM network: the 632 information it contains is pretty "static" and therefore, once taken 633 and stored in the TE database, it can always be considered 634 valid and up-to-date in path computation requests. 636 Using the "connectivity matrix" with an abstract node to abstract 637 the information regarding the connectivity constraints of an Optical 638 domain would make this information more "dynamic", since the 639 connectivity constraints of an Optical domain can change over time 640 because some optical paths that are feasible at a given time may 641 become unfeasible at a later time when e.g., another optical path is 642 established.
The information in the "detailed abstract connectivity 643 matrix" is even more dynamic since the establishment of another 644 optical path may change some of the parameters (e.g., delay or 645 available bandwidth) in the "detailed abstract connectivity matrix" 646 while not changing the feasibility of the path. 648 The "connectivity matrix" is sometimes confused with the optical reach table, which contains multiple (e.g. k-shortest) regen-free reachable paths 650 for every A-Z node combination in the network. Optical reach tables 651 can be calculated offline, utilizing vendor optical design and 652 planning tools, and periodically uploaded to the Controller: these 653 optical path reach tables are fairly static. However, to get the 654 connectivity matrix between any two sites, either a regen-free path 655 can be used, if one is available, or multiple regen-free paths are 656 concatenated to get from source to destination, which can result in a very large number of combinations. Additionally, when the optical path within an optical 658 domain needs to be computed, it can result in different paths based 659 on input objective, constraints, and network conditions. In summary, 660 even though the "optical reachability table" is fairly static, the choice of which regen-free paths to use to build the connectivity matrix between any source 662 and destination is very dynamic, and is made using very 663 sophisticated routing algorithms. 665 There is therefore the need to keep the information in the 666 "connectivity matrix" updated, which means that there is another 667 trade-off between the accuracy (i.e., providing "all" the information 668 that might be needed by the Orchestrator's PCE) and having up-to-date information. The more information is provided, the longer it takes to keep it up-to-date, which increases the likelihood 671 that the Orchestrator's PCE computes paths using outdated 672 information.
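To make the scalability concern concrete, the back-of-the-envelope sketch below estimates how many entries a "detailed connectivity matrix" may need: one entry per ordered end-point pair, times one entry per combination of constraint values. All the parameter counts below are illustrative assumptions; Appendix A discusses realistic dimensioning.

```python
def matrix_entries(n_endpoints, options_per_constraint):
    """Rough count of "detailed connectivity matrix" entries.

    End-point pairs scale as N*(N-1) (i.e., N-square), and each pair
    may need one entry per combination of client rate, latency bound,
    exclusion policy, OSNR margin, etc.
    """
    pairs = n_endpoints * (n_endpoints - 1)
    combos = 1
    for options in options_per_constraint:
        combos *= options
    return pairs * combos

# Assumed example: 20 access points; 4 client rates, 3 latency bounds,
# 2 exclusion policies and 3 OSNR margins per pair.
entries = matrix_entries(20, [4, 3, 2, 3])  # 380 pairs x 72 combinations
```

Even with these modest assumed values the matrix already needs tens of thousands of entries, each of which may need to be refreshed when any optical path is set up or torn down.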
674 It seems therefore quite challenging to have a "detailed abstract 675 connectivity matrix" that provides accurate, scalable and updated 676 information to allow the Orchestrator's PCE to take optimal 677 decisions on its own. 679 If the information in the "detailed abstract connectivity matrix" is 680 not complete/accurate, we can have the following drawbacks, 681 considering for example the case in Figure 6: 683 o If only the VP1-VP4 path with available bandwidth of 2 Gb/s and 684 cost 50 is reported, the Orchestrator's PCE will fail to compute 685 a 5 Gb/s path between routers R1 and R2, although this would be 686 feasible; 688 o If only the VP1-VP4 path with available bandwidth of 10 Gb/s and 689 cost 60 is reported, the Orchestrator's PCE will compute, as 690 optimal, the 1 Gb/s path between R1 and R2 going through the VP2-VP5 path within the Optical domain, while the optimal path would 692 actually be the one going through the VP1-VP4 sub-path (with cost 693 50) within the Optical domain. 695 Instead, using the approach proposed in this document, the 696 Orchestrator, when it needs to set up an end-to-end path, can 697 request the Optical domain controller to compute a set of optimal 698 paths (e.g., for VP1-VP4 and VP2-VP5) and take decisions based on 699 the information received: 701 o When setting up a 5 Gb/s path between routers R1 and R2, the 702 Optical domain controller may report only the VP1-VP4 path as the 703 only feasible path: the Orchestrator can successfully set up the 704 end-to-end path passing through this Optical path; 706 o When setting up a 1 Gb/s path between routers R1 and R2, the 707 Optical domain controller (knowing that the path requires only 1 708 Gb/s) can report both the VP1-VP4 path, with cost 50, and the 709 VP2-VP5 path, with cost 65. The Orchestrator can then compute the 710 optimal path, which passes through the VP1-VP4 sub-path (with 711 cost 50) within the Optical domain. 713 3.2.2.
TE Topology Abstraction 715 Using the TE Topology model, as defined in [TE-TOPO], the underlying 716 SDN controller can export an abstract TE Topology, composed of a set 717 of TE nodes and TE links, abstracting the topology 718 controlled by each domain controller. 720 Considering the example in Figure 4, the TE domain controller 1 can 721 export a TE Topology encompassing the TE nodes A, B, C and D and the 722 TE Links interconnecting them. In a similar way, TE domain controller 723 2 can export a TE Topology encompassing the TE nodes E, F, G and H 724 and the TE Links interconnecting them. 726 In this example, for simplicity, each abstract TE node maps 727 to a physical node, but this is not required. 729 In order to set up a multi-domain TE path (e.g., between nodes A and 730 H), the Orchestrator can compute on its own an optimal end-to-end 731 path based on the abstract TE topology information provided by the 732 domain controllers. For example: 734 o The Orchestrator's PCE, based on its own information, can compute the 735 optimal multi-domain path as being A-B-C-E-G-H, and then request the 736 TE domain controllers to set up the A-B-C and E-G-H intra-domain 737 paths 739 o But, during path setup, the domain controller may find out that 740 the A-B-C intra-domain path is not feasible (as discussed in section 741 2.2, in optical networks it is typical to have some paths not 742 being feasible due to optical constraints that are known only by 743 the optical domain controller), while only the path A-B-D is 744 feasible 746 o Therefore, the path computed by the hierarchical controller is not 747 valid and path computation needs to be re-started from scratch 749 As discussed in section 3.2.1, providing more extensive abstract 750 information from the TE domain controllers to the multi-domain 751 Orchestrator may lead to scalability problems. 753 In a sense this is similar to the problem of routing and wavelength 754 assignment within an Optical domain.
It is possible to do the 755 routing first (step 1) and then the wavelength assignment (step 2), but the 756 chances of ending up with a good path are low. Alternatively, it is 757 possible to do combined routing and wavelength assignment, which is 758 known to be a more effective way for Optical path setup. 759 Similarly, it is possible to first compute an abstract end-to-end 760 path within the multi-domain Orchestrator (step 1) and then compute 761 an intra-domain path within each Optical domain (step 2), but there 762 is a higher chance of not finding a path, or of getting a suboptimal path, 763 than when performing per-domain path computations and then stitching them. 765 3.2.3. Complementary use of TE topology and path computation 767 As discussed in section 2.2, there are some scalability issues with 768 path computation requests in a multi-domain TE network with many TE 769 domains, in terms of the number of requests to send to the TE domain 770 controllers. It would therefore be worthwhile using the TE topology 771 information provided by the domain controllers to limit the number 772 of requests. 774 An example can be described considering the multi-domain abstract 775 topology shown in Figure 7. In this example, an end-to-end TE path 776 between domains A and F needs to be set up. The transit domain should 777 be selected among domains B, C, D and E. 779 -------------------------------------------------------------------- 780 I I 781 I I 782 I I 783 I Multi-domain with many domains I 784 I (Topology information) I 785 I I 786 I I 787 I I 788 I (only in PDF version) I 789 I I 790 I I 791 I I 792 -------------------------------------------------------------------- 794 Figure 7 - Multi-domain with many domains (Topology information) 796 The actual cost of each intra-domain path is not known a priori from 797 the abstract topology information.
The Orchestrator only knows, from 798 the TE topology provided by the underlying domain controllers, the 799 feasibility of some intra-domain paths and some upper-bound and/or 800 lower-bound cost information. With this information, together with 801 the cost of inter-domain links, the Orchestrator can understand on 802 its own that: 804 o Domain B cannot be selected since the path connecting domains A and 805 F through it is not feasible; 807 o Domain E cannot be selected as a transit domain since it is known 808 from the abstract topology information provided by the domain 809 controllers that the cost of the multi-domain path A-E-F (which 810 is 100, in the best case) will always be higher than the cost 811 of the multi-domain paths A-D-F (which is 90, in the worst case) 812 and A-C-F (which is 80, in the worst case) 814 Therefore, the Orchestrator can understand on its own that the 815 optimal multi-domain path could be either A-D-F or A-C-F but it 816 cannot know which of the two possible options actually provides 817 the optimal end-to-end path. 819 The Orchestrator can therefore request path computation only from the 820 TE domain controllers A, C, D and F (and not from all the possible TE 821 domain controllers). 823 -------------------------------------------------------------------- 824 I I 825 I I 826 I I 827 I Multi-domain with many domains I 828 I (Path Computation information) I 829 I I 830 I I 831 I I 832 I I 833 I (only in PDF version) I 834 I I 835 I I 836 I I 837 -------------------------------------------------------------------- 839 Figure 8 - Multi-domain with many domains (Path Computation 840 information) 842 Based on these requests, the Orchestrator can know the actual cost 843 of each intra-domain path which belongs to a potential optimal end- 844 to-end path, as shown in Figure 8, and then compute the optimal 845 end-to-end path (e.g., A-D-F, having a total cost of 50, instead of A- 846 C-F having a total cost of 70). 848 3.3.
Stateless and Stateful Path Computation 850 The TE Tunnel YANG model, defined in [TE-TUNNEL], can support the 851 need to request path computation. 853 It is possible to request path computation by configuring a 854 "compute-only" TE tunnel and retrieving the computed path(s) in the 855 LSP(s) Record-Route Object (RRO) list as described in section 3.3.1 856 of [TE-TUNNEL]. 858 This is a stateful solution since the state of each created 859 "compute-only" TE tunnel needs to be maintained and updated when 860 underlying network conditions change. 862 It is very useful to provide options for both stateless and stateful 863 path computation mechanisms. It is suggested to use stateless 864 mechanisms as much as possible and to rely on stateful path 865 computation when really needed. 867 A stateless RPC allows requesting path computation as a simple 868 atomic operation and is the natural choice, especially 869 with a stateless PCE. 871 Since the operation is stateless, there is no guarantee that the 872 returned path would still be available when path setup is requested: 873 this is not a major issue if the time between path computation 874 and path setup is short. 876 The RPC response must be provided synchronously and, if 877 collaborative computations are time consuming, it may not be 878 possible to reply immediately to the client. 880 In this case, the client can define a maximum time it can wait for 881 the reply, such that if the computation does not complete in time, 882 the server will abort the path computation and reply to the client 883 with an error. The server may have tighter timing 884 constraints than the client: in this case the path computation is 885 aborted earlier than the time specified by the client.
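The timeout behavior described above can be sketched as follows. This is a minimal illustration, not part of the model: the function names and timer values are hypothetical, and a real server would run the path computation rather than a stand-in.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def compute_path(delay):
    time.sleep(delay)  # stand-in for a time-consuming collaborative computation
    return {"path": ["A", "B"], "cost": 50}

def path_computation_rpc(client_timeout, server_limit, delay):
    # The server enforces the tighter of the two timing constraints, so
    # the computation may be aborted earlier than the client requested.
    timeout = min(client_timeout, server_limit)
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(compute_path, delay)
        try:
            return future.result(timeout=timeout)
        except TimeoutError:
            # Computation did not complete in time: abort and return an error
            return {"error": "path computation aborted: timeout"}

print(path_computation_rpc(client_timeout=2.0, server_limit=0.1, delay=0.5))
```

In the call above the server limit (0.1 s) is tighter than the client timeout (2.0 s), so the request is aborted with an error even though the client was willing to wait longer.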
887 Note - The RPC response issue (slow RPC server) is not specific to 888 the path computation RPC case, so it may be worthwhile evaluating 889 whether a more generic solution applicable to any YANG RPC can be 890 used instead. 892 In case the stateless solution is not sufficient, a stateful 893 solution, based on a "compute-only" TE tunnel, could be used to 894 support asynchronous operations and/or to get notifications in case 895 the computed path has been changed. 897 It is worth noting that the stateful solution, although 898 increasing the likelihood that the computed path is available at 899 path setup, does not guarantee it, because notifications may 900 not be reliable or delivered on time. 902 Stateful path computation also has the following drawbacks: 904 o Several messages are required for each path computation 906 o Persistent storage is required in the provider controller 908 o Garbage collection is needed for stranded paths 910 o Processing is needed to detect changes in the computed paths in order 911 to provide notification updates 913 4. Path Computation and Optimization for multiple paths 915 There are use cases where it is advantageous to request path 916 computation for a set of paths, through a network or through a 917 network domain, using a single request [RFC5440]. 919 This would reduce the protocol overhead of sending multiple requests. 921 In the context of a typical multi-domain TE network, there could be 922 multiple choices for the ingress/egress points of a domain and the 923 Orchestrator needs to request path computation between all the 924 ingress/egress pairs to select the best pair. In the 925 example of section 2.2, the Orchestrator needs to request the TE 926 network controller 1 to compute the A-C and the A-D paths and the 927 TE network controller 2 to compute the E-H and the F-H paths. 929 It is also possible that the Orchestrator receives a request to 930 set up a group of multiple end-to-end connections.
The orchestrator 931 needs to request each TE domain controller to compute multiple 932 paths, one (or more) for each end-to-end connection. 934 There are also scenarios where it is necessary to request path 935 computation for a set of paths in a synchronized fashion. 937 One example could be computing multiple diverse paths. Computing a 938 set of diverse paths in a non-synchronized fashion leads to a high 939 probability of not being able to satisfy all requests. In this case, 940 a sub-optimal primary path that can be protected by a diversely 941 routed secondary path should be computed instead of an optimal 942 primary path that cannot be protected. 944 There are also scenarios where it is necessary to optimize a 945 set of paths using objective functions that apply to the whole set 946 of paths, see [RFC5541], e.g., to minimize the sum of the costs of 947 all the computed paths in the set. 949 5. YANG Model for requesting Path Computation 951 This document defines a stateless YANG RPC to request path 952 computation as an "augmentation" of the tunnel-rpc, defined in [TE- 953 TUNNEL]. This model provides the RPC input attributes that are 954 needed to request path computation and the RPC output attributes 955 that are needed to report the computed paths. 957 augment /te:tunnels-rpc/te:input/te:tunnel-info: 958 +---- path-request* [request-id] 959 ........... 961 augment /te:tunnels-rpc/te:output/te:result: 962 +--ro response* [response-id] 963 +--ro response-id uint32 964 +--ro (response-type)? 965 +--:(no-path-case) 966 | +--ro no-path! 967 +--:(path-case) 968 +--ro computed-path 969 +--ro path-id? yang-types:uuid 970 +--ro path-properties 971 ........... 972 This model extensively re-uses the groupings defined in [TE-TUNNEL] 973 to ensure maximal syntax and semantics commonality. 975 5.1.
Synchronization of multiple path computation requests 977 The YANG model allows synchronizing a set of multiple path 978 requests (identified by their request-id), all related to a "svec" 979 container emulating the syntax of the PCEP "SVEC" object [RFC5440]. 981 +---- synchronization* [synchronization-id] 982 +---- synchronization-id uint32 983 +---- svec 984 | +---- relaxable? boolean 985 | +---- link-diverse? boolean 986 | +---- node-diverse? boolean 987 | +---- srlg-diverse? boolean 988 | +---- request-id-number* uint32 989 +---- svec-constraints 990 | +---- path-metric-bound* [metric-type] 991 | +---- metric-type identityref 992 | +---- upper-bound? uint64 993 +---- path-srlgs 994 | +---- usage? identityref 995 | +---- values* srlg 996 +---- exclude-objects 997 ........... 998 +---- optimizations 999 +---- (algorithm)? 1000 +--:(metric) 1001 | +---- optimization-metric* [metric-type] 1002 | +---- metric-type identityref 1003 | +---- weight? uint8 1004 +--:(objective-function) 1005 +---- objective-function 1006 +---- objective-function-type? identityref 1007 In addition to the metric types defined in [TE-TUNNEL], 1008 which can be applied to each individual path request, the model defines 1009 additional metric types that apply to a set of 1010 synchronized requests, as referenced in [RFC5541].
1012 identity svec-metric-type { 1013 description 1014 "Base identity for svec metric type"; 1015 } 1016 identity svec-metric-cumul-te { 1017 base svec-metric-type; 1018 description 1019 "TE cumulative path metric"; 1020 } 1022 identity svec-metric-cumul-igp { 1023 base svec-metric-type; 1024 description 1025 "IGP cumulative path metric"; 1026 } 1028 identity svec-metric-cumul-hop { 1029 base svec-metric-type; 1030 description 1031 "Hop cumulative path metric"; 1032 } 1034 identity svec-metric-aggregate-bandwidth-consumption { 1035 base svec-metric-type; 1036 description 1037 "Cumulative bandwidth consumption of the set of synchronized 1038 paths"; 1039 } 1041 identity svec-metric-load-of-the-most-loaded-link { 1042 base svec-metric-type; 1043 description 1044 "Load of the most loaded link"; 1045 } 1046 5.2. Returned metric values 1048 This YANG model provides a way to return the values of the metrics 1049 computed by the path computation in the output of the RPC, together with 1050 other important information (e.g., srlg, affinities, explicit route), 1051 emulating the syntax of the "C" flag of the "METRIC" PCEP object 1052 [RFC5440]: 1054 augment /te:tunnels-rpc/te:output/te:result: 1056 +--ro response* [response-id] 1057 +--ro response-id uint32 1058 +--ro (response-type)? 1059 +--:(no-path-case) 1060 | +--ro no-path! 1061 +--:(path-case) 1062 +--ro computed-path 1063 +--ro path-id? yang-types:uuid 1064 +--ro path-properties 1065 +--ro path-metric* [metric-type] 1066 | +--ro metric-type identityref 1067 | +--ro accumulative-value? uint64 1068 +--ro path-affinities 1069 | +--ro constraint* [usage] 1070 | +--ro usage identityref 1071 | +--ro value? admin-groups 1072 +--ro path-srlgs 1073 | +--ro usage? identityref 1074 | +--ro values* srlg 1075 +--ro path-route-objects 1076 ...........
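The returned metric values are what lets an Orchestrator combine per-domain results into an end-to-end decision. The following sketch, loosely following the example of section 2.2, illustrates this; all costs and the helper function are hypothetical, not part of the model:

```python
# Hypothetical costs returned by the two TE network controllers
# (one entry per requested intra-domain path)
domain1 = {("A", "C"): 30, ("A", "D"): 20}
domain2 = {("E", "H"): 25, ("F", "H"): 40}
# Hypothetical costs of the inter-domain links C-E and D-F
inter = {("C", "E"): 5, ("D", "F"): 15}

def best_end_to_end():
    """Combine returned intra-domain metrics with inter-domain link
    costs and pick the minimum-cost end-to-end path."""
    candidates = []
    for (src, x1), c1 in domain1.items():
        for (x2, dst), c2 in domain2.items():
            link = inter.get((x1, x2))
            if link is not None:  # the two intra-domain paths can be stitched
                candidates.append((c1 + link + c2, (src, x1, x2, dst)))
    return min(candidates)

print(best_end_to_end())  # → (60, ('A', 'C', 'E', 'H'))
```

Without the returned cost values the Orchestrator could only enumerate feasible combinations; with them, it can rank the stitched end-to-end paths and select the optimal one.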
1077 It also allows requesting, in the input 1078 of the RPC, which metrics should be returned: 1080 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1081 +---- path-request* [request-id] 1082 | +---- request-id uint32 1083 ........... 1084 | +---- requested-metrics* [metric-type] 1085 | +---- metric-type identityref 1086 ........... 1087 This feature is essential for using a stateless path computation in 1088 a multi-domain TE network as described in section 2.2. In this case, 1089 the metrics returned by a path computation requested to a given TE 1090 network controller must be used by the Orchestrator to compute the 1091 best end-to-end path. If they are missing, the Orchestrator cannot 1092 compare different paths calculated by the TE network controllers and 1093 choose the best one for the optimal end-to-end path. 1095 6. YANG model for stateless TE path computation 1097 6.1. YANG Tree 1099 Figure 9 below shows the tree diagram of the YANG model defined in 1100 module ietf-te-path-computation.yang. 1102 module: ietf-te-path-computation 1103 +--rw paths 1104 +--ro path* [path-id] 1105 +--ro path-id yang-types:uuid 1106 +--ro path-properties 1107 +--ro path-metric* [metric-type] 1108 | +--ro metric-type identityref 1109 | +--ro accumulative-value? uint64 1110 +--ro path-affinities 1111 | +--ro constraint* [usage] 1112 | +--ro usage identityref 1113 | +--ro value? admin-groups 1114 +--ro path-srlgs 1115 | +--ro usage? identityref 1116 | +--ro values* srlg 1117 +--ro path-route-objects 1118 +--ro path-route-object* [index] 1119 +--ro index uint32 1120 +--ro (type)? 1121 +--:(numbered) 1122 | +--ro numbered-hop 1123 | +--ro address? te-types:te-tp-id 1124 | +--ro hop-type? te-hop-type 1125 | +--ro direction? te-link-direction 1126 +--:(as-number) 1127 | +--ro as-number-hop 1128 | +--ro as-number? binary 1129 | +--ro hop-type? te-hop-type 1130 +--:(unnumbered) 1131 | +--ro unnumbered-hop 1132 | +--ro node-id? te-types:te-node-id 1133 | +--ro link-tp-id?
te-types:te-tp-id 1134 | +--ro hop-type? te-hop-type 1135 | +--ro direction? te-link-direction 1136 +--:(label) 1137 +--ro label-hop 1138 +--ro te-label 1139 +--ro (technology)? 1140 | +--:(generic) 1141 | +--ro generic? rt- 1142 types:generalized-label 1143 +--ro direction? te-label-direction 1144 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1145 +---- path-request* [request-id] 1146 | +---- request-id uint32 1147 | +---- source? inet:ip-address 1148 | +---- destination? inet:ip-address 1149 | +---- src-tp-id? binary 1150 | +---- dst-tp-id? binary 1151 | +---- bidirectional 1152 | | +---- association 1153 | | +---- id? uint16 1154 | | +---- source? inet:ip-address 1155 | | +---- global-source? inet:ip-address 1156 | | +---- type? identityref 1157 | | +---- provisioning? identityref 1158 | +---- explicit-route-objects 1159 | | +---- route-object-exclude-always* [index] 1160 | | | +---- index uint32 1161 | | | +---- (type)? 1162 | | | +--:(numbered) 1163 | | | | +---- numbered-hop 1164 | | | | +---- address? te-types:te-tp-id 1165 | | | | +---- hop-type? te-hop-type 1166 | | | | +---- direction? te-link-direction 1167 | | | +--:(as-number) 1168 | | | | +---- as-number-hop 1169 | | | | +---- as-number? binary 1170 | | | | +---- hop-type? te-hop-type 1171 | | | +--:(unnumbered) 1172 | | | | +---- unnumbered-hop 1173 | | | | +---- node-id? te-types:te-node-id 1174 | | | | +---- link-tp-id? te-types:te-tp-id 1175 | | | | +---- hop-type? te-hop-type 1176 | | | | +---- direction? te-link-direction 1177 | | | +--:(label) 1178 | | | +---- label-hop 1179 | | | +---- te-label 1180 | | | +---- (technology)? 1181 | | | | +--:(generic) 1182 | | | | +---- generic? rt- 1183 types:generalized-label 1184 | | | +---- direction? te-label-direction 1185 | | +---- route-object-include-exclude* [index] 1186 | | +---- explicit-route-usage? identityref 1187 | | +---- index uint32 1188 | | +---- (type)? 1189 | | +--:(numbered) 1190 | | | +---- numbered-hop 1191 | | | +---- address? 
te-types:te-tp-id 1192 | | | +---- hop-type? te-hop-type 1193 | | | +---- direction? te-link-direction 1194 | | +--:(as-number) 1195 | | | +---- as-number-hop 1196 | | | +---- as-number? binary 1197 | | | +---- hop-type? te-hop-type 1198 | | +--:(unnumbered) 1199 | | | +---- unnumbered-hop 1200 | | | +---- node-id? te-types:te-node-id 1201 | | | +---- link-tp-id? te-types:te-tp-id 1202 | | | +---- hop-type? te-hop-type 1203 | | | +---- direction? te-link-direction 1204 | | +--:(label) 1205 | | +---- label-hop 1206 | | +---- te-label 1207 | | +---- (technology)? 1208 | | | +--:(generic) 1209 | | | +---- generic? rt- 1210 types:generalized-label 1211 | | +---- direction? te-label-direction 1212 | +---- path-constraints 1213 | | +---- te-bandwidth 1214 | | | +---- (technology)? 1215 | | | +--:(generic) 1216 | | | +---- generic? te-bandwidth 1217 | | +---- setup-priority? uint8 1218 | | +---- hold-priority? uint8 1219 | | +---- signaling-type? identityref 1220 | | +---- disjointness? te-types:te-path-disjointness 1221 | | +---- path-metric-bounds 1222 | | | +---- path-metric-bound* [metric-type] 1223 | | | +---- metric-type identityref 1224 | | | +---- upper-bound? uint64 1225 | | +---- path-affinities 1226 | | | +---- constraint* [usage] 1227 | | | +---- usage identityref 1228 | | | +---- value? admin-groups 1229 | | +---- path-srlgs 1230 | | +---- usage? identityref 1231 | | +---- values* srlg 1232 | +---- optimizations 1233 | | +---- (algorithm)? 1234 | | +--:(metric) {path-optimization-metric}? 1235 | | | +---- optimization-metric* [metric-type] 1236 | | | | +---- metric-type 1237 identityref 1238 | | | | +---- weight? uint8 1239 | | | | +---- explicit-route-exclude-objects 1240 | | | | | +---- route-object-exclude-object* [index] 1241 | | | | | +---- index uint32 1242 | | | | | +---- (type)? 1243 | | | | | +--:(numbered) 1244 | | | | | | +---- numbered-hop 1245 | | | | | | +---- address? te-types:te-tp- 1246 id 1247 | | | | | | +---- hop-type? 
te-hop-type 1248 | | | | | | +---- direction? te-link- 1249 direction 1250 | | | | | +--:(as-number) 1251 | | | | | | +---- as-number-hop 1252 | | | | | | +---- as-number? binary 1253 | | | | | | +---- hop-type? te-hop-type 1254 | | | | | +--:(unnumbered) 1255 | | | | | | +---- unnumbered-hop 1256 | | | | | | +---- node-id? te-types:te- 1257 node-id 1258 | | | | | | +---- link-tp-id? te-types:te- 1259 tp-id 1260 | | | | | | +---- hop-type? te-hop-type 1261 | | | | | | +---- direction? te-link- 1262 direction 1263 | | | | | +--:(label) 1264 | | | | | +---- label-hop 1265 | | | | | +---- te-label 1266 | | | | | +---- (technology)? 1267 | | | | | | +--:(generic) 1268 | | | | | | +---- generic? rt- 1269 types:generalized-label 1270 | | | | | +---- direction? te-label- 1271 direction 1272 | | | | +---- explicit-route-include-objects 1273 | | | | +---- route-object-include-object* [index] 1274 | | | | +---- index uint32 1275 | | | | +---- (type)? 1276 | | | | +--:(numbered) 1277 | | | | | +---- numbered-hop 1278 | | | | | +---- address? te-types:te-tp- 1279 id 1280 | | | | | +---- hop-type? te-hop-type 1281 | | | | | +---- direction? te-link- 1282 direction 1283 | | | | +--:(as-number) 1284 | | | | | +---- as-number-hop 1285 | | | | | +---- as-number? binary 1286 | | | | | +---- hop-type? te-hop-type 1287 | | | | +--:(unnumbered) 1288 | | | | | +---- unnumbered-hop 1289 | | | | | +---- node-id? te-types:te- 1290 node-id 1291 | | | | | +---- link-tp-id? te-types:te- 1292 tp-id 1293 | | | | | +---- hop-type? te-hop-type 1294 | | | | | +---- direction? te-link- 1295 direction 1296 | | | | +--:(label) 1297 | | | | +---- label-hop 1298 | | | | +---- te-label 1299 | | | | +---- (technology)? 1300 | | | | | +--:(generic) 1301 | | | | | +---- generic? rt- 1302 types:generalized-label 1303 | | | | +---- direction? 
te-label- 1304 direction 1305 | | | +---- tiebreakers 1306 | | | +---- tiebreaker* [tiebreaker-type] 1307 | | | +---- tiebreaker-type identityref 1308 | | +--:(objective-function) {path-optimization-objective- 1309 function}? 1310 | | +---- objective-function 1311 | | +---- objective-function-type? identityref 1312 | +---- requested-metrics* [metric-type] 1313 | +---- metric-type identityref 1314 +---- synchronization* [synchronization-id] 1315 +---- synchronization-id uint32 1316 +---- svec 1317 | +---- relaxable? boolean 1318 | +---- link-diverse? boolean 1319 | +---- node-diverse? boolean 1320 | +---- srlg-diverse? boolean 1321 | +---- request-id-number* uint32 1322 +---- svec-constraints 1323 | +---- path-metric-bound* [metric-type] 1324 | +---- metric-type identityref 1325 | +---- upper-bound? uint64 1326 +---- path-srlgs 1327 | +---- usage? identityref 1328 | +---- values* srlg 1329 +---- exclude-objects 1330 | +---- excludes* [index] 1331 | +---- index uint32 1332 | +---- (type)? 1333 | +--:(numbered) 1334 | | +---- numbered-hop 1335 | | +---- address? te-types:te-tp-id 1336 | | +---- hop-type? te-hop-type 1337 | | +---- direction? te-link-direction 1338 | +--:(as-number) 1339 | | +---- as-number-hop 1340 | | +---- as-number? binary 1341 | | +---- hop-type? te-hop-type 1342 | +--:(unnumbered) 1343 | | +---- unnumbered-hop 1344 | | +---- node-id? te-types:te-node-id 1345 | | +---- link-tp-id? te-types:te-tp-id 1346 | | +---- hop-type? te-hop-type 1347 | | +---- direction? te-link-direction 1348 | +--:(label) 1349 | +---- label-hop 1350 | +---- te-label 1351 | +---- (technology)? 1352 | | +--:(generic) 1353 | | +---- generic? rt- 1354 types:generalized-label 1355 | +---- direction? te-label-direction 1356 +---- optimizations 1357 +---- (algorithm)? 1358 +--:(metric) 1359 | +---- optimization-metric* [metric-type] 1360 | +---- metric-type identityref 1361 | +---- weight? 
uint8 1362 +--:(objective-function) 1363 +---- objective-function 1364 +---- objective-function-type? identityref 1365 augment /te:tunnels-rpc/te:output/te:result: 1366 +--ro response* [response-id] 1367 +--ro response-id uint32 1368 +--ro (response-type)? 1369 +--:(no-path-case) 1370 | +--ro no-path! 1371 +--:(path-case) 1372 +--ro computed-path 1373 +--ro path-id? yang-types:uuid 1374 +--ro path-properties 1375 +--ro path-metric* [metric-type] 1376 | +--ro metric-type identityref 1377 | +--ro accumulative-value? uint64 1378 +--ro path-affinities 1379 | +--ro constraint* [usage] 1380 | +--ro usage identityref 1381 | +--ro value? admin-groups 1382 +--ro path-srlgs 1383 | +--ro usage? identityref 1384 | +--ro values* srlg 1385 +--ro path-route-objects 1386 +--ro path-route-object* [index] 1387 +--ro index uint32 1388 +--ro (type)? 1389 +--:(numbered) 1390 | +--ro numbered-hop 1391 | +--ro address? te-types:te-tp- 1392 id 1393 | +--ro hop-type? te-hop-type 1394 | +--ro direction? te-link- 1395 direction 1396 +--:(as-number) 1397 | +--ro as-number-hop 1398 | +--ro as-number? binary 1399 | +--ro hop-type? te-hop-type 1400 +--:(unnumbered) 1401 | +--ro unnumbered-hop 1402 | +--ro node-id? te-types:te- 1403 node-id 1404 | +--ro link-tp-id? te-types:te- 1405 tp-id 1406 | +--ro hop-type? te-hop-type 1407 | +--ro direction? te-link- 1408 direction 1409 +--:(label) 1410 +--ro label-hop 1411 +--ro te-label 1412 +--ro (technology)? 1413 | +--:(generic) 1414 | +--ro generic? rt- 1415 types:generalized-label 1416 +--ro direction? te-label- 1417 direction 1419 Figure 9 - TE path computation YANG tree 1421 6.2. 
YANG Module 1423 file "ietf-te-path-computation@2018-03-02.yang" 1424 module ietf-te-path-computation { 1425 yang-version 1.1; 1426 namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation"; 1427 // replace with IANA namespace when assigned 1429 prefix "tepc"; 1431 import ietf-inet-types { 1432 prefix "inet"; 1433 } 1435 import ietf-yang-types { 1436 prefix "yang-types"; 1437 } 1439 import ietf-te { 1440 prefix "te"; 1441 } 1443 import ietf-te-types { 1444 prefix "te-types"; 1445 } 1446 organization 1447 "Traffic Engineering Architecture and Signaling (TEAS) 1448 Working Group"; 1450 contact 1451 "WG Web: 1452 WG List: 1454 WG Chair: Lou Berger 1455 1457 WG Chair: Vishnu Pavan Beeram 1458 1460 "; 1462 description "YANG model for stateless TE path computation"; 1464 revision "2018-03-02" { 1465 description "Revision to fix issues #22, 29, 33 and 39"; 1466 reference "YANG model for stateless TE path computation"; 1467 } 1469 /* 1470 * Features 1471 */ 1473 feature stateless-path-computation { 1474 description 1475 "This feature indicates that the system supports 1476 stateless path computation."; 1477 } 1479 /* 1480 * Groupings 1481 */ 1483 grouping path-info { 1484 leaf path-id { 1485 type yang-types:uuid; 1486 config false; 1487 description "path-id ref."; 1488 } 1489 uses te-types:generic-path-properties; 1490 description "Path computation output information"; 1491 } 1493 grouping end-points { 1494 leaf source { 1495 type inet:ip-address; 1496 description "TE tunnel source address."; 1497 } 1498 leaf destination { 1499 type inet:ip-address; 1500 description "P2P tunnel destination address"; 1501 } 1502 leaf src-tp-id { 1503 type binary; 1504 description "TE tunnel source termination point identifier."; 1505 } 1506 leaf dst-tp-id { 1507 type binary; 1508 description "TE tunnel destination termination point 1509 identifier."; 1510 } 1511 description "Path Computation End Points grouping."; 1512 } 1514 grouping requested-metrics-info { 1515 description 
"requested metric"; 1516 list requested-metrics { 1517 key 'metric-type'; 1518 description "list of requested metrics"; 1519 leaf metric-type { 1520 type identityref { 1521 base te-types:path-metric-type; 1522 } 1523 description "the requested metric"; 1524 } 1525 } 1526 } 1528 identity svec-metric-type { 1529 description 1530 "Base identity for svec metric type"; 1531 } 1533 identity svec-metric-cumul-te { 1534 base svec-metric-type; 1535 description 1536 "TE cumulative path metric"; 1537 } 1539 identity svec-metric-cumul-igp { 1540 base svec-metric-type; 1541 description 1542 "IGP cumulative path metric"; 1543 } 1545 identity svec-metric-cumul-hop { 1546 base svec-metric-type; 1547 description 1548 "Hop cumulative path metric"; 1549 } 1551 identity svec-metric-aggregate-bandwidth-consumption { 1552 base svec-metric-type; 1553 description 1554 "Cumulative bandwidth consumption of the set of synchronized 1555 paths"; 1556 } 1558 identity svec-metric-load-of-the-most-loaded-link { 1559 base svec-metric-type; 1560 description 1561 "Load of the most loaded link"; 1563 } 1565 grouping svec-metrics-bounds_config { 1566 description "TE path metric bounds grouping for computing a set 1567 of 1568 synchronized requests"; 1569 leaf metric-type { 1570 type identityref { 1571 base svec-metric-type; 1572 } 1573 description "TE path metric type usable for computing a set of 1574 synchronized requests"; 1575 } 1576 leaf upper-bound { 1577 type uint64; 1578 description "Upper bound on end-to-end svec path metric"; 1579 } 1580 } 1582 grouping svec-metrics-optimization_config { 1583 description "TE path metric optimization grouping for computing a set 1584 of 1585 synchronized requests"; 1586 leaf metric-type { 1587 type identityref { 1588 base svec-metric-type; 1589 } 1590 description "TE path metric type usable for computing a set of 1591 synchronized requests"; 1592 } 1593 leaf weight { 1594 type uint8; 1595 description "Metric normalization weight"; 1596 } 1597 } 1599 grouping
svec-exclude {
     description
       "List of resources to be excluded by all the paths
        in the SVEC";
     container exclude-objects {
       description "resources to be excluded";
       list excludes {
         key index;
         description
           "List of explicit route objects to always exclude
            from synchronized path computation";
         uses te-types:explicit-route-hop;
       }
     }
   }

   grouping synchronization-constraints {
     description
       "Global constraints applicable to synchronized
        path computation";
     container svec-constraints {
       description "global svec constraints";
       list path-metric-bound {
         key metric-type;
         description "list of bound metrics";
         uses svec-metrics-bounds_config;
       }
     }
     uses te-types:generic-path-srlgs;
     uses svec-exclude;
   }

   grouping synchronization-optimization {
     description "Synchronized request optimization";
     container optimizations {
       description
         "The objective function container that includes
          attributes to impose when computing a synchronized set of
          paths";
       choice algorithm {
         description "Optimization algorithm.";
         case metric {
           list optimization-metric {
             key "metric-type";
             description "svec path metric type";
             uses svec-metrics-optimization_config;
           }
         }
         case objective-function {
           container objective-function {
             description
               "The objective function container that includes
                attributes to impose when computing a TE path";
             uses te-types:path-objective-function_config;
           }
         }
       }
     }
   }

   grouping synchronization-info {
     description "Information for sync";
     list synchronization {
       key "synchronization-id";
       description "sync list";
       leaf synchronization-id {
         type uint32;
         description "index";
       }
       container svec {
         description
           "Synchronization VECtor";
         leaf relaxable {
           type boolean;
           default true;
           description
             "If this leaf is true, the path computation process
              is free to ignore the svec content;
              otherwise it must take this svec into account.";
         }
         leaf link-diverse {
           type boolean;
           default false;
           description "link-diverse";
         }
         leaf node-diverse {
           type boolean;
           default false;
           description "node-diverse";
         }
         leaf srlg-diverse {
           type boolean;
           default false;
           description "srlg-diverse";
         }
         leaf-list request-id-number {
           type uint32;
           description
             "This list reports the set of M path computation
              requests that must be synchronized.";
         }
       }
       uses synchronization-constraints;
       uses synchronization-optimization;
     }
   }

   grouping no-path-info {
     description "no-path-info";
     container no-path {
       presence
         "Response without path information, due to failure
          performing the path computation";
       description
         "If path computation cannot identify a path, the RPC
          returns no path.";
     }
   }

   /*
    * Root container
    */
   container paths {
     list path {
       key "path-id";
       config false;
       uses path-info;
       description "List of previously computed paths.";
     }
     description "Root container for path computation";
   }

   /**
    * AUGMENTS TO TE RPC
    */

   augment "/te:tunnels-rpc/te:input/te:tunnel-info" {
     description "statelessComputeP2PPath input";
     list path-request {
       key "request-id";
       description "request-list";
       leaf request-id {
         type uint32;
         mandatory true;
         description
           "Each path computation request is uniquely identified
            by the request-id-number.
            It must also be present in RPCs.";
       }
       uses end-points;
       uses te:bidir-assoc-properties;
       uses te-types:path-route-objects;
       uses te-types:generic-path-constraints;
       uses te-types:generic-path-optimization;
       uses requested-metrics-info;
     }
     uses synchronization-info;
   }

   augment "/te:tunnels-rpc/te:output/te:result" {
     description "statelessComputeP2PPath output";
     list response {
       key response-id;
       config false;
       description "response";
       leaf response-id {
         type uint32;
         description
           "The list key, which reuses the request-id-number.";
       }
       choice response-type {
         config false;
         description "response-type";
         case no-path-case {
           uses no-path-info;
         }
         case path-case {
           container computed-path {
             uses path-info;
             description "Path computation service.";
           }
         }
       }
     }
   }
 }

              Figure 10 - TE path computation YANG module

7. Security Considerations

   This document describes use cases of requesting path computation
   using YANG models, which could be used at the ABNO Control Interface
   [RFC7491] and/or between controllers in ACTN [ACTN-Frame]. As such,
   it does not introduce any new security considerations compared to
   those related to the YANG specification, the ABNO specification and
   the ACTN framework defined in [RFC6020], [RFC7950], [RFC7491] and
   [ACTN-Frame].

   This document also defines common data types using the YANG data
   modeling language. The definitions themselves have no security
   impact on the Internet, but the usage of these definitions in
   concrete YANG modules might have. The security considerations
   spelled out in the YANG specification [RFC6020] apply to this
   document as well.

8. IANA Considerations

   This section is for further study: to be completed when the YANG
   model is more stable.

9. References

9.1. Normative References

   [RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the
             Network Configuration Protocol (NETCONF)", RFC 6020,
             October 2010.

   [RFC7139] Zhang, F. et al., "GMPLS Signaling Extensions for Control
             of Evolving G.709 Optical Transport Networks", RFC 7139,
             March 2014.

   [RFC7491] Farrel, A., King, D., "A PCE-Based Architecture for
             Application-Based Network Operations", RFC 7491,
             March 2015.

   [RFC7926] Farrel, A. et al., "Problem Statement and Architecture for
             Information Exchange Between Interconnected Traffic
             Engineered Networks", RFC 7926, July 2016.

   [RFC7950] Bjorklund, M., "The YANG 1.1 Data Modeling Language",
             RFC 7950, August 2016.

   [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
             draft-ietf-teas-yang-te-topo, work in progress.

   [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces",
             draft-ietf-teas-yang-te, work in progress.

   [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for
             Abstraction and Control of Traffic Engineered Networks",
             draft-ietf-teas-actn-framework, work in progress.

   [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interfaces
             for the optical transport network", June 2016.

9.2. Informative References

   [RFC4655] Farrel, A. et al., "A Path Computation Element (PCE)-Based
             Architecture", RFC 4655, August 2006.

   [RFC5541] Le Roux, JL. et al., "Encoding of Objective Functions in
             the Path Computation Element Communication Protocol
             (PCEP)", RFC 5541, June 2009.

   [RFC7446] Lee, Y. et al., "Routing and Wavelength Assignment
             Information Model for Wavelength Switched Optical
             Networks", RFC 7446, February 2015.

   [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
             Transport Network Topology", draft-ietf-ccamp-otn-topo-
             yang, work in progress.
   [ACTN-Info] Lee, Y., Belotti, S., Dhody, D., Ceccarelli, D.,
             "Information Model for Abstraction and Control of
             Transport Networks", draft-leebelotti-actn-info, work in
             progress.

   [PCEP-Service-Aware] Dhody, D. et al., "Extensions to the Path
             Computation Element Communication Protocol (PCEP) to
             compute service aware Label Switched Path (LSP)",
             draft-ietf-pce-pcep-service-aware, work in progress.

10. Acknowledgments

   The authors would like to thank Igor Bryskin and Xian Zhang for
   participating in discussions and providing valuable insights.

   The authors would like to thank the authors of the TE Tunnel YANG
   model [TE-TUNNEL], in particular Igor Bryskin, Vishnu Pavan Beeram,
   Tarek Saad and Xufeng Liu, for their input to the discussions and
   their support in keeping the Path Computation and TE Tunnel YANG
   models consistent.

   This document was prepared using 2-Word-v2.0.template.dot.

Appendix A. Examples of dimensioning the "detailed connectivity matrix"

   The following table lists the possible constraints, together with
   their potential cardinality.

   To a first approximation, the maximum number of potential
   connections to be computed and reported is the product of all of
   these cardinalities.

   Constraint Cardinality
   ---------- -------------------------------------------------------

   End points N(N-1)/2 if connections are bidirectional (OTN and WDM),
              N(N-1) for unidirectional connections.

   Bandwidth  In WDM networks, bandwidth values are expressed in GHz.

              On fixed-grid WDM networks, the central frequencies are
              on a 50GHz grid and the channel width of the transmitters
              is typically 50GHz, so that every central frequency can
              be used, i.e., adjacent channels can be placed next to
              each other in terms of central frequencies.
              On flex-grid WDM networks, the central frequencies are on
              a 6.25GHz grid and the channel width of the transmitters
              can be a multiple of 12.5GHz.

              For fixed-grid WDM networks there is typically only one
              possible bandwidth value (i.e., 50GHz), while for flex-
              grid WDM networks there are typically 4 possible
              bandwidth values (e.g., 37.5GHz, 50GHz, 62.5GHz, 75GHz).

              In OTN (ODU) networks, bandwidth values are expressed as
              pairs of ODU type and, in the case of ODUflex, the ODU
              rate in bytes/sec, as described in section 5 of
              [RFC7139].

              For "fixed" ODUk types, 6 bandwidth values are possible
              (i.e., ODU0, ODU1, ODU2, ODU2e, ODU3, ODU4).

              For ODUflex(GFP), up to 80 different bandwidth values can
              be specified, as defined in Table 7-8 of [ITU-T G.709-
              2016].

              For other ODUflex types, like ODUflex(CBR), the number of
              possible bandwidth values depends on the rates of the
              clients that could be mapped over these ODUflex types, as
              shown in Table 7.2 of [ITU-T G.709-2016]; in theory this
              could be a continuum of values. However, since different
              ODUflex bandwidths that use the same number of TSs on
              each link along the path are equivalent for path
              computation purposes, up to 120 different bandwidth
              ranges can be specified.

              Ideas to reduce the number of ODUflex bandwidth values in
              the detailed connectivity matrix to less than 100 are
              for further study.

              Bandwidth specification for ODUCn is currently for
              further study, but it is expected that further bandwidth
              values can be specified as integer multiples of 100Gb/s.

              In IP networks, bandwidth values are expressed in
              bytes/sec. In principle this is a continuum of values,
              but in practice a set of bandwidth ranges can be
              identified, where any bandwidth value within the same
              range produces the same path.
              The number of such ranges is the cardinality; it depends
              on the topology, the available bandwidth and the status
              of the network. Simulations (Note: reference paper
              submitted for publication) show that, for medium-size
              topologies (around 50-150 nodes), values are in the
              range 4-7 (5 on average) for each couple of end points.

   Metrics    IGP, TE and hop count are the basic objective metrics
              defined so far. There are also the 2 objective functions
              defined in [RFC5541]: Minimum Load Path (MLP) and Maximum
              Residual Bandwidth Path (MBP). Assuming that only one
              metric or objective function can be optimized at a time,
              the total cardinality here is 5.

              With [PCEP-Service-Aware], a number of additional metrics
              are defined, including the Path Delay metric, the Path
              Delay Variation metric and the Path Loss metric, both for
              point-to-point and point-to-multipoint paths. This
              increases the cardinality to 8.

   Bounds     Each metric can be associated with a bound in order to
              find a path whose total value of that metric is lower
              than the given bound. This has a potentially very high
              cardinality (any value for the bound is allowed). In
              practice, however, there is a maximum value of the bound
              (the one matching the maximum value of the associated
              metric) beyond which the result is always the same path,
              so a range-based approach, as for bandwidth in IP, should
              also yield the cardinality in this case. Assuming a
              cardinality similar to that of the bandwidth (say 5 on
              average) and 6 bounded metrics (IGP, TE, hop count, path
              delay, path delay variation and path loss; the two
              objective functions of [RFC5541] are not considered here
              as they are conceived only for optimization), the
              cardinality is 6*5 = 30.
   Technology
   constraints For further study.

   Priority   There are 8 values for the setup priority, which is used
              in path computation to route a path using free resources
              and, where no free resources are available, resources
              used by LSPs having a lower holding priority.

   Local prot It is possible to ask for a locally protected service,
              where all the links used by the path are protected with
              fast reroute (this applies only to IP networks, but line
              protection schemes are available in the other
              technologies as well). This adds an alternative path
              computation, so the cardinality of this constraint is 2.

   Administrative
   Colors     Administrative colors (a.k.a. affinities) are typically
              assigned to links, but when topology abstraction is used,
              affinity information can also appear in the detailed
              connectivity matrix.

              There are 32 bits available for the affinities. Links can
              be tagged with any combination of these bits, and path
              computation can be constrained to include or exclude any
              or all of them. The relevant cardinality is 3 (include-
              any, exclude-any, include-all) times 2^32 possible
              values. However, the number of values used in real
              networks is quite small.

   Included Resources

              A path computation request can be associated with an
              ordered set of network resources (links, nodes) to be
              included along the computed path. This constraint would
              have a huge cardinality, as in principle any combination
              of network resources is possible. However, as long as the
              Orchestrator does not know the details of the internal
              topology of the domain, it should not include this type
              of constraint at all (see more details below).

   Excluded Resources

              A path computation request can be associated with a set
              of network resources (links, nodes, SRLGs) to be excluded
              from the computed path.
              Like included resources, this constraint has a
              potentially very high cardinality, but, once again, it
              cannot actually be used by the Orchestrator if the
              latter is not aware of the domain topology (see more
              details below).

   As discussed above, the Orchestrator can specify include or exclude
   resources depending on the abstract topology information that the
   domain controller exposes:

   o  If the domain controller exposes the entire domain as a single
      abstract TE node with its own external terminations and
      connectivity matrix (whose size we are estimating), no other
      topological details are available; the size of the connectivity
      matrix therefore depends only on the combination of constraints
      that the Orchestrator can use in a path computation request to
      the domain controller. These constraints cannot refer to any
      details of the internal topology of the domain, as those details
      are not known to the Orchestrator, and so they do not impact the
      size of the exported connectivity matrix.

   o  If instead the domain controller exposes a topology including
      more than one abstract TE node and TE link, together with their
      attributes (e.g., SRLGs and affinities for the links), the
      Orchestrator knows these details and could therefore compute a
      path across the domain referring to them in the constraints. The
      connectivity matrixes to be estimated here are those of the
      abstract TE nodes exported to the Orchestrator. These
      connectivity matrixes, and therefore their sizes, cannot depend
      on the other abstract TE nodes and TE links, which are external
      to the given abstract node; they could, however, depend on SRLGs
      (and other attributes, like affinities) that may also be present
      in the portion of the topology represented by the abstract
      nodes, and that therefore contribute to the size of the related
      connectivity matrix.
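   The affinity matching mentioned in the table above (include-any,
   exclude-any and include-all rules over a 32-bit administrative color
   mask) can be sketched as follows. This is an illustrative helper
   following the usual RSVP-TE affinity semantics; the function name
   and encoding are hypothetical and not part of the YANG model:

   ```python
   # Hedged sketch of affinity (administrative color) matching with
   # 32-bit masks, per the include-any / exclude-any / include-all
   # rules described above. Names and encoding are illustrative only.

   def link_admitted(link_colors, include_any=0, exclude_any=0,
                     include_all=0):
       """Return True if a link's 32-bit color mask satisfies the
       three affinity constraints (a mask of 0 means 'no constraint'
       for include-any)."""
       if include_any and not (link_colors & include_any):
           return False      # none of the acceptable colors is present
       if link_colors & exclude_any:
           return False      # a forbidden color is present
       if (link_colors & include_all) != include_all:
           return False      # at least one required color is missing
       return True

   # A link tagged with colors 0 and 3 (mask 0b1001):
   assert link_admitted(0b1001, include_any=0b0001)      # has color 0
   assert not link_admitted(0b1001, exclude_any=0b1000)  # color 3 forbidden
   assert link_admitted(0b1001, include_all=0b1001)      # both required
   ```

   Each of the three rules, taken alone, yields at most 2^32 distinct
   mask values, which is where the "3 times 2^32" cardinality in the
   table comes from.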
   We also do not consider here the possibility of asking for more than
   one path in diversity, or for point-to-multipoint paths, which are
   for further study.

   Considering, for example, an IP domain, and leaving SRLGs and
   affinities aside, the estimated number of paths depends on these
   estimated cardinalities:

      Endpoints = N*(N-1), Bandwidth = 5, Metrics = 6, Bounds = 20,
      Priority = 8, Local prot = 2

   The number of paths to be pre-computed by each IP domain is
   therefore 24960 * N(N-1), where N is the number of domain access
   points.

   This means that with just 4 access points we have nearly 300000
   paths to compute, advertise and maintain (if a change happens in the
   domain, due to a fault or just the deployment of new traffic, a
   substantial number of paths needs to be recomputed and the relevant
   changes advertised to the upper controller).

   This seems quite challenging. In fact, if we assume a mean length of
   1 Kbyte for the JSON describing a path (quite a conservative
   estimate), reporting 300000 paths means transferring, and then
   parsing, more than 300 Mbytes for each domain. If we assume that 20%
   (to be checked) of these paths change when a new deployment of
   traffic occurs, we have 60 Mbytes of transfer for each domain
   traversed by a new end-to-end path. If a network has, say, 20
   domains (we want to estimate the load for a non-trivial domain
   setup), a total initial transfer of 6 Gbytes is needed at the
   beginning and, assuming that on average 4-5 domains are involved in
   a path deployment, 240-300 Mbytes of changes would be advertised to
   the higher-order controller.
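   The estimates above can be reproduced with a short back-of-the-
   envelope script; the per-pair figure of 24960 combinations and the
   1 Kbyte-per-path assumption are this appendix's own rough estimates,
   not normative values:

   ```python
   # Back-of-the-envelope sizing sketch for the IP-domain example.
   # All figures are the rough, non-normative estimates used above.

   PER_PAIR = 24960                  # constraint combinations per
                                     # endpoint pair (appendix figure)

   def paths_per_domain(n):
       """Paths to pre-compute for a domain with n access points:
       PER_PAIR combinations times N(N-1) unidirectional pairs."""
       return PER_PAIR * n * (n - 1)

   paths = paths_per_domain(4)       # 4 access points
   initial_mb = paths * 1000 / 1e6   # ~1 Kbyte of JSON per path
   update_mb = 0.20 * initial_mb     # if 20% of the paths change
   total_gb = 20 * initial_mb / 1000 # 20-domain initial transfer

   print(paths, initial_mb, update_mb, total_gb)
   # 299520 paths, ~300 MB per domain, ~60 MB per update, ~6 GB total
   ```

   With N = 4 this gives 299520 paths, matching the "nearly 300000"
   figure, and roughly 300 Mbytes, 60 Mbytes and 6 Gbytes for the
   transfer volumes discussed above.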
   If this is considered unacceptable, further bare-bone solutions,
   removing some more options, can be investigated. In conclusion, an
   approach based only on the connectivity matrix seems hardly
   feasible: it could be applicable only to small networks with a
   limited meshing degree between domains, and at the cost of
   renouncing a number of path computation features.

Contributors

   Dieter Beller
   Nokia
   Email: dieter.beller@nokia.com

   Gianmarco Bruno
   Ericsson
   Email: gianmarco.bruno@ericsson.com

   Francesco Lazzeri
   Ericsson
   Email: francesco.lazzeri@ericsson.com

   Young Lee
   Huawei
   Email: leeyoung@huawei.com

   Carlo Perocchio
   Ericsson
   Email: carlo.perocchio@ericsson.com

Authors' Addresses

   Italo Busi (Editor)
   Huawei
   Email: italo.busi@huawei.com

   Sergio Belotti (Editor)
   Nokia
   Email: sergio.belotti@nokia.com

   Victor Lopez
   Telefonica
   Email: victor.lopezalvarez@telefonica.com

   Oscar Gonzalez de Dios
   Telefonica
   Email: oscar.gonzalezdedios@telefonica.com

   Anurag Sharma
   Google
   Email: ansha@google.com

   Yan Shi
   China Unicom
   Email: shiyan49@chinaunicom.cn

   Ricard Vilalta
   CTTC
   Email: ricard.vilalta@cttc.es

   Karthik Sethuraman
   NEC
   Email: karthik.sethuraman@necam.com

   Michael Scharf
   Nokia
   Email: michael.scharf@nokia.com

   Daniele Ceccarelli
   Ericsson
   Email: daniele.ceccarelli@ericsson.com