1 TEAS Working Group Italo Busi (Ed.) 2 Internet Draft Huawei 3 Intended status: Standard Track Sergio Belotti (Ed.) 4 Expires: April 2019 Nokia 5 Victor Lopez 6 Oscar Gonzalez de Dios 7 Telefonica 8 Anurag Sharma 9 Google 10 Yan Shi 11 China Unicom 12 Ricard Vilalta 13 CTTC 14 Karthik Sethuraman 15 NEC 17 October 22, 2018 19 Yang model for requesting Path Computation 20 draft-ietf-teas-yang-path-computation-03.txt 22 Status of this Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79.
27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF), its areas, and its working groups. Note that 29 other groups may also distribute working documents as Internet- 30 Drafts. 32 Internet-Drafts are draft documents valid for a maximum of six 33 months and may be updated, replaced, or obsoleted by other documents 34 at any time. It is inappropriate to use Internet-Drafts as 35 reference material or to cite them other than as "work in progress." 37 The list of current Internet-Drafts can be accessed at 38 http://www.ietf.org/ietf/1id-abstracts.txt 40 The list of Internet-Draft Shadow Directories can be accessed at 41 http://www.ietf.org/shadow.html 42 This Internet-Draft will expire on April 22, 2019. 44 Copyright Notice 46 Copyright (c) 2018 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with 54 respect to this document. Code Components extracted from this 55 document must include Simplified BSD License text as described in 56 Section 4.e of the Trust Legal Provisions and are provided without 57 warranty as described in the Simplified BSD License. 59 Abstract 61 There are scenarios, typically in a hierarchical SDN context, where 62 the topology information provided by a TE network provider may not 63 be sufficient for its client to perform end-to-end path computation. 64 In these cases the client would need to request the provider to 65 calculate some (partial) feasible paths. 67 This document defines a YANG data model for a stateless RPC to 68 request path computation. This model complements the stateful 69 solution defined in [TE-TUNNEL]. 71 Moreover this document describes some use cases where a path 72 computation request, via YANG-based protocols (e.g., NETCONF or 73 RESTCONF), can be needed. 75 Table of Contents 77 1. Introduction...................................................3 78 1.1. Terminology...............................................4 79 2. Use Cases......................................................5 80 2.1. Packet/Optical Integration................................5 81 2.2. Multi-domain TE Networks.................................10 82 2.3. Data center interconnections.............................12 83 3. Motivations...................................................14 84 3.1. Motivation for a YANG Model..............................14 85 3.1.1. Benefits of common data models......................14 86 3.1.2. Benefits of a single interface......................15 87 3.1.3. Extensibility.......................................15 88 3.2. Interactions with TE Topology............................16 89 3.2.1. TE Topology Aggregation.............................17 90 3.2.2. TE Topology Abstraction.............................20 91 3.2.3. Complementary use of TE topology and path computation21 92 3.3. Stateless and Stateful Path Computation..................24 93 4. Path Computation and Optimization for multiple paths..........25 94 5. YANG Model for requesting Path Computation....................26 95 5.1. Synchronization of multiple path computation requests....27 96 5.2. Returned metric values...................................29 97 6. 
YANG model for stateless TE path computation..................30 98 6.1. YANG Tree................................................30 99 6.2. YANG Module..............................................39 100 7. Security Considerations.......................................49 101 8. IANA Considerations...........................................50 102 9. References....................................................50 103 9.1. Normative References.....................................50 104 9.1. Informative References...................................51 105 10. Acknowledgments..............................................52 106 Appendix A. Examples of dimensioning the "detailed connectivity 107 matrix"..........................................................53 109 1. Introduction 111 There are scenarios, typically in a hierarchical SDN context, where 112 the topology information provided by a TE network provider may not 113 be sufficient for its client to perform end-to-end path computation. 114 In these cases the client would need to request the provider to 115 calculate some (partial) feasible paths, complementing his topology 116 knowledge, to make his end-to-end path computation feasible. 118 This type of scenarios can be applied to different interfaces in 119 different reference architectures: 121 o ABNO control interface [RFC7491], in which an Application Service 122 Coordinator can request ABNO controller to take in charge path 123 calculation (see Figure 1 in [RFC7491]). 125 o ACTN [ACTN-frame], where a controller hierarchy is defined, the 126 need for path computation arises on both interfaces CMI 127 (interface between Customer Network Controller (CNC) and Multi 128 Domain Service Coordinator (MDSC)) and/or MPI (interface between 129 MSDC-PNC). [ACTN-Info] describes an information model for the 130 Path Computation request. 132 Multiple protocol solutions can be used for communication between 133 different controller hierarchical levels. This document assumes that 134 the controllers are communicating using YANG-based protocols (e.g., 135 NETCONF or RESTCONF). 137 Path Computation Elements, Controllers and Orchestrators perform 138 their operations based on Traffic Engineering Databases (TED). Such 139 TEDs can be described, in a technology agnostic way, with the YANG 140 Data Model for TE Topologies [TE-TOPO]. Furthermore, the technology 141 specific details of the TED are modeled in the augmented TE topology 142 models (e.g. [OTN-TOPO] for OTN ODU technologies). 144 The availability of such topology models allows providing the TED 145 using YANG-based protocols (e.g., NETCONF or RESTCONF). Furthermore, 146 it enables a PCE/Controller performing the necessary abstractions or 147 modifications and offering this customized topology to another 148 PCE/Controller or high level orchestrator. 150 Note: This document assumes that the client of the YANG data model 151 defined in this document may not implement a "PCE" functionality, as 152 defined in [RFC4655]. 154 The tunnels that can be provided over the networks described with 155 the topology models can be also set-up, deleted and modified via 156 YANG-based protocols (e.g., NETCONF or RESTCONF) using the TE-Tunnel 157 Yang model [TE-TUNNEL]. 159 This document proposes a YANG model for a path computation request 160 defined as a stateless RPC, which complements the stateful solution 161 defined in [TE-TUNNEL]. 
163 Moreover, this document describes some use cases where a path 164 computation request, via YANG-based protocols (e.g., NETCONF or 165 RESTCONF), can be needed. 167 1.1. Terminology 169 TED: The traffic engineering database is a collection of all TE 170 information about all TE nodes and TE links in a given network. 172 PCE: A Path Computation Element (PCE) is an entity that is capable 173 of computing a network path or route based on a network graph, and 174 of applying computational constraints during the computation. The 175 PCE entity is an application that can be located within a network 176 node or component, on an out-of-network server, etc. For example, a 177 PCE would be able to compute the path of a TE LSP by operating on 178 the TED and considering bandwidth and other constraints applicable 179 to the TE LSP service request. [RFC4655] 181 2. Use Cases 183 This section presents different use cases, where a client needs to 184 request underlying SDN controllers for path computation. 186 The presented uses cases have been grouped, depending on the 187 different underlying topologies: a) Packet-Optical integration; b) 188 Multi-domain Traffic Engineered (TE) Networks; and c) Data center 189 interconnections. 191 2.1. Packet/Optical Integration 193 In this use case, an Optical network is used to provide connectivity 194 to some nodes of a Packet network (see Figure 1). 196 +----------------+ 197 | | 198 | Packet/Optical | 199 | Coordinator | 200 | | 201 +---+------+-----+ 202 | | 203 +------------+ | 204 | +-----------+ 205 +------V-----+ | 206 | | +------V-----+ 207 | Packet | | | 208 | Network | | Optical | 209 | Controller | | Network | 210 | | | Controller | 211 +------+-----+ +-------+----+ 212 | | 213 .........V......................... | 214 : Packet Network : | 215 +----+ +----+ | 216 | R1 |= = = = = = = = = = = = = = = =| R2 | | 217 +-+--+ +--+-+ | 218 | : : | | 219 | :................................ : | | 220 | | | 221 | +-----+ | | 222 | ...........| Opt |........... | | 223 | : | C | : | | 224 | : /+--+--+\ : | | 225 | : / | \ : | | 226 | : / | \ : | | 227 | +-----+ / +--+--+ \ +-----+ | | 228 | | Opt |/ | Opt | \| Opt | | | 229 +---| A | | D | | B |---+ | 230 +-----+\ +--+--+ /+-----+ | 231 : \ | / : | 232 : \ | / : | 233 : \ +--+--+ / Optical<---------+ 234 : \| Opt |/ Network: 235 :..........| E |..........: 236 +-----+ 238 Figure 1 - Packet/Optical Integration Use Case 240 Figure 1 as well as Figure 2 below only show a partial view of the 241 packet network connectivity, before additional packet connectivity 242 is provided by the Optical network. 244 It is assumed that the Optical network controller provides to the 245 packet/optical coordinator an abstracted view of the Optical 246 network. A possible abstraction could be to represent the whole 247 optical network as one "virtual node" with "virtual ports" connected 248 to the access links, as shown in Figure 2. 250 It is also assumed that Packet network controller can provide the 251 packet/optical coordinator the information it needs to setup 252 connectivity between packet nodes through the Optical network (e.g., 253 the access links). 255 The path computation request helps the coordinator to know the real 256 connections that can be provided by the optical network. 258 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,. 259 , Packet/Optical Coordinator view , 260 , +----+ , . 261 , | | , 262 , | R2 | , . 263 , +----+ +------------ + /+----+ , 264 , | | | |/-----/ / / , . 
265 , | R1 |--O VP1 VP4 O / / , 266 , | |\ | | /----/ / , . 267 , +----+ \| |/ / , 268 , / O VP2 VP5 O / , . 269 , / | | +----+ , 270 , / | | | | , . 271 , / O VP3 VP6 O--| R4 | , 272 , +----+ /-----/|_____________| +----+ , . 273 , | |/ +------------ + , 274 , | R3 | , . 275 , +----+ ,,,,,,,,,,,,,,,,, 276 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ,. 277 . Packet Network Controller view +----+ , 278 only packet nodes and packet links | | , . 279 . with access links to the optical network | R2 | , 280 , +----+ /+----+ , . 281 . , | | /-----/ / / , 282 , | R1 |--- / / , . 283 . , +----+\ /----/ / , 284 , / \ / / , . 285 . , / / , 286 , / +----+ , . 287 . , / | | , 288 , / ---| R4 | , . 289 . , +----+ /-----/ +----+ , 290 , | |/ , . 291 . , | R3 | , 292 , +----+ ,,,,,,,,,,,,,,,,,. 293 .,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , 294 Optical Network Controller view , . 295 . only optical nodes, +--+ , 296 optical links and /|OF| , . 297 . access links from the +--++--+ / , 298 packet network |OA| \ /-----/ / , . 299 . , ---+--+--\ +--+/ / , 300 , \ | \ \-|OE|-------/ , . 301 . , \ | \ /-+--+ , 302 , \+--+ X | , . 304 . , |OB|-/ \ | , 305 , +--+-\ \+--+ , . 306 . , / \ \--|OD|--- , 307 , /-----/ +--+ +--+ , . 308 . , / |OC|/ , 309 , +--+ , . 310 ., ,,,,,,,,,,,,,,,,,, 311 ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , 312 . Actual Physical View +----+ , 313 , +--+ | | , 314 . , /|OF| | R2 | , 315 , +----+ +--++--+ /+----+ , 316 . , | | |OA| \ /-----/ / / , 317 , | R1 |---+--+--\ +--+/ / / , 318 . , +----+\ | \ \-|OE|-------/ / , 319 , / \ | \ /-+--+ / , 320 . , / \+--+ X | / , 321 , / |OB|-/ \ | +----+ , 322 . , / +--+-\ \+--+ | | , 323 , / / \ \--|OD|---| R4 | , 324 . , +----+ /-----/ +--+ +--+ +----+ , 325 , | |/ |OC|/ , 326 . , | R3 | +--+ , 327 , +----+ , 328 .,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 330 Figure 2 - Packet and Optical Topology Abstractions 332 In this use case, the coordinator needs to setup an optimal 333 underlying path for an IP link between R1 and R2. 335 As depicted in Figure 2, the coordinator has only an "abstracted 336 view" of the physical network, and it does not know the feasibility 337 or the cost of the possible optical paths (e.g., VP1-VP4 and VP2- 338 VP5), which depend from the current status of the physical resources 339 within the optical network and on vendor-specific optical 340 attributes. 342 The coordinator can request the underlying Optical domain controller 343 to compute a set of potential optimal paths, taking into account 344 optical constraints. Then, based on its own constraints, policy and 345 knowledge (e.g. cost of the access links), it can choose which one 346 of these potential paths to use to setup the optimal end-to-end path 347 crossing optical network. 349 ............................ 350 : : 351 O VP1 VP4 O 352 cost=10 /:\ /:\ cost=10 353 / : \----------------------/ : \ 354 +----+ / : cost=50 : \ +----+ 355 | |/ : : \| | 356 | R1 | : : | R2 | 357 | |\ : : /| | 358 +----+ \ : /--------------------\ : / +----+ 359 \ : / cost=55 \ : / 360 cost=5 \:/ \:/ cost=5 361 O VP2 VP5 O 362 : : 363 :..........................: 365 Figure 3 - Packet/Optical Path Computation Example 367 For example, in Figure 3, the Coordinator can request the Optical 368 network controller to compute the paths between VP1-VP4 and VP2-VP5 369 and then decide to setup the optimal end-to-end path using the VP2- 370 VP5 Optical path even this is not the optimal path from the Optical 371 domain perspective. 
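The cost comparison above can be made concrete with a small, non-normative calculation that uses only the cost values shown in Figure 3 (the access link costs known by the Coordinator and the Optical sub-path costs returned by the Optical network controller). The Python snippet below is an illustrative sketch and not part of any model defined in this document:

   # End-to-end cost comparison for the example of Figure 3.
   candidates = {
       "VP1-VP4": {"access": 10 + 10, "optical": 50},
       "VP2-VP5": {"access": 5 + 5, "optical": 55},
   }
   for name, c in candidates.items():
       print(name, "end-to-end cost =", c["access"] + c["optical"])
   # VP1-VP4 end-to-end cost = 70
   # VP2-VP5 end-to-end cost = 65  <- optimal end-to-end path, even
   #   though its Optical sub-path (cost 55) is not the optimal path
   #   within the Optical domain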
373 Considering the dynamicity of the connectivity constraints of an 374 Optical domain, it is possible that a path computed by the Optical 375 network controller when requested by the Coordinator is no longer 376 valid/available when the Coordinator requests it to be setup up. 377 This is further discussed in section 3.3. 379 2.2. Multi-domain TE Networks 381 In this use case there are two TE domains which are interconnected 382 together by multiple inter-domains links. 384 A possible example could be a multi-domain optical network. 386 +--------------+ 387 | Multi-domain | 388 | Controller | 389 +---+------+---+ 390 | | 391 +------------+ | 392 | +-----------+ 393 +------V-----+ | 394 | | | 395 | TE Domain | +------V-----+ 396 | Controller | | | 397 | 1 | | TE Domain | 398 +------+-----+ | Controller | 399 | | 2 | 400 | +------+-----+ 401 .........V.......... | 402 : : | 403 +-----+ : | 404 | | : .........V.......... 405 | X | : : : 406 | | +-----+ +-----+ : 407 +-----+ | | | | : 408 : | C |------| E | : 409 +-----+ +-----+ /| | | |\ +-----+ +-----+ 410 | | | |/ +-----+ +-----+ \| | | | 411 | A |----| B | : : | G |----| H | 412 | | | |\ : : /| | | | 413 +-----+ +-----+ \+-----+ +-----+/ +-----+ +-----+ 414 : | | | | : 415 : | D |------| F | : 416 : | | | | +-----+ 417 : +-----+ +-----+ | | 418 : : : | Y | 419 : : : | | 420 : Domain 1 : : Domain 2 +-----+ 421 :..................: :.................: 423 Figure 4 - Multi-domain multi-link interconnection 425 In order to setup an end-to-end multi-domain TE path (e.g., between 426 nodes A and H), the multi-domain controller needs to know the 427 feasibility or the cost of the possible TE paths within the two TE 428 domains, which depend from the current status of the physical 429 resources within each TE network. This is more challenging in case 430 of optical networks because the optimal paths depend also on vendor- 431 specific optical attributes (which may be different in the two 432 domains if they are provided by different vendors). 434 In order to setup a multi-domain TE path (e.g., between nodes A and 435 H), the multi-domain controller can request the TE domain 436 controllers to compute a set of intra-domain optimal paths and take 437 decisions based on the information received. For example: 439 o The multi-domain controller asks TE domain controllers to provide 440 set of paths between A-C, A-D, E-H and F-H 442 o TE domain controllers return a set of feasible paths with the 443 associated costs: the path A-C is not part of this set(in optical 444 networks, it is typical to have some paths not being feasible due 445 to optical constraints that are known only by the optical domain 446 controller) 448 o The multi-domain controller will select the path A-D-F-H since it 449 is the only feasible multi-domain path and then request the TE 450 domain controllers to setup the A-D and F-H intra-domain paths 452 o If there are multiple feasible paths, the multi-domain controller 453 can select the optimal path knowing the cost of the intra-domain 454 paths (provided by the TE domain controllers) and the cost of the 455 inter-domain links (known by the multi-domain controller) 457 This approach may have some scalability issues when the number of TE 458 domains is quite big (e.g. 20). 
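The selection logic described in the bullets above can be sketched, in a purely illustrative and non-normative way, as follows; the intra-domain path costs (with A-C infeasible) and the inter-domain link costs are hypothetical values chosen only for this example:

   # Hypothetical costs: intra-domain paths returned by the TE domain
   # controllers (None = not feasible) and inter-domain links known
   # by the multi-domain controller.
   intra = {("A", "C"): None, ("A", "D"): 10, ("E", "H"): 12, ("F", "H"): 8}
   inter = {("C", "E"): 5, ("D", "F"): 5}

   best = None
   for (a_path, z_path) in [(("A", "C"), ("E", "H")), (("A", "D"), ("F", "H"))]:
       if intra[a_path] is None or intra[z_path] is None:
           continue  # skip infeasible combinations (e.g., A-C)
       cost = intra[a_path] + inter[(a_path[1], z_path[0])] + intra[z_path]
       if best is None or cost < best[1]:
           best = (a_path + z_path, cost)

   print(best)  # (('A', 'D', 'F', 'H'), 23): setup A-D and F-H

With many domains and many candidate ingress/egress pairs, the number of such per-pair path computation requests grows quickly, which is the scalability concern noted above.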
460 In this case, it would be worthwhile using the abstract TE topology 461 information provided by the TE domain controllers to limit the 462 number of potential optimal end-to-end paths and then request path 463 computation to fewer TE domain controllers in order to decide what 464 the optimal path within this limited set is. 466 For more details, see section 3.2.3. 468 2.3. Data center interconnections 470 In these use case, there is a TE domain which is used to provide 471 connectivity between data centers which are connected with the TE 472 domain using access links. 474 +--------------+ 475 | Cloud Network| 476 | Orchestrator | 477 +--------------+ 478 | | | | 479 +-------------+ | | +------------------------+ 480 | | +------------------+ | 481 | +--------V---+ | | 482 | | | | | 483 | | TE Network | | | 484 +------V-----+ | Controller | +------V-----+ | 485 | DC | +------------+ | DC | | 486 | Controller | | | Controller | | 487 +------------+ | +-----+ +------------+ | 488 | ....V...| |........ | | 489 | : | P | : | | 490 .....V..... : /+-----+\ : .....V..... | 491 : : +-----+ / | \ +-----+ : : | 492 : DC1 || : | |/ | \| | : DC2 || : | 493 : ||||----| PE1 | | | PE2 |---- |||| : | 494 : _|||||| : | |\ | /| | : _|||||| : | 495 : : +-----+ \ +-----+ / +-----+ : : | 496 :.........: : \| |/ : :.........: | 497 :.......| PE3 |.......: | 498 | | | 499 +-----+ +---------V--+ 500 .....|..... | DC | 501 : : | Controller | 502 : DC3 || : +------------+ 503 : |||| : | 504 : _|||||| <------------------+ 505 : : 506 :.........: 508 Figure 5 - Data Center Interconnection Use Case 510 In this use case, there is need to transfer data from Data Center 1 511 (DC1) to either DC2 or DC3 (e.g. workload migration). 513 The optimal decision depends both on the cost of the TE path (DC1- 514 DC2 or DC1-DC3) and of the data center resources within DC2 or DC3. 516 The cloud network orchestrator needs to make a decision for optimal 517 connection based on TE Network constraints and data centers 518 resources. It may not be able to make this decision because it has 519 only an abstract view of the TE network (as in use case in 2.1). 521 The cloud network orchestrator can request to the TE network 522 controller to compute the cost of the possible TE paths (e.g., DC1- 523 DC2 and DC1-DC3) and to the DC controller to provide the information 524 it needs about the required data center resources within DC2 and DC3 525 and then it can take the decision about the optimal solution based 526 on this information and its policy. 528 3. Motivations 530 This section provides the motivation for the YANG model defined in 531 this document. 533 Section 3.1 describes the motivation for a YANG model to request 534 path computation. 536 Section 3.2 describes the motivation for a YANG model which 537 complements the TE Topology YANG model defined in [TE-TOPO]. 539 Section 3.3 describes the motivation for a stateless YANG RPC which 540 complements the TE Tunnel YANG model defined in [TE-TUNNEL]. 542 3.1. Motivation for a YANG Model 544 3.1.1. Benefits of common data models 546 The YANG data model for requesting path computation is closely 547 aligned with the YANG data models that provide (abstract) TE 548 topology information, i.e., [TE-TOPO] as well as that are used to 549 configure and manage TE Tunnels, i.e., [TE-TUNNEL]. 
551 There are many benefits in aligning the data model used for path 552 computation requests with the YANG data models used for TE topology 553 information and for TE Tunnels configuration and management: 555 o There is no need for an error-prone mapping or correlation of 556 information. 558 o It is possible to use the same endpoint identifiers in path 559 computation requests and in the topology modeling. 561 o The attributes used for path computation constraints are the same 562 as those used when setting up a TE Tunnel. 564 3.1.2. Benefits of a single interface 566 The system integration effort is typically lower if a single, 567 consistent interface is used by controllers, i.e., one data modeling 568 language (i.e., YANG) and a common protocol (e.g., NETCONF or 569 RESTCONF). 571 Practical benefits of using a single, consistent interface include: 573 1. Simple authentication and authorization: The interface between 574 different components has to be secured. If different protocols 575 have different security mechanisms, ensuring a common access 576 control model may result in overhead. For instance, there may be 577 a need to deal with different security mechanisms, e.g., 578 different credentials or keys. This can result in increased 579 integration effort. 581 2. Consistency: Keeping data consistent over multiple different 582 interfaces or protocols is not trivial. For instance, the 583 sequence of actions can matter in certain use cases, or 584 transaction semantics could be desired. While ensuring 585 consistency within one protocol can already be challenging, it is 586 typically cumbersome to achieve that across different protocols. 588 3. Testing: System integration requires comprehensive testing, 589 including corner cases. The more different technologies are 590 involved, the more difficult it is to run comprehensive test 591 cases and ensure proper integration. 593 4. Middle-box friendliness: Provider and consumer of path 594 computation requests may be located in different networks, and 595 middle-boxes such as firewalls, NATs, or load balancers may be 596 deployed. In such environments it is simpler to deploy a single 597 protocol. Also, it may be easier to debug connectivity problems. 599 5. Tooling reuse: Implementers may want to implement path 600 computation requests with tools and libraries that already exist 601 in controllers and/or orchestrators, e.g., leveraging the rapidly 602 growing eco-system for YANG tooling. 604 3.1.3. Extensibility 606 Path computation is only a subset of the typical functionality of a 607 controller. In many use cases, issuing path computation requests 608 comes along with the need to access other functionality on the same 609 system. In addition to obtaining TE topology, for instance also 610 configuration of services (setup/modification/deletion) may be 611 required, as well as: 613 1. Receiving notifications for topology changes as well as 614 integration with fault management 616 2. Performance management such as retrieving monitoring and 617 telemetry data 619 3. Service assurance, e.g., by triggering OAM functionality 621 4. Other fulfilment and provisioning actions beyond tunnels and 622 services, such as changing QoS configurations 624 YANG is a very extensible and flexible data modeling language that 625 can be used for all these use cases. 627 3.2. 
Interactions with TE Topology 629 The use cases described in section 2 have been described assuming 630 that the topology view exported by each underlying SDN controller to 631 the orchestrator is aggregated using the "virtual node model", 632 defined in [RFC7926]. 634 TE Topology information, e.g., as provided by [TE-TOPO], could in 635 theory be used by an underlying SDN controllers to provide TE 636 information to its client thus allowing a PCE available within its 637 client to perform multi-domain path computation by its own, without 638 requesting path computations to the underlying SDN controllers. 640 In case the client does not implement a PCE function, as discussed 641 in section 1, it could not perform path computation based on TE 642 Topology information and would instead need to request path 643 computation to the underlying controllers to get the information it 644 needs to compute the optimal end-to-end path. 646 This section analyzes the need for a client to request underlying 647 SDN controllers for path computation even in case it implements a 648 PCE functionality, as well as how the TE Topology information and 649 the path computation can be complementary. 651 In nutshell, there is a scalability trade-off between providing all 652 the TE information needed by PCE, when implemented by the client, to 653 take optimal path computation decisions by its own versus sending 654 too many requests to underlying SDN Domain Controllers to compute a 655 set of feasible optimal intra-domain TE paths. 657 3.2.1. TE Topology Aggregation 659 Using the TE Topology model, as defined in [TE-TOPO], the underlying 660 SDN controller can export the whole TE domain as a single abstract 661 TE node with a "detailed connectivity matrix". 663 The concept of a "detailed connectivity matrix" is defined in [TE- 664 TOPO] to provide specific TE attributes (e.g., delay, SRLGs and 665 summary TE metrics) as an extension of the "basic connectivity 666 matrix", which is based on the "connectivity matrix" defined in 667 [RFC7446]. 669 The information provided by the "detailed connectivity matrix" would 670 be equivalent to the information that should be provided by "virtual 671 link model" as defined in [RFC7926]. 673 For example, in the Packet/Optical integration use case, described 674 in section 2.1, the Optical network controller can make the 675 information shown in Figure 3 available to the Coordinator as part 676 of the TE Topology information and the Coordinator could use this 677 information to calculate by its own the optimal path between R1 and 678 R2, without requesting any additional information to the Optical 679 network Controller. 681 However, when designing the amount of information to provide within 682 the "detailed connectivity matrix", there is a tradeoff to be 683 considered between accuracy (i.e., providing "all" the information 684 that might be needed by the PCE available to Orchestrator) and 685 scalability. 687 Figure 6 below shows another example, similar to Figure 3, where 688 there are two possible Optical paths between VP1 and VP4 with 689 different properties (e.g., available bandwidth and cost). 691 ............................ 
692 : /--------------------\ : 693 : / cost=65 \ : 694 :/ available-bw=10G \: 695 O VP1 VP4 O 696 cost=10 /:\ /:\ cost=10 697 / : \----------------------/ : \ 698 +----+ / : cost=50 : \ +----+ 699 | |/ : available-bw=2G : \| | 700 | R1 | : : | R2 | 701 | |\ : : /| | 702 +----+ \ : /--------------------\ : / +----+ 703 \ : / cost=55 \ : / 704 cost=5 \:/ available-bw=3G \:/ cost=5 705 O VP2 VP5 O 706 : : 707 :..........................: 709 Figure 6 - Packet/Optical Path Computation Example with multiple 710 choices 712 Reporting all the information, as in Figure 6, using the "detailed 713 connectivity matrix", is quite challenging from a scalability 714 perspective. The amount of this information is not just based on 715 number of end points (which would scale as N-square), but also on 716 many other parameters, including client rate, user 717 constraints/policies for the service, e.g. max latency < N ms, max 718 cost, etc., exclusion policies to route around busy links, min OSNR 719 margin, max preFEC BER etc. All these constraints could be different 720 based on connectivity requirements. 722 Examples of how the "detailed connectivity matrix" can be 723 dimensioned are described in Appendix A. 725 It is also worth noting that the "connectivity matrix" has been 726 originally defined in WSON, [RFC7446], to report the connectivity 727 constrains of a physical node within the WDM network: the 728 information it contains is pretty "static" and therefore, once taken 729 and stored in the TE data base, it can be always being considered 730 valid and up-to-date in path computation request. 732 Using the "basic connectivity matrix" with an abstract node to 733 abstract the information regarding the connectivity constraints of 734 an Optical domain, would make this information more "dynamic" since 735 the connectivity constraints of an Optical domain can change over 736 time because some optical paths that are feasible at a given time 737 may become unfeasible at a later time when e.g., another optical 738 path is established. The information in the "detailed connectivity 739 matrix" is even more dynamic since the establishment of another 740 optical path may change some of the parameters (e.g., delay or 741 available bandwidth) in the "detailed connectivity matrix" while not 742 changing the feasibility of the path. 744 The "connectivity matrix" is sometimes confused with optical reach 745 table that contain multiple (e.g. k-shortest) regen-free reachable 746 paths for every A-Z node combination in the network. Optical reach 747 tables can be calculated offline, utilizing vendor optical design 748 and planning tools, and periodically uploaded to the Controller: 749 these optical path reach tables are fairly static. However, to get 750 the connectivity matrix, between any two sites, either a regen free 751 path can be used, if one is available, or multiple regen free paths 752 are concatenated to get from src to dest, which can be a very large 753 combination. Additionally, when the optical path within optical 754 domain needs to be computed, it can result in different paths based 755 on input objective, constraints, and network conditions. In summary, 756 even though "optical reachability table" is fairly static, which 757 regen free paths to build the connectivity matrix between any source 758 and destination is very dynamic, and is done using very 759 sophisticated routing algorithms. 
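As a rough, purely illustrative estimate of the scalability concern discussed above (dimensioning considerations are given in Appendix A), the number of entries in a "detailed connectivity matrix" grows with the square of the number of end points multiplied by the number of client rates and constraint/policy combinations; all figures in the sketch below are assumptions chosen only for illustration:

   # Back-of-the-envelope estimate of "detailed connectivity matrix" size.
   n_endpoints = 100           # access points of the abstract TE node
   n_client_rates = 4          # number of different client rates (assumption)
   n_policy_combinations = 8   # latency/cost/exclusion/OSNR classes (assumption)

   entries = n_endpoints * (n_endpoints - 1) * n_client_rates * n_policy_combinations
   print(entries)  # 316800 entries to compute and to keep up to date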
761 There is therefore the need to keep the information in the "detailed 762 connectivity matrix" updated, which means that there is another 763 tradeoff between the accuracy (i.e., providing "all" the information 764 that might be needed by the client's PCE) and having up-to-date 765 information. The more information that is provided, the longer it 766 takes to keep it up-to-date, which increases the likelihood that the 767 client's PCE computes paths using outdated information. 769 It therefore seems quite challenging to have a "detailed 770 connectivity matrix" that provides accurate, scalable and up-to-date 771 information to allow the client's PCE to take optimal decisions on 772 its own. 774 Instead, if the information in the "detailed connectivity matrix" is 775 not complete/accurate, the following drawbacks can occur, 776 considering for example the case in Figure 6: 778 o If only the VP1-VP4 path with available bandwidth of 2 Gb/s and 779 cost 50 is reported, the client's PCE will fail to compute a 5 780 Gb/s path between routers R1 and R2, although this would be 781 feasible; 783 o If only the VP1-VP4 path with available bandwidth of 10 Gb/s and 784 cost 60 is reported, the client's PCE will compute, as optimal, 785 the 1 Gb/s path between R1 and R2 going through the VP2-VP5 path 786 within the Optical domain, while the optimal path would actually 787 be the one going through the VP1-VP4 sub-path (with cost 50) 788 within the Optical domain. 790 Using the approach proposed in this document, when the client needs 791 to set up an end-to-end path, it can request the Optical domain 792 controller to compute a set of optimal paths (e.g., for VP1-VP4 and 793 VP2-VP5) and take decisions based on the information received: 795 o When setting up a 5 Gb/s path between routers R1 and R2, the 796 Optical domain controller may report only the VP1-VP4 path as the 797 only feasible path: the Orchestrator can successfully set up the 798 end-to-end path passing through this Optical path; 800 o When setting up a 1 Gb/s path between routers R1 and R2, the 801 Optical domain controller (knowing that the path requires only 1 802 Gb/s) can report both the VP1-VP4 path, with cost 50, and the 803 VP2-VP5 path, with cost 65. The Orchestrator can then compute the 804 optimal path, which passes through the VP1-VP4 sub-path (with 805 cost 50) within the Optical domain. 807 3.2.2. TE Topology Abstraction 809 Using the TE Topology model, as defined in [TE-TOPO], the underlying 810 SDN controller can export an abstract TE Topology, composed of a set 811 of TE nodes and TE links, representing the abstract view of the 812 topology controlled by each domain controller. 814 Considering the example in Figure 4, TE domain controller 1 can 815 export a TE Topology encompassing the TE nodes A, B, C and D and the 816 TE Links interconnecting them. In a similar way, TE domain controller 817 2 can export a TE Topology encompassing the TE nodes E, F, G and H 818 and the TE Links interconnecting them. 820 In this example, for simplicity reasons, each abstract TE node maps 821 to one physical node, but this is not necessary. 823 In order to set up a multi-domain TE path (e.g., between nodes A and 824 H), the multi-domain controller can compute an optimal end-to-end 825 path on its own, based on the abstract TE topology information 826 provided by the domain controllers.
For example: 828 o Multi-domain controller's PCE, based on its own information, can 829 compute the optimal multi-domain path being A-B-C-E-G-H, and then 830 request the TE domain controllers to setup the A-B-C and E-G-H 831 intra-domain paths 833 o But, during path setup, the domain controller may find out that 834 A-B-C intra-domain path is not feasible (as discussed in section 835 2.2, in optical networks it is typical to have some paths not 836 being feasible due to optical constraints that are known only by 837 the optical domain controller), while only the path A-B-D is 838 feasible 840 o So what the multi-domain controller computed is not good and need 841 to re-start the path computation from scratch 843 As discussed in section 3.2.1, providing more extensive abstract 844 information from the TE domain controllers to the multi-domain 845 controller may lead to scalability problems. 847 In a sense this is similar to the problem of routing and wavelength 848 assignment within an Optical domain. It is possible to do first 849 routing (step 1) and then wavelength assignment (step 2), but the 850 chances of ending up with a good path is low. Alternatively, it is 851 possible to do combined routing and wavelength assignment, which is 852 known to be a more optimal and effective way for Optical path setup. 853 Similarly, it is possible to first compute an abstract end-to-end 854 path within the multi-domain Orchestrator (step 1) and then compute 855 an intra-domain path within each Optical domain (step 2), but there 856 are more chances not to find a path or to get a suboptimal path that 857 performing per-domain path computation and then stitch them. 859 3.2.3. Complementary use of TE topology and path computation 861 As discussed in section 2.2, there are some scalability issues with 862 path computation requests in a multi-domain TE network with many TE 863 domains, in terms of the number of requests to send to the TE domain 864 controllers. It would therefore be worthwhile using the TE topology 865 information provided by the domain controllers to limit the number 866 of requests. 868 An example can be described considering the multi-domain abstract 869 topology shown in Figure 7. In this example, an end-to-end TE path 870 between domains A and F needs to be setup. The transit domain should 871 be selected between domains B, C, D and E. 873 .........B......... 874 : _ _ _ _ _ _ _ _ : 875 :/ \: 876 +---O NOT FEASIBLE O---+ 877 cost=5| : : | 878 ......A...... | :.................: | ......F...... 879 : : | | : : 880 : O-----+ .........C......... +-----O : 881 : : : /-------------\ : : : 882 : : :/ \: : : 883 : cost<=20 O---------O cost <= 30 O---------O cost<=20 : 884 : /: cost=5 : : cost=5 :\ : 885 : /------/ : :.................: : \------\ : 886 : / : : \ : 887 :/ cost<=25 : .........D......... : cost<=25 \: 888 O-----------O-------+ : /-------------\ : +-------O-----------O 889 :\ : cost=5| :/ \: |cost=5 : /: 890 : \ : +-O cost <= 30 O-+ : / : 891 : \------\ : : : : /------/ : 892 : cost>=30 \: :.................: :/ cost>=30 : 893 : O-----+ +-----O : 894 :...........: | .........E......... | :...........: 895 | : /-------------\ : | 896 cost=5| :/ \: |cost=5 897 +---O cost >= 30 O---+ 898 : : 899 :.................: 901 Figure 7 - Multi-domain with many domains (Topology information) 903 The actual cost of each intra-domain path is not known a priori from 904 the abstract topology information. 
The Multi-domain controller only 905 knows, from the TE topology provided by the underlying domain 906 controllers, the feasibility of some intra-domain paths and some 907 upper-bound and/or lower-bound cost information. With this 908 information, together with the cost of inter-domain links, the 909 Multi-domain controller can understand by its own that: 911 o Domain B cannot be selected as the path connecting domains A and 912 E is not feasible; 914 o Domain E cannot be selected as a transit domain since it is know 915 from the abstract topology information provided by domain 916 controllers that the cost of the multi-domain path A-E-F (which 917 is 100, in the best case) will be always be higher than the cost 918 of the multi-domain paths A-D-F (which is 90, in the worst case) 919 and A-E-F (which is 80, in the worst case) 921 Therefore, the Multi-domain controller can understand by its own 922 that the optimal multi-domain path could be either A-D-F or A-E-F 923 but it cannot known which one of the two possible option actually 924 provides the optimal end-to-end path. 926 The Multi-domain controller can therefore request path computation 927 only to the TE domain controllers A, D, E and F (and not to all the 928 possible TE domain controllers). 930 .........B......... 931 : : 932 +---O O---+ 933 ......A...... | :.................: | ......F...... 934 : : | | : : 935 : O-----+ .........C......... +-----O : 936 : : : /-------------\ : : : 937 : : :/ \: : : 938 : cost=15 O---------O cost = 25 O---------O cost=10 : 939 : /: cost=5 : : cost=5 :\ : 940 : /------/ : :.................: : \------\ : 941 : / : : \ : 942 :/ cost=10 : .........D......... : cost=15 \: 943 O-----------O-------+ : /-------------\ : +-------O-----------O 944 : : cost=5| :/ \: |cost=5 : : 945 : : +-O cost = 15 O-+ : : 946 : : : : : : 947 : : :.................: : : 948 : O-----+ +-----O : 949 :...........: | .........E......... | :...........: 950 | : : | 951 +---O O---+ 952 :.................: 954 Figure 8 - Multi-domain with many domains (Path Computation 955 information) 957 Based on these requests, the Multi-domain controller can know the 958 actual cost of each intra-domain paths which belongs to potential 959 optimal end-to-end paths, as shown in Figure 8, and then compute the 960 optimal end-to-end path (e.g., A-D-F, having total cost of 50, 961 instead of A-C-F having a total cost of 70). 963 3.3. Stateless and Stateful Path Computation 965 The TE Tunnel YANG model, defined in [TE-TUNNEL], can support the 966 need to request path computation. 968 It is possible to request path computation by configuring a 969 "compute-only" TE tunnel and retrieving the computed path(s) in the 970 LSP(s) Record-Route Object (RRO) list as described in section 3.3.1 971 of [TE-TUNNEL]. 973 This is a stateful solution since the state of each created 974 "compute-only" TE tunnel needs to be maintained and updated, when 975 underlying network conditions change. 977 It is very useful to provide options for both stateless and stateful 978 path computation mechanisms. It is suggested to use stateless 979 mechanisms as much as possible and to rely on stateful path 980 computation when really needed. 982 Stateless RPC allows requesting path computation using a simple 983 atomic operation and it is the natural option/choice, especially 984 with stateless PCE. 
986 Since the operation is stateless, there is no guarantee that the 987 returned path would still be available when path setup is requested: 988 this does not cause major issues when the time between path 989 computation and path setup is short (especially when compared with the 990 time that would be needed to update the information of a very 991 detailed connectivity matrix). 993 In most cases, there is even no need to guarantee that the path 994 that has been set up is exactly the same as the path that has 995 been returned by path computation, especially if it has the same or 996 even better metrics. Depending on the abstraction level applied by 997 the server, the client may also not know the actual computed path. 999 The most important requirement is that the required global 1000 objectives (e.g., multi-domain path metrics and constraints) are 1001 met. For this reason, a path verification phase is necessary to 1002 verify that the actual path that has been set up meets the global 1003 objectives (for example, in a multi-domain network, that the resulting 1004 end-to-end path meets the required end-to-end metrics and 1005 constraints). 1007 In most cases, even if the setup path is not exactly the same 1008 as the path returned by path computation, its metrics and 1009 constraints are "good enough" (the path verification passes 1010 successfully). In the few corner cases where the path verification 1011 fails, it is possible to repeat the whole process (path computation, 1012 path setup and path verification). 1014 In case the stateless solution is not sufficient, a stateful 1015 solution, based on a "compute-only" TE tunnel, could be used to get 1016 notifications when the computed path changes. 1018 It is worth noting that the stateful solution, although it 1019 increases the likelihood that the computed path is available at 1020 path setup, does not guarantee it, because notifications may not 1021 be reliable or delivered on time. Path verification is also needed 1022 when stateful path computation is used. 1024 Stateful path computation also has the following drawbacks: 1026 o Several messages are required for each path computation 1028 o Persistent storage is required in the provider controller 1030 o Garbage collection is needed for stranded paths 1032 o Processing is needed to detect changes in the computed paths in order 1033 to provide update notifications 1035 4. Path Computation and Optimization for multiple paths 1037 There are use cases where it is advantageous to request path 1038 computation for a set of paths, through a network or through a 1039 network domain, using a single request [RFC5440]. 1041 In this case, sending a single request for multiple path 1042 computations, instead of sending multiple requests for each path 1043 computation, reduces the protocol overhead and consumes 1044 fewer resources (e.g., threads in the client and server). 1046 In the context of a typical multi-domain TE network, there could be 1047 multiple choices for the ingress/egress points of a domain, and the 1048 Multi-domain controller needs to request path computation between 1049 all the ingress/egress pairs to select the best pair. For example, 1050 in the scenario of section 2.2, the Multi-domain controller needs to 1051 request TE network controller 1 to compute the A-C and the A-D 1052 paths and TE network controller 2 to compute the E-H and the 1053 F-H paths.
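A non-normative sketch of how a single request could carry multiple path computations is shown below for the scenario of section 2.2: the RPC input toward TE domain controller 1 carries two entries of the "path-request" list defined in section 6.1. The structure is built here as a Python dictionary only for illustration; the node addresses and the exact protocol encoding (e.g., JSON member names and module prefixes used on the wire) are assumptions and are not defined by this document:

   # One RPC input carrying two path computation requests (A-C and A-D),
   # mirroring the "path-request" list of the YANG tree in section 6.1.
   rpc_input = {
       "path-request": [
           {"request-id": 1, "source": "10.0.0.1", "destination": "10.0.0.3"},  # A-C
           {"request-id": 2, "source": "10.0.0.1", "destination": "10.0.0.4"},  # A-D
       ]
   }
   # A single request/response exchange returns both computed paths,
   # instead of one exchange per path.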
1055 It is also possible that the Multi-domain controller receives a 1056 request to set up a group of multiple end-to-end connections. The 1057 multi-domain controller then needs to request each TE domain controller 1058 to compute multiple paths, one (or more) for each end-to-end 1059 connection. 1061 There are also scenarios where it is necessary to request path 1062 computation for a set of paths in a synchronized fashion. 1064 One example could be computing multiple diverse paths. Computing a 1065 set of diverse paths in a non-synchronized fashion may make it 1066 impossible to satisfy the diversity requirement. 1067 In this case, it is preferable to compute a sub-optimal primary path 1068 for which a diversely routed secondary path exists. 1070 There are also scenarios where it is necessary to request the 1071 optimization of a set of paths using objective functions that apply to the whole set 1072 of paths, see [RFC5541], e.g. to minimize the sum of the costs of 1073 all the computed paths in the set. 1075 5. YANG Model for requesting Path Computation 1077 This document defines a stateless YANG RPC to request path 1078 computation as an augmentation of the tunnels-rpc defined in [TE- 1079 TUNNEL]. This model provides the RPC input attributes that are 1080 needed to request path computation and the RPC output attributes 1081 that are needed to report the computed paths. 1083 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1084 +---- path-request* [request-id] 1085 ........... 1087 augment /te:tunnels-rpc/te:output/te:result: 1088 +--ro response* [response-id] 1089 +--ro response-id uint32 1090 +--ro (response-type)? 1091 +--:(no-path-case) 1092 | +--ro no-path! 1093 +--:(path-case) 1094 +--ro computed-path 1095 +--ro path-id? yang-types:uuid 1096 +--ro path-properties 1097 ........... 1099 This model extensively re-uses the groupings defined in [TE-TUNNEL] 1100 to ensure maximal syntax and semantics commonality. 1102 5.1. Synchronization of multiple path computation requests 1104 The YANG model allows synchronizing a set of multiple path 1105 requests (identified by their request-id), all related to an "svec" 1106 container emulating the syntax of the "SVEC" PCEP object [RFC5440]. 1108 +---- synchronization* [synchronization-id] 1109 +---- synchronization-id uint32 1110 +---- svec 1111 | +---- relaxable? boolean 1112 | +---- disjointness? te-types:te-path-disjointness 1113 | +---- request-id-number* uint32 1114 +---- svec-constraints 1115 | +---- path-metric-bound* [metric-type] 1116 | +---- metric-type identityref 1117 | +---- upper-bound? uint64 1118 +---- path-srlgs-values 1119 | +---- usage? identityref 1120 | +---- values* srlg 1121 +---- path-srlgs-names 1122 | +---- path-srlgs-name* [usage] 1123 | +---- usage identityref 1124 | +---- srlg-name* [name] 1125 | +---- name string 1126 +---- exclude-objects 1127 ........... 1128 +---- optimizations 1129 +---- (algorithm)? 1130 +--:(metric) 1131 | +---- optimization-metric* [metric-type] 1132 | +---- metric-type identityref 1133 | +---- weight? uint8 1134 +--:(objective-function) 1135 +---- objective-function 1136 +---- objective-function-type? identityref 1138 In addition to the metric types defined in [TE-TUNNEL], which can 1139 be applied to each individual path request, the model defines 1140 additional metric types that apply to a set of 1141 synchronized requests, as referenced in [RFC5541].
1143 identity svec-metric-type { 1144 description 1145 "Base identity for svec metric type"; 1146 } 1148 identity svec-metric-cumul-te { 1149 base svec-metric-type; 1150 description 1151 "TE cumulative path metric"; 1152 } 1154 identity svec-metric-cumul-igp { 1155 base svec-metric-type; 1156 description 1157 "IGP cumulative path metric"; 1158 } 1160 identity svec-metric-cumul-hop { 1161 base svec-metric-type; 1162 description 1163 "Hop cumulative path metric"; 1164 } 1166 identity svec-metric-aggregate-bandwidth-consumption { 1167 base svec-metric-type; 1168 description 1169 "Cumulative bandwith consumption of the set of synchronized 1170 paths"; 1171 } 1172 identity svec-metric-load-of-the-most-loaded-link { 1173 base svec-metric-type; 1174 description 1175 "Load of the most loaded link"; 1176 } 1178 5.2. Returned metric values 1180 This YANG model provides a way to return the values of the metrics 1181 computed by the path computation in the output of RPC, together with 1182 other important information (e.g. srlg, affinities, explicit route), 1183 emulating the syntax of the "C" flag of the "METRIC" PCEP object 1184 [RFC 5440]: 1186 augment /te:tunnels-rpc/te:output/te:result: 1187 +--ro response* [response-id] 1188 +--ro response-id uint32 1189 +--ro (response-type)? 1190 +--:(no-path-case) 1191 | +--ro no-path! 1192 +--:(path-case) 1193 +--ro computed-path 1194 +--ro path-id? yang-types:uuid 1195 +--ro path-properties 1196 +--ro path-metric* [metric-type] 1197 | +--ro metric-type identityref 1198 | +--ro accumulative-value? uint64 1199 +--ro path-affinities-values 1200 | +--ro path-affinities-value* [usage] 1201 | +--ro usage identityref 1202 | +--ro value? admin-groups 1203 +--ro path-affinity-names 1204 | +--ro path-affinity-name* [usage] 1205 | +--ro usage identityref 1206 | +--ro affinity-name* [name] 1207 | +--ro name string 1208 +--ro path-srlgs-values 1209 | +--ro usage? identityref 1210 | +--ro values* srlg 1211 +--ro path-srlgs-names 1212 | +--ro path-srlgs-name* [usage] 1213 | +--ro usage identityref 1214 | +--ro srlg-name* [name] 1215 | +--ro name string 1216 +--ro path-route-objects 1217 ........... 1219 It also allows to request in the input of RPC which information 1220 (metrics, srlg and/or affinities) should be returned: 1222 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1223 +---- path-request* [request-id] 1224 | +---- request-id uint32 1225 ........... 1226 | +---- requested-metrics* [metric-type] 1227 | | +---- metric-type identityref 1228 | +---- return-srlgs? boolean 1229 | +---- return-affinities? Boolean 1230 ........... 1232 This feature is essential for using a stateless path computation in 1233 a multi-domain TE network as described in section 2.2. In this case, 1234 the metrics returned by a path computation requested to a given TE 1235 network controller must be used by the client to compute the best 1236 end-to-end path. If they are missing the client cannot compare 1237 different paths calculated by the TE network controllers and choose 1238 the best one for the optimal e2e path. 1240 6. YANG model for stateless TE path computation 1242 6.1. YANG Tree 1244 Figure 9 below shows the tree diagram of the YANG model defined in 1245 module ietf-te-path-computation.yang. 1247 module: ietf-te-path-computation 1248 augment /te:tunnels-rpc/te:input/te:tunnel-info: 1249 +---- path-request* [request-id] 1250 | +---- request-id uint32 1251 | +---- te-topology-identifier 1252 | | +---- provider-id? te-types:te-global-id 1253 | | +---- client-id? 
te-types:te-global-id 1254 | | +---- topology-id? te-types:te-topology-id 1255 | +---- source? inet:ip-address 1256 | +---- destination? inet:ip-address 1257 | +---- src-tp-id? binary 1258 | +---- dst-tp-id? binary 1259 | +---- bidirectional? boolean 1260 | +---- encoding? identityref 1261 | +---- switching-type? identityref 1262 | +---- explicit-route-objects 1263 | | +---- route-object-exclude-always* [index] 1264 | | | +---- index uint32 1265 | | | +---- (type)? 1266 | | | +--:(num-unnum-hop) 1267 | | | | +---- num-unnum-hop 1268 | | | | +---- node-id? te-types:te-node-id 1269 | | | | +---- link-tp-id? te-types:te-tp-id 1270 | | | | +---- hop-type? te-hop-type 1271 | | | | +---- direction? te-link-direction 1272 | | | +--:(as-number) 1273 | | | | +---- as-number-hop 1274 | | | | +---- as-number? binary 1275 | | | | +---- hop-type? te-hop-type 1276 | | | +--:(label) 1277 | | | +---- label-hop 1278 | | | +---- te-label 1279 | | | +---- (technology)? 1280 | | | | +--:(generic) 1281 | | | | +---- generic? rt- 1282 types:generalized-label 1283 | | | +---- direction? te-label-direction 1284 | | +---- route-object-include-exclude* [index] 1285 | | +---- explicit-route-usage? identityref 1286 | | +---- index uint32 1287 | | +---- (type)? 1288 | | +--:(num-unnum-hop) 1289 | | | +---- num-unnum-hop 1290 | | | +---- node-id? te-types:te-node-id 1291 | | | +---- link-tp-id? te-types:te-tp-id 1292 | | | +---- hop-type? te-hop-type 1293 | | | +---- direction? te-link-direction 1294 | | +--:(as-number) 1295 | | | +---- as-number-hop 1296 | | | +---- as-number? binary 1297 | | | +---- hop-type? te-hop-type 1298 | | +--:(label) 1299 | | | +---- label-hop 1300 | | | +---- te-label 1301 | | | +---- (technology)? 1302 | | | | +--:(generic) 1303 | | | | +---- generic? rt- 1304 types:generalized-label 1305 | | | +---- direction? te-label-direction 1306 | | +--:(srlg) 1307 | | +---- srlg 1308 | | +---- srlg? uint32 1309 | +---- path-constraints 1310 | | +---- te-bandwidth 1311 | | | +---- (technology)? 1312 | | | +--:(generic) 1313 | | | +---- generic? te-bandwidth 1314 | | +---- setup-priority? uint8 1315 | | +---- hold-priority? uint8 1316 | | +---- signaling-type? identityref 1317 | | +---- path-metric-bounds 1318 | | | +---- path-metric-bound* [metric-type] 1319 | | | +---- metric-type identityref 1320 | | | +---- upper-bound? uint64 1321 | | +---- path-affinities-values 1322 | | | +---- path-affinities-value* [usage] 1323 | | | +---- usage identityref 1324 | | | +---- value? admin-groups 1325 | | +---- path-affinity-names 1326 | | | +---- path-affinity-name* [usage] 1327 | | | +---- usage identityref 1328 | | | +---- affinity-name* [name] 1329 | | | +---- name string 1330 | | +---- path-srlgs-values 1331 | | | +---- usage? identityref 1332 | | | +---- values* srlg 1333 | | +---- path-srlgs-names 1334 | | | +---- path-srlgs-name* [usage] 1335 | | | +---- usage identityref 1336 | | | +---- srlg-name* [name] 1337 | | | +---- name string 1338 | | +---- disjointness? te-types:te-path- 1339 disjointness 1340 | +---- optimizations 1341 | | +---- (algorithm)? 1342 | | +--:(metric) {path-optimization-metric}? 1343 | | | +---- optimization-metric* [metric-type] 1344 | | | | +---- metric-type 1345 identityref 1346 | | | | +---- weight? uint8 1347 | | | | +---- explicit-route-exclude-objects 1348 | | | | | +---- route-object-exclude-object* [index] 1349 | | | | | +---- index uint32 1350 | | | | | +---- (type)? 1351 | | | | | +--:(num-unnum-hop) 1352 | | | | | | +---- num-unnum-hop 1353 | | | | | | +---- node-id? 
te-types:te- 1354 node-id 1355 | | | | | | +---- link-tp-id? te-types:te- 1356 tp-id 1357 | | | | | | +---- hop-type? te-hop-type 1358 | | | | | | +---- direction? te-link- 1359 direction 1360 | | | | | +--:(as-number) 1361 | | | | | | +---- as-number-hop 1362 | | | | | | +---- as-number? binary 1363 | | | | | | +---- hop-type? te-hop-type 1364 | | | | | +--:(label) 1365 | | | | | | +---- label-hop 1366 | | | | | | +---- te-label 1367 | | | | | | +---- (technology)? 1368 | | | | | | | +--:(generic) 1369 | | | | | | | +---- generic? rt- 1370 types:generalized-label 1371 | | | | | | +---- direction? te-label- 1372 direction 1373 | | | | | +--:(srlg) 1374 | | | | | +---- srlg 1375 | | | | | +---- srlg? uint32 1376 | | | | +---- explicit-route-include-objects 1377 | | | | +---- route-object-include-object* [index] 1378 | | | | +---- index uint32 1379 | | | | +---- (type)? 1380 | | | | +--:(num-unnum-hop) 1381 | | | | | +---- num-unnum-hop 1382 | | | | | +---- node-id? te-types:te- 1383 node-id 1384 | | | | | +---- link-tp-id? te-types:te- 1385 tp-id 1386 | | | | | +---- hop-type? te-hop-type 1387 | | | | | +---- direction? te-link- 1388 direction 1389 | | | | +--:(as-number) 1390 | | | | | +---- as-number-hop 1391 | | | | | +---- as-number? binary 1392 | | | | | +---- hop-type? te-hop-type 1393 | | | | +--:(label) 1394 | | | | +---- label-hop 1395 | | | | +---- te-label 1396 | | | | +---- (technology)? 1397 | | | | | +--:(generic) 1398 | | | | | +---- generic? rt- 1399 types:generalized-label 1400 | | | | +---- direction? te-label- 1401 direction 1402 | | | +---- tiebreakers 1403 | | | +---- tiebreaker* [tiebreaker-type] 1404 | | | +---- tiebreaker-type identityref 1405 | | +--:(objective-function) {path-optimization-objective- 1406 function}? 1407 | | +---- objective-function 1408 | | +---- objective-function-type? identityref 1409 | +---- requested-metrics* [metric-type] 1410 | | +---- metric-type identityref 1411 | +---- return-srlgs? boolean 1412 | +---- return-affinities? boolean 1413 | +---- path-in-segment! 1414 | | +---- label-restrictions 1415 | | +---- label-restriction* [index] 1416 | | +---- restriction? enumeration 1417 | | +---- index uint32 1418 | | +---- label-start 1419 | | | +---- te-label 1420 | | | +---- (technology)? 1421 | | | | +--:(generic) 1422 | | | | +---- generic? rt-types:generalized- 1423 label 1424 | | | +---- direction? te-label-direction 1425 | | +---- label-end 1426 | | | +---- te-label 1427 | | | +---- (technology)? 1428 | | | | +--:(generic) 1429 | | | | +---- generic? rt-types:generalized- 1430 label 1431 | | | +---- direction? te-label-direction 1432 | | +---- label-step 1433 | | | +---- (technology)? 1434 | | | +--:(generic) 1435 | | | +---- generic? int32 1436 | | +---- range-bitmap? binary 1437 | +---- path-out-segment! 1438 | +---- label-restrictions 1439 | +---- label-restriction* [index] 1440 | +---- restriction? enumeration 1441 | +---- index uint32 1442 | +---- label-start 1443 | | +---- te-label 1444 | | +---- (technology)? 1445 | | | +--:(generic) 1446 | | | +---- generic? rt-types:generalized- 1447 label 1448 | | +---- direction? te-label-direction 1449 | +---- label-end 1450 | | +---- te-label 1451 | | +---- (technology)? 1452 | | | +--:(generic) 1453 | | | +---- generic? rt-types:generalized- 1454 label 1455 | | +---- direction? te-label-direction 1456 | +---- label-step 1457 | | +---- (technology)? 1458 | | +--:(generic) 1459 | | +---- generic? int32 1460 | +---- range-bitmap? 
binary 1461 +---- synchronization* [synchronization-id] 1462 +---- synchronization-id uint32 1463 +---- svec 1464 | +---- relaxable? boolean 1465 | +---- disjointness? te-types:te-path-disjointness 1466 | +---- request-id-number* uint32 1467 +---- svec-constraints 1468 | +---- path-metric-bound* [metric-type] 1469 | +---- metric-type identityref 1470 | +---- upper-bound? uint64 1471 +---- path-srlgs-values 1472 | +---- usage? identityref 1473 | +---- values* srlg 1474 +---- path-srlgs-names 1475 | +---- path-srlgs-name* [usage] 1476 | +---- usage identityref 1477 | +---- srlg-name* [name] 1478 | +---- name string 1479 +---- exclude-objects 1480 | +---- excludes* [index] 1481 | +---- index uint32 1482 | +---- (type)? 1483 | +--:(num-unnum-hop) 1484 | | +---- num-unnum-hop 1485 | | +---- node-id? te-types:te-node-id 1486 | | +---- link-tp-id? te-types:te-tp-id 1487 | | +---- hop-type? te-hop-type 1488 | | +---- direction? te-link-direction 1489 | +--:(as-number) 1490 | | +---- as-number-hop 1491 | | +---- as-number? binary 1492 | | +---- hop-type? te-hop-type 1493 | +--:(label) 1494 | +---- label-hop 1495 | +---- te-label 1496 | +---- (technology)? 1497 | | +--:(generic) 1498 | | +---- generic? rt- 1499 types:generalized-label 1500 | +---- direction? te-label-direction 1501 +---- optimizations 1502 +---- (algorithm)? 1503 +--:(metric) 1504 | +---- optimization-metric* [metric-type] 1505 | +---- metric-type identityref 1506 | +---- weight? uint8 1507 +--:(objective-function) 1508 +---- objective-function 1509 +---- objective-function-type? identityref 1510 augment /te:tunnels-rpc/te:output/te:result: 1511 +--ro response* [response-id] 1512 +--ro response-id uint32 1513 +--ro (response-type)? 1514 +--:(no-path-case) 1515 | +--ro no-path! 1516 +--:(path-case) 1517 +--ro computed-path 1518 +--ro path-id? yang-types:uuid 1519 +--ro path-properties 1520 +--ro path-metric* [metric-type] 1521 | +--ro metric-type identityref 1522 | +--ro accumulative-value? uint64 1523 +--ro path-affinities-values 1524 | +--ro path-affinities-value* [usage] 1525 | +--ro usage identityref 1526 | +--ro value? admin-groups 1527 +--ro path-affinity-names 1528 | +--ro path-affinity-name* [usage] 1529 | +--ro usage identityref 1530 | +--ro affinity-name* [name] 1531 | +--ro name string 1532 +--ro path-srlgs-values 1533 | +--ro usage? identityref 1534 | +--ro values* srlg 1535 +--ro path-srlgs-names 1536 | +--ro path-srlgs-name* [usage] 1537 | +--ro usage identityref 1538 | +--ro srlg-name* [name] 1539 | +--ro name string 1540 +--ro path-route-objects 1541 +--ro path-route-object* [index] 1542 +--ro index uint32 1543 +--ro (type)? 1544 +--:(num-unnum-hop) 1545 | +--ro num-unnum-hop 1546 | +--ro node-id? te-types:te- 1547 node-id 1548 | +--ro link-tp-id? te-types:te- 1549 tp-id 1550 | +--ro hop-type? te-hop-type 1551 | +--ro direction? te-link- 1552 direction 1553 +--:(as-number) 1554 | +--ro as-number-hop 1555 | +--ro as-number? binary 1556 | +--ro hop-type? te-hop-type 1557 +--:(label) 1558 +--ro label-hop 1559 +--ro te-label 1560 +--ro (technology)? 1561 | +--:(generic) 1562 | +--ro generic? rt- 1563 types:generalized-label 1564 +--ro direction? te-label- 1565 direction 1567 Figure 9 - TE path computation YANG tree 1569 6.2. 
YANG Module 1571 file "ietf-te-path-computation@2018-10-22.yang" 1572 module ietf-te-path-computation { 1573 yang-version 1.1; 1574 namespace "urn:ietf:params:xml:ns:yang:ietf-te-path-computation"; 1575 // replace with IANA namespace when assigned 1577 prefix "tepc"; 1579 import ietf-inet-types { 1580 prefix "inet"; 1581 } 1583 import ietf-yang-types { 1584 prefix "yang-types"; 1585 } 1587 import ietf-te { 1588 prefix "te"; 1589 } 1591 import ietf-te-types { 1592 prefix "te-types"; 1593 } 1595 organization 1596 "Traffic Engineering Architecture and Signaling (TEAS) 1597 Working Group"; 1599 contact 1600 "WG Web: 1601 WG List: 1602 WG Chair: Lou Berger 1603 1605 WG Chair: Vishnu Pavan Beeram 1606 1608 "; 1610 description "YANG model for stateless TE path computation"; 1612 revision "2018-10-22" { 1613 description 1614 "Initial revision"; 1615 reference 1616 "draft-ietf-teas-yang-path-computation"; 1617 } 1619 /* 1620 * Features 1621 */ 1623 feature stateless-path-computation { 1624 description 1625 "This feature indicates that the system supports 1626 stateless path computation."; 1627 } 1629 /* 1630 * Groupings 1631 */ 1633 grouping path-info { 1634 leaf path-id { 1635 type yang-types:uuid; 1636 config false; 1637 description "path-id ref."; 1638 } 1639 uses te-types:generic-path-properties; 1640 description "Path computation output information"; 1641 } 1643 grouping requested-info { 1644 description 1645 "This grouping defines the information (e.g., metrics) 1646 which must be returned in the response"; 1647 list requested-metrics { 1648 key 'metric-type'; 1649 description 1650 "The list of the requested metrics 1651 The metrics listed here must be returned in the response. 1652 Returning other metrics in the response is optional."; 1653 leaf metric-type { 1654 type identityref { 1655 base te-types:path-metric-type; 1656 } 1657 description 1658 "The metric that must be returned in the response"; 1659 } 1660 } 1661 leaf return-srlgs { 1662 type boolean; 1663 default false; 1664 description 1665 "If true, path srlgs must be returned in the response. 1666 If false, returning path srlgs in the response optional."; 1667 } 1668 leaf return-affinities { 1669 type boolean; 1670 default false; 1671 description 1672 "If true, path affinities must be returned in the response. 
1673 If false, returning path affinities in the response is 1674 optional."; 1675 } 1676 } 1677 identity svec-metric-type { 1678 description 1679 "Base identity for svec metric type"; 1680 } 1682 identity svec-metric-cumul-te { 1683 base svec-metric-type; 1684 description 1685 "TE cumulative path metric"; 1686 } 1688 identity svec-metric-cumul-igp { 1689 base svec-metric-type; 1690 description 1691 "IGP cumulative path metric"; 1692 } 1694 identity svec-metric-cumul-hop { 1695 base svec-metric-type; 1696 description 1697 "Hop cumulative path metric"; 1698 } 1700 identity svec-metric-aggregate-bandwidth-consumption { 1701 base svec-metric-type; 1702 description 1703 "Cumulative bandwith consumption of the set of synchronized 1704 paths"; 1705 } 1707 identity svec-metric-load-of-the-most-loaded-link { 1708 base svec-metric-type; 1709 description 1710 "Load of the most loaded link"; 1711 } 1713 grouping svec-metrics-bounds_config { 1714 description "TE path metric bounds grouping for computing a set 1715 of 1716 synchronized requests"; 1717 leaf metric-type { 1718 type identityref { 1719 base svec-metric-type; 1720 } 1721 description "TE path metric type usable for computing a set of 1722 synchronized requests"; 1723 } 1724 leaf upper-bound { 1725 type uint64; 1726 description "Upper bound on end-to-end svec path metric"; 1727 } 1728 } 1730 grouping svec-metrics-optimization_config { 1731 description "TE path metric bounds grouping for computing a set 1732 of 1733 synchronized requests"; 1734 leaf metric-type { 1735 type identityref { 1736 base svec-metric-type; 1737 } 1738 description "TE path metric type usable for computing a set of 1739 synchronized requests"; 1740 } 1741 leaf weight { 1742 type uint8; 1743 description "Metric normalization weight"; 1744 } 1745 } 1747 grouping svec-exclude { 1748 description "List of resources to be excluded by all the paths 1749 in the SVEC"; 1750 container exclude-objects { 1751 description "resources to be excluded"; 1752 list excludes { 1753 key index; 1754 description 1755 "List of explicit route objects to always exclude 1756 from synchronized path computation"; 1757 leaf index { 1758 type uint32; 1759 description "XRO subobject index"; 1760 } 1761 uses te-types:explicit-route-hop; 1762 } 1763 } 1764 } 1766 grouping synchronization-constraints { 1767 description "Global constraints applicable to synchronized 1768 path computation"; 1769 container svec-constraints { 1770 description "global svec constraints"; 1771 list path-metric-bound { 1772 key metric-type; 1773 description "list of bound metrics"; 1774 uses svec-metrics-bounds_config; 1775 } 1776 } 1777 uses te-types:generic-path-srlgs; 1778 uses svec-exclude; 1779 } 1781 grouping synchronization-optimization { 1782 description "Synchronized request optimization"; 1783 container optimizations { 1784 description 1785 "The objective function container that includes 1786 attributes to impose when computing a synchronized set of 1787 paths"; 1789 choice algorithm { 1790 description "Optimizations algorithm."; 1791 case metric { 1792 list optimization-metric { 1793 key "metric-type"; 1794 description "svec path metric type"; 1795 uses svec-metrics-optimization_config; 1796 } 1797 } 1798 case objective-function { 1799 container objective-function { 1800 description 1801 "The objective function container that includes 1802 attributes to impose when computing a TE path"; 1803 uses te-types:path-objective-function_config; 1804 } 1805 } 1806 } 1807 } 1808 } 1810 grouping synchronization-info { 1811 
description "Information for sync"; 1812 list synchronization { 1813 key "synchronization-id"; 1814 description "sync list"; 1815 leaf synchronization-id { 1816 type uint32; 1817 description "index"; 1818 } 1819 container svec { 1820 description 1821 "Synchronization VECtor"; 1822 leaf relaxable { 1823 type boolean; 1824 default true; 1825 description 1826 "If this leaf is true, path computation process is free 1827 to ignore svec content. 1828 otherwise it must take into account this svec."; 1829 } 1830 uses te-types:generic-path-disjointness; 1831 leaf-list request-id-number { 1832 type uint32; 1833 description "This list reports the set of M path 1834 computation 1835 requests that must be synchronized."; 1836 } 1837 } 1838 uses synchronization-constraints; 1839 uses synchronization-optimization; 1840 } 1841 } 1843 grouping no-path-info { 1844 description "no-path-info"; 1845 container no-path { 1846 presence "Response without path information, due to failure 1847 performing the path computation"; 1848 description "if path computation cannot identify a path, 1849 rpc returns no path."; 1850 } 1851 } 1853 /* 1854 * These groupings should be removed when defined in te-types 1855 */ 1857 grouping encoding-and-switching-type { 1858 description 1859 "Common grouping to define the LSP encoding and switching 1860 types"; 1862 leaf encoding { 1863 type identityref { 1864 base te-types:lsp-encoding-types; 1865 } 1866 description "LSP encoding type"; 1867 reference "RFC3945"; 1868 } 1869 leaf switching-type { 1870 type identityref { 1871 base te-types:switching-capabilities; 1873 } 1874 description "LSP switching type"; 1875 reference "RFC3945"; 1876 } 1877 } 1879 grouping end-points { 1880 description 1881 "Common grouping to define the TE tunnel end-points"; 1883 leaf source { 1884 type inet:ip-address; 1885 description "TE tunnel source address."; 1886 } 1887 leaf destination { 1888 type inet:ip-address; 1889 description "P2P tunnel destination address"; 1890 } 1891 leaf src-tp-id { 1892 type binary; 1893 description "TE tunnel source termination point identifier."; 1894 } 1895 leaf dst-tp-id { 1896 type binary; 1897 description "TE tunnel destination termination point 1898 identifier."; 1899 } 1900 leaf bidirectional { 1901 type boolean; 1902 default 'false'; 1903 description "TE tunnel bidirectional"; 1904 } 1905 } 1907 /** 1908 * AUGMENTS TO TE RPC 1909 */ 1911 augment "/te:tunnels-rpc/te:input/te:tunnel-info" { 1912 description "statelessComputeP2PPath input"; 1913 list path-request { 1914 key "request-id"; 1915 description "request-list"; 1916 leaf request-id { 1917 type uint32; 1918 mandatory true; 1919 description "Each path computation request is uniquely 1920 identified by the request-id-number. 
1921 It must also be returned in the RPC response.";
1922 }
1923 uses te-types:te-topology-identifier;
1924 uses end-points;
1925 uses encoding-and-switching-type;
1926 uses te-types:path-route-objects;
1927 uses te-types:generic-path-constraints;
1928 uses te-types:generic-path-optimization;
1929 uses requested-info;
1930 uses te:path-access-segment-info;
1931 }
1932 uses synchronization-info;
1933 }
1935 augment "/te:tunnels-rpc/te:output/te:result" {
1936 description "statelessComputeP2PPath output";
1937 list response {
1938 key response-id;
1939 config false;
1940 description "response";
1941 leaf response-id {
1942 type uint32;
1943 description
1944 "The list key; it reuses the request-id of the corresponding request.";
1945 }
1946 choice response-type {
1947 config false;
1948 description "response-type";
1949 case no-path-case {
1950 uses no-path-info;
1952 }
1953 case path-case {
1954 container computed-path {
1955 uses path-info;
1956 description "Path computation service.";
1957 }
1958 }
1959 }
1960 }
1961 }
1962 }
1964
1966 Figure 10 - TE path computation YANG module
1968 7. Security Considerations
1970 This document describes use cases of requesting Path Computation
1971 using YANG models, which could be used at the ABNO Control Interface
1972 [RFC7491] and/or between controllers in ACTN [ACTN-Frame]. As such,
1973 it does not introduce any new security considerations compared to
1974 those related to the YANG specification, the ABNO specification and the
1975 ACTN framework, defined in [RFC7950], [RFC7491] and [ACTN-Frame].
1977 The YANG module defined in this draft is designed to be accessed via
1978 the NETCONF protocol [RFC6241] or the RESTCONF protocol [RFC8040]. The
1979 lowest NETCONF layer is the secure transport layer, and the
1980 mandatory-to-implement secure transport is Secure Shell (SSH)
1981 [RFC6242]. The lowest RESTCONF layer is HTTPS, and the mandatory-to-
1982 implement secure transport is TLS [RFC5246].
1984 This document also defines common data types using the YANG data
1985 modeling language. The definitions themselves have no security
1986 impact on the Internet, but the usage of these definitions in
1987 concrete YANG modules might have. The security considerations
1988 spelled out in the YANG specification [RFC7950] apply for this
1989 document as well.
1991 The NETCONF access control model [RFC6536] provides the means to
1992 restrict access for particular NETCONF or RESTCONF users to a
1993 preconfigured subset of all available NETCONF or RESTCONF protocol
1994 operations and content.
1996 Note - The security analysis of each leaf is for further study.
1998 [Editor's note:] Complete the security analysis. Check whether it is
1999 sufficient to just reference the TE Tunnel draft, since the RPC exposes the
2000 same information that can be exposed by the te-tunnel model.
2002 8. IANA Considerations
2004 This document registers the following URI in the IETF XML registry
2005 [RFC3688]. Following the format in [RFC3688], the following
2006 registration is requested.
2008 URI: urn:ietf:params:xml:ns:yang:ietf-te-path-computation
2009 XML: N/A, the requested URI is an XML namespace.
2011 This document registers a YANG module in the YANG Module Names
2012 registry [RFC7950].
2014 name: ietf-te-path-computation
2015 namespace: urn:ietf:params:xml:ns:yang:ietf-te-path-computation
2016 prefix: tepc
2018 9. References
2020 9.1. Normative References
2022 [RFC5541] Le Roux, JL. et al., "Encoding of Objective Functions in
2023 the Path Computation Element Communication Protocol
2024 (PCEP)", RFC 5541, June 2009.
2026 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed.,
2027 and A. Bierman, Ed., "Network Configuration Protocol
2028 (NETCONF)", RFC 6241, June 2011.
2030 [RFC7491] Farrel, A., King, D., "A PCE-Based Architecture for
2031 Application-Based Network Operations", RFC 7491,
2032 March 2015.
2034 [RFC7926] Farrel, A. et al., "Problem Statement and Architecture for
2035 Information Exchange Between Interconnected Traffic
2036 Engineered Networks", RFC 7926, July 2016.
2038 [RFC7950] Bjorklund, M., "The YANG 1.1 Data Modeling Language", RFC
2039 7950, August 2016.
2041 [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
2042 Protocol", RFC 8040, January 2017.
2044 [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
2045 draft-ietf-teas-yang-te-topo, work in progress.
2047 [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
2048 Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
2049 te, work in progress.
2051 [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for
2052 Abstraction and Control of Traffic Engineered Networks",
2053 draft-ietf-teas-actn-framework, work in progress.
2055 [ACTN-Info] Lee, Y., Belotti, S., Dhody, D., Ceccarelli, D.,
2056 "Information Model for Abstraction and Control of
2057 Transport Networks", draft-ietf-teas-actn-info-model,
2058 work in progress.
2060 9.2. Informative References
2062 [RFC4655] Farrel, A. et al., "A Path Computation Element (PCE)-Based
2063 Architecture", RFC 4655, August 2006.
2065 [RFC7139] Zhang, F. et al., "GMPLS Signaling Extensions for Control
2066 of Evolving G.709 Optical Transport Networks", RFC 7139,
2067 March 2014.
2069 [RFC7446] Lee, Y. et al., "Routing and Wavelength Assignment
2070 Information Model for Wavelength Switched Optical
2071 Networks", RFC 7446, February 2015.
2073 [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
2074 Transport Network Topology", draft-ietf-ccamp-otn-topo-
2075 yang, work in progress.
2077 [PCEP-Service-Aware] Dhody, D. et al., "Extensions to the Path
2078 Computation Element Communication Protocol (PCEP) to
2079 compute service aware Label Switched Path (LSP)", draft-
2080 ietf-pce-pcep-service-aware, work in progress.
2082 [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interface
2083 for the optical transport network", June 2016.
2085 10. Acknowledgments
2087 The authors would like to thank Igor Bryskin and Xian Zhang for
2088 participating in discussions and providing valuable insights.
2090 The authors would like to thank the authors of the TE Tunnel YANG
2091 model [TE-TUNNEL], in particular Igor Bryskin, Vishnu Pavan Beeram,
2092 Tarek Saad and Xufeng Liu, for their inputs to the discussions and
2093 support in keeping the Path Computation and TE Tunnel YANG models
2094 consistent.
2096 This document was prepared using 2-Word-v2.0.template.dot.
2098 Appendix A. Examples of dimensioning the "detailed connectivity matrix"
2100 The following table reports the list of possible constraints,
2101 together with their potential cardinality.
2103 The maximum number of potential connections to be computed and
2104 reported is, to a first approximation, the product of all these
2105 cardinalities (see also the illustrative sketch after the table).
2107 Constraint Cardinality
2108 ---------- -------------------------------------------------------
2110 End points N(N-1)/2 if connections are bidirectional (OTN and WDM),
2111 N(N-1) for unidirectional connections.
2113 Bandwidth In WDM networks, bandwidth values are expressed in GHz.
2115 On fixed-grid WDM networks, the central frequencies are
2116 on a 50GHz grid and the channel width of the transmitters
2117 is typically 50GHz, such that each central frequency can
2118 be used, i.e., adjacent channels can be placed next to
2119 each other in terms of central frequencies.
2121 On flex-grid WDM networks, the central frequencies are on
2122 a 6.25GHz grid and the channel width of the transmitters
2123 can be multiples of 12.5GHz.
2125 For fixed-grid WDM networks there is typically only one
2126 possible bandwidth value (i.e., 50GHz), while for flex-
2127 grid WDM networks there are typically 4 possible
2128 bandwidth values (e.g., 37.5GHz, 50GHz, 62.5GHz, 75GHz).
2130 In OTN (ODU) networks, bandwidth values are expressed as
2131 pairs of ODU type and, in the case of ODUflex, ODU rate in
2132 bytes/sec, as described in section 5 of [RFC7139].
2134 For "fixed" ODUk types, 6 bandwidth values are
2135 possible (i.e., ODU0, ODU1, ODU2, ODU2e, ODU3, ODU4).
2137 For ODUflex(GFP), up to 80 different bandwidth values can
2138 be specified, as defined in Table 7-8 of [ITU-T G.709-
2139 2016].
2141 For other ODUflex types, like ODUflex(CBR), the number of
2142 possible bandwidth values depends on the rates of the
2143 clients that could be mapped over these ODUflex types, as
2144 shown in Table 7.2 of [ITU-T G.709-2016], which in theory
2145 could be a continuum of values. However, since different
2146 ODUflex bandwidths that use the same number of TSs on
2147 each link along the path are equivalent for path
2148 computation purposes, up to 120 different bandwidth
2149 ranges can be specified.
2151 Ideas to reduce the number of ODUflex bandwidth values in
2152 the detailed connectivity matrix, to less than 100, are
2153 for further study.
2155 [Editor's note:] It is possible to follow an approach similar to the
2156 one proposed for IP networks and report fewer optimal paths for
2157 ODUflex ranges of rates, which could be one or more consecutive
2158 ranges of the theoretical 120 bandwidth ranges.
2160 Another simplification could be not to report optimal paths for
2161 bandwidth ranges for which no client mapping is defined.
2163 More research about this alternative is needed.
2165 Bandwidth specification for ODUCn is currently for
2166 further study, but it is expected that other bandwidth
2167 values can be specified as integer multiples of 100Gb/s.
2169 In IP networks, bandwidth values are expressed in bytes/sec.
2170 In principle, this is a continuum of values, but in
2171 practice we can identify a set of bandwidth ranges, where
2172 any bandwidth value inside the same range produces the
2173 same path.
2174 The number of such ranges is the cardinality, which
2175 depends on the topology, available bandwidth and status
2176 of the network. Simulations (Note: reference paper
2177 submitted for publication) show that values for medium
2178 size topologies (around 50-150 nodes) are in the range 4-
2179 7 (5 on average) for each couple of end points.
2181 [Editor's note (Francesco):] Inform us as soon as the paper is
2182 published to add the reference to this document.
2184 Metrics IGP, TE and hop number are the basic objective metrics
2185 defined so far. There are also the 2 objective functions
2186 defined in [RFC5541]: Minimum Load Path (MLP) and Maximum
2187 Residual Bandwidth Path (MBP). Assuming that only one
2188 metric or objective function can be optimized at a time,
2189 the total cardinality here is 5.
2191 With [PCEP-Service-Aware], a number of additional metrics
2192 are defined, including the Path Delay metric, the Path Delay
2193 Variation metric and the Path Loss metric, both for point-to-
2194 point and point-to-multipoint paths. This increases the
2195 cardinality to 8.
2197 Bounds Each metric can be associated with a bound in order to
2198 find a path having a total value of that metric lower
2199 than the given bound. This has a potentially very high
2200 cardinality (as any value for the bound is allowed). In
2201 practice there is a maximum value of the bound (the one
2202 corresponding to the maximum value of the associated
2203 metric) which always results in the same path, and a range
2204 approach like the one used for bandwidth in IP can also be
2205 used to derive the cardinality in this case. Assuming a
2206 cardinality similar to that of the bandwidth (say 5 on
2207 average), we would have a cardinality of 6 (IGP, TE, hop,
2208 path delay, path delay variation and path loss; we do not
2209 consider here the two objective functions of [RFC5541], as
2210 they are conceived only for optimization) * 5 = 30.
2212 Technology
2213 constraints For further study
2215 [Editor's note:] Discuss further the impact of these
2216 technology constraints (e.g., modulation format, FEC, ...) on path
2217 computation.
2219 Priority We have 8 values for setup priority, which is used in
2220 path computation to route a path using free resources
2221 and, where no free resources are available, resources
2222 used by LSPs having a lower holding priority.
2224 Local prot It is possible to ask for a locally protected service,
2225 where all the links used by the path are protected with
2226 fast reroute (this applies only to IP networks, but line
2227 protection schemes are available in the other
2228 technologies as well). This adds an alternative path
2229 computation, so the cardinality of this constraint is 2.
2231 Administrative
2232 Colors Administrative colors (aka affinities) are typically
2233 assigned to links, but when topology abstraction is used,
2234 affinity information can also appear in the detailed
2235 connectivity matrix.
2237 There are 32 bits available for the affinities. Links can
2238 be tagged with any combination of these bits, and path
2239 computation can be constrained to include or exclude any
2240 or all of them. The relevant cardinality is 3 (include-
2241 any, exclude-any, include-all) times 2^32 possible
2242 values. However, the number of values actually used in
2243 real networks is quite small.
2245 Included Resources
2247 A path computation request can be associated with an
2248 ordered set of network resources (links, nodes) to be
2249 included along the computed path. This constraint would
2250 have a huge cardinality, as in principle any combination
2251 of network resources is possible. However, as long as the
2252 Orchestrator does not know the details of the internal
2253 topology of the domain, it should not include this type of
2254 constraint at all (see more details below).
2256 Excluded Resources
2258 A path computation request can be associated with a set of
2259 network resources (links, nodes, SRLGs) to be excluded
2260 from the computed path. As for included resources,
2261 this constraint has a potentially very high cardinality,
2262 but, once again, it cannot actually be used by the
2263 Orchestrator if it is not aware of the domain topology
2264 (see more details below).
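As a non-normative illustration of how the cardinalities listed above
combine, the following sketch (in Python) multiplies a set of
per-constraint cardinality estimates by the number of end-point pairs.
The function name and the specific values are illustrative assumptions
only; the values shown are the ones assumed later in this appendix for
an IP domain, not normative or measured figures.

   # Illustrative sketch (not normative): estimate how many entries a
   # detailed connectivity matrix needs by multiplying the number of
   # end-point pairs by the per-constraint cardinality estimates
   # discussed above. The example values are assumptions taken from
   # this appendix (IP domain, SRLGs and affinities not considered).

   from math import prod

   def matrix_entries(n_access_points, cardinalities):
       # N*(N-1) ordered end-point pairs (unidirectional connections)
       endpoint_pairs = n_access_points * (n_access_points - 1)
       return endpoint_pairs * prod(cardinalities.values())

   example_cardinalities = {
       "bandwidth": 5,        # assumed average number of IP bandwidth ranges
       "metrics": 6,          # IGP, TE, hop, delay, delay variation, loss
       "bounds": 20,          # assumed number of useful bound ranges
       "setup-priority": 8,
       "local-protection": 2,
   }

   print(matrix_entries(4, example_cardinalities))  # 115200 entries

The discussion below carries out the same estimate in prose for an IP
domain with 4 access points.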
2265 As discussed above, the Orchestrator can specify include or exclude
2266 resources depending on the abstract topology information that the
2267 domain controller exposes:
2269 o In case the domain controller exposes the entire domain as a
2270 single abstract TE node with its own external terminations and
2271 detailed connectivity matrix (whose size we are estimating), no
2272 other topological details are available; therefore the size of
2273 the detailed connectivity matrix only depends on the combination
2274 of the constraints that the Orchestrator can use in a path
2275 computation request to the domain controller. These constraints
2276 cannot refer to any details of the internal topology of the
2277 domain, as those details are not known to the Orchestrator, and so
2278 they do not impact the size of the detailed connectivity matrix
2279 exported.
2281 o Instead, in case the domain controller exposes a topology
2282 including more than one abstract TE node and TE link, and their
2283 attributes (e.g. SRLGs, affinities for the links), the
2284 Orchestrator knows these details and therefore could compute a
2285 path across the domain referring to them in the constraints. The
2286 detailed connectivity matrices, whose sizes need to be estimated
2287 here, are the ones relevant to the abstract TE nodes exported to
2288 the Orchestrator. These detailed connectivity matrices, and
2289 therefore their sizes, cannot depend on the other abstract
2290 TE nodes and TE links, which are external to the given abstract
2291 node, but could depend on SRLGs (and other attributes, like
2292 affinities) which could also be present in the portion of the
2293 topology represented by the abstract nodes, and therefore
2294 contribute to the size of the related detailed connectivity
2295 matrix.
2297 We also do not consider here the possibility of asking for more than
2298 one path in diversity or for point-to-multipoint paths, which are for
2299 further study.
2301 Considering, for example, an IP domain, and not considering SRLGs and
2302 affinities, we have an estimated number of paths depending on these
2303 estimated cardinalities:
2305 Endpoints = N*(N-1), Bandwidth = 5, Metrics = 6, Bounds = 20,
2306 Priority = 8, Local prot = 2
2308 The number of paths to be pre-computed by each IP domain is
2309 therefore 9600 * N(N-1), where N is the number of domain access
2310 points.
2312 This means that with just 4 access points we have more than 115000
2313 paths to compute, advertise and maintain (if a change happens in the
2314 domain, due to a fault, or just the deployment of new traffic, a
2315 substantial number of paths need to be recomputed and the relevant
2316 changes advertised to the upper controller).
2318 This seems quite challenging. In fact, if we assume a mean length of
2319 1 Kbyte for the JSON describing a path (a quite conservative estimate),
2320 reporting 115200 paths means transferring and then parsing more than
2321 115 Mbytes for each domain. If we assume that 20% (to be checked) of
2322 these paths change when a new deployment of traffic occurs, we have
2323 about 23 Mbytes of transfer for each domain traversed by a new end-to-end
2324 path. If a network has, let us say, 20 domains (we want to estimate the
2325 load for a non-trivial domain setup), in the beginning a total
2326 initial transfer of about 2.3 Gbytes is needed, and eventually, assuming
2327 4-5 domains are involved on average during a path deployment, we could
2328 have 90-115 Mbytes of changes advertised to the higher order controller.
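The rough figures in the previous paragraph can be reproduced with the
short non-normative sketch below (Python). The per-path JSON size, the
20% change ratio, the 20 domains and the 5 traversed domains are the
assumptions stated above, not measured values, and the variable names
are purely illustrative.

   # Illustrative sketch (not normative): rough estimate of the data
   # volumes implied by exporting and updating the detailed
   # connectivity matrix, using the assumptions of this appendix.

   PATHS_PER_DOMAIN = 12 * 9600     # 4 access points, see above
   JSON_BYTES_PER_PATH = 1000       # ~1 Kbyte of JSON per path (assumed)
   CHANGE_RATIO = 0.20              # fraction of paths changing (assumed)
   DOMAINS = 20                     # number of domains (assumed)
   DOMAINS_PER_E2E_PATH = 5         # domains traversed on average (assumed)

   initial_per_domain = PATHS_PER_DOMAIN * JSON_BYTES_PER_PATH
   initial_total = initial_per_domain * DOMAINS
   update_per_domain = initial_per_domain * CHANGE_RATIO
   update_per_e2e_path = update_per_domain * DOMAINS_PER_E2E_PATH

   print("initial transfer per domain: %d Mbytes" % (initial_per_domain // 10**6))
   print("initial transfer, whole network: %.1f Gbytes" % (initial_total / 10**9))
   print("updates per new end-to-end path: %d Mbytes" % (update_per_e2e_path // 10**6))

Running the sketch prints approximately 115 Mbytes per domain for the
initial transfer, 2.3 Gbytes for the whole 20-domain network, and about
115 Mbytes of updates when 5 domains are traversed by a new path.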
2330 Further bare-bone solutions can be investigated, removing some more
2331 options, if this is not considered acceptable; in conclusion, it
2332 seems that an approach based only on the information provided by the
2333 detailed connectivity matrix is hardly feasible, and could be
2334 applicable only to small networks with a limited meshing degree
2335 between domains, giving up a number of path computation
2336 features.
2338 [Editor's note:] Evaluate whether to describe the bare-bone solution
2339 as another way to jointly use the detailed connectivity matrix and path
2340 computation in a complementary way. The paragraph above could be
2341 updated as follows:
2343 We could still try to provide a bare-bone solution by removing some
2344 more options:
2346 - No local protection
2347 - Max one bound in path computation. When asking for a path with a
2348 bound, the request towards the domain is done using the bound
2349 metric as the objective metric, regardless of the original objective
2350 metric. While this possibly provides a non-optimal solution, that
2351 solution guarantees the satisfaction of the bound, if this is possible
2352 at all.
2353 - Reduce the metrics used to just IGP and delay, ignoring hops and the
2354 TE metric (or using TE for delay).
2356 In this case we have just 5*2*8 = 80, i.e., 80*N*(N-1) paths. With 4
2357 access points we have 12*80 = 960 paths to be computed and maintained.
2358 This is still heavy but probably feasible if the number of domain
2359 access points is limited.
2361 In conclusion, the connectivity matrix approach seems feasible only
2362 with a bare-bone approach, with limited meshing among domains and
2363 without using most of the available path computation capabilities,
2364 unless the bare-bone approach is complemented with path computation
2365 as described in section 3.2.3.
2369 Contributors
2371 Dieter Beller
2372 Nokia
2373 Email: dieter.beller@nokia.com
2375 Gianmarco Bruno
2376 Ericsson
2377 Email: gianmarco.bruno@ericsson.com
2379 Francesco Lazzeri
2380 Ericsson
2381 Email: francesco.lazzeri@ericsson.com
2383 Young Lee
2384 Huawei
2385 Email: leeyoung@huawei.com
2387 Carlo Perocchio
2388 Ericsson
2389 Email: carlo.perocchio@ericsson.com
2391 Authors' Addresses
2393 Italo Busi (Editor)
2394 Huawei
2395 Email: italo.busi@huawei.com
2397 Sergio Belotti (Editor)
2398 Nokia
2399 Email: sergio.belotti@nokia.com
2401 Victor Lopez
2402 Telefonica
2403 Email: victor.lopezalvarez@telefonica.com
2404 Oscar Gonzalez de Dios
2405 Telefonica
2406 Email: oscar.gonzalezdedios@telefonica.com
2408 Anurag Sharma
2409 Google
2410 Email: ansha@google.com
2412 Yan Shi
2413 China Unicom
2414 Email: shiyan49@chinaunicom.cn
2416 Ricard Vilalta
2417 CTTC
2418 Email: ricard.vilalta@cttc.es
2420 Karthik Sethuraman
2421 NEC
2422 Email: karthik.sethuraman@necam.com
2424 Michael Scharf
2425 Nokia
2426 Email: michael.scharf@gmail.com
2428 Daniele Ceccarelli
2429 Ericsson
2430 Email: daniele.ceccarelli@ericsson.com