PCE Working Group                                                D. King
Internet Draft                                        Old Dog Consulting
Intended status: Informational                                  H. Zheng
Expires: January 9, 2020                             Huawei Technologies
                                                            July 8, 2019

   Applicability of the Path Computation Element to Inter-Area and
            Inter-AS MPLS and GMPLS Traffic Engineering

            draft-ietf-pce-inter-area-as-applicability-08

Abstract

The Path Computation Element (PCE) may be used for computing services that traverse multi-area and multi-AS Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) Traffic Engineered (TE) networks.

This document examines the applicability of the PCE architecture, protocols, and protocol extensions for computing multi-area and multi-AS paths in MPLS and GMPLS networks.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 9, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction
   1.1. Domains
   1.2. Path Computation
      1.2.1 PCE-based Path Computation Procedure
   1.3. Traffic Engineering Aggregation and Abstraction
   1.4. Traffic Engineered Label Switched Paths
   1.5. Inter-area and Inter-AS Capable PCE Discovery
   1.6. Objective Functions
2. Terminology
3. Issues and Considerations
   3.1 Multi-homing
   3.2 Destination Location
   3.3 Domain Confidentiality
4. Domain Topologies
   4.1 Selecting Domain Paths
   4.2 Domain Sizes
   4.3 Domain Diversity
   4.4 Synchronized Path Computations
   4.5 Domain Inclusion or Exclusion
5. Applicability of the PCE to Inter-area Traffic Engineering
   5.1. Inter-area Routing
      5.1.1. Area Inclusion and Exclusion
      5.1.2. Strict Explicit Path and Loose Path
      5.1.3. Inter-Area Diverse Path Computation
6. Applicability of the PCE to Inter-AS Traffic Engineering
   6.1. Inter-AS Routing
      6.1.1. AS Inclusion and Exclusion
   6.2. Inter-AS Bandwidth Guarantees
   6.3. Inter-AS Recovery
   6.4. Inter-AS PCE Peering Policies
7. Multi-Domain PCE Deployment Options
   7.1 Traffic Engineering Database and Synchronization
      7.1.1. Applicability of BGP-LS to PCE
   7.2 Pre-Planning and Management-Based Solutions
8. Domain Confidentiality
   8.1 Loose Hops
   8.2 Confidential Path Segments and Path Keys
9. Point-to-Multipoint
10. Optical Domains
   10.1 Abstraction and Control of TE Networks (ACTN)
11. Policy
12. Manageability Considerations
   12.1 Control of Function and Policy
   12.2 Information and Data Models
   12.3 Liveness Detection and Monitoring
   12.4 Verifying Correct Operation
   12.5 Impact on Network Operation
13. Security Considerations
   13.1 Multi-domain Security
14. IANA Considerations
15. Acknowledgements
16. References
   16.1. Normative References
   16.2. Informative References
17. Contributors
18. Authors' Addresses

1. Introduction

Computing paths across large multi-domain environments may require special computational components and cooperation between entities in different domains that are capable of complex path computation.

Issues that may exist when routing in multi-domain networks include:

o There is often a lack of full topology and TE information across domains;

o No single node has full visibility with which to determine an optimal, or even feasible, end-to-end path across domains;

o How should the exit point and next domain boundary from a domain be evaluated and selected?

o How might the ingress node determine which domains should be used for the end-to-end path?

Information exchange across multiple domains is often limited due to the lack of a trust relationship, or due to security or scalability concerns even where a trust relationship exists between domains.

The Path Computation Element (PCE) [RFC4655] provides an architecture and a set of functional components to address the problem space and the issues highlighted above.

A PCE may be used to compute end-to-end paths across multi-domain environments using a per-domain path computation technique [RFC5152]. The Backward-Recursive PCE-based Computation (BRPC) mechanism [RFC5441] defines a PCE-based path computation procedure to compute inter-domain constrained paths across Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) Traffic Engineered (TE) networks. However, both the per-domain and BRPC techniques assume that the sequence of domains to be crossed from source to destination is known, either fixed by the network operator or obtained by other means.

In more advanced deployments (including multi-area and multi-Autonomous System (multi-AS) environments), the sequence of domains may not be known in advance, and the choice of domains in the end-to-end domain sequence might be critical to the determination of an optimal end-to-end path. In this case, the Hierarchical PCE (H-PCE) [RFC6805] architecture and mechanisms may be used to discover the intra-area paths and to select the optimal end-to-end domain sequence.

This document describes the processes and procedures available when using the PCE architecture and protocols for computing inter-area and inter-AS MPLS and GMPLS Traffic Engineered paths.

The scope of this document does not include discussion of stateful PCE, active PCE, remotely initiated PCE, or PCE as a Central Controller (PCECC) deployment scenarios.

1.1 Domains

Generally, a domain can be defined as a separate administrative, geographic, or switching environment within the network. A domain may be further defined as a zone of routing or computational ability. Under these definitions a domain might be categorized as an Autonomous System (AS) or an Interior Gateway Protocol (IGP) area (as per [RFC4726] and [RFC4655]).
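
The following short Python fragment is provided purely as an illustration of these definitions and is not part of any specification; the class names and the example domain identifiers are invented for this document. It models a domain as either an AS or an IGP area and shows a simple domain sequence of the kind referred to throughout this document.

   from dataclasses import dataclass
   from enum import Enum

   class DomainType(Enum):
       """Domain categories used in this document."""
       AS = "autonomous-system"
       IGP_AREA = "igp-area"

   @dataclass(frozen=True)
   class Domain:
       """A zone of routing or path computational responsibility."""
       domain_id: str            # e.g. "AS64500" or "area 0.0.0.1"
       domain_type: DomainType

   # A domain sequence: the ordered set of domains a TE LSP traverses.
   domain_sequence = [
       Domain("AS64500", DomainType.AS),
       Domain("AS64501", DomainType.AS),
       Domain("AS64502", DomainType.AS),
   ]

   print(" -> ".join(d.domain_id for d in domain_sequence))
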
For the purposes of this document, a domain is considered to be a collection of network elements within an area or AS that has a common sphere of address management or path computational responsibility. Wholly or partially overlapping domains are not within the scope of this document.

In the context of GMPLS, a particularly important example of a domain is the Automatically Switched Optical Network (ASON) subnetwork [G-8080]. In this case, computation of an end-to-end path requires the selection of nodes and links within a parent domain where some nodes may, in fact, be subnetworks. Furthermore, a domain might be an ASON routing area [G-7715]. A PCE may perform the path computation function of an ASON routing controller as described in [G-7715-2].

It is assumed that the PCE architecture is not applied to a large group of domains, such as the Internet.

1.2 Path Computation

For the purposes of this document, it is assumed that path computation is the sole responsibility of the PCE, as per the architecture defined in [RFC4655]. When a path is required, the Path Computation Client (PCC) will send a request to the PCE. The PCE will apply the required constraints, compute a path, and return a response to the PCC. In the context of this document, it may be necessary for the PCE to cooperate with other PCEs in adjacent domains (as per BRPC [RFC5441]) or to cooperate with a parent PCE (as per [RFC6805]).

It is entirely feasible that an operator could compute a path across multiple domains without the use of a PCE if the relevant domain information is available to the network planner or network management platform. The definition of what information is required to perform this network planning operation, and how that information is discovered and applied, is outside the scope of this document.

1.2.1 PCE-based Path Computation Procedure

As highlighted, the PCE is an entity capable of computing an inter-domain TE path upon receiving a request from a PCC. There could be a single PCE per domain, or a single PCE responsible for all domains. A PCE may or may not reside on the same node as the requesting PCC. A path may be computed by either a single PCE node or a set of distributed PCE nodes that collaborate during path computation.

[RFC4655] specifies that a PCC should send a path computation request to a particular PCE, using the Path Computation Element Communication Protocol (PCEP) [RFC5440] for PCC-to-PCE communication. This avoids the need to broadcast a request to all PCEs. Each PCC can maintain information about the computation capabilities of the PCEs it is aware of. The PCC-PCE capability awareness can be established using static configuration or by automatic and dynamic PCE discovery procedures.

If a network path is required, the PCC will send a path computation request to the PCE. A PCE may then compute the end-to-end path if it is aware of the topology and TE information required to compute the entire path. If the PCE is unable to compute the entire path, the PCE architecture provides cooperative PCE mechanisms for the resolution of path computation requests when an individual PCE does not have sufficient TE visibility.
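
The request/response procedure described above can be sketched in a few lines of Python. The sketch is a non-normative abstraction: it does not encode PCEP messages, and the class and function names (PathRequest, PathReply, SimplePce) are invented for the example. It simply shows a PCC-style request being answered by a PCE that has full visibility of a single domain's TED, and an empty reply when no path can be found.

   from dataclasses import dataclass, field
   from typing import List, Optional

   @dataclass
   class PathRequest:
       """Abstract view of a PCC's path computation request."""
       source: str
       destination: str
       bandwidth_mbps: float                 # constraint carried but not
       exclude_nodes: List[str] = field(     # enforced by this toy search
           default_factory=list)

   @dataclass
   class PathReply:
       """Abstract view of the PCE's reply: an ERO-like hop list, or None."""
       hops: Optional[List[str]]

   class SimplePce:
       """A PCE with full visibility of a single domain's TED."""
       def __init__(self, ted):
           self.ted = ted                    # {node: {neighbour: cost}}

       def compute(self, req: PathRequest) -> PathReply:
           # A real PCE applies constraints and an objective function;
           # here we simply perform an unweighted breadth-first search.
           frontier, seen = [[req.source]], {req.source}
           while frontier:
               path = frontier.pop(0)
               if path[-1] == req.destination:
                   return PathReply(hops=path)
               for nxt in self.ted.get(path[-1], {}):
                   if nxt not in seen and nxt not in req.exclude_nodes:
                       seen.add(nxt)
                       frontier.append(path + [nxt])
           return PathReply(hops=None)       # insufficient visibility

   pce = SimplePce({"A": {"B": 1}, "B": {"C": 1}, "C": {}})
   print(pce.compute(PathRequest("A", "C", bandwidth_mbps=100.0)).hops)
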
End-to-end path segments may be kept confidential through the application of path keys, to protect partial or full path information. A path key is a token that replaces a path segment in an explicit route. The path key mechanism is described in [RFC5520].

1.3 Traffic Engineering Aggregation and Abstraction

Networks are often constructed from multiple areas or ASes that are interconnected via multiple interconnect points. To maintain network confidentiality and scalability, the TE properties of each area and AS are not generally advertised outside each specific area or AS.

TE aggregation and abstraction provide mechanisms to hide information, but they may cause failed path setups or the selection of suboptimal end-to-end paths [RFC4726]. The aggregation process may also have significant scaling issues for networks with many possible routes and multiple TE metrics. Flooding TE information breaks confidentiality and does not scale in the routing protocol.

The PCE architecture and associated mechanisms provide a solution that avoids the use of TE aggregation and abstraction.

1.4 Traffic Engineered Label Switched Paths

This document highlights the PCE techniques and mechanisms that exist for establishing TE packet and optical LSPs across multiple areas (inter-area TE LSPs) and ASes (inter-AS TE LSPs). In this context, and within the remainder of this document, we consider all LSPs to be constraint-based and traffic engineered.

Three signaling options are defined for setting up an inter-area or inter-AS LSP [RFC4726]:

o Contiguous LSP

o Stitched LSP

o Nested LSP

All three signaling methods are applicable to the architectures and procedures discussed in this document.

1.5 Inter-area and Inter-AS Capable PCE Discovery

When using a PCE-based approach for inter-area and inter-AS path computation, a PCE in one area or AS may need to learn information related to inter-AS capable PCEs located in other ASes. The PCE discovery mechanisms defined in [RFC5088] and [RFC5089] facilitate the discovery of PCEs and the disclosure of information related to inter-area and inter-AS capable PCEs.

1.6 Objective Functions

An Objective Function (OF) [RFC5541], or set of OFs, specifies the intentions of the path computation and so defines "optimality" in the context of the computation request.

An OF specifies the desired outcome of a computation: it does not describe or specify the algorithm to use, and an implementation may apply any algorithm, or set of algorithms, to achieve the result indicated by the OF. A number of general OFs are specified in [RFC5541].

Various OFs may be included in the PCE computation request to satisfy the policies encoded or configured at the PCC, and a PCE may be subject to policy in determining whether it meets the OFs included in the computation request or applies its own OFs.

During inter-domain path computation, the selection of a domain sequence, the computation of each (per-domain) path fragment, and the determination of the end-to-end path may each be subject to different OFs and policies.

2. Terminology

This document also uses the terminology defined in [RFC4655] and [RFC5440]. Additional terminology is defined below:

ABR: IGP Area Border Router, a router that is attached to more than one IGP area.

ASBR: Autonomous System Border Router, a router used to connect ASes of the same or different Service Providers via one or more inter-AS links.
Inter-area TE LSP: A TE LSP whose path transits through two or more IGP areas.

Inter-AS MPLS TE LSP: A TE LSP whose path transits through two or more ASes or sub-ASes (BGP confederations).

SRLG: Shared Risk Link Group.

TED: Traffic Engineering Database, which contains the topology and resource information of the domain. The TED may be fed by Interior Gateway Protocol (IGP) extensions or potentially by other means.

3. Issues and Considerations

3.1 Multi-homing

Networks constructed from multi-area or multi-AS environments may have multiple interconnect points (multi-homing). End-to-end path computations may need to use different interconnect points to avoid a single point of failure disrupting both primary and backup services.

3.2 Destination Location

The PCC asking for an inter-domain path computation is typically aware of the identity of the destination node. If the PCC is aware of the destination domain, it may supply the destination domain information as part of the path computation request. However, if the PCC does not know the destination domain, this information must be determined by another method.

3.3 Domain Confidentiality

Where the end-to-end path crosses multiple domains, each domain (AS or area) may be administered by a separate Service Provider. In that case, it would break confidentiality rules for a PCE to supply a path segment to a PCE in another domain, thus disclosing AS-internal topology information.

If confidentiality is required between domains (ASes and areas) belonging to different Service Providers, then cooperating PCEs cannot exchange path segments; otherwise, the receiving PCE or PCC would be able to see the individual hops through another domain.

This topic is discussed further in Section 8 of this document.

4. Domain Topologies

Constraint-based inter-domain path computation is a fundamental requirement for operating traffic engineered MPLS [RFC3209] and GMPLS [RFC3473] networks in inter-area and inter-AS (multi-domain) environments. Path computation across multi-domain networks is complex and requires cooperating computational entities such as the PCE.

4.1 Selecting Domain Paths

Where the sequence of domains is known a priori, various techniques can be employed to derive an optimal multi-domain path. If the domains are connected in a simple path with no branches and single links between all domains, or if the preferred points of interconnection are also known, the Per-Domain Path Computation technique [RFC5152] may be used. Where there are multiple connections between domains and there is no preference for the choice of points of interconnection, BRPC [RFC5441] can be used to derive an optimal path.

When the sequence of domains is not known in advance, or the end-to-end path will have to navigate a mesh of small domains (especially typical in optical networks), the optimal path may be derived through the application of a Hierarchical PCE [RFC6805].

4.2 Domain Sizes

Very frequently, network domains are composed of dozens or hundreds of network elements. These network elements are usually interconnected in a partial-mesh fashion to provide survivability against dual failures and to benefit from the traffic engineering capabilities of MPLS and GMPLS protocols.
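
As a purely illustrative aid (the topology and node names are invented), the following Python fragment builds a small partial-mesh domain of the kind described above and reports the node degree of each element:

   # Hypothetical partial-mesh intra-domain topology: node -> neighbours.
   topology = {
       "n1": {"n2", "n3", "n4"},
       "n2": {"n1", "n3", "n5"},
       "n3": {"n1", "n2", "n4", "n5"},
       "n4": {"n1", "n3", "n5"},
       "n5": {"n2", "n3", "n4"},
   }

   # Node degree is the number of neighbours per node.
   for node, neighbours in sorted(topology.items()):
       print(node, "degree", len(neighbours))
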
Network operator feedback during the development of this document highlighted that node degree (the number of neighbors per node) typically ranges from 3 to 10 (4-5 is quite common).

4.3 Domain Diversity

Domain and path diversity may also be required when computing end-to-end paths. Domain diversity should facilitate the selection of paths that share ingress and egress domains but do not share transit domains. Therefore, there must be a method allowing the inclusion or exclusion of specific domains when computing end-to-end paths.

4.4 Synchronized Path Computations

In some scenarios, it would be beneficial for the operator to rely on the capability of the PCE to perform synchronized path computations.

Synchronized path computations rely on Synchronization VECtors (SVECs) for dependent path computations. SVECs are defined in [RFC5440], and [RFC6007] provides an overview of the use of the PCE SVEC list for synchronized path computations when computing dependent requests.

In H-PCE deployments, a child PCE will be able to request both dependent and synchronized domain-diverse end-to-end paths from its parent PCE.

4.5 Domain Inclusion or Exclusion

A domain sequence is an ordered sequence of domains traversed to reach the destination domain. A domain sequence may be supplied during path computation to guide the PCEs, or it may be derived via the use of Hierarchical PCE (H-PCE).

During multi-domain path computation, a PCC may request that specific domains be included in or excluded from the domain sequence using the Include Route Object (IRO) [RFC5440] and the Exclude Route Object (XRO) [RFC5521]. The use of an Autonomous System (AS) number as an abstract node representing a domain is defined in [RFC3209]. [RFC7897] specifies new sub-objects to include or exclude domains such as an IGP area or a 4-byte AS number.

An operator may also need to avoid a path that uses specified nodes for administrative reasons, or, if a specific connectivity service is required to have 1+1 protection, two completely disjoint paths must be established. Shared Risk Link Group (SRLG) information may be used to ensure such path diversity.

5. Applicability of the PCE to Inter-area Traffic Engineering

As networks increase in size and complexity, it may be necessary to introduce scaling methods to reduce the amount of information flooded within the network and to make the network more manageable. An IGP hierarchy is designed to improve IGP scalability by dividing the IGP domain into areas and limiting the flooding scope of topology information to within area boundaries. This restricts visibility of the area topology to routers within that area. If a router needs to compute a route to a destination located in another area, a method is required to compute a path across area boundaries.

In order to support multiple vendors in a network, in cases where data or control plane technologies cannot interoperate, it is useful to divide the network into vendor domains. Each vendor domain is an IGP area, and the flooding scope of the topology (as well as any other relevant information) is limited to the area boundaries.
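
To make the visibility limitation concrete, the following illustrative Python fragment (the topology, node names, and ABR name are invented) builds one TED per area and shows that neither area's TED contains every hop of an end-to-end path; only the ABR is visible in both areas:

   # Hypothetical two-area network; the ABR "abr1" sits in both areas.
   area1_ted = {("s", "p1"), ("p1", "abr1")}       # links visible in area 1
   area2_ted = {("abr1", "p2"), ("p2", "d")}       # links visible in area 2

   end_to_end_path = ["s", "p1", "abr1", "p2", "d"]
   hops = set(end_to_end_path)

   def nodes(ted):
       return {n for link in ted for n in link}

   print("area 1 sees all hops:", hops <= nodes(area1_ted))    # False
   print("area 2 sees all hops:", hops <= nodes(area2_ted))    # False
   print("shared nodes:", nodes(area1_ted) & nodes(area2_ted)) # {'abr1'}

This is exactly the situation in which no single node can compute the complete path, and either ERO expansion at the ABR or cooperation between path computation entities is required.
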
Per-domain path computation [RFC5152] provides one method of inter-area path computation. The per-domain solution is based on loose hop routing with an Explicit Route Object (ERO) expansion on each Area Border Router (ABR). This allows an LSP to be established using a constrained path; however, at least two issues exist:

o This method does not guarantee an optimal constrained path.

o The method may require several crankback signaling messages, as per [RFC4920], increasing signaling traffic and delaying LSP setup.

The PCE-based architecture [RFC4655] is designed to solve inter-area path computation problems. The issue of limited topology visibility is resolved by introducing path computation entities that are able to cooperate in order to establish LSPs with sources and destinations located in different areas.

5.1. Inter-area Routing

An inter-area TE LSP is an LSP that transits through at least two IGP areas. In a multi-area network, topology visibility remains local to a given area for scaling and privacy purposes; therefore, a node in one area will not be able to compute an end-to-end path across multiple areas without the use of a PCE.

5.1.1. Area Inclusion and Exclusion

The BRPC method of path computation [RFC5441] provides a more optimal way to specify inclusion or exclusion of an ABR. Using the BRPC procedure, an end-to-end path is recursively computed in reverse, from the destination domain towards the source domain. Using this method, an operator might decide whether an area must be included in or excluded from the inter-area path computation.

5.1.2. Strict Explicit Path and Loose Path

A strict explicit path is defined as a set of strict hops, while a loose path is defined as a set of at least one loose hop and zero or more strict hops. It may be useful to indicate, during the path computation request, whether a strict explicit path is required. An inter-area path may be strictly explicit or loose (e.g., a list of ABRs as loose hops).

A PCC request to a PCE allows the indication of whether a strict explicit path across specific areas ([RFC7897]) is required or desired, or whether the path request is loose.

5.1.3. Inter-Area Diverse Path Computation

It may be necessary to compute a path that is partially or entirely diverse from a previously computed path, to avoid fate sharing of a primary service with a corresponding backup service. There are various levels of diversity in the context of an inter-area network:

o Per-area diversity (intra-area path segments are link, node, or SRLG disjoint).

o Inter-area diversity (end-to-end inter-area paths are link, node, or SRLG disjoint).

Note that two paths may be disjoint in the backbone area but non-disjoint in peripheral areas. Also, two paths may be node disjoint within areas but may share ABRs, in which case the path segments within an area are node disjoint, but the end-to-end paths are not. Per-Domain [RFC5152], BRPC [RFC5441], and H-PCE [RFC6805] mechanisms all support the capability to compute diverse paths across multi-area topologies.

6. Applicability of the PCE to Inter-AS Traffic Engineering

As discussed in Section 5 (Applicability of the PCE to Inter-area Traffic Engineering), it is necessary to divide the network into smaller administrative domains, or ASes. If an LSR within an AS needs to compute a path across an AS boundary, it must also use an inter-AS computation technique.
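
BRPC [RFC5441] (see Sections 5.1.1 and 6.1.1) is one such cooperative technique for both the inter-area and inter-AS cases. The following Python sketch is purely illustrative: it compresses each cooperating PCE into a Dijkstra run over an invented per-domain TED and models the Virtual Shortest Path Tree (VSPT) as a simple dictionary, whereas real BRPC carries the VSPT in PCEP messages and preserves domain confidentiality. All names, metrics, and topologies are invented for the example.

   import heapq

   def shortest(ted, src, dst):
       """Dijkstra over one domain's TED: returns (cost, path) or None."""
       queue = [(0, src, [src])]
       seen = set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nxt, metric in ted.get(node, {}).items():
               heapq.heappush(queue, (cost + metric, nxt, path + [nxt]))
       return None

   def brpc(domain_seq, teds, entry_nodes, source, destination):
       """Backward pass over a known domain sequence."""
       # best[x] = (cost, path) from border node (or destination) x
       #           to the destination
       best = {destination: (0, [destination])}
       for i, domain in reversed(list(enumerate(domain_seq))):
           entries = [source] if i == 0 else entry_nodes[domain]
           new_best = {}
           for entry in entries:
               options = []
               for exit_node, (tail_cost, tail_path) in best.items():
                   head = shortest(teds[domain], entry, exit_node)
                   if head:
                       options.append((head[0] + tail_cost,
                                       head[1][:-1] + tail_path))
               if options:
                   new_best[entry] = min(options)
           best = new_best
       return best.get(source)

   # Each per-domain TED also holds the inter-domain links towards the
   # next domain's entry border nodes ("e1", "e2").
   teds = {
       "AS64500": {"s": {"a": 1, "b": 1}, "a": {"e1": 4}, "b": {"e2": 1}},
       "AS64501": {"e1": {"m": 1}, "e2": {"m": 5}, "m": {"d": 1}},
   }
   entry_nodes = {"AS64501": ["e1", "e2"]}
   print(brpc(["AS64500", "AS64501"], teds, entry_nodes, "s", "d"))
   # -> (7, ['s', 'a', 'e1', 'm', 'd']): the exit that is cheapest for the
   #    source domain alone ("e2") is not on the optimal end-to-end path.

The example output illustrates why the backward recursion matters: a locally cheap exit point chosen without knowledge of the downstream domains would lead to a suboptimal end-to-end path.
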
[RFC5152] defines mechanisms for the computation of inter-domain TE LSPs using network elements along the signaling paths to compute per-domain constrained path segments.

The PCE was designed to be capable of computing MPLS and GMPLS paths across AS boundaries. This section outlines the features of a PCE-enabled solution for computing inter-AS paths.

6.1 Inter-AS Routing

6.1.1. AS Inclusion and Exclusion

[RFC5441] allows the inclusion or exclusion of an AS or an ASBR to be specified. Using this method, an operator might decide whether an AS must be included in or excluded from the inter-AS path computation. Exclusion and/or inclusion could also be specified at any step in the LSP path computation process by a PCE (within the BRPC algorithm), but the best practice would be to specify them at the edge. In contrast to the strict and loose path cases, AS inclusion or exclusion does not require topology disclosure, as ASes, and their interconnections, are public entities.

6.2 Inter-AS Bandwidth Guarantees

Many operators with multi-AS domains will have deployed MPLS-TE DiffServ either across their entire network or at the domain edges on CE-PE links. In situations where strict QoS bounds are required, admission control inside the network may also be required.

When the propagation delay can be bounded, performance targets such as maximum one-way transit delay may be guaranteed by providing bandwidth guarantees along the DiffServ-enabled path. These requirements are described in [RFC4216].

One typical example of the requirements in [RFC4216] is to provide bandwidth guarantees over an end-to-end path for VoIP traffic classified as EF (Expedited Forwarding) class in a DiffServ-enabled network. In the case where the EF path is extended across multiple ASes, an inter-AS bandwidth guarantee would be required.

Another case for an inter-AS bandwidth guarantee is the requirement to guarantee a certain amount of transit bandwidth across one or multiple ASes.

6.3 Inter-AS Recovery

During a path computation process, a PCC request may contain the requirement to compute a backup LSP to protect the primary LSP (1+1 protection). A single backup LSP or multiple backup LSPs may also be used for a group of primary LSPs; this is typically known as m:n protection.

Other inter-AS recovery mechanisms include [RFC4090], which adds fast reroute (FRR) protection to an LSP. Thus, the PCE could be used to trigger computation of backup tunnels in order to protect inter-AS connectivity.

Inter-AS recovery clearly requires backup LSPs for service protection, but it would also be advisable to have multiple PCEs deployed for path computation redundancy, especially for service restoration in the event of a catastrophic network failure.

6.4 Inter-AS PCE Peering Policies

Like BGP peering policies, inter-AS PCE peering policies are a requirement for operators. In the inter-AS BRPC process, PCEs must cooperate in order to compute the end-to-end LSP. Thus, the AS path must not only satisfy technical constraints, e.g., bandwidth availability, but also the policies defined by the operators.

Typically, PCE interconnections at the AS level must follow agreed contractual obligations, also known as peering agreements. The PCE peering policies are the result of the contract negotiation and govern the relationship between the different PCEs.
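
As an illustration only, a PCE implementation might represent such peering agreements as a policy table consulted before any BRPC cooperation with a neighbouring AS is attempted. The table layout, field names, and AS numbers below are invented for the example and do not correspond to any defined PCEP policy encoding.

   # Hypothetical inter-AS PCE peering policy table.
   peering_policy = {
       # (local AS, peer AS): constraints agreed in the peering contract
       (64500, 64501): {"max_lsp_bandwidth_mbps": 10_000,
                        "allow_transit": True},
       (64500, 64502): {"max_lsp_bandwidth_mbps": 1_000,
                        "allow_transit": False},
   }

   def may_cooperate(local_as, peer_as, bandwidth_mbps, is_transit):
       """Return True if a BRPC request towards peer_as is permitted."""
       policy = peering_policy.get((local_as, peer_as))
       if policy is None:
           return False                      # no agreement, no cooperation
       if bandwidth_mbps > policy["max_lsp_bandwidth_mbps"]:
           return False
       return policy["allow_transit"] or not is_transit

   print(may_cooperate(64500, 64501, 2_000, is_transit=True))   # True
   print(may_cooperate(64500, 64502, 2_000, is_transit=True))   # False

In a real deployment, such policy would be expressed through the operator's configuration and policy systems rather than in code, but the principle is the same: the policy is evaluated before, and independently of, the technical path computation.
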
7. Multi-Domain PCE Deployment Options

7.1 Traffic Engineering Database and Synchronization

An optimal path computation requires knowledge of the available network resources, including nodes and links, constraints, link connectivity, available bandwidth, and link costs. The PCE operates on a view of the network topology as presented by a TED. As discussed in [RFC4655], the TED used by a PCE may be learned via the relevant IGP extensions.

Thus, one way the PCE may populate its TED is by participating in the IGP running in the network. In an MPLS-TE network, this would require OSPF-TE [RFC3630] or ISIS-TE [RFC5305]. In a GMPLS network, it would utilize the GMPLS extensions to OSPF and IS-IS defined in [RFC4203] and [RFC5307]. Inter-AS connectivity information may be populated via [RFC5316] and [RFC5392].

An alternative method of providing network topology and resource information is offered by [RFC7752], which is described in the following section.

7.1.1 Applicability of BGP-LS to PCE

The concept of exchange of TE information between Autonomous Systems (ASes) is discussed in [RFC7752]. The information exchanged in this way could be the full TE information from the AS, an aggregation of that information, or a representation of the potential connectivity across the AS. Furthermore, that information could be updated frequently (for example, for every new LSP that is set up across the AS) or only at threshold-crossing events.

In an H-PCE deployment, the parent PCE will require the inter-domain topology and the link status between child domains. This information may be learned by a BGP-LS speaker and provided to the parent PCE. Furthermore, link-state performance information, including delay, available bandwidth, and utilized bandwidth, may also be provided to the parent PCE for optimal path and link selection.

7.2 Pre-Planning and Management-Based Solutions

Offline path computation is performed ahead of time, before the LSP setup is requested. That means that it is requested by, or performed as part of, an Operations Support System (OSS) management application. This model can be seen in Section 5.5 of [RFC4655].

The offline model is particularly appropriate for long-lived LSPs (such as those present in a transport network) or for planned responses to network failures. In these scenarios, more planning is normally a feature of LSP provisioning.

The management system may also use a PCE and BRPC to pre-plan an AS sequence, with the source domain PCE and per-domain path computation then being used when the actual end-to-end path is required. This model may also be used where the operator wishes to retain full manual control of the placement of LSPs, using the PCE only as a computation tool to assist the operator, not as part of an automated network.

In environments where operators peer with each other to provide end-to-end paths, the operators responsible for each domain must agree to what extent paths are to be pre-planned or manually controlled.

8. Domain Confidentiality

This section discusses the techniques that cooperating PCEs can use to compute inter-domain paths without each domain disclosing sensitive internal topology information (such as explicit nodes or links within the domain) to the other domains.

Confidentiality typically applies to inter-provider (inter-AS) PCE communication.
Where the TE LSP crosses multiple domains (ASes or areas), the path may be computed by multiple PCEs that cooperate with each other, with each local PCE responsible for computing a segment of the path.

In situations where ASes are administered by separate Service Providers, it would break confidentiality rules for a PCE to supply path segment details to a PCE responsible for another domain, thus disclosing AS-internal or area topology information.

8.1 Loose Hops

A method for preserving the confidentiality of a path segment is for the PCE to return a path containing a loose hop in place of the segment that must be kept confidential. The concept of loose and strict hops for the route of a TE LSP is described in [RFC3209].

[RFC5440] supports the use of paths with loose hops, and it is a local policy decision at a PCE whether it returns a full explicit path with strict hops or uses loose hops. A path computation request may require an explicit path with strict hops, or it may allow loose hops, as detailed in [RFC5440].

8.2 Confidential Path Segments and Path Keys

[RFC5520] defines the concept and mechanism of the Path-Key. A Path-Key is a token that replaces the path segment information in an explicit route. The Path-Key allows the explicit route information to be encoded in the PCEP [RFC5440] messages exchanged between the PCE and PCC.

This Path-Key technique allows explicit route information to be used for end-to-end path computation without disclosing internal topology information between domains.

9. Point-to-Multipoint

For inter-domain point-to-multipoint application scenarios using MPLS-TE LSPs, the complexity of domain sequences, domain policies, and the choice and number of domain interconnects is magnified compared to point-to-point path computations. As the size of the network grows, the number of leaves and branches increases, further increasing the complexity of the overall path computation problem. A solution for managing point-to-multipoint path computations may be achieved using the PCE inter-domain point-to-multipoint path computation procedure [RFC7334].

10. Optical Domains

The International Telecommunication Union (ITU) defines the ASON architecture in [G-8080]. [G-7715] defines the routing architecture for ASON and introduces a hierarchical architecture. In this architecture, the Routing Areas (RAs) have a hierarchical relationship between different routing levels, which means a parent (or higher-level) RA can contain multiple child RAs. The interconnectivity of the lower RAs is visible to the higher-level RA.

In the ASON framework, a path computation request is termed a Route Query. This query is executed before signaling is used to establish an LSP, termed a Switched Connection (SC) or a Soft Permanent Connection (SPC). [G-7715-2] defines the requirements and architecture for the functions performed by Routing Controllers (RCs) during the operation of remote route queries; an RC is synonymous with a PCE.

In the ASON routing environment, an RC responsible for an RA may communicate with its neighbor RC to request the computation of an end-to-end path across several RAs. The path computation components and sequences are defined as follows:

o Remote route query.
An operation where a routing controller communicates with another routing controller, which does not have the same set of layer resources, in order to compute a routing path in a collaborative manner.

o Route query requester. The connection controller or RC that sends a route query message to a routing controller requesting one or more routing paths that satisfy a set of routing constraints.

o Route query responder. An RC that performs path computation upon reception of a route query message from a routing controller or connection controller, sending a response back at the end of the computation.

When computing an end-to-end connection, the route may be computed by a single RC or by multiple RCs in a collaborative manner; the two scenarios can be considered a centralized remote route query model and a distributed remote route query model. RCs in an ASON environment can also use the Hierarchical PCE [RFC6805] model to fully match the ASON hierarchical routing model.

10.1 Abstraction and Control of TE Networks (ACTN)

Where a single operator operates multiple TE domains (including optical environments), the Abstraction and Control of TE Networks (ACTN) framework [RFC8453] may be used to create an abstracted (virtualized network) view of the underlay interconnected domains. This underlay connectivity can then be exposed to higher-layer control entities and applications.

ACTN describes the method and procedure for coordinating the underlay per-domain Physical Network Controllers (PNCs), which may be PCEs, via a hierarchical model to facilitate the setup of end-to-end connections across interconnected TE domains.

11. Policy

Policy is important in the deployment of new services and the operation of the network. [RFC5394] provides a framework for PCE-based policy-enabled path computation. This framework is based on the Policy Core Information Model (PCIM) as defined in [RFC3060] and further extended by [RFC3460].

When using a PCE to compute inter-domain paths, policy may be applied in the following ways:

o Each PCC must select which computations will be requested from a PCE;

o Each PCC must select which PCEs it will use;

o Each PCE must determine which PCCs are allowed to use its services and for which computations;

o The PCE must determine how to collect the information in its TED, whom to trust for that information, and how to refresh/update the information;

o Each PCE must determine which objective functions and which algorithms to apply.

12. Manageability Considerations

General PCE management considerations are discussed in [RFC4655]. In the case of multiple domains within a single service provider network, the management responsibility for each PCE would most likely be handled by the same service provider. In the case of multiple ASes within different service provider networks, it will likely be necessary for each PCE to be configured and managed separately by each participating service provider, with policy being implemented based on a previously agreed set of principles.

12.1 Control of Function and Policy

As per [RFC5440], a PCEP implementation should allow the user to configure a number of PCEP session parameters. These are detailed in Section 8.1 of [RFC5440].
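
As an illustration only, such session configuration might be represented as follows; the parameter names and default values below are invented for the example, and the authoritative list of configurable parameters is given in Section 8.1 of [RFC5440].

   from dataclasses import dataclass

   @dataclass
   class PcepSessionConfig:
       """Illustrative PCEP session parameters an operator might set."""
       keepalive_seconds: int = 30       # frequency of Keepalive messages
       dead_timer_seconds: int = 120     # declare the peer down after this
       max_sessions: int = 16            # cap on concurrent PCEP sessions
       max_unknown_messages: int = 5     # close the session above this rate
       request_timeout_seconds: int = 10 # give up on an unanswered request

   config = PcepSessionConfig(keepalive_seconds=60, dead_timer_seconds=240)
   print(config)
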
In H-PCE deployments, the administrative entity responsible for the management of the parent PCEs for multiple areas would typically be a single service provider. In the case of multiple ASes managed by different service providers, it may be necessary for a third party to manage the parent PCE.

12.2 Information and Data Models

A PCEP MIB module is defined in [RFC7420] that describes managed objects for modeling of PCEP communication, including:

o PCEP client configuration and status,

o PCEP peer configuration and information,

o PCEP session configuration and information,

o Notifications to indicate PCEP session changes.

A YANG module for PCEP has also been proposed [PCEP-YANG].

An H-PCE MIB module, or YANG data model, will be required to report parent PCE and child PCE information, including:

o parent PCE configuration and status,

o child PCE configuration and information,

o notifications to indicate session changes between parent PCEs and child PCEs, and

o notification of parent PCE TED updates and changes.

12.3 Liveness Detection and Monitoring

PCEP includes a keepalive mechanism to check the liveliness of a PCEP peer and a notification procedure allowing a PCE to advertise its overloaded state to a PCC. In a multi-domain environment, [RFC5886] provides the procedures necessary to monitor the liveliness and performance of a given PCE chain.

12.4 Verifying Correct Operation

It is important to verify the correct operation of PCEP. [RFC5440] specifies the monitoring of key parameters. These parameters are detailed in [RFC5520].

12.5 Impact on Network Operation

[RFC5440] states that, in order to avoid any unacceptable impact on network operations, a PCEP implementation should allow a limit to be placed on the number of sessions that can be set up on a PCEP speaker. It may also be practical to place a limit on the rate of messages sent by a PCC and received by the PCE.

13. Security Considerations

PCEP security considerations are discussed in [RFC5440] and [RFC6952]. Potential vulnerabilities include spoofing, snooping, falsification, and the use of PCEP as a mechanism for denial-of-service attacks.

As PCEP operates over TCP, it may make use of TCP security and encryption mechanisms, such as Transport Layer Security (TLS) and the TCP Authentication Option (TCP-AO). Usage of these security mechanisms for PCEP is described in [RFC8253], and recommendations and best current practices are provided in [RFC7525].

13.1 Multi-domain Security

Any multi-domain operation necessarily involves the exchange of information across domain boundaries. This represents a significant security and confidentiality risk.

It is expected that PCEP is used between PCCs and PCEs belonging to the same administrative authority, using one of the aforementioned encryption mechanisms. Furthermore, PCEP allows individual PCEs to maintain the confidentiality of their domain path information using path keys.

14. IANA Considerations

This document makes no requests for IANA action.

15. Acknowledgements

The authors would like to thank Adrian Farrel for his review, and Meral Shirazipour and Francisco Javier Jimenez Chico for their comments.

16. References

16.1. Normative References

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G.
Swallow, "RSVP-TE: Extensions to RSVP for LSP 923 Tunnels", RFC 3209, December 2001. 925 [RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label 926 Switching (GMPLS) Signaling Resource ReserVation 927 Protocol-Traffic Engineering (RSVP-TE) Extensions", RFC 928 3473, January 2003. 930 [RFC4216] Zhang, R., Ed., and J.-P. Vasseur, Ed., "MPLS Inter- 931 Autonomous System (AS) Traffic Engineering (TE) 932 Requirements", RFC 4216, November 2005. 934 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 935 Element (PCE)-Based Architecture", RFC 4655, August 2006. 937 [RFC4726] Farrel, A., Vasseur, J., and A. Ayyangar, "A Framework 938 for Inter-Domain Multiprotocol Label Switching Traffic 939 Engineering", RFC 4726, November 2006. 941 [RFC5152] Vasseur, JP., Ayyangar, A., and R. Zhang, "A Per-Domain 942 Path Computation Method for Establishing Inter-Domain 943 Traffic Engineering (TE) Label Switched Paths (LSPs)", 944 RFC 5152, February 2008. 946 [RFC5440] Ayyangar, A., Farrel, A., Oki, E., Atlas, A., Dolganow, 947 A., Ikejiri, Y., Kumaki, K., Vasseur, J., and J. Roux, 948 "Path Computation Element (PCE) Communication Protocol 949 (PCEP)", RFC 5440, March 2009. 951 [RFC5441] Vasseur, J.P., Ed., "A Backward Recursive PCE-based 952 Computation (BRPC) procedure to compute shortest inter- 953 domain Traffic Engineering Label Switched Paths", 954 RFC5441, April 2009. 956 [RFC5520] Bradford, R., Ed., Vasseur, JP., and A. Farrel, 957 "Preserving Topology Confidentiality in Inter-Domain Path 958 Computation Using a Path-Key-Based Mechanism", RFC 5520, 959 April 2009. 961 [RFC5541] Le Roux, J., Vasseur, J., Lee, Y., "Encoding 962 of Objective Functions in the Path Computation Element 963 Communication Protocol (PCEP)", RFC5541, December 2008. 965 [RFC6805] King, D. and A. Farrel, "The Application of the Path 966 Computation Element Architecture to the Determination 967 of a Sequence of Domains in MPLS & GMPLS", RFC6805, July 968 2010. 970 16.2. Informative References 972 [RFC3060] Moore, B., Ellesson, E., Strassner, J., and A. 973 Westerinen, "Policy Core Information Model -- Version 1 974 Specification", RFC 3060, February 2001. 976 [RFC3460] Moore, B., Ed., "Policy Core Information Model (PCIM) 977 Extensions", RFC 3460, January 2003. 979 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic 980 Engineering (TE) Extensions to OSPF Version 2", RFC 981 3630, September 2003. 983 [RFC4090] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute 984 Extensions to RSVP-TE for LSP Tunnels", RFC 4090, May 985 2005. 987 [RFC4203] Kompella, K., Ed., and Y. Rekhter, Ed., "OSPF 988 Extensions in Support of Generalized Multi- 989 Protocol Label Switching (GMPLS)", RFC 990 4203, October 2005. 992 [RFC4920] Farrel, A., Ed., Satyanarayana, A., Iwata, A., Fujita, 993 N., and G. Ash, "Crankback Signaling Extensions for MPLS 994 and GMPLS RSVP-TE", RFC 4920, July 2007. 996 [RFC5088] Le Roux, JL., Vasseur, JP., Ikejiri, Y., and R. Zhang, 997 "OSPF Protocol Extensions for Path Computation Element 998 (PCE) Discovery", RFC 5088, January 2008. 1000 [RFC5089] Le Roux, JL., Ed., Vasseur, JP., Ed., Ikejiri, Y., and R. 1001 Zhang, "IS-IS Protocol Extensions for Path Computation 1002 Element (PCE) Discovery", RFC 5089, January 2008. 1004 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 1005 Engineering", RFC 5305, October 2008. 1007 [RFC5307] Kompella, K., Ed., and Y. Rekhter, Ed., "IS-IS 1008 Extensions in Support of Generalized Multi-Protocol 1009 Label Switching (GMPLS)", RFC 5307, 1010 October 2008. 

[RFC5316] Chen, M., Zhang, R., and X. Duan, "ISIS Extensions in Support of Inter-Autonomous System (AS) MPLS and GMPLS Traffic Engineering", RFC 5316, December 2008.

[RFC5392] Chen, M., Zhang, R., and X. Duan, "OSPF Extensions in Support of Inter-Autonomous System (AS) MPLS and GMPLS Traffic Engineering", RFC 5392, January 2009.

[RFC5394] Bryskin, I., Papadimitriou, D., Berger, L., and J. Ash, "Policy-Enabled Path Computation Framework", RFC 5394, December 2008.

[RFC5521] Oki, E., Takeda, T., and A. Farrel, "Extensions to the Path Computation Element Communication Protocol (PCEP) for Route Exclusions", RFC 5521, April 2009.

[RFC5886] Vasseur, JP., Le Roux, JL., and Y. Ikejiri, "A Set of Monitoring Tools for Path Computation Element (PCE)-Based Architecture", RFC 5886, June 2010.

[RFC6007] Nishioka, I. and D. King, "Use of the Synchronization VECtor (SVEC) List for Synchronized Dependent Path Computations", RFC 6007, September 2010.

[G-8080] ITU-T Recommendation G.8080/Y.1304, Architecture for the automatically switched optical network (ASON).

[G-7715] ITU-T Recommendation G.7715 (2002), Architecture and Requirements for the Automatically Switched Optical Network (ASON).

[G-7715-2] ITU-T Recommendation G.7715.2 (2007), ASON routing architecture and requirements for remote route query.

[RFC6952] Jethanandani, M., Patel, K., and L. Zheng, "Analysis of BGP, LDP, PCEP, and MSDP Issues According to the Keying and Authentication for Routing Protocols (KARP) Design Guide", RFC 6952, May 2013.

[RFC7334] Zhao, Q., Dhody, D., Ali, Z., King, D., and R. Casellas, "PCE-based Computation Procedure To Compute Shortest Constrained P2MP Inter-domain Traffic Engineering Label Switched Paths", RFC 7334, August 2014.

[RFC7420] Stephan, E., Koushik, K., Zhao, Q., and D. King, "PCE Communication Protocol (PCEP) Management Information Base", RFC 7420, December 2014.

[RFC7525] Sheffer, Y., Holz, R., and P. Saint-Andre, "Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)", BCP 195, RFC 7525, May 2015.

[RFC7752] Gredler, H., Medved, J., Previdi, S., Farrel, A., and S. Ray, "North-Bound Distribution of Link-State and TE Information using BGP", RFC 7752, March 2016.

[RFC7897] Dhody, D., Palle, U., and R. Casellas, "Domain Subobjects for the Path Computation Element Communication Protocol (PCEP)", RFC 7897, June 2016.

[RFC8253] Lopez, D., Gonzalez de Dios, O., Wu, Q., and D. Dhody, "PCEPS: Usage of TLS to Provide a Secure Transport for the Path Computation Element Communication Protocol (PCEP)", RFC 8253, October 2017.

[RFC8453] Ceccarelli, D., Lee, Y., et al., "Framework for Abstraction and Control of TE Networks (ACTN)", RFC 8453, August 2018.

[PCEP-YANG] Dhody, D., Hardwick, J., Beeram, V., and J. Tantsura, "A YANG Data Model for Path Computation Element Communications Protocol (PCEP)", Work in Progress, October 2018.

17. Contributors

Dhruv Dhody
Huawei Technologies
Divyashree Techno Park, Whitefield
Bangalore, Karnataka 560066
India

Email: dhruv.ietf@gmail.com

Quintin Zhao
Huawei Technology
125 Nagog Technology Park
Acton, MA 01719
US

Email: qzhao@huawei.com

Julien Meuric
France Telecom
2, avenue Pierre-Marzin
22307 Lannion Cedex

Email: julien.meuric@orange-ftgroup.com

Olivier Dugeon
France Telecom
2, avenue Pierre-Marzin
22307 Lannion Cedex

Email: olivier.dugeon@orange-ftgroup.com

Jon Hardwick
Metaswitch Networks
100 Church Street
Enfield, Middlesex
United Kingdom

Email: jonathan.hardwick@metaswitch.com

Oscar Gonzalez de Dios
Telefonica I+D
Emilio Vargas 6, Madrid
Spain

Email: ogondio@tid.es

18. Authors' Addresses

Daniel King
Old Dog Consulting
UK

Email: daniel@olddog.co.uk

Haomian Zheng
Huawei Technologies
F3 R&D Center, Huawei Industrial Base, Bantian, Longgang District
Shenzhen, Guangdong 518129
P.R. China

Email: zhenghaomian@huawei.com