PCE Working Group                                                D. King
Internet Draft                                         Old Dog Consulting
Intended status: Informational                                  J. Meuric
Expires: January 21, 2017                                       O. Dugeon
                                                           France Telecom
                                                                  Q. Zhao
                                                                 D. Dhody
                                                      Huawei Technologies
                                                   Oscar Gonzalez de Dios
                                                           Telefonica I+D
                                                            July 20, 2016

     Applicability of the Path Computation Element to Inter-Area and
               Inter-AS MPLS and GMPLS Traffic Engineering

              draft-ietf-pce-inter-area-as-applicability-06

Abstract

The Path Computation Element (PCE) may be used for computing services
that traverse multi-area and multi-AS Multiprotocol Label Switching
(MPLS) and Generalized MPLS (GMPLS) Traffic Engineered (TE) networks.

This document examines the applicability of the PCE architecture,
protocols, and protocol extensions for computing multi-area and
multi-AS paths in MPLS and GMPLS networks.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on January 21, 2017.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors.  All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

1. Introduction.................................................3
   1.1. Domains.................................................4
   1.2. Path Computation........................................4
      1.2.1 PCE-based Path Computation Procedure.................5
   1.3. Traffic Engineering Aggregation and Abstraction.........5
   1.4. Traffic Engineered Label Switched Paths.................
   1.5. Inter-area and Inter-AS Capable PCE Discovery...........6
   1.6. Objective Functions.....................................6
2. Terminology..................................................7
3. Issues and Considerations....................................7
   3.1 Multi-homing.............................................7
   3.2 Domain Confidentiality...................................8
   3.3 Destination Location.....................................8
4. Domain Topologies............................................8
   4.1 Selecting Domain Paths...................................8
   4.2 Multi-Homed Domains......................................9
   4.3 Domain Topologies........................................9
   4.4 Domain Diversity.........................................9
   4.5 Synchronized Path Computations...........................9
   4.6 Domain Inclusion or Exclusion............................10
5. Applicability of the PCE to Inter-area Traffic Engineering...11
   5.1. Inter-area Routing......................................11
      5.1.1. Area Inclusion and Exclusion.......................11
      5.1.2. Strict Explicit Path and Loose Path................12
      5.1.3. Inter-Area Diverse Path Computation................12
   5.2. Control and Recording of Area Crossing..................12
6. Applicability of the PCE to Inter-AS Traffic Engineering.....12
   6.1. Inter-AS Routing........................................13
      6.1.1. Strict Explicit Path and Loose Path................13
      6.1.2. AS Inclusion and Exclusion.........................13
   6.2. Inter-AS Bandwidth Guarantees...........................13
   6.3. Inter-AS Recovery.......................................14
   6.4. Inter-AS PCE Peering Policies...........................14
7. Multi-Domain PCE Deployment..................................14
   7.1 Traffic Engineering Database.............................15
      7.1.1 Provisioning Techniques.............................16
   7.3 Pre-Planning and Management-Based Solutions..............16
   7.4 Per-Domain Computation...................................16
   7.5 Cooperative PCEs.........................................16
   7.6 Hierarchical PCEs........................................17
8. Domain Confidentiality.......................................17
   8.1 Loose Hops...............................................17
   8.2 Confidential Path Segments and Path Keys.................17
9. Point-to-Multipoint..........................................18
10. Optical Domains.............................................18
   10.1. PCE applied to the ASON Architecture....................18
11. Policy......................................................19
12. TED Topology and Synchronization............................19
   12.1. Applicability of BGP-LS to PCE..........................20
13. Manageability Considerations................................20
   13.1 Control of Function and Policy...........................20
   13.2 Information and Data Models..............................21
   13.3 Liveness Detection and Monitoring........................21
   13.4 Verifying Correct Operation..............................22
   13.5 Impact on Network Operation..............................22
14. Security Considerations.....................................22
15. IANA Considerations.........................................22
16. Acknowledgements............................................22
17. References..................................................22
   17.1. Normative References....................................22
   17.2. Informative References..................................22
18. Authors' Addresses..........................................26

1. Introduction

Computing paths across large multi-domain environments may require
special computational components and cooperation between entities in
different domains capable of complex path computation.  The Path
Computation Element (PCE) [RFC4655] provides an architecture and a set
of functional components to address this problem space.

A PCE may be used to compute end-to-end paths across multi-domain
environments using a per-domain path computation technique [RFC5152].
The so-called Backward-Recursive PCE-based Computation (BRPC) mechanism
[RFC5441] defines a PCE-based path computation procedure to compute
optimal inter-domain constrained paths across Multiprotocol Label
Switching (MPLS) and Generalized MPLS (GMPLS) Traffic Engineered (TE)
networks.  However, both the per-domain and BRPC techniques assume that
the sequence of domains to be crossed from source to destination is
known, either fixed by the network operator or obtained by other means.

In more advanced deployments (including multi-area and multi-Autonomous
System (multi-AS) environments) the sequence of domains may not be
known in advance, and the choice of domains in the end-to-end domain
sequence might be critical to the determination of an optimal
end-to-end path.  In this case the Hierarchical PCE (H-PCE) [RFC6805]
architecture and mechanisms may be used to discover the intra-area
paths and to select the optimal end-to-end domain sequence.

This document describes the processes and procedures available when
using the PCE architecture, protocols, and protocol extensions for
computing inter-area and inter-AS MPLS and GMPLS TE paths.

This document does not discuss stateful PCE, active PCE, or remotely
initiated PCE deployment scenarios.

1.1 Domains

A domain can be defined as a separate administrative, geographic, or
switching environment within the network.  A domain may be further
defined as a zone of routing or computational ability.  Under these
definitions a domain might be categorized as an Autonomous System (AS)
or an Interior Gateway Protocol (IGP) area (as per [RFC4726] and
[RFC4655]).
For the purposes of this document, a domain is considered to be a
collection of network elements within an area or AS that has a common
sphere of address management or path computational responsibility.
Wholly or partially overlapping domains are not within the scope of
this document.

In the context of GMPLS, a particularly important example of a domain
is the Automatically Switched Optical Network (ASON) subnetwork
[G-8080].  In this case, computation of an end-to-end path requires the
selection of nodes and links within a parent domain where some nodes
may, in fact, be subnetworks.  Furthermore, a domain might be an ASON
routing area [G-7715].  A PCE may perform the path computation function
of an ASON routing controller as described in [G-7715-2].

It is assumed that the PCE architecture is not applied to a large group
of domains, such as the Internet.

1.2 Path Computation

For the purpose of this document it is assumed that path computation is
the sole responsibility of the PCE, as per the architecture defined in
[RFC4655].  When a path is required, the Path Computation Client (PCC)
will send a request to the PCE.  The PCE will apply the required
constraints, compute a path, and return a response to the PCC.  In the
context of this document it may be necessary for the PCE to cooperate
with other PCEs in adjacent domains (as per BRPC [RFC5441]) or with a
parent PCE (as per [RFC6805]).

It is entirely feasible that an operator could compute a path across
multiple domains without the use of a PCE if the relevant domain
information is available to the network planner or network management
platform.  The definition of what relevant information is required to
perform this network planning operation, and how that information is
discovered and applied, is outside the scope of this document.

1.2.1 PCE-based Path Computation Procedure

As highlighted, the PCE is an entity capable of computing an
inter-domain TE path upon receiving a request from a PCC.  There could
be a single PCE per domain, or a single PCE responsible for all
domains.  A PCE may or may not reside on the same node as the
requesting PCC.  A path may be computed by either a single PCE node or
a set of distributed PCE nodes that collaborate during path
computation.

[RFC4655] defines that a PCC should send a path computation request to
a particular PCE, using [RFC5440] (PCC-to-PCE communication).  This
negates the need to broadcast a request to all PCEs.  Each PCC can
maintain information about the computation capabilities of the PCEs it
is aware of.  The PCC-PCE capability awareness can be configured using
static configuration or by automatic and dynamic PCE discovery
procedures.

Once a path computation request is received from a PCC, the PCE may
compute the end-to-end path itself if it is aware of the topology and
TE information required to compute the entire path.  If the PCE is
unable to compute the entire path, the PCE architecture provides
cooperative PCE mechanisms for the resolution of path computation
requests when an individual PCE does not have sufficient TE visibility.

End-to-end path segments may be kept confidential through the
application of path keys, to protect partial or full path information.
A path key is a token that replaces a path segment in an explicit
route.  The path key mechanism is described in [RFC5520].
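The interaction above can be pictured with the following non-normative
Python sketch.  It models a single-domain PCE answering a PCC request
using plain data structures; all class and field names are invented for
the illustration and bear no relation to the PCEP [RFC5440] message or
object encodings.

   # Illustrative model of a PCC asking a PCE for a constrained path.
   # Names and values are invented; this is not PCEP.
   import heapq
   from dataclasses import dataclass, field

   @dataclass
   class PathRequest:
       source: str
       destination: str
       bandwidth: float                    # required bandwidth (arbitrary units)
       excluded_nodes: set = field(default_factory=set)

   @dataclass
   class PathReply:
       ero: list                           # ordered list of strict hops
       cost: float

   class SimplePce:
       """A PCE with full TE visibility of a single domain."""
       def __init__(self, ted):
           self.ted = ted                  # {node: {neighbour: (cost, avail_bw)}}

       def compute(self, req):
           # Constrained shortest path first: prune links that cannot
           # satisfy the requested bandwidth, then run Dijkstra.
           dist, prev = {req.source: 0.0}, {}
           queue = [(0.0, req.source)]
           while queue:
               d, node = heapq.heappop(queue)
               if node == req.destination:
                   break
               if d > dist.get(node, float("inf")):
                   continue
               for nbr, (cost, bw) in self.ted.get(node, {}).items():
                   if bw < req.bandwidth or nbr in req.excluded_nodes:
                       continue
                   nd = d + cost
                   if nd < dist.get(nbr, float("inf")):
                       dist[nbr], prev[nbr] = nd, node
                       heapq.heappush(queue, (nd, nbr))
           if req.destination not in dist:
               return None                 # no path satisfies the constraints
           hops, node = [], req.destination
           while node != req.source:
               hops.append(node)
               node = prev[node]
           return PathReply(ero=list(reversed(hops)), cost=dist[req.destination])

   # The PCC simply hands its request to the PCE it has selected.
   ted = {"A": {"B": (1, 10), "C": (5, 10)},
          "B": {"D": (1, 10)},
          "C": {"D": (1, 10)},
          "D": {}}
   print(SimplePce(ted).compute(PathRequest("A", "D", bandwidth=5)))
   # -> PathReply(ero=['B', 'D'], cost=2.0)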
1.3 Traffic Engineering Aggregation and Abstraction

Networks are often constructed from multiple areas or ASes that are
interconnected via multiple interconnect points.  To maintain network
confidentiality and scalability, the TE properties of each area and AS
are not generally advertised outside each specific area or AS.

TE aggregation and abstraction provide mechanisms to hide information,
but they may cause failed path setups or the selection of suboptimal
end-to-end paths [RFC4726].  The aggregation process may also have
significant scaling issues for networks with many possible routes and
multiple TE metrics.  Flooding full TE information across domain
borders breaks confidentiality and does not scale in the routing
protocol.

The PCE architecture and associated mechanisms provide a solution that
avoids the use of TE aggregation and abstraction.

1.4 Traffic Engineered Label Switched Paths

This document highlights the PCE techniques and mechanisms that exist
for establishing TE packet and optical LSPs across multiple areas
(inter-area TE LSP) and ASes (inter-AS TE LSP).  In this context and
within the remainder of this document, we consider all LSPs to be
constraint-based and traffic engineered.

Three signaling options are defined for setting up an inter-area or
inter-AS LSP [RFC4726]:

- Contiguous LSP
- Stitched LSP
- Nested LSP

All three signaling methods are applicable to the architectures and
procedures discussed in this document.

1.5 Inter-area and Inter-AS Capable PCE Discovery

When using a PCE-based approach for inter-area and inter-AS path
computation, a PCE in one area or AS may need to learn information
related to inter-AS capable PCEs located in other ASes.  The PCE
discovery mechanisms defined in [RFC5088] and [RFC5089] allow the
discovery of PCEs and the disclosure of information related to
inter-area and inter-AS capable PCEs.

1.6 Objective Functions

An Objective Function (OF) [RFC5541], or set of OFs, specifies the
intentions of the path computation and so defines the "optimality" in
the context of that computation request.

An OF specifies the desired outcome of a computation.  An OF does not
describe or specify the algorithm to use, and an implementation may
apply any algorithm or set of algorithms to achieve the result
indicated by the OF.  [RFC5541] provides the following OFs for use when
computing inter-domain paths:

o Minimum Cost Path (MCP);
o Minimum Load Path (MLP);
o Maximum residual Bandwidth Path (MBP);
o Minimize aggregate Bandwidth Consumption (MBC);
o Minimize the Load of the most loaded Link (MLL);
o Minimize the Cumulative Cost of a set of paths (MCC).

OFs can be included in PCE computation requests to satisfy the policies
encoded or configured at the PCC, and a PCE may be subject to policy in
determining whether it meets the OFs included in the computation
request, or applies its own OFs.

During inter-domain path computation, the selection of a domain
sequence, the computation of each (per-domain) path fragment, and the
determination of the end-to-end path may each be subject to different
OFs and policy.
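For illustration only, the following non-normative sketch ranks the
same set of candidate paths under three of the OFs listed above.  The
path attributes and selection logic are invented for the example; they
are not defined PCEP or [RFC5541] behaviour.

   # The same candidate paths, ranked under different OFs.
   # Attribute names and figures are invented for this illustration.
   candidates = [
       {"hops": ["A", "B", "D"], "cost": 10, "max_link_load": 0.70, "residual_bw": 3},
       {"hops": ["A", "C", "D"], "cost": 14, "max_link_load": 0.20, "residual_bw": 8},
       {"hops": ["A", "E", "D"], "cost": 12, "max_link_load": 0.50, "residual_bw": 5},
   ]

   # Key functions capturing the intent of three OFs from [RFC5541]:
   objective_functions = {
       "MCP": lambda p: p["cost"],            # Minimum Cost Path
       "MLP": lambda p: p["max_link_load"],   # Minimum Load Path
       "MBP": lambda p: -p["residual_bw"],    # Maximum residual Bandwidth Path
   }

   for name, key in objective_functions.items():
       best = min(candidates, key=key)
       print(name, "->", best["hops"])
   # MCP -> ['A', 'B', 'D']
   # MLP -> ['A', 'C', 'D']
   # MBP -> ['A', 'C', 'D']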
2. Terminology

This document also uses the terminology defined in [RFC4655] and
[RFC5440].  Additional terminology is defined below:

ABR: IGP Area Border Router, a router that is attached to more than one
IGP area.

ASBR: Autonomous System Border Router, a router used to connect
together ASes of a different or the same Service Provider via one or
more inter-AS links.

Inter-area TE LSP: A TE LSP whose path transits through two or more IGP
areas.

Inter-AS MPLS TE LSP: A TE LSP whose path transits through two or more
ASes or sub-ASes (BGP confederations).

SRLG: Shared Risk Link Group.

TED: Traffic Engineering Database, which contains the topology and
resource information of the domain.  The TED may be fed by Interior
Gateway Protocol (IGP) extensions or potentially by other means.

3. Issues and Considerations

3.1 Multi-homing

Networks constructed from multi-area or multi-AS environments may have
multiple interconnect points (multi-homing).  End-to-end path
computations may need to use different interconnect points to avoid
single points of failure disrupting primary and backup services.

Domain and path diversity may also be required when computing
end-to-end paths.  Domain diversity should facilitate the selection of
paths that share ingress and egress domains, but do not share transit
domains.  Therefore, there must be a method allowing the inclusion or
exclusion of specific domains when computing end-to-end paths.

3.2 Domain Confidentiality

Where the end-to-end path crosses multiple domains, each domain (AS or
area) may be administered by a separate Service Provider.  In such
cases it would break confidentiality rules for a PCE to supply a path
segment to a PCE in another domain, thus disclosing AS-internal
topology information.

If confidentiality is required between domains (ASes and areas)
belonging to different Service Providers, then cooperating PCEs cannot
exchange path segments; otherwise the receiving PCE or PCC would be
able to see the individual hops through another domain.

3.3 Destination Location

The PCC asking for an inter-domain path computation is typically aware
of the identity of the destination node.  Additionally, if the PCC is
aware of the destination domain, it can supply this information as part
of the path computation request.  However, if the PCC does not know the
egress domain, this information must be determined by another method.

4. Domain Topologies

Constraint-based inter-domain path computation is a fundamental
requirement for operating traffic engineered MPLS [RFC3209] and GMPLS
[RFC3473] networks in inter-area and inter-AS (multi-domain)
environments.  Path computation across multi-domain networks is complex
and requires cooperating computational entities such as the PCE.

4.1 Selecting Domain Paths

Where the sequence of domains is known a priori, various techniques can
be employed to derive an optimal multi-domain path.  If the domains are
simply connected, or if the preferred points of interconnection are
also known, the per-domain path computation technique [RFC5152] can be
used.  Where there are multiple connections between domains and there
is no preference for the choice of points of interconnection, BRPC
[RFC5441] can be used to derive an optimal path.

When the sequence of domains is not known in advance, the optimum
end-to-end path can be derived through the use of a hierarchical
relationship between domains [RFC6805].
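The backward-recursive procedure of [RFC5441] can be pictured with the
simplified, non-normative sketch below.  It assumes that each domain's
PCE contributes only the costs of its best segments between an entry
border node and an exit border node (or the destination); all names and
figures are invented for the illustration, and no actual PCEP objects
are represented.

   # Simplified BRPC: a virtual shortest path tree (VSPT) is built
   # backwards along a known domain sequence D1 -> D2 -> D3.
   domain_segments = {
       "D3": {("X1", "dst"): 2, ("X2", "dst"): 4},          # entry BN -> dst
       "D2": {("Y1", "X1"): 3, ("Y1", "X2"): 1, ("Y2", "X1"): 2},
       "D1": {("src", "Y1"): 5, ("src", "Y2"): 1},
   }
   domain_sequence = ["D1", "D2", "D3"]

   def brpc(domain_sequence, domain_segments, source="src", dest="dst"):
       # Start in the destination domain: cost from each of its entry
       # border nodes to the destination (the initial VSPT).
       vspt = {}   # entry border node -> (cost to dest, hop list to dest)
       for (entry, exit_), cost in domain_segments[domain_sequence[-1]].items():
           if exit_ == dest:
               vspt[entry] = (cost, [entry, dest])

       # Walk backwards through the remaining domains, extending the VSPT.
       for domain in reversed(domain_sequence[:-1]):
           new_vspt = {}
           for (entry, exit_), seg_cost in domain_segments[domain].items():
               if exit_ not in vspt:
                   continue            # this exit BN cannot reach the dest
               down_cost, down_path = vspt[exit_]
               total = seg_cost + down_cost
               if entry not in new_vspt or total < new_vspt[entry][0]:
                   new_vspt[entry] = (total, [entry] + down_path)
           vspt = new_vspt

       return vspt.get(source)         # best cost and border-node path

   print(brpc(domain_sequence, domain_segments))
   # -> (5, ['src', 'Y2', 'X1', 'dst'])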
4.2 Multi-Homed Domains

Networks constructed from multi-area or multi-AS environments may have
multiple interconnect points (multi-homing).  End-to-end path
computations may need to use different interconnect points to avoid
single points of failure disrupting primary and backup services.

4.3 Domain Topologies

Very frequently, network domains are composed of dozens or hundreds of
network elements.  These network elements are usually interconnected in
a partial-mesh fashion, to provide survivability against dual failures
and to benefit from the traffic engineering capabilities of the MPLS
and GMPLS protocols.  A typical node degree ranges from 3 to 10 (4-5 is
quite common), where the node degree is the number of neighbors per
node.

Networks are sometimes divided into domains.  The reasons for this
range from manageability to separation into vendor-specific domains.
The size of a domain is usually limited by control-plane scalability,
but it can also be set by arbitrary design constraints.

4.4 Domain Diversity

Whenever a specific connectivity service is required to have a 1+1
protection feature, two completely disjoint paths must be established
in an end-to-end fashion.  In a multi-domain environment, this can be
accomplished either by selecting domain diversity, or by ensuring
diverse connections within a domain.  While domain diversity ensures
diversity in the transit domains, diverse paths must still be computed
within the shared ingress and egress domains.  In order to compute
diverse paths, it could also be helpful to have SRLG information for
the domains, to ensure SRLG diversity.

4.5 Synchronized Path Computations

In some scenarios, it would be beneficial for the operator to rely on
the capability of the PCE to perform synchronized path computations.

Synchronized path computations use Synchronization VECtors (SVECs) for
dependent path computations.  SVECs are defined in [RFC5440], and
[RFC6007] provides an overview of the use of the PCE SVEC list for
synchronized path computations when computing dependent requests.

In an H-PCE deployment, a child PCE will be able to request both
dependent and synchronized domain-diverse end-to-end paths from its
parent PCE.

A non-comprehensive list of synchronized path computations includes the
following examples:

o Route diversity: computation of two disjoint paths from a source to a
  destination (as described in the previous section).

o Synchronous restoration: joint computation of a set of alternative
  paths for a set of affected LSPs as a consequence of a failure event.
  Note that in this case, the requests will potentially involve
  different source-destination pairs.  In this scenario, the different
  path computation requests may also arrive at different times.

o Batch provisioning: it is common for the operator to send a set of
  LSP requests together, e.g., on a daily or weekly basis, mainly in
  the case of long-lived LSPs.  In order to optimize the resource
  usage, a synchronized path computation is needed.

o Network optimization: after some time of operation, the distribution
  of the established LSP paths may result in a non-optimal use of
  resources.  Also, inter-domain policies/agreements may have changed.
  In such cases, a full (or partial) network planning action regarding
  the inter-domain connections will be triggered.  This process is also
  known as Global Concurrent Optimization (GCO) [RFC5557].

A simple illustration of jointly computing a pair of dependent requests
is sketched below.
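The following non-normative sketch shows why dependent requests benefit
from being computed together: serving the two requests one after the
other yields a more expensive outcome than selecting the pair of paths
jointly.  The candidate paths and costs are invented, and the
structures bear no relation to the SVEC object encoding of [RFC5440].

   # Two dependent requests whose paths must be link-disjoint (e.g. a
   # working and a protection LSP).  Candidates and costs are invented.
   from itertools import product

   candidates = {
       "req1": [(["A-B", "B-D"], 2), (["A-C", "C-D"], 4)],
       "req2": [(["A-B", "B-D"], 2), (["A-E", "E-D"], 5)],
   }

   def links_disjoint(p1, p2):
       return not (set(p1) & set(p2))

   # Sequential computation: serve req1 first, then req2 with what is left.
   seq1_path, seq1_cost = min(candidates["req1"], key=lambda c: c[1])
   seq2 = [c for c in candidates["req2"] if links_disjoint(c[0], seq1_path)]
   seq2_path, seq2_cost = min(seq2, key=lambda c: c[1])
   print("sequential  :", seq1_path, seq2_path, "total", seq1_cost + seq2_cost)

   # Synchronized computation: choose the pair of paths jointly.
   best = min(
       (pair for pair in product(candidates["req1"], candidates["req2"])
        if links_disjoint(pair[0][0], pair[1][0])),
       key=lambda pair: pair[0][1] + pair[1][1],
   )
   print("synchronized:", best[0][0], best[1][0],
         "total", best[0][1] + best[1][1])
   # sequential  : ['A-B', 'B-D'] ['A-E', 'E-D'] total 7
   # synchronized: ['A-C', 'C-D'] ['A-B', 'B-D'] total 6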
4.6 Domain Inclusion or Exclusion

A domain sequence is an ordered sequence of domains traversed to reach
the destination domain.  A domain sequence may be supplied during path
computation to guide the PCEs, or derived via the use of the
Hierarchical PCE (H-PCE).

During multi-domain path computation, a PCC may request that specific
domains be included in or excluded from the domain sequence using the
Include Route Object (IRO) [RFC5440] and the Exclude Route Object (XRO)
[RFC5521].  The use of an Autonomous System (AS) number as an abstract
node representing a domain is defined in [RFC3209], and [RFC7897]
specifies new sub-objects to include or exclude domains such as an IGP
area or a 4-byte AS number.

5. Applicability of the PCE to Inter-area Traffic Engineering

As networks increase in size and complexity, it may be necessary to
introduce scaling methods to reduce the amount of information flooded
within the network and to make the network more manageable.  An IGP
hierarchy is designed to improve IGP scalability by dividing the IGP
domain into areas and limiting the flooding scope of topology
information to within area boundaries.  This restricts visibility of
the area to routers in a single area.  If a router needs to compute a
route to a destination located in another area, a method is required to
compute a path across area boundaries.

In order to support multiple vendors in a network, in cases where data
and/or control plane technologies cannot interoperate, it is useful to
divide the network into vendor domains.  Each vendor domain is an IGP
area, and the flooding scope of the topology (as well as any other
relevant information) is limited to the area boundaries.

Per-domain path computation [RFC5152] exists to provide a method of
inter-area path computation.  The per-domain solution is based on loose
hop routing with an Explicit Route Object (ERO) expansion on each Area
Border Router (ABR).  This allows an LSP to be established using a
constrained path; however, at least two issues exist:

- This method does not guarantee an optimal constrained path.

- The method may require several crankback signaling messages,
  increasing signaling traffic and delaying the LSP setup.

The PCE-based architecture [RFC4655] is designed to solve inter-area
path computation problems.  The issue of limited topology visibility is
resolved by introducing path computation entities that are able to
cooperate in order to establish LSPs with sources and destinations
located in different areas.

5.1. Inter-area Routing

An inter-area TE LSP is an LSP that transits through at least two IGP
areas.  In a multi-area network, topology visibility remains local to a
given area, and a node in one area will not be able to compute an
end-to-end path across multiple areas without the use of a PCE.

5.1.1. Area Inclusion and Exclusion

[RFC5441] provides a more optimal method to specify the inclusion or
exclusion of an ABR.  Using this method, an operator might decide
whether an area must be included in or excluded from the inter-area
path computation.

5.1.2. Strict Explicit Path and Loose Path

A strict explicit path is defined as a set of strict hops, while a
loose path is defined as a set of at least one loose hop and zero, one,
or more strict hops.
It may be useful to indicate, during the path computation request,
whether a strict explicit path is required or not.  An inter-area path
may be strictly explicit or loose (e.g., a list of ABRs as loose hops).

A PCC request to a PCE allows the PCC to indicate whether a strict
explicit path across specific areas ([RFC7897]) is required or desired,
or whether the path request is loose.

5.1.3. Inter-Area Diverse Path Computation

It may be necessary (for protection or load-balancing) to compute a
path that is diverse from a previously computed path.  There are
various levels of diversity in the context of an inter-area network:

- Per-area diversity (intra-area path segments are link, node, or SRLG
  disjoint).

- Inter-area diversity (end-to-end inter-area paths are link, node, or
  SRLG disjoint).

Note that two paths may be disjoint in the backbone area but
non-disjoint in peripheral areas.  Also, two paths may be node disjoint
within areas but may share ABRs, in which case path segments within an
area are node disjoint but the end-to-end paths are not node disjoint.

The per-domain [RFC5152], BRPC [RFC5441], and H-PCE [RFC6805]
mechanisms support the capability to compute diverse paths across
multi-area topologies.

5.2. Control and Recording of Area Crossing

In some environments it may be useful for the PCE to provide to a PCC
the set of areas crossed by the end-to-end path.  An operator may then
want to avoid crossing specific areas, and may choose to select a
sub-optimal intra-area path instead.  Additionally, the PCE can provide
the path information and mark each segment so that the PCC has
visibility of which piece of the path lies within which area.  Note
that when path keys [RFC5520] are used, the hop-by-hop (area topology)
information is kept confidential.

6. Applicability of the PCE to Inter-AS Traffic Engineering

As discussed in Section 5 (Applicability of the PCE to Inter-area
Traffic Engineering), it is sometimes necessary to divide the network
into smaller administrative domains, or ASes.  If an LSR within an AS
needs to compute a path across an AS boundary, it must also use an
inter-AS computation technique.  [RFC5152] defines mechanisms for the
computation of inter-domain TE LSPs using network elements along the
signaling paths to compute per-domain constrained path segments.

The PCE was designed to be capable of computing MPLS and GMPLS paths
across AS boundaries.  This section outlines the features of a
PCE-enabled solution for computing inter-AS paths.

6.1 Inter-AS Routing

6.1.1. Strict Explicit Path and Loose Path

During path computation, the PCE architecture and the BRPC algorithm
allow operators to specify whether the resultant LSP must follow a
strict or a loose path.  By explicitly specifying the path, the
operator requests a strict explicit path that must pass through one or
more specified LSRs.  While this behaviour is well defined and
appropriate for the inter-area case, it implies some topology discovery
in the inter-AS case.  This feature is therefore applicable when the
operator owns several ASes (and so knows the topology of its ASes), or
when the path is restricted to well-known ASBRs so as to avoid topology
discovery between operators.  A loose path, even though it does not
allow granular specification of the path, protects against topology
disclosure, as the operator is not obliged to disclose information
about its network.
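The following non-normative fragment illustrates the difference between
the two request styles as ordered hop lists in the spirit of an ERO.
The addresses and hop types are invented; they are not RSVP-TE or PCEP
subobject encodings.

   # Illustrative hop lists.  A "strict" hop pins down the next LSR;
   # a "loose" hop leaves the expansion of the intervening hops to the
   # network (e.g. to an ASBR or a downstream PCE).
   strict_explicit_path = [
       ("10.0.1.1", "strict"),   # every transit LSR is pinned down:
       ("10.0.2.1", "strict"),   # this requires knowledge of the AS topology
       ("10.0.3.1", "strict"),
   ]

   loose_path = [
       ("ASBR-1",   "strict"),   # well-known AS entry point
       ("ASBR-7",   "loose"),    # "reach ASBR-7 somehow": the transit AS
       ("10.9.9.9", "loose"),    # expands the segment without disclosing
   ]                             # its internal topology

   def all_hops_pinned(path):
       """True when every transit LSR is explicitly specified."""
       return all(kind == "strict" for _, kind in path)

   print(all_hops_pinned(strict_explicit_path))  # True
   print(all_hops_pinned(loose_path))            # False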
6.1.2. AS Inclusion and Exclusion

As with strict explicit and loose paths, [RFC5441] allows the inclusion
or exclusion of an AS or an ASBR, respectively, to be specified.  Using
this method, an operator might decide whether an AS must be included in
or excluded from the inter-AS path computation.  Exclusion and/or
inclusion could also be specified at any step in the LSP path
computation process by a PCE (within the BRPC algorithm), but the best
practice would be to specify them at the edge.  In contrast to strict
explicit paths, AS inclusion or exclusion does not impose topology
disclosure, as ASes, like their interconnections, are public entities.

6.2 Inter-AS Bandwidth Guarantees

Many operators with multi-AS domains will have deployed MPLS-TE
DiffServ either across their entire network or at the domain edges on
CE-PE links.  In situations where strict QoS bounds are required,
admission control inside the network may also be required.

When the propagation delay can be bounded, performance targets such as
maximum one-way transit delay may be guaranteed by providing bandwidth
guarantees along the DiffServ-enabled path.  These requirements are
described in [RFC4216].

One typical example of the requirements in [RFC4216] is to provide
bandwidth guarantees over an end-to-end path for VoIP traffic
classified as EF (Expedited Forwarding) class in a DiffServ-enabled
network.  In the case where the EF path is extended across multiple
ASes, an inter-AS bandwidth guarantee would be required.

Another case for inter-AS bandwidth guarantees is the requirement to
guarantee a certain amount of transit bandwidth across one or multiple
ASes.

6.3 Inter-AS Recovery

During a path computation process, a PCC request may contain a
requirement to compute a backup LSP for protecting the primary LSP (1+1
protection).  A single backup LSP, or multiple backup LSPs, may also be
used for a group of primary LSPs (m:n protection).

Other inter-AS recovery mechanisms include [RFC4090], which adds fast
reroute (FRR) protection to an LSP.  Thus, the PCE could be used to
trigger the computation of backup tunnels in order to protect inter-AS
connectivity.

Inter-AS recovery requires not only LSP protection; it would also be
advisable to deploy multiple PCEs for redundancy.

6.4 Inter-AS PCE Peering Policies

Like BGP peering policies, inter-AS PCE peering policies are a
requirement for operators.  In the inter-AS BRPC process, PCEs must
cooperate in order to compute the end-to-end LSP.  Consequently, the AS
path must not only satisfy technical constraints, e.g., bandwidth
availability, but also policies defined by the operator.

Typically, PCE interconnections at the AS level must follow contractual
obligations, also known as peering agreements.  The PCE peering
policies are the result of the contract negotiation and govern the
relationship between the different PCEs.

7. Multi-domain PCE Deployment Options

The PCE provides the architecture and mechanisms to compute inter-area
and inter-AS LSPs.  The objective of this document is not to reprint
the techniques and mechanisms available, but to highlight their
existence and reference the relevant documents that introduce and
describe the techniques and mechanisms necessary for computing
inter-area and inter-AS LSP-based services.
An area or AS may contain multiple PCEs:

- The path computation load may be balanced among a set of PCEs to
  improve scalability.

- For the purpose of redundancy, primary and backup PCEs may be used.

- PCEs may have distinct path computation capabilities (P2P or P2MP).

Discovery of PCEs, and of their capabilities, per area or AS is defined
in [RFC5088] and [RFC5089].

The PCEs in each domain can be deployed in a centralized or distributed
architecture; in the latter model, each PCE has local visibility and
collaborates with the others in a distributed fashion to compute a path
across the domain.  Each PCE may collect topology and TE information
from the same sources as the LSR, such as the IGP TED.

When a PCC sends a path computation request to the PCE, the PCE will
compute the path across a domain based on the required constraints.
The PCE will generate the full set of strict hops from source to
destination.  This information, encoded as an ERO, is then sent back to
the PCC that requested the path.  In the event that a path request from
a PCC contains source and destination nodes that are located in
different domains, multiple PCEs, each responsible for its own domain,
are required to cooperate on the computation.

Techniques for inter-domain path computation are described in [RFC5152]
and [RFC5441]; both techniques assume that the sequence of domains to
be crossed from source to destination is well known.  In the event that
the sequence of domains is not well known, [RFC6805] might be used.
The sequence could also be retrieved locally from information
previously stored in the PCE database (preferably in the TED) by
Operational Support Systems (OSS) management or other protocols.

7.1 Traffic Engineering Database

TEDs are automatically populated by the TE extensions of the IGP
(IGP-TE), such as IS-IS-TE or OSPF-TE.  However, no information related
to the AS path is provided by these IGP-TE extensions.  To help the
BRPC algorithm select an AS path, it could be useful to populate the
TED with suitable information regarding inter-AS connectivity.  Such
information could be obtained from various sources, such as the BGP
protocol, peering policies, the operator's OSS, or a neighboring PCE.
In any case, no topology disclosure must be imposed in order to provide
such information.

In particular, for both inter-area and inter-AS computation, the TED
must be populated.  Inter-AS connectivity information may be populated
via [RFC5316] and [RFC5392].

7.1.1 Provisioning Techniques

As PCE algorithms rely on the information contained in the TED, it is
possible to populate the TED by means of provisioning.  In this case,
the operator must regularly update and store all suitable information
in the TED in order for the PCE to correctly compute LSPs.  Such
information ranges from policies (e.g., avoid this LSR, or use this
ASBR for a specific IP prefix) up to topology information (e.g., AS X
is reachable through a 100 Mbit/s link on this ASBR and 30 Mbit/s are
reserved for EF traffic).  Operators may choose the type and amount of
information they use to manage their traffic engineered network.

However, some LSPs might be provisioned to link ASes or areas.  In this
case, these LSPs must be announced by the IGP-TE in order to
automatically populate the TED.
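The kind of provisioning described above can be sketched, in a
non-normative way, as follows.  The TED structure, field names, and
figures are invented for this illustration.

   # A toy TED: intra-domain entries come from the IGP-TE; inter-AS
   # entries and policies are provisioned by the operator (or an OSS).
   ted = {
       "links": [
           # learnt from OSPF-TE / IS-IS-TE flooding
           {"from": "R1", "to": "R2", "te_metric": 10,
            "avail_bw_mbps": 400, "source": "igp-te"},
       ],
       "inter_as_links": [],
       "policies": [],
   }

   def provision_inter_as_link(ted, asbr, peer_as, bw_mbps, ef_reserved_mbps):
       """Record an operator-provisioned inter-AS link (not flooded by the IGP)."""
       ted["inter_as_links"].append({
           "asbr": asbr, "peer_as": peer_as,
           "avail_bw_mbps": bw_mbps, "ef_reserved_mbps": ef_reserved_mbps,
           "source": "provisioned",
       })

   def provision_policy(ted, rule):
       ted["policies"].append(rule)

   # e.g. "AS 64512 is reachable through a 100 Mbit/s link on ASBR-3,
   #       30 Mbit/s of which are reserved for EF traffic"
   provision_inter_as_link(ted, asbr="ASBR-3", peer_as=64512,
                           bw_mbps=100, ef_reserved_mbps=30)
   provision_policy(ted, {"avoid_node": "R9"})
   provision_policy(ted, {"prefix": "192.0.2.0/24", "prefer_asbr": "ASBR-3"})

   print(len(ted["inter_as_links"]), len(ted["policies"]))   # 1 2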
7.3 Pre-Planning and Management-Based Solutions

Offline path computation is performed ahead of time, before the LSP
setup is requested.  That means that it is requested by, or performed
as part of, a management application.  This model can be seen in
Section 5.5 of [RFC4655].

The offline model is particularly appropriate to long-lived LSPs (such
as those present in a transport network) or for planned responses to
network failures.  In these scenarios, more planning is normally a
feature of LSP provisioning.

This model may also be used where the network operator wishes to retain
full manual control of the placement of LSPs, using the PCE only as a
computation tool to assist the operator, not as part of an automated
network.

Management-based solutions could also be used in conjunction with the
BRPC algorithm.  The operator simply computes the AS path and supplies
it as a parameter of the inter-AS path computation request, letting
each PCE along the AS path compute the part of the LSP within its own
domain.

7.4 Per-Domain Computation

[RFC5152] defines the mechanism to compute paths on a per-domain basis
and may be used when the conditions described above (a known domain
sequence) are met.  Otherwise, BRPC [RFC5441] or H-PCE [RFC6805] may be
used.

7.5 Cooperative PCEs

When PCE cooperation is required to compute an inter-area or inter-AS
LSP, the techniques described in [RFC5441] and [RFC6805] could be used.

7.6 Hierarchical PCEs

The H-PCE [RFC6805] proposal defines how a hierarchy of PCEs may be
used.  An operator must enable a parent PCE, and a child PCE per domain
(AS or area).  A parent PCE can be announced in the other areas or ASes
in order for the parent PCE to contact remote child PCEs.
Reciprocally, child PCEs are announced in remote areas or ASes in order
to be contacted by a remote parent PCE.  The parent PCE and each child
PCE could also be provisioned (e.g., in the TED) if they are not
announced.

8. Domain Confidentiality

Confidentiality typically applies to inter-provider (inter-AS) PCE
communication.  Where the TE LSP crosses multiple domains (ASes or
areas), the path may be computed by multiple PCEs that cooperate
together, with each local PCE responsible for computing a segment of
the path.  However, in some cases (e.g., when ASes are administered by
separate Service Providers), it would break confidentiality rules for a
PCE to supply a path segment to a PCE in another domain, thus
disclosing AS-internal or area topology information.

8.1 Loose Hops

A method for preserving the confidentiality of a path segment is for
the PCE to return a path containing a loose hop in place of the segment
that must be kept confidential.  The concept of loose and strict hops
for the route of a TE LSP is described in [RFC3209].

[RFC5440] supports the use of paths with loose hops, and it is a local
policy decision at a PCE whether it returns a full explicit path with
strict hops or uses loose hops.  A path computation request may request
an explicit path with strict hops, or may allow loose hops, as detailed
in [RFC5440].

8.2 Confidential Path Segments and Path Keys

[RFC5520] defines the concept and mechanism of the Path-Key.  A
Path-Key is a token that replaces the path segment information in an
explicit route.  The Path-Key allows the explicit route information to
be encoded and carried in the PCEP [RFC5440] messages exchanged between
the PCE and PCC.

This Path-Key technique allows explicit route information to be used
for end-to-end path computation without disclosing internal topology
information between domains.
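The Path-Key behaviour can be pictured with the following non-normative
sketch: the PCE of a transit domain replaces its confidential segment
with an opaque token before returning the end-to-end route, and expands
the token later when setup reaches its domain.  The data structures and
key format below are invented for the illustration and do not reflect
the encodings of [RFC5520].

   # Illustrative path-key behaviour: a confidential segment is replaced
   # by an opaque token; only the PCE that issued the token can expand it.
   import secrets

   class PathKeyStore:
       def __init__(self, pce_id):
           self.pce_id = pce_id
           self._segments = {}          # key -> hidden hop list

       def hide(self, segment):
           """Replace a confidential path segment with a path-key hop."""
           key = secrets.token_hex(4)   # stand-in for a real path key
           self._segments[key] = list(segment)
           return {"path_key": key, "pce_id": self.pce_id}

       def expand(self, path_key_hop):
           """Return the hidden segment when signaling enters the domain."""
           return self._segments[path_key_hop["path_key"]]

   pce_b = PathKeyStore(pce_id="PCE-B")

   # PCE-B computed a segment through its own domain but keeps it private.
   hidden = pce_b.hide(["B1", "B4", "B7"])

   # The end-to-end route handed back to the requester contains no
   # B-domain hops, only the opaque path-key reference.
   end_to_end = ["A1", "A3", hidden, "C2", "C5"]
   print(end_to_end)

   # Later, when setup reaches domain B, PCE-B expands the key.
   print(pce_b.expand(hidden))          # ['B1', 'B4', 'B7']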
9. Point-to-Multipoint

For the point-to-multipoint (P2MP) application scenarios for MPLS-TE
LSPs, the complexity of domain sequences, domain policies, and the
choice and number of domain interconnects is magnified compared to P2P
path computations.  Also, as the size of the network grows, the number
of leaves and branches increases, which in turn makes the scalability
of path computation and optimization a greater issue.  A solution for
point-to-multipoint path computation may be achieved using the PCEP
protocol extensions for P2MP [RFC6006] and the inter-domain P2MP
procedures defined in [RFC7334].

10. Optical Domains

The International Telecommunication Union (ITU) defines the ASON
architecture in [G-8080].  [G-7715] defines the routing architecture
for ASON and introduces a hierarchical architecture.  In this
architecture, the Routing Areas (RAs) have a hierarchical relationship
between different routing levels, which means a parent (or higher-
level) RA can contain multiple child RAs.  The interconnectivity of the
lower RAs is visible to the higher-level RA.

10.1. PCE applied to the ASON Architecture

In the ASON framework, a path computation request is termed a Route
Query.  This query is executed before signaling is used to establish an
LSP, termed a Switched Connection (SC) or a Soft Permanent Connection
(SPC).  [G-7715-2] defines the requirements and architecture for the
functions performed by Routing Controllers (RCs) during the operation
of remote route queries - an RC is synonymous with a PCE.

In the ASON routing environment, an RC responsible for an RA may
communicate with its neighbor RC to request the computation of an
end-to-end path across several RAs.  The path computation components
and sequences are defined as follows:

o Remote route query.  An operation where a routing controller
  communicates with another routing controller, which does not have the
  same set of layer resources, in order to compute a routing path in a
  collaborative manner.

o Route query requester.  The connection controller or RC that sends a
  route query message to a routing controller requesting one or more
  routing paths that satisfy a set of routing constraints.

o Route query responder.  An RC that performs path computation upon
  reception of a route query message from a routing controller or
  connection controller, sending a response back at the end of the
  computation.

When computing an end-to-end connection, the route may be computed by a
single RC or by multiple RCs in a collaborative manner; the two
scenarios can be considered a centralized remote route query model and
a distributed remote route query model, respectively.  RCs in an ASON
environment can also use the hierarchical PCE [RFC6805] model to fully
match the ASON hierarchical routing model.

11. Policy

Policy is important in the deployment of new services and the operation
of the network.  [RFC5394] provides a framework for PCE-based
policy-enabled path computation.  This framework is based on the Policy
Core Information Model (PCIM) as defined in [RFC3060] and further
extended by [RFC3460].
When using a PCE to compute inter-domain paths, policy may be applied
at a number of points:

- Each PCC must select which computations will be requested from a PCE;

- Each PCC must select which PCEs it will use;

- Each PCE must determine which PCCs are allowed to use its services
  and for what computations;

- The PCE must determine how to collect the information in its TED,
  whom to trust for that information, and how to refresh/update the
  information;

- Each PCE must determine which objective functions and which
  algorithms to apply.

Finally, due to the nature of inter-domain (and particularly
H-PCE-based) path computations, deployment of policy should also
consider the need to be sensitive to commercial and reliability
information about domains and the interactions of services crossing
domains.

12. TED Topology and Synchronization

The PCE operates on a view of the network topology as presented by a
Traffic Engineering Database.  As discussed in [RFC4655], the TED used
by a PCE may be learnt from the relevant IGP extensions.

Thus, one way for the PCE to populate its TED is by participating in
the IGP running in the network.  In an MPLS-TE network, this would
require OSPF-TE [RFC3630] or ISIS-TE [RFC5305].  In a GMPLS network it
would utilize the GMPLS extensions to OSPF and IS-IS defined in
[RFC4203] and [RFC5307].

An alternative method of providing network topology and resource
information is offered by [RFC7752], as described in the following
section.

12.1 Applicability of BGP-LS to PCE

The concept of the exchange of TE information between Autonomous
Systems (ASes) is discussed in [RFC7752].  The information exchanged in
this way could be the full TE information from the AS, an aggregation
of that information, or a representation of the potential connectivity
across the AS.  Furthermore, that information could be updated
frequently (for example, for every new LSP that is set up across the
AS) or only at threshold-crossing events.

In an H-PCE deployment, the parent PCE requires the inter-domain
topology and the link status between child domains.  This information
may be learnt by a BGP-LS speaker and provided to the parent PCE.
Furthermore, link-state performance information, including delay,
available bandwidth, and utilized bandwidth, may also be provided to
the parent PCE for optimal link selection.

13. Manageability Considerations

General PCE management considerations are discussed in [RFC4655].  In
the case of multiple domains within a single service provider network,
the management responsibility for each PCE would most likely be handled
by the same service provider.  In the case of multiple ASes within
different service provider networks, it will likely be necessary for
each PCE to be configured and managed separately by each participating
service provider, with policy being implemented based on a previously
agreed set of principles.

13.1 Control of Function and Policy

A PCEP [RFC5440] implementation should allow the user to configure a
number of PCEP session parameters.  These are detailed in Section 8.1
of [RFC5440].

In H-PCE deployments, the administrative entity responsible for the
management of the parent PCEs for multiple areas would typically be a
single service provider.  Where multiple ASes are managed by different
service providers, it may be necessary for a third party to manage the
parent PCE.
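As a non-normative illustration of the kind of per-session
configuration referred to above, the fragment below collects a few
example parameters for one PCEP peer.  The parameter names and values
are assumptions made for this sketch; the authoritative list of
configurable parameters is given in Section 8.1 of [RFC5440].

   # Illustrative (non-normative) PCEP session configuration for one peer.
   # Parameter names and values are assumptions made for this sketch.
   pcep_session_config = {
       "peer": "192.0.2.10",            # the PCE this PCC will use
       "keepalive_seconds": 30,         # keepalive frequency
       "dead_timer_seconds": 120,       # declare the peer down after this
       "request_timeout_seconds": 10,   # give up on an unanswered request
       "max_sessions": 16,              # bound the number of PCEP sessions
       "max_unknown_messages": 5,       # tear down a misbehaving session
   }

   def validate(cfg):
       """A couple of sanity checks an implementation might apply."""
       assert cfg["dead_timer_seconds"] > cfg["keepalive_seconds"]
       assert cfg["max_sessions"] > 0
       return cfg

   validate(pcep_session_config)
   print(pcep_session_config["peer"])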
13.2 Information and Data Models

A PCEP MIB module is defined in [RFC7420] that describes managed
objects for the modeling of PCEP communication, including:

o PCEP client configuration and status,

o PCEP peer configuration and information,

o PCEP session configuration and information,

o notifications to indicate PCEP session changes.

A YANG module for PCEP has also been proposed [PCEP-YANG].

An H-PCE MIB module, or YANG data model, will be required to report
parent PCE and child PCE information, including:

o parent PCE configuration and status,

o child PCE configuration and information,

o notifications to indicate session changes between parent PCEs and
  child PCEs, and

o notification of parent PCE TED updates and changes.

13.3 Liveness Detection and Monitoring

PCEP includes a keepalive mechanism to check the liveliness of a PCEP
peer and a notification procedure allowing a PCE to advertise its
overloaded state to a PCC.  In a multi-domain environment, [RFC5886]
provides the procedures necessary to monitor the liveliness and
performance of a given PCE chain.

13.4 Verifying Correct Operation

In order to verify the correct operation of PCEP, [RFC5440] specifies
the monitoring of key parameters.  These parameters are detailed in
Section 8.4 of [RFC5440] and will not be repeated here.

13.5 Impact on Network Operation

[RFC5440] states that, in order to avoid any unacceptable impact on
network operations, a PCEP implementation should allow a limit to be
placed on the number of sessions that can be set up on a PCEP speaker.
It may also be practical to place a limit on the rate of messages sent
by a PCC and received by the PCE.

14. Security Considerations

PCEP security is defined in [RFC5440].  Any multi-domain operation
necessarily involves the exchange of information across domain
boundaries.  This does represent a significant security and
confidentiality risk.  PCEP allows individual PCEs to maintain the
confidentiality of their domain path information using path keys
[RFC5520].

As PCEP operates over TCP, it may also make use of TCP security
mechanisms, such as Transport Layer Security (TLS) and the TCP
Authentication Option (TCP-AO).  Usage of these security mechanisms for
PCEP is described in [PCEPS].

For further considerations of the security issues related to PCECP and
to inter-domain path computation, see [RFC6952] and [RFC5376].

15. IANA Considerations

This document makes no requests for IANA action.

16. Acknowledgements

The authors would like to thank Adrian Farrel for his review, and Meral
Shirazipour and Francisco Javier Jimenez Chico for their comments.

17. References

17.1. Normative References

17.2. Informative References

[RFC3060] Moore, B., Ellesson, E., Strassner, J., and A. Westerinen,
          "Policy Core Information Model -- Version 1 Specification",
          RFC 3060, February 2001.

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and
          G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels",
          RFC 3209, December 2001.

[RFC3460] Moore, B., Ed., "Policy Core Information Model (PCIM)
          Extensions", RFC 3460, January 2003.
[RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label Switching
          (GMPLS) Signaling Resource ReserVation Protocol-Traffic
          Engineering (RSVP-TE) Extensions", RFC 3473, January 2003.

[RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering
          (TE) Extensions to OSPF Version 2", RFC 3630, September 2003.

[RFC4090] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute Extensions
          to RSVP-TE for LSP Tunnels", RFC 4090, May 2005.

[RFC4203] Kompella, K., Ed., and Y. Rekhter, Ed., "OSPF Extensions in
          Support of Generalized Multi-Protocol Label Switching
          (GMPLS)", RFC 4203, October 2005.

[RFC4216] Zhang, R., Ed., and J.-P. Vasseur, Ed., "MPLS Inter-
          Autonomous System (AS) Traffic Engineering (TE)
          Requirements", RFC 4216, November 2005.

[RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
          Element (PCE)-Based Architecture", RFC 4655, August 2006.

[RFC4726] Farrel, A., Vasseur, J., and A. Ayyangar, "A Framework for
          Inter-Domain Multiprotocol Label Switching Traffic
          Engineering", RFC 4726, November 2006.

[RFC5088] Le Roux, JL., Vasseur, JP., Ikejiri, Y., and R. Zhang, "OSPF
          Protocol Extensions for Path Computation Element (PCE)
          Discovery", RFC 5088, January 2008.

[RFC5089] Le Roux, JL., Ed., Vasseur, JP., Ed., Ikejiri, Y., and R.
          Zhang, "IS-IS Protocol Extensions for Path Computation
          Element (PCE) Discovery", RFC 5089, January 2008.

[RFC5152] Vasseur, JP., Ayyangar, A., and R. Zhang, "A Per-Domain Path
          Computation Method for Establishing Inter-Domain Traffic
          Engineering (TE) Label Switched Paths (LSPs)", RFC 5152,
          February 2008.

[RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic
          Engineering", RFC 5305, October 2008.

[RFC5307] Kompella, K., Ed., and Y. Rekhter, Ed., "IS-IS Extensions in
          Support of Generalized Multi-Protocol Label Switching
          (GMPLS)", RFC 5307, October 2008.

[RFC5316] Chen, M., Zhang, R., and X. Duan, "ISIS Extensions in Support
          of Inter-Autonomous System (AS) MPLS and GMPLS Traffic
          Engineering", RFC 5316, December 2008.

[RFC5376] Bitar, N., et al., "Inter-AS Requirements for the Path
          Computation Element Communication Protocol (PCECP)",
          RFC 5376, November 2008.

[RFC5392] Chen, M., Zhang, R., and X. Duan, "OSPF Extensions in Support
          of Inter-Autonomous System (AS) MPLS and GMPLS Traffic
          Engineering", RFC 5392, January 2009.

[RFC5394] Bryskin, I., Papadimitriou, D., Berger, L., and J. Ash,
          "Policy-Enabled Path Computation Framework", RFC 5394,
          December 2008.

[RFC5440] Ayyangar, A., Farrel, A., Oki, E., Atlas, A., Dolganow, A.,
          Ikejiri, Y., Kumaki, K., Vasseur, J., and J. Roux, "Path
          Computation Element (PCE) Communication Protocol (PCEP)",
          RFC 5440, March 2009.

[RFC5441] Vasseur, J.P., Ed., "A Backward Recursive PCE-based
          Computation (BRPC) procedure to compute shortest inter-domain
          Traffic Engineering Label Switched Paths", RFC 5441, April
          2009.

[RFC5520] Bradford, R., Ed., Vasseur, JP., and A. Farrel, "Preserving
          Topology Confidentiality in Inter-Domain Path Computation
          Using a Path-Key-Based Mechanism", RFC 5520, April 2009.

[RFC5521] Oki, E., Takeda, T., and A. Farrel, "Extensions to the Path
          Computation Element Communication Protocol (PCEP) for Route
          Exclusions", RFC 5521, April 2009.
[RFC5541] Le Roux, J., Vasseur, J., and Y. Lee, "Encoding of Objective
          Functions in the Path Computation Element Communication
          Protocol (PCEP)", RFC 5541, June 2009.

[RFC5557] Lee, Y., Le Roux, JL., King, D., and E. Oki, "Path
          Computation Element Communication Protocol (PCEP)
          Requirements and Protocol Extensions in Support of Global
          Concurrent Optimization", RFC 5557, July 2009.

[RFC5886] Vasseur, JP., Le Roux, JL., and Y. Ikejiri, "A Set of
          Monitoring Tools for Path Computation Element (PCE)-Based
          Architecture", RFC 5886, June 2010.

[RFC6006] Takeda, T., Chaitou, M., Le Roux, J.L., Ali, Z., Zhao, Q.,
          and D. King, "Extensions to the Path Computation Element
          Communication Protocol (PCEP) for Point-to-Multipoint Traffic
          Engineering Label Switched Paths", RFC 6006, September 2010.

[RFC6007] Nishioka, I. and D. King, "Use of the Synchronization VECtor
          (SVEC) List for Synchronized Dependent Path Computations",
          RFC 6007, September 2010.

[G-8080]  ITU-T Recommendation G.8080/Y.1304, Architecture for the
          automatically switched optical network (ASON).

[G-7715]  ITU-T Recommendation G.7715 (2002), Architecture and
          Requirements for the Automatically Switched Optical Network
          (ASON).

[G-7715-2] ITU-T Recommendation G.7715.2 (2007), ASON routing
          architecture and requirements for remote route query.

[RFC6805] King, D. and A. Farrel, "The Application of the Path
          Computation Element Architecture to the Determination of a
          Sequence of Domains in MPLS and GMPLS", RFC 6805, November
          2012.

[RFC6952] Jethanandani, M., Patel, K., and L. Zheng, "Analysis of BGP,
          LDP, PCEP, and MSDP Issues According to the Keying and
          Authentication for Routing Protocols (KARP) Design Guide",
          RFC 6952, May 2013.

[RFC7334] Zhao, Q., Dhody, D., Ali, Z., King, D., and R. Casellas,
          "PCE-based Computation Procedure To Compute Shortest
          Constrained P2MP Inter-domain Traffic Engineering Label
          Switched Paths", RFC 7334, August 2014.

[RFC7420] Stephan, E., Koushik, K., Zhao, Q., and D. King, "PCE
          Communication Protocol (PCEP) Management Information Base",
          RFC 7420, December 2014.

[RFC7752] Gredler, H., Medved, J., Previdi, S., Farrel, A., and S. Ray,
          "North-Bound Distribution of Link-State and TE Information
          using BGP", RFC 7752, March 2016.

[RFC7897] Dhody, D., Palle, U., and R. Casellas, "Domain Subobjects for
          the Path Computation Element Communication Protocol (PCEP)",
          RFC 7897, June 2016.

[PCEPS]   Lopez, D., Dios, O., Wu, W., and D. Dhody, "Secure Transport
          for PCEP", work in progress, November 2015.

[PCEP-YANG] Dhody, D., Hardwick, J., Beeram, V., and J. Tantsura, "A
          YANG Data Model for Path Computation Element Communications
          Protocol (PCEP)", work in progress, January 2016.
18. Authors' Addresses

Daniel King
Old Dog Consulting
UK

EMail: daniel@olddog.co.uk

Julien Meuric
France Telecom
2, avenue Pierre-Marzin
22307 Lannion Cedex

EMail: julien.meuric@orange-ftgroup.com

Olivier Dugeon
France Telecom
2, avenue Pierre-Marzin
22307 Lannion Cedex

EMail: olivier.dugeon@orange-ftgroup.com

Quintin Zhao
Huawei Technology
125 Nagog Technology Park
Acton, MA 01719
US

EMail: qzhao@huawei.com

Dhruv Dhody
Huawei Technologies
Divyashree Techno Park, Whitefield
Bangalore, Karnataka 560066
India

Email: dhruv.ietf@gmail.com

Oscar Gonzalez de Dios
Telefonica I+D
Emilio Vargas 6, Madrid
Spain

EMail: ogondio@tid.es