PCE Working Group                                                H. Chen
Internet-Draft                                                  D. Dhody
Intended status: Informational                       Huawei Technologies
Expires: April 24, 2014                                 October 21, 2013

 The Applicability of the PCE to Computing Protection and Recovery Paths
            for Single Domain and Multi-Domain Networks
               draft-chen-pce-protection-applicability-04

Abstract

   The Path Computation Element (PCE) provides path computation
   functions in support of traffic engineering in Multiprotocol Label
   Switching (MPLS) and Generalized MPLS (GMPLS) networks.

   A link or node failure can significantly impact network services in
   large-scale networks.  Therefore, it is important to ensure the
   survivability of large-scale networks, which consist of various
   connections provided over multiple interconnected networks with
   varying technologies.

   This document examines the applicability of the PCE architecture,
   protocols, and procedures for computing protection paths and
   restoration services for single-domain and multi-domain networks.

   This document also explains the Fast Re-Route (FRR) mechanism, in
   which a point of local repair (PLR) needs to find the appropriate
   merge point (MP) in order to compute a bypass path using a PCE.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 24, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Domains
       1.1.1.  Inter-domain LSPs
     1.2.  Recovery
     1.3.  Requirements Language
   2.  Terminology
   3.  Path Computation Element Architecture Considerations
     3.1.  Online Path Computation
     3.2.  Offline Path Computation
   4.  Protection Service Traffic Engineering
     4.1.  Path Computation
     4.2.  Bandwidth Reservation
     4.3.  Disjoint Path
     4.4.  Service Preemption
     4.5.  Shared Risk Link Groups
     4.6.  Multi-Homing
       4.6.1.  Ingress and Egress Protection
   5.  Packet Protection Applications
     5.1.  Single Domain Service Protection
     5.2.  Multi-domain Service Protection
     5.3.  Backup Path Computation
     5.4.  Fast Reroute (FRR) Path Computation
       5.4.1.  Methods to find the MP and calculate the optimal
               backup path
         5.4.1.1.  Intra-domain node protection
         5.4.1.2.  Boundary node protection
     5.5.  Point-to-Multipoint Path Protection
   6.  Optical Protection Applications
     6.1.  ASON Applicability
     6.2.  Multi-domain Restoration
   7.  Path and Service Protection Gaps
   8.  Manageability Considerations
     8.1.  Control of Function and Policy
     8.2.  Information and Data Models
     8.3.  Liveness Detection and Monitoring
     8.4.  Verify Correct Operations
     8.5.  Requirements On Other Protocols
     8.6.  Impact On Network Operations
   9.  Security Considerations
   10. IANA Considerations
   11. Contributors
   12. Acknowledgement
   13. References
     13.1.  Normative References
     13.2.  Informative References

1.  Introduction

   Network survivability remains a major concern for network operators
   and service providers, particularly as expanding applications such
   as private and public cloud drive increasingly more traffic, across
   longer distances, to a wider number of users.  A variety of
   well-known pre-planned protection and post-fault recovery schemes
   have been developed for IP, MPLS, and GMPLS networks.
   The Path Computation Element (PCE) [RFC4655] can be used to perform
   complex path computation in large single-domain, multi-domain, and
   multi-layered networks.  The PCE can also be used to compute a
   variety of restoration and protection paths and services.

   This document examines the applicability of the PCE architecture,
   protocols, and protocol extensions for computing protection paths
   and restoration services.

1.1.  Domains

   A domain can be defined as a separate administrative, geographic,
   or switching environment within the network.  A domain may be
   further defined as a zone of routing or computational ability.
   Under these definitions, a domain might be categorized as an
   Autonomous System (AS), an Interior Gateway Protocol (IGP) area (as
   per [RFC4726] and [RFC4655]), or a specific switching environment.

   In the context of GMPLS, a particularly important example of a
   domain is the Automatically Switched Optical Network (ASON)
   subnetwork [G-8080].  In this case, computation of an end-to-end
   path requires the selection of nodes and links within a parent
   domain, where some nodes may, in fact, be subnetworks.
   Furthermore, a domain might be an ASON routing area [G-7715].  A
   PCE may perform the path computation function of an ASON routing
   controller as described in [G-7715-2].

   It is assumed that the PCE architecture is to be applied to small
   inter-domain topologies and not to solve route computation issues
   across large groups of domains, i.e., the entire Internet.

   Most existing protocol mechanisms for network survivability have
   focused on single-domain scenarios.  Multi-domain scenarios are
   much more complex and challenging, as domain topology information
   is typically not shared outside each specific domain.
   Multi-domain survivability is therefore a key requirement for
   today's complex networks.
   It is important to develop more adaptive multi-domain recovery
   solutions for various failure scenarios.

1.1.1.  Inter-domain LSPs

   Three signaling options are defined for setting up an inter-area or
   inter-AS LSP [RFC4726]:

   o  Contiguous LSP

   o  Stitched LSP

   o  Nested LSP

1.2.  Recovery

   Traffic-engineered networks, such as MPLS-TE and GMPLS networks,
   typically use protection and recovery mechanisms based on the
   pre-established use of a packet or optical LSP and/or the
   availability of spare resources and the network topology.

1.3.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower-case uses of these words are not to
   be interpreted as carrying [RFC2119] significance.

2.  Terminology

   The following terminology is used in this document.

   ABR:  Area Border Router.  Router used to connect two IGP areas
      (areas in OSPF or levels in IS-IS).

   ASBR:  Autonomous System Border Router.  Router used to connect
      together ASes of the same or different service providers via one
      or more inter-AS links.

   BN:  Boundary Node.  A boundary node is either an ABR in the
      context of inter-area Traffic Engineering or an ASBR in the
      context of inter-AS Traffic Engineering.

   CPS:  Confidential Path Segment.  A segment of a path that contains
      nodes and links that the AS policy requires not to be disclosed
      outside the AS.

   CSP:  Communication Service Provider.

   CSPF:  Constrained Shortest Path First algorithm.

   ERO:  Explicit Route Object.

   FRR:  Fast Re-Route.

   IGP:  Interior Gateway Protocol.  Either of the two routing
      protocols, Open Shortest Path First (OSPF) or Intermediate
      System to Intermediate System (IS-IS).
   Inter-area TE LSP:  A TE LSP whose path transits through two or
      more IGP areas.

   Inter-AS TE LSP:  A TE LSP whose path transits through two or more
      ASes or sub-ASes (BGP confederations).

   IS-IS:  Intermediate System to Intermediate System.

   LSP:  Label Switched Path.

   LSR:  Label Switching Router.

   MP:  Merge Point.  The LSR where one or more backup tunnels rejoin
      the path of the protected LSP downstream of the potential
      failure.

   OSPF:  Open Shortest Path First.

   PCC:  Path Computation Client.  Any client application requesting a
      path computation to be performed by a Path Computation Element.

   PCE:  Path Computation Element.  An entity (component, application,
      or network node) that is capable of computing a network path or
      route based on a network graph and applying computational
      constraints.

   PKS:  Path Key Subobject.  A subobject of an Explicit Route Object
      or Record Route Object that encodes a CPS so as to preserve
      confidentiality.

   PLR:  Point of Local Repair.  The head-end LSR of a backup tunnel
      or a detour LSP.

   RRO:  Record Route Object.

   RSVP:  Resource Reservation Protocol.

   SRLG:  Shared Risk Link Group.

   TE:  Traffic Engineering.

   TED:  Traffic Engineering Database, which contains the topology and
      resource information of the domain.  The TED may be fed by
      Interior Gateway Protocol (IGP) extensions or potentially by
      other means.

   This document also uses the terminology defined in [RFC4655] and
   [RFC5440].

3.  Path Computation Element Architecture Considerations

   For the purposes of this document, it is assumed that path
   computation is the sole responsibility of the PCE, as per the
   architecture defined in [RFC4655].  When a path is required, the
   Path Computation Client (PCC) sends a request to the PCE.  The PCE
   applies the required constraints, computes a path, and returns a
   response to the PCC.
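   As a rough illustration of this request/response cycle (not the
   PCEP message encoding defined in [RFC5440]; the class names, field
   names, and simple breadth-first search below are assumptions of the
   sketch only), the interaction could be modeled as:

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathRequest:
    """Roughly a path computation request: endpoints plus constraints."""
    src: str
    dst: str
    avoid: frozenset = frozenset()  # nodes the computed path must not use

@dataclass
class PathReply:
    """Roughly a path computation reply: a path, or None for "no path"."""
    path: Optional[list]

class PCE:
    """Answers PathRequests against a toy Traffic Engineering Database."""

    def __init__(self, ted):
        self.ted = ted  # TED as adjacency lists: node -> list of neighbors

    def handle(self, req: PathRequest) -> PathReply:
        # Breadth-first search over the TED, honoring the avoid set.
        seen = {req.src}
        queue = deque([[req.src]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == req.dst:
                return PathReply(path)
            for nbr in self.ted.get(node, ()):
                if nbr in req.avoid or nbr in seen:
                    continue
                seen.add(nbr)
                queue.append(path + [nbr])
        return PathReply(None)  # no path satisfies the constraints
```

   For example, with a four-node TED {"A": ["B", "C"], "B": ["D"],
   "C": ["D"], "D": []}, a request from A to D that avoids node B is
   answered with the path A-C-D.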
   In the context of this document, it may be necessary for the PCE to
   cooperate with other PCEs in adjacent domains (as per BRPC
   [RFC5441]) or with a parent PCE (as per [PCE-HIERARCHY-FWK]).

   A PCE may be used to compute end-to-end paths across single or
   multiple domains.  Multiple PCEs may be dedicated to each area to
   provide sufficient path computation capacity and redundancy for
   each domain.

   During path computation [RFC5440], a PCC request may contain backup
   LSP requirements so that the primary and backup LSPs are set up at
   the same time.  Such a request is known as a dependent path
   computation.  A typical dependent request for a primary and backup
   service would ask the computation to assign a set of diverse paths,
   so that the two services are disjoint from each other.

3.1.  Online Path Computation

   Online path computation is performed on demand, as nodes in the
   network determine that they need to know the paths to use for
   services.

3.2.  Offline Path Computation

   Offline path computation is performed ahead of time, before the LSP
   setup is requested.  That means that it is requested by, or
   performed as part of, a management application.

   This method of computation allows the optimal placement of services
   and explicit control of services.  A Communication Service Provider
   (CSP) can plan ahead of time where new protection services will be
   placed.  Furthermore, by computing paths offline, specific
   scenarios can be considered and a global view of network resources
   is available.

   Finally, offline path computation provides a method to compute
   protection paths in the event of single or multiple link failures.
   This allows the placement of backup services in the event of
   catastrophic network failures.

4.  Protection Service Traffic Engineering

4.1.  Path Computation
   This document describes how the PCE architecture defined in
   [RFC4655] may be utilized to compute protection and recovery paths
   for critical network services.  In the inter-domain context of this
   document, it may be necessary for the PCE to cooperate with other
   PCEs in adjacent domains (as per BRPC [RFC5441]) or with a parent
   PCE (as per [PCE-HIERARCHY-FWK]).

4.2.  Bandwidth Reservation

4.3.  Disjoint Path

   Disjoint paths are required for end-to-end protection services.  A
   backup service may be required to be fully disjoint from the
   primary service, link disjoint (allowing common nodes on the
   paths), or best-effort disjoint (allowing shared links or nodes
   when no other path can be found).

4.4.  Service Preemption

4.5.  Shared Risk Link Groups

4.6.  Multi-Homing

   Networks constructed from multi-area or multi-AS environments may
   have multiple interconnect points (multi-homing).  End-to-end path
   computations may need to use different interconnect points to avoid
   single points of failure disrupting primary and backup services.

   Domain and path diversity may also be required when computing
   end-to-end paths.  Domain diversity should facilitate the selection
   of paths that share ingress and egress domains but do not share
   transit domains.  Therefore, there must be a method allowing the
   inclusion or exclusion of specific domains when computing
   end-to-end paths.

4.6.1.  Ingress and Egress Protection

   An end-to-end primary service carried by a primary TE LSP from a
   primary ingress node to a primary egress node may need to be
   protected against failures of the ingress and the egress.  In this
   case, a backup ingress and a backup egress are required, which are
   different from the primary ingress and the primary egress,
   respectively.
   The backup ingress should be in the same domain as the primary
   ingress, and the backup egress should be in the same domain as the
   primary egress.

   The source of the service traffic may be connected to both the
   primary ingress and the backup ingress (dual-homing).  The source
   may not be in the same domain as the primary ingress and the backup
   ingress.  When the primary ingress fails, the service traffic is
   delivered through the backup ingress.

   A receiver of the service traffic may be connected to both the
   primary egress and the backup egress (dual-homing).  The receiver
   may not be in the same domain as the primary egress and the backup
   egress.  When the primary egress fails, the receiver gets the
   service traffic from the backup egress.

5.  Packet Protection Applications

   Network survivability is a key objective for CSPs, particularly as
   traffic from expanding revenue services (cloud and data-center
   applications) is increasing exponentially.

   Pre-fault protection paths are pre-computed, and protection
   resources are reserved a priori for rapid recovery.  In the event
   of a network failure on the primary path, the traffic is rapidly
   switched to the backup path.  These pre-provisioned mechanisms are
   capable of ensuring protection against single link failures.

   Post-fault restoration schemes are reactive and require a reactive
   routing procedure to set up new working paths in the event of a
   failure.  Post-fault restoration can significantly impact network
   services, as it typically involves longer restoration delays and
   cannot guarantee recovery of a service.  However, it is much more
   efficient in its use of network resources and is capable of
   handling multi-failure situations.

5.1.  Single Domain Service Protection
   A variety of pre-planned protection and post-fault restoration
   recovery schemes are available for single-domain MPLS and GMPLS
   networks; these include:

   o  Path Recovery

   o  Path Segment Recovery

   o  Local Recovery (Fast Reroute)

5.2.  Multi-domain Service Protection

   Typically, network survivability has focused on single-domain
   scenarios.  By contrast, broader multi-domain scenarios are much
   more challenging, as no single entity has a global view of the
   topology information.  As a result, multi-domain survivability is
   particularly important.

   A PCE may be used to compute end-to-end paths across multi-domain
   environments using a per-domain path computation technique
   [RFC5152].  The so-called Backward-Recursive Path Computation
   (BRPC) mechanism [RFC5441] defines a PCE-based path computation
   procedure to compute inter-domain constrained LSPs.

5.3.  Backup Path Computation

   A PCE can be used to compute backup paths in the context of fast
   reroute protection of TE LSPs.  In this model, all backup TE LSPs
   protecting a given facility are computed in a coordinated manner by
   a PCE.  This allows complete bandwidth sharing between backup
   tunnels protecting independent elements, while avoiding any
   extensions to TE LSP signaling.  Both centralized and distributed
   computation models are applicable.  In the distributed case, each
   LSR can be a PCE that computes the paths of backup tunnels
   protecting against the failure of adjacent network links or nodes.

5.4.  Fast Reroute (FRR) Path Computation

   As stated in [RFC4090], there are two independent methods of doing
   fast reroute (FRR): one-to-one backup and facility backup.  A PCE
   can be used to compute the backup path for both methods.
   Cooperating PCEs may be used to compute inter-domain backup paths.

   In the case of the one-to-one backup method, the destination MUST
   be the tail-end of the protected LSP.
   For facility backup, the destination MUST be the address of the
   merge point (MP) as seen from the corresponding point of local
   repair (PLR).  The problem of finding the MP using the interface
   addresses or node-ids present in the Record Route Object (RRO) of
   the protected path can be easily solved in the case of a single
   Interior Gateway Protocol (IGP) area, because the PLR has the
   complete Traffic Engineering Database (TED).  Thus, the PLR can
   unambiguously determine:

   o  the MP address, regardless of whether the RRO carries IPv4 or
      IPv6 sub-objects (interface address or LSR ID);

   o  whether a backup tunnel intersecting the protected TE LSP at the
      MP node already exists.  This is the case where a facility
      backup tunnel already exists, either because of another
      protected TE LSP or because it is pre-configured.

   It is complex for a PLR to find the MP when computing a bypass path
   for boundary node protection, because the PLR does not have full
   TED visibility.  When confidentiality (via a path key) [RFC5520] is
   enabled, finding the MP is even more complex.

   This document describes the mechanism to find the MP and to set up
   a bypass tunnel to protect a boundary node.

5.4.1.  Methods to find the MP and calculate the optimal backup path

   The Merge Point (MP) address is required at the PLR in order to
   select a bypass tunnel intersecting the protected Traffic
   Engineering Label Switched Path (TE LSP) on a downstream LSR.

   Some implementations may choose to pre-configure a bypass tunnel on
   the PLR with the MP as its destination address.  The MP's domain,
   to be traversed by the bypass path, can be administratively
   configured or learned via some other means (e.g., the Hierarchical
   PCE (H-PCE) [PCE-HIERARCHY-FWK]).  The Path Computation Client
   (PCC) on the PLR can request its local PCE to compute the bypass
   path from the PLR to the MP, excluding the links and nodes between
   the PLR and the MP.
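   As an illustration of this exclusion-constrained computation, the
   following Python sketch finds a bypass path on the topology of
   Figure 1.  This is a minimal shortest-path search over a toy graph,
   not a PCEP exchange or a full CSPF implementation; the graph
   encoding and function name are assumptions of the sketch only.

```python
import heapq

def bypass_path(graph, plr, mp, excluded_nodes, excluded_links):
    """Shortest path from the PLR to the MP that avoids the excluded
    nodes and directed links (a plain Dijkstra search, not CSPF)."""
    dist = {plr: 0}
    prev = {}
    heap = [(0, plr)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == mp:
            # Walk the predecessor chain back to the PLR.
            path = [mp]
            while path[-1] != plr:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, []):
            if nbr in excluded_nodes or (node, nbr) in excluded_links:
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return None  # no bypass path satisfies the exclusions

# The topology of Figure 1 (R2 is the PLR, R4 the MP, R3 the
# protected node), as adjacency lists of (neighbor, cost) pairs:
graph = {
    "R2": [("R3", 1), ("R6", 1)],
    "R3": [("R4", 1)],
    "R6": [("R7", 1)],
    "R7": [("R8", 1)],
    "R8": [("R4", 1)],
}
```

   Calling bypass_path(graph, "R2", "R4", {"R3"}, {("R2", "R3")})
   excludes the protected link and node and returns the bypass path
   R2-R6-R7-R8-R4.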
   At the PLR, once the primary tunnel is up, a pre-configured bypass
   tunnel is bound to the primary tunnel; note that multiple bypass
   tunnels can also exist.

   Most implementations may choose to create a bypass tunnel on the
   PLR after the primary tunnel is signaled, with the Record Route
   Object (RRO) being present in the primary path's Resource
   Reservation Protocol (RSVP) Resv message.  The MP address has to be
   determined (as described below) to create the bypass tunnel.  The
   PCC on the PLR can request its local PCE to compute the bypass path
   from the PLR to the MP, excluding the links and nodes between the
   PLR and the MP.

5.4.1.1.  Intra-domain node protection

       [R1]----[R2]----[R3]----[R4]---[R5]
                  \             /
                  [R6]--[R7]--[R8]

       Protected LSP Path: [R1->R2->R3->R4->R5]
       Bypass LSP Path:    [R2->R6->R7->R8->R4]

                Figure 1: Node Protection for R3

   In Figure 1, R2 has to build a bypass tunnel that protects against
   the failure of link [R2->R3] and node [R3].  R2 is the PLR and R4
   is the MP in this case.  Since both the PLR and the MP belong to
   the same area, the problem of finding the MP using the interface
   addresses or node-ids can be easily solved.  Thus, the PLR can
   unambiguously find the MP address, regardless of whether the RRO
   carries IPv4 or IPv6 sub-objects (interface address or LSR ID), and
   can also determine whether a backup tunnel intersecting the
   protected TE LSP on a downstream node (the MP) already exists.

   The TED on the PLR will have the information of both R2 and R4,
   which can be used to find the MP's TE router IP address and to
   compute the optimal backup path from R2 to R4, excluding link
   [R2->R3] and node [R3].

   RSVP-TE can then signal the bypass tunnel along the computed path.

5.4.1.2.  Boundary node protection

5.4.1.2.1.  Area Boundary Router (ABR) node protection
                            |
                 PCE-1      |      PCE-2
                            |
                IGP area 0  |  IGP area 1
                            |
                            |
       [R1]----[R2]----[R3]----[R4]---[R5]
                  \         |   /
                  [R6]--[R7]--[R8]
                            |
                            |
                            |

       Protected LSP Path: [R1->R2->R3->R4->R5]
       Bypass LSP Path:    [R2->R6->R7->R8->R4]

              Figure 2: Node Protection for R3 (ABR)

   In Figure 2, the cooperating PCEs (PCE-1 and PCE-2) have computed
   the primary LSP path [R1->R2->R3->R4->R5] and provided it to R1
   (the PCC).

   R2 has to build a bypass tunnel that protects against the failure
   of link [R2->R3] and node [R3].  R2 is the PLR and R4 is the MP.
   The PLR and the MP are in different areas, so the TED on the PLR
   does not have the information of R4.

   The problem of finding the MP address in a network with an
   inter-domain TE LSP is solved by inserting a node-id sub-object
   [RFC4561] into the RRO carried in the RSVP Resv message.  The PLR
   can find the MP from the RRO it has received in the Resv message
   from its downstream LSR.

   However, the computation of the optimal backup path from R2 to R4,
   excluding link [R2->R3] and node [R3], is not possible by running
   the Constrained Shortest Path First (CSPF) algorithm locally at R2.
   A PCE can be used to compute the backup path in this case.  R2,
   acting as the PCC on the PLR, can request PCE-1 to compute the
   bypass path from the PLR (R2) to the MP (R4), excluding link
   [R2->R3] and node [R3].  The PCE MAY use an inter-domain path
   computation mechanism (such as the H-PCE [PCE-HIERARCHY-FWK]) when
   the domain information of the MP is unknown at the PLR.  RSVP-TE
   can then signal the bypass tunnel along the computed path.

5.4.1.2.2.  Autonomous System Border Router (ASBR) node protection
                       |         |
             PCE-1     |         |     PCE-2
                       |         |
             AS 100    |         |     AS 200
                       |         |
                       |         |
       [R1]----[R2]-------[R3]---------[R4]---[R5]
                 |\    |         |      /
                 | +-----[R6]--[R7]--[R8]
                 |     |         |
                 |     |         |

       Protected LSP Path: [R1->R2->R3->R4->R5]
       Bypass LSP Path:    [R2->R6->R7->R8->R4]

              Figure 3: Node Protection for R3 (ASBR)

   In Figure 3, links [R2->R3] and [R2->R6] are inter-AS links.  IGP
   extensions ([RFC5316] and [RFC5392]) describe the flooding of
   inter-AS TE information for inter-AS path computation.  The
   cooperating PCEs (PCE-1 and PCE-2) have computed the primary LSP
   path [R1->R2->R3->R4->R5] and provided it to R1 (the PCC).

   R2 is the PLR and R4 is the MP.  The PLR and the MP are in
   different ASes, so the TED on the PLR does not have the information
   of R4.

   The address of the MP can be found using the node-id sub-object
   [RFC4561] in the RRO carried in the RSVP Resv message, and
   cooperating PCEs can be used to compute the inter-AS bypass path.
   Thus, ASBR boundary node protection is similar to ABR protection.

5.4.1.2.3.  Boundary node protection with Path-Key Confidentiality

   [RFC5520] defines a mechanism to hide the contents of a segment of
   a path, called the Confidential Path Segment (CPS).  The CPS may be
   replaced by a path-key that can be conveyed in the PCE
   Communication Protocol (PCEP) and signaled within a Resource
   Reservation Protocol TE (RSVP-TE) Explicit Route Object.

   [RFC5553] states that, when the signaling message crosses a domain
   boundary, the path segment that needs to be hidden (that is, a CPS)
   MAY be replaced in the RRO with a PKS.  Note that the RRO in the
   Resv message carries the same PKS as originally signaled in the ERO
   of the Path message.

5.4.1.2.3.1.  Area Boundary Router (ABR) node protection
                            |
                 PCE-1      |      PCE-2
                            |
                IGP area 0  |  IGP area 1
                            |
                            |
       [R1]----[R2]----[R3]----[R4]---[R5]---[R9]
                  \         |   /
                  [R6]--[R7]--[R8]
                            |
                            |
                            |

        Figure 4: Node Protection for R3 (ABR) and Path-Key

   In Figure 4, when the path-key is enabled, the cooperating PCEs
   (PCE-1 and PCE-2) have computed the primary LSP path
   [R1->R2->R3->PKS->R9] and provided it to R1 (the PCC).

   When the ABR node (R3) replaces the CPS with the PKS (as originally
   signaled) during Resv message handling, it MAY also add the node-id
   of the immediate downstream node (R4), so that the PLR (R2) can
   identify the MP (R4).  The PLR (R2) SHOULD then remove the MP
   node-id (R4) before sending the Resv message upstream to the
   head-end router.

   Once the MP is identified, the backup path computation using the
   PCE is as described earlier (Section 5.4.1.2.1).

5.4.1.2.3.2.  Autonomous System Border Router (ASBR) node protection

                       |         |
             PCE-1     |         |     PCE-2
                       |         |
             AS 100    |         |     AS 200
                       |         |
                       |         |
       [R1]----[R2]-------[R3]---------[R4]---[R5]
                 |\    |         |      /
                 | +-----[R6]--[R7]--[R8]
                 |     |         |
                 |     |         |

              Figure 5: Node Protection for R3 (ASBR)

   The address of the MP can be found using the same mechanism as
   explained above.  Thus, ASBR boundary node protection is similar to
   ABR protection.

5.5.  Point-to-Multipoint Path Protection

   A PCE utilizing the extensions outlined in [RFC6006] (Extensions to
   PCEP for Point-to-Multipoint Traffic Engineering Label Switched
   Paths) can be used to compute point-to-multipoint (P2MP) paths.  A
   PCC requesting path computation for a primary and a backup path can
   request that these dependent computations use diverse paths.
   Furthermore, the specification also defines two new options for
   P2MP dependent path computation requests.
   The first option allows the PCC to request that the PCE compute a
   secondary P2MP path tree with partial path diversity, for specific
   leaves or a specific source-to-leaf sub-path, with respect to the
   primary P2MP path tree.  The second option allows the PCC to
   request that partial paths be link-direction diverse.

6.  Optical Protection Applications

6.1.  ASON Applicability

6.2.  Multi-domain Restoration

7.  Path and Service Protection Gaps

8.  Manageability Considerations

8.1.  Control of Function and Policy

   TBD

8.2.  Information and Data Models

   TBD

8.3.  Liveness Detection and Monitoring

   TBD

8.4.  Verify Correct Operations

   TBD

8.5.  Requirements On Other Protocols

   TBD

8.6.  Impact On Network Operations

   TBD

9.  Security Considerations

   This document does not introduce new security issues.  However, the
   MP's node-id is carried as a sub-object in the RRO across domain
   boundaries.  This relaxation is required to find the MP in the case
   of boundary node protection.  The security considerations
   pertaining to the [RFC3209], [RFC4090], and [RFC5440] protocols
   remain relevant.

10.  IANA Considerations

   This document makes no requests for IANA action.

11.  Contributors

   Venugopal Reddy Kondreddy
   Huawei Technologies
   Leela Palace
   Bangalore, Karnataka  560008
   INDIA

   EMail: venugopalreddyk@huawei.com

12.  Acknowledgement

   We would like to thank Daniel King, Udayashree Palle, Sandeep
   Boina, and Reeja Paul for their useful comments and suggestions.

13.  References

13.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

13.2.  Informative References

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan,
              V., and G. Swallow, "RSVP-TE: Extensions to RSVP for
              LSP Tunnels", RFC 3209, December 2001.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.
   [RFC4561]  Vasseur, J., Ali, Z., and S. Sivabalan, "Definition of a
              Record Route Object (RRO) Node-Id Sub-Object", RFC 4561,
              June 2006.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
              Element (PCE)-Based Architecture", RFC 4655,
              August 2006.

   [RFC4726]  Farrel, A., Vasseur, J., and A. Ayyangar, "A Framework
              for Inter-Domain Multiprotocol Label Switching Traffic
              Engineering", RFC 4726, November 2006.

   [RFC5152]  Vasseur, JP., Ayyangar, A., and R. Zhang, "A Per-Domain
              Path Computation Method for Establishing Inter-Domain
              Traffic Engineering (TE) Label Switched Paths (LSPs)",
              RFC 5152, February 2008.

   [RFC5316]  Chen, M., Zhang, R., and X. Duan, "ISIS Extensions in
              Support of Inter-Autonomous System (AS) MPLS and GMPLS
              Traffic Engineering", RFC 5316, December 2008.

   [RFC5392]  Chen, M., Zhang, R., and X. Duan, "OSPF Extensions in
              Support of Inter-Autonomous System (AS) MPLS and GMPLS
              Traffic Engineering", RFC 5392, January 2009.

   [RFC5440]  Vasseur, JP. and JL. Le Roux, "Path Computation Element
              (PCE) Communication Protocol (PCEP)", RFC 5440,
              March 2009.

   [RFC5441]  Vasseur, JP., Zhang, R., Bitar, N., and JL. Le Roux, "A
              Backward-Recursive PCE-Based Computation (BRPC)
              Procedure to Compute Shortest Constrained Inter-Domain
              Traffic Engineering Label Switched Paths", RFC 5441,
              April 2009.

   [RFC5520]  Bradford, R., Vasseur, JP., and A. Farrel, "Preserving
              Topology Confidentiality in Inter-Domain Path
              Computation Using a Path-Key-Based Mechanism", RFC 5520,
              April 2009.

   [RFC5553]  Farrel, A., Bradford, R., and JP. Vasseur, "Resource
              Reservation Protocol (RSVP) Extensions for Path Key
              Support", RFC 5553, May 2009.

   [RFC6006]  Zhao, Q., King, D., Verhaeghe, F., Takeda, T., Ali, Z.,
              and J. Meuric, "Extensions to the Path Computation
              Element Communication Protocol (PCEP) for
              Point-to-Multipoint Traffic Engineering Label Switched
              Paths", RFC 6006, September 2010.
   [PCE-HIERARCHY-FWK]
              King, D. and A. Farrel, "The Application of the Path
              Computation Element Architecture to the Determination of
              a Sequence of Domains in MPLS and GMPLS",
              draft-ietf-pce-hierarchy-fwk-05 (work in progress),
              August 2012.

   [G-7715]   ITU-T, "Architecture and Requirements for the
              Automatically Switched Optical Network (ASON)", ITU-T
              Recommendation G.7715, 2002.

   [G-7715-2] ITU-T, "ASON routing architecture and requirements for
              remote route query", ITU-T Recommendation G.7715.2,
              2007.

   [G-8080]   ITU-T, "Architecture for the automatically switched
              optical network (ASON)", ITU-T Recommendation
              G.8080/Y.1304.

Authors' Addresses

   Huaimo Chen
   Huawei Technologies
   Boston, MA
   USA

   EMail: huaimo.chen@huawei.com

   Dhruv Dhody
   Huawei Technologies
   Leela Palace
   Bangalore, Karnataka  560008
   INDIA

   EMail: dhruv.dhody@huawei.com