PCE Working Group                                                H. Chen
Internet-Draft                                              V. Kondreddy
Intended status: Informational                                  D. Dhody
Expires: March 31, 2013                              Huawei Technologies
                                                      September 27, 2012

 The Applicability of the PCE to Computing Protection and Recovery Paths
             for Single Domain and Multi-Domain Networks.
               draft-chen-pce-protection-applicability-01

Abstract

   The Path Computation Element (PCE) provides path computation functions in support of traffic engineering in Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) networks.

   A link or node failure can significantly impact network services in large-scale networks. Therefore it is important to ensure the survivability of large scale networks which consist of various connections provided over multiple interconnected networks with varying technologies.

   This document examines the applicability of the PCE architecture, protocols, and procedures for computing protection paths and restoration services, for single and multi-domain networks.

   This document also explains the mechanism of Fast Re-Route (FRR) where a point of local repair (PLR) needs to find the appropriate merge point (MP) to do bypass path computation using PCE.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 31, 2013.
Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
     1.1. Domains . . . . . . . . . . . . . . . . . . . . . . . . .  4
       1.1.1. Inter-domain LSPs . . . . . . . . . . . . . . . . . .  5
     1.2. Recovery . . . . . . . . . . . . . . . . . . . . . . . . .  5
     1.3. Requirements Language . . . . . . . . . . . . . . . . . .  5
   2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . .  5
   3. Path Computation Element Architecture Considerations . . . . .  7
     3.1. Online Path Computation . . . . . . . . . . . . . . . . .  7
     3.2. Offline Path Computation . . . . . . . . . . . . . . . . .  7
   4. Protection Service Traffic Engineering . . . . . . . . . . . .  8
     4.1. Path Computation . . . . . . . . . . . . . . . . . . . . .  8
     4.2. Bandwidth Reservation . . . . . . . . . . . . . . . . . .  8
     4.3. Disjoint Path . . . . . . . . . . . . . . . . . . . . . .  8
     4.4. Service Preemption . . . . . . . . . . . . . . . . . . . .  8
     4.5. Shared Risk Link Groups . . . . . . . . . . . . . . . . .  8
     4.6. Multi-Homing . . . . . . . . . . . . . . . . . . . . . . .  8
       4.6.1. Ingress and Egress Protection . . . . . . . . . . . .  9
   5. Packet Protection Applications . . . . . . . . . . . . . . . .  9
     5.1. Single Domain Service Protection . . . . . . . . . . . . . 10
     5.2. Multi-domain Service Protection . . . . . . . . . . . . . 10
     5.3. Backup Path Computation . . . . . . . . . . . . . . . . . 10
     5.4. Fast Reroute (FRR) Path Computation . . . . . . . . . . . 10
       5.4.1. Methods to find MP and calculate the optimal backup path . . 11
         5.4.1.1. Intra-domain node protection . . . . . . . . . . . 12
         5.4.1.2. Boundary node protection . . . . . . . . . . . . . 12
     5.5. Point-to-Multipoint Path Protection . . . . . . . . . . . 15
   6. Optical Protection Applications . . . . . . . . . . . . . . . 16
     6.1. ASON Applicability . . . . . . . . . . . . . . . . . . . . 16
     6.2. Multi-domain Restoration . . . . . . . . . . . . . . . . . 16
   7. Path and Service Protection Gaps . . . . . . . . . . . . . . . 16
   8. Manageability Considerations . . . . . . . . . . . . . . . . . 16
     8.1. Control of Function and Policy . . . . . . . . . . . . . . 16
     8.2. Information and Data Models . . . . . . . . . . . . . . . 16
     8.3. Liveness Detection and Monitoring . . . . . . . . . . . . 16
     8.4. Verify Correct Operations . . . . . . . . . . . . . . . . 16
     8.5. Requirements On Other Protocols . . . . . . . . . . . . . 16
     8.6. Impact On Network Operations . . . . . . . . . . . . . . . 16
   9. Security Considerations . . . . . . . . . . . . . . . . . . . 16
   10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 16
   11. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 17
   12. References . . . . . . . . . . . . . . . . . . . . . . . . . 17
     12.1. Normative References . . . . . . . . . . . . . . . . . . 17
     12.2. Informative References . . . . . . . . . . . . . . . . . 17

1. Introduction

   Network survivability remains a major concern for network operators and service providers, particularly as expanding applications such as private and public cloud services drive increasing amounts of traffic over longer distances to a larger number of users. A variety of well-known pre-planned protection and post-fault recovery schemes have been developed for IP, MPLS and GMPLS networks.

   The Path Computation Element (PCE) [RFC4655] can be used to perform complex path computation in large single-domain, multi-domain and multi-layered networks. The PCE can also be used to compute a variety of restoration and protection paths and services.

   This document examines the applicability of the PCE architecture, protocols, and protocol extensions for computing protection paths and restoration services.

1.1. Domains

   A domain can be defined as a separate administrative, geographic, or switching environment within the network. A domain may be further defined as a zone of routing or computational ability. Under these definitions a domain might be categorized as an Autonomous System (AS) or an Interior Gateway Protocol (IGP) area (as per [RFC4726] and [RFC4655]), or a specific switching environment.

   In the context of GMPLS, a particularly important example of a domain is the Automatically Switched Optical Network (ASON) subnetwork [G-8080]. In this case, computation of an end-to-end path requires the selection of nodes and links within a parent domain where some nodes may, in fact, be subnetworks. Furthermore, a domain might be an ASON routing area [G-7715]. A PCE may perform the path computation function of an ASON routing controller as described in [G-7715-2].

   It is assumed that the PCE architecture is applied to small inter-domain topologies and is not intended to solve route computation issues across large groups of domains, i.e., the entire Internet.

   Most existing protocol mechanisms for network survivability have focused on single-domain scenarios. Multi-domain scenarios are much more complex and challenging, as domain topology information is typically not shared outside each specific domain.

   Nevertheless, multi-domain survivability is a key requirement for today's complex networks, and it is important to develop more adaptive multi-domain recovery solutions for various failure scenarios.

1.1.1. Inter-domain LSPs

   Three signaling options are defined for setting up an inter-area or inter-AS LSP [RFC4726]:

   o  Contiguous LSP

   o  Stitched LSP

   o  Nested LSP

1.2. Recovery

   Typically, traffic-engineered networks such as MPLS-TE and GMPLS networks use protection and recovery mechanisms based on pre-established packet or optical LSPs and/or the availability of spare resources and the network topology.

1.3. Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

   In this document, these words will appear with that interpretation only when in ALL CAPS.
Lower case uses of these words are not to be interpreted as carrying [RFC2119] significance.

2. Terminology

   The following terminology is used in this document.

   ABR: Area Border Router. Router used to connect two IGP areas (areas in OSPF or levels in IS-IS).

   ASBR: Autonomous System Border Router. Router used to connect together ASes of the same or different service providers via one or more inter-AS links.

   BN: Boundary Node. A boundary node is either an ABR in the context of inter-area Traffic Engineering or an ASBR in the context of inter-AS Traffic Engineering.

   CPS: Confidential Path Segment. A segment of a path that contains nodes and links that the AS policy requires not to be disclosed outside the AS.

   CSP: Communication Service Provider.

   CSPF: Constrained Shortest Path First Algorithm.

   ERO: Explicit Route Object.

   FRR: Fast Re-Route.

   IGP: Interior Gateway Protocol. Either of the two routing protocols, Open Shortest Path First (OSPF) or Intermediate System to Intermediate System (IS-IS).

   Inter-area TE LSP: A TE LSP whose path transits through two or more IGP areas.

   Inter-AS TE LSP: A TE LSP whose path transits through two or more ASes or sub-ASes (BGP confederations).

   IS-IS: Intermediate System to Intermediate System.

   LSP: Label Switched Path.

   LSR: Label Switching Router.

   MP: Merge Point. The LSR where one or more backup tunnels rejoin the path of the protected LSP downstream of the potential failure.

   OSPF: Open Shortest Path First.

   PCC: Path Computation Client. Any client application requesting a path computation to be performed by a Path Computation Element.

   PCE: Path Computation Element. An entity (component, application, or network node) that is capable of computing a network path or route based on a network graph and applying computational constraints.

   PKS: Path Key Subobject. A subobject of an Explicit Route Object or Record Route Object that encodes a CPS so as to preserve confidentiality.

   PLR: Point of Local Repair. The head-end LSR of a backup tunnel or a detour LSP.

   RRO: Record Route Object.

   RSVP: Resource Reservation Protocol.

   SRLG: Shared Risk Link Group.

   TE: Traffic Engineering.

   TED: Traffic Engineering Database, which contains the topology and resource information of the domain. The TED may be fed by Interior Gateway Protocol (IGP) extensions or potentially by other means.

   This document also uses the terminology defined in [RFC4655] and [RFC5440].

3. Path Computation Element Architecture Considerations

   For the purpose of this document, it is assumed that path computation is the sole responsibility of the PCE, as per the architecture defined in [RFC4655]. When a path is required, the Path Computation Client (PCC) will send a request to the PCE. The PCE will apply the required constraints, compute a path, and return a response to the PCC. In the context of this document it may be necessary for the PCE to cooperate with other PCEs in adjacent domains (as per BRPC [RFC5441]) or cooperate with the Parent PCE (as per [PCE-HIERARCHY-FWK]).

   A PCE may be used to compute end-to-end paths across single or multiple domains. Multiple PCEs may be dedicated to each domain to provide sufficient path computation capacity and redundancy for each domain.
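   As an informal illustration of the PCC-PCE exchange described above, the following Python sketch models a path computation request and its submission to a PCE, including an optional field for the dependent (primary plus backup) computations discussed next. The class fields, the compute() method, and the stub PCE are assumptions made for this sketch only; they are not PCEP objects or a real API (see [RFC5440] for the actual protocol).

      from dataclasses import dataclass, field
      from typing import List, Optional, Tuple

      @dataclass
      class PathRequest:
          # Hypothetical request fields; real PCEP objects and
          # encodings ([RFC5440]) differ.
          source: str
          destination: str
          bandwidth: float = 0.0
          exclude_nodes: List[str] = field(default_factory=list)
          exclude_links: List[Tuple[str, str]] = field(default_factory=list)
          diverse_from: Optional[List[str]] = None  # primary path, when
                                                    # asking for a backup

      def request_path(pce, req: PathRequest) -> Optional[List[str]]:
          # The PCC submits the request; the PCE applies the constraints,
          # computes a path (possibly cooperating with other PCEs or a
          # parent PCE), and returns an explicit route, or None on failure.
          return pce.compute(req)  # "compute" is a placeholder interface

      class StubPce:
          # Trivial stand-in returning a fixed route; a real PCE would
          # run CSPF or a cooperative inter-domain procedure over its TED.
          def compute(self, req):
              return [req.source, "R2", "R3", req.destination]

      print(request_path(StubPce(), PathRequest("R1", "R5", bandwidth=10.0)))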
   During path computation [RFC5440], a PCC request may contain backup LSP requirements so that the primary and backup LSPs can be set up at the same time. Such requests are known as dependent path computations. A typical dependent request for a primary and backup service would request that the computation assign a set of diverse paths, so that the two services are disjoint from each other.

3.1. Online Path Computation

   Online path computation is performed on-demand as nodes in the network determine that they need to know the paths to use for services.

3.2. Offline Path Computation

   Offline path computation is performed ahead of time, before the LSP setup is requested. That means that it is requested by, or performed as part of, a management application.

   This method of computation allows the optimal placement of services and explicit control of services. A Communication Service Provider (CSP) can plan where new protection services will be placed ahead of time. Furthermore, by computing paths offline, specific scenarios can be considered and a global view of network resources is available.

   Finally, offline path computation provides a method to compute protection paths in the event of single or multiple link failures. This allows the placement of backup services in the event of catastrophic network failures.

4. Protection Service Traffic Engineering

4.1. Path Computation

   This document describes how the PCE architecture defined in [RFC4655] may be utilized to compute protection and recovery paths for critical network services. In the inter-domain context of this document, it may be necessary for the PCE to cooperate with other PCEs in adjacent domains (as per BRPC [RFC5441]) or with the Parent PCE (as per [PCE-HIERARCHY-FWK]).

4.2. Bandwidth Reservation

4.3. Disjoint Path

   Disjoint paths are required for end-to-end protection services. A backup service may be required to be fully disjoint from the primary service, link disjoint (allowing common nodes on the paths), or best-effort disjoint (allowing shared links or nodes when no other path can be found).

4.4. Service Preemption

4.5. Shared Risk Link Groups

4.6. Multi-Homing

   Networks constructed from multi-area or multi-AS environments may have multiple interconnect points (multi-homing). End-to-end path computations may need to use different interconnect points to avoid single points of failure disrupting primary and backup services.

   Domain and path diversity may also be required when computing end-to-end paths. Domain diversity should facilitate the selection of paths that share ingress and egress domains, but do not share transit domains. Therefore, there must be a method allowing the inclusion or exclusion of specific domains when computing end-to-end paths.

4.6.1. Ingress and Egress Protection

   An end-to-end primary service carried by a primary TE LSP from a primary ingress node to a primary egress node may need to be protected against failures of the ingress and the egress. In this case, a backup ingress and a backup egress are required, which are different from the primary ingress and the primary egress respectively. The backup ingress should be in the same domain as the primary ingress, and the backup egress should be in the same domain as the primary egress.
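   As an informal illustration of these constraints and of the disjointness levels of Section 4.3, the following Python sketch shows one way a planning tool might select a backup ingress and classify the diversity of a computed path pair. The helper names and data structures are assumptions for this sketch only and do not correspond to any protocol element.

      def select_backup_ingress(primary_ingress, candidates, domain_of):
          # Pick a node other than the primary ingress that lies in the
          # same domain, as required above (an illustrative policy only).
          for node in candidates:
              if (node != primary_ingress
                      and domain_of[node] == domain_of[primary_ingress]):
                  return node
          return None

      def diversity_level(primary_path, backup_path):
          # Classify two explicit routes against the disjointness levels
          # of Section 4.3 (links treated as undirected; the end points
          # are ignored in the node comparison).
          links = lambda p: {frozenset(hop) for hop in zip(p, p[1:])}
          if links(primary_path) & links(backup_path):
              return "shared links (best-effort disjoint at most)"
          if set(primary_path[1:-1]) & set(backup_path[1:-1]):
              return "link disjoint"
          return "fully disjoint"

      domain_of = {"I1": "D1", "I2": "D1", "E1": "D2"}
      print(select_backup_ingress("I1", ["E1", "I2"], domain_of))   # I2
      print(diversity_level(["I1", "A", "B", "E1"],
                            ["I2", "C", "B", "D", "E1"]))           # link disjoint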
   The service traffic from a source may be sent to both the primary ingress and the backup ingress (dual-homing). The source may not be in the same domain as the primary ingress and the backup ingress. When the primary ingress fails, the service traffic is delivered through the backup ingress.

   A receiver of the service traffic may be connected to both the primary egress and the backup egress (dual-homing). The receiver may not be in the same domain as the primary egress and the backup egress. When the primary egress fails, the receiver gets the service traffic from the backup egress.

5. Packet Protection Applications

   Network survivability is a key objective for CSPs, particularly as revenue-generating services (cloud and data center applications) are expanding rapidly.

   Pre-fault paths are pre-computed and protection resources are reserved a priori for rapid recovery. In the event of a network failure on the primary path, the traffic is quickly switched to the backup path. These pre-provisioned mechanisms are capable of ensuring protection against single link failures.

   Post-fault restoration schemes are reactive: they require a routing procedure to set up new working paths in the event of a failure. Post-fault restoration can significantly impact network services, as such schemes typically incur longer restoration delays and cannot guarantee recovery of a service. However, they are much more efficient in their use of network resources and are capable of handling multi-failure situations.

5.1. Single Domain Service Protection

   A variety of pre-planned protection and post-fault restoration recovery schemes are available for single domain MPLS and GMPLS networks. These include:

   o  Path Recovery

   o  Path Segment Recovery

   o  Local Recovery (Fast Reroute)

5.2. Multi-domain Service Protection

   Typically, network survivability mechanisms have focused on single-domain scenarios. By contrast, broader multi-domain scenarios are much more challenging, as no single entity has a global view of the topology information. Multi-domain survivability is nevertheless very important.

   A PCE may be used to compute end-to-end paths across multi-domain environments using a per-domain path computation technique [RFC5152]. The Backward-Recursive PCE-Based Computation (BRPC) mechanism [RFC5441] defines a PCE-based path computation procedure to compute inter-domain constrained LSPs.

5.3. Backup Path Computation

   A PCE can be used to compute backup paths in the context of fast reroute protection of TE LSPs. In this model, all backup TE LSPs protecting a given facility are computed in a coordinated manner by a PCE. This allows complete bandwidth sharing between backup tunnels protecting independent elements, while avoiding any extensions to TE LSP signaling. Both centralized and distributed computation models are applicable. In the distributed case, each LSR can act as a PCE to compute the paths of backup tunnels protecting against the failure of adjacent network links or nodes.

5.4. Fast Reroute (FRR) Path Computation

   As stated in [RFC4090], there are two independent methods (one-to-one backup and facility backup) of doing fast reroute (FRR). A PCE can be used to compute backup paths for both methods. Cooperating PCEs may be used to compute inter-domain backup paths.
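   The distinction between the two methods determines the destination that the PLR gives to the PCE, as detailed in the following paragraphs. A minimal Python sketch of that choice is given below; the function and argument names are illustrative assumptions only.

      def backup_destination(method, protected_lsp_tail_end, merge_point):
          # Per [RFC4090]: a one-to-one (detour) backup runs to the tail
          # end of the protected LSP, while a facility backup (bypass
          # tunnel) terminates on the merge point (MP) downstream of the
          # potential failure.
          if method == "one-to-one":
              return protected_lsp_tail_end
          if method == "facility":
              return merge_point
          raise ValueError("unknown FRR method: %s" % method)

      # Example: protected LSP tail end R5, merge point R4 seen from the PLR.
      print(backup_destination("one-to-one", "R5", "R4"))  # R5
      print(backup_destination("facility", "R5", "R4"))    # R4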
   In the case of the one-to-one backup method, the destination of the backup path MUST be the tail-end of the protected LSP, whereas for facility backup, the destination MUST be the address of the merge point (MP) as seen from the corresponding point of local repair (PLR). The problem of finding the MP using the interface addresses or node-ids present in the Record Route Object (RRO) of the protected path can be easily solved in the case of a single Interior Gateway Protocol (IGP) area because the PLR has the complete Traffic Engineering Database (TED). Thus, the PLR can unambiguously determine:

   o  The MP address, regardless of whether the RRO carries IPv4 or IPv6 sub-objects (interface address or LSR ID).

   o  Whether a backup tunnel intersecting the protected TE LSP on the MP node already exists. This is the case where a facility backup tunnel already exists, either because of another protected TE LSP or because it is pre-configured.

   It is complex for a PLR to find the MP when computing a bypass path for boundary node protection, because the PLR does not have full TED visibility. When confidentiality (via path keys) [RFC5520] is enabled, finding the MP is even more complex.

   This document describes mechanisms to find the MP and to set up a bypass tunnel to protect a boundary node.

5.4.1. Methods to find MP and calculate the optimal backup path

   The Merge Point (MP) address is required at the PLR in order to select a bypass tunnel intersecting a protected Traffic Engineering Label Switched Path (TE LSP) on a downstream LSR.

   Some implementations may choose to pre-configure a bypass tunnel on the PLR with the MP as the destination address. The MP's domain, to be traversed by the bypass path, can be administratively configured or learned via some other means (e.g., the Hierarchical PCE (H-PCE) [PCE-HIERARCHY-FWK]). The Path Computation Client (PCC) on the PLR can request its local PCE to compute a bypass path from the PLR to the MP, excluding the links and node between the PLR and the MP. Once the primary tunnel is up at the PLR, a pre-configured bypass tunnel is bound to the primary tunnel; note that multiple bypass tunnels can also exist.

   Most implementations may choose to create a bypass tunnel on the PLR after the primary tunnel is signaled, with the Record Route Object (RRO) being present in the primary path's Resource Reservation Protocol (RSVP) Resv message. The MP address has to be determined (as described below) in order to create a bypass tunnel. The PCC on the PLR can request its local PCE to compute a bypass path from the PLR to the MP, excluding the links and node between the PLR and the MP.

5.4.1.1. Intra-domain node protection

     [R1]----[R2]----[R3]----[R4]---[R5]
                \             /
                 [R6]--[R7]--[R8]

     Protected LSP Path: [R1->R2->R3->R4->R5]
     Bypass LSP Path: [R2->R6->R7->R8->R4]

                Figure 1: Node Protection for R3

   In Figure 1, R2 has to build a bypass tunnel that protects against the failure of link [R2->R3] and node [R3]. R2 is the PLR and R4 is the MP in this case. Since both the PLR and the MP belong to the same area, the problem of finding the MP using the interface addresses or node-ids can be easily solved. Thus, the PLR can unambiguously find the MP address, regardless of RRO IPv4 or IPv6 sub-objects (interface address or LSR ID), and also determine whether a backup tunnel intersecting the protected TE LSP on a downstream node (MP) already exists.
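   The intra-domain computation described above can be illustrated with a small Python sketch that runs a shortest-path search over the Figure 1 topology while honoring the exclusions. The unit link metrics and the list-of-links topology model are simplifying assumptions; a real PCE would operate on its TED and the full set of TE constraints.

      import heapq

      def bypass_path(topology, src, dst, excluded_nodes=(), excluded_links=()):
          # Build an adjacency map, skipping excluded nodes and links;
          # links are treated as bidirectional with unit metric.
          adj = {}
          for a, b in topology:
              if a in excluded_nodes or b in excluded_nodes:
                  continue
              if (a, b) in excluded_links or (b, a) in excluded_links:
                  continue
              adj.setdefault(a, []).append(b)
              adj.setdefault(b, []).append(a)

          # Plain Dijkstra / shortest-path search over the remaining graph.
          queue = [(0, src, [src])]
          visited = set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == dst:
                  return path
              if node in visited:
                  continue
              visited.add(node)
              for nxt in adj.get(node, []):
                  if nxt not in visited:
                      heapq.heappush(queue, (cost + 1, nxt, path + [nxt]))
          return None  # no bypass path exists

      # Figure 1 topology; the PLR R2 computes a bypass to the MP R4,
      # excluding link [R2->R3] and node [R3].
      figure1 = [("R1", "R2"), ("R2", "R3"), ("R3", "R4"), ("R4", "R5"),
                 ("R2", "R6"), ("R6", "R7"), ("R7", "R8"), ("R8", "R4")]
      print(bypass_path(figure1, "R2", "R4",
                        excluded_nodes={"R3"}, excluded_links={("R2", "R3")}))
      # ['R2', 'R6', 'R7', 'R8', 'R4']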
   The TED on the PLR will have the information of both R2 and R4, which can be used to find the MP's TE router IP address and to compute an optimal backup path from R2 to R4, excluding link [R2->R3] and node [R3].

   Thus, RSVP-TE can signal the bypass tunnel along the computed path.

5.4.1.2. Boundary node protection

5.4.1.2.1. Area Boundary Router (ABR) node protection

                          |
          PCE-1           |           PCE-2
                          |
       IGP area 0         |        IGP area 1
                          |
                          |
     [R1]----[R2]----[R3]----[R4]---[R5]
                \         |    /
                 [R6]--[R7]--[R8]
                          |
                          |
                          |

     Protected LSP Path: [R1->R2->R3->R4->R5]
     Bypass LSP Path: [R2->R6->R7->R8->R4]

              Figure 2: Node Protection for R3 (ABR)

   In Figure 2, cooperating PCEs (PCE-1 and PCE-2) have computed the primary LSP path [R1->R2->R3->R4->R5] and provided it to R1 (the PCC).

   R2 has to build a bypass tunnel that protects against the failure of link [R2->R3] and node [R3]. R2 is the PLR and R4 is the MP. The PLR and the MP are in different areas. The TED on the PLR does not have the information of R4.

   The problem of finding the MP address in a network with an inter-domain TE LSP is solved by inserting a node-id sub-object [RFC4561] into the RRO carried in the RSVP Resv message. The PLR can find the MP from the RRO it has received in the Resv message from its downstream LSR.

   However, the computation of an optimal backup path from R2 to R4, excluding link [R2->R3] and node [R3], is not possible by running the Constrained Shortest Path First (CSPF) algorithm locally at R2. A PCE can be used to compute the backup path in this case. R2, acting as the PCC on the PLR, can request PCE-1 to compute a bypass path from the PLR (R2) to the MP (R4), excluding link [R2->R3] and node [R3]. The PCE MAY use an inter-domain path computation mechanism (such as H-PCE [PCE-HIERARCHY-FWK]) when the domain information of the MP is unknown at the PLR. Further, RSVP-TE can signal the bypass tunnel along the computed path.

5.4.1.2.2. Autonomous System Border Router (ASBR) node protection

                      |         |
        PCE-1         |         |         PCE-2
                      |         |
        AS 100        |         |         AS 200
                      |         |
                      |         |
     [R1]----[R2]-------[R3]---------[R4]---[R5]
               |\         |           /
               | +-----[R6]--[R7]--[R8]
               |          |
               |          |

     Protected LSP Path: [R1->R2->R3->R4->R5]
     Bypass LSP Path: [R2->R6->R7->R8->R4]

              Figure 3: Node Protection for R3 (ASBR)

   In Figure 3, links [R2->R3] and [R2->R6] are inter-AS links. IGP extensions ([RFC5316] and [RFC5392]) describe the flooding of inter-AS TE information for inter-AS path computation. Cooperating PCEs (PCE-1 and PCE-2) have computed the primary LSP path [R1->R2->R3->R4->R5] and provided it to R1 (the PCC).

   R2 is the PLR and R4 is the MP. The PLR and the MP are in different ASes. The TED on the PLR does not have the information of R4.

   The address of the MP can be found using the node-id sub-object [RFC4561] in the RRO carried in the RSVP Resv message, and cooperating PCEs can be used to compute the inter-AS bypass path. Thus, ASBR boundary node protection is similar to ABR protection.

5.4.1.2.3. Boundary node protection with Path-Key Confidentiality

   [RFC5520] defines a mechanism to hide the contents of a segment of a path, called the Confidential Path Segment (CPS). The CPS may be replaced by a path-key that can be conveyed in the PCE Communication Protocol (PCEP) and signaled within a Resource Reservation Protocol TE (RSVP-TE) Explicit Route Object (ERO).
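   The node-id-based discovery of the MP described in the preceding subsections can be sketched in Python as follows. The list-based RRO representation and its ordering are simplifying assumptions; the same lookup applies when a PKS hides the CPS, since the boundary node can supply the MP's node-id, as described below.

      def find_merge_point(rro_node_ids, protected_next_hop):
          # rro_node_ids: the node-id subobjects recorded for the
          # protected LSP, assumed here to be a simple list ordered from
          # the PLR's next hop towards the tail end (a simplification of
          # the actual RRO encoding).  For node protection, the MP is
          # the first node-id after the protected next-hop node.
          for i, node_id in enumerate(rro_node_ids):
              if node_id == protected_next_hop and i + 1 < len(rro_node_ids):
                  return rro_node_ids[i + 1]
          return None

      # Figure 2: protected LSP R1->R2->R3->R4->R5; the PLR R2 protects
      # node R3 and learns R4 as the MP from the node-ids in the RRO.
      print(find_merge_point(["R3", "R4", "R5"], "R3"))  # R4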
   [RFC5553] states that, when the signaling message crosses a domain boundary, the path segment that needs to be hidden (that is, a CPS) MAY be replaced in the RRO with a PKS. Note that the RRO in the Resv message carries the same PKS as originally signaled in the ERO of the Path message.

5.4.1.2.3.1. Area Boundary Router (ABR) node protection

                          |
          PCE-1           |           PCE-2
                          |
       IGP area 0         |        IGP area 1
                          |
                          |
     [R1]----[R2]----[R3]----[R4]---[R5]---[R9]
                \         |    /
                 [R6]--[R7]--[R8]
                          |
                          |
                          |

         Figure 4: Node Protection for R3 (ABR) and Path-Key

   In Figure 4, when path-key confidentiality is enabled, cooperating PCEs (PCE-1 and PCE-2) have computed the primary LSP path [R1->R2->R3->PKS->R9] and provided it to R1 (the PCC).

   When the ABR node (R3) replaces the CPS with the PKS (as originally signaled) during the Resv message handling, it MAY also add the immediate downstream node-id (R4), so that the PLR (R2) can identify the MP (R4). Further, the PLR (R2) SHOULD remove the MP node-id (R4) before sending the Resv message upstream to the head-end router.

   Once the MP is identified, the backup path computation using the PCE is as described earlier (Section 5.4.1.2.1).

5.4.1.2.3.2. Autonomous System Border Router (ASBR) node protection

                      |         |
        PCE-1         |         |         PCE-2
                      |         |
        AS 100        |         |         AS 200
                      |         |
                      |         |
     [R1]----[R2]-------[R3]---------[R4]---[R5]
               |\         |           /
               | +-----[R6]--[R7]--[R8]
               |          |
               |          |

              Figure 5: Node Protection for R3 (ASBR)

   The address of the MP can be found using the same mechanism as explained above. Thus, ASBR boundary node protection is similar to ABR protection.

5.5. Point-to-Multipoint Path Protection

   A PCE utilizing the extensions outlined in [RFC6006] (Extensions to PCEP for Point-to-Multipoint Traffic Engineering Label Switched Paths) can be used to compute point-to-multipoint (P2MP) paths. A PCC requesting path computation for a primary and backup path can request that these dependent computations use diverse paths. Furthermore, the specification also defines two new options for P2MP path dependent computation requests. The first option allows the PCC to request that the PCE compute a secondary P2MP path tree with partial path diversity, for specific leaves or a specific source-to-leaf sub-path, with respect to the primary P2MP path tree. The second option allows the PCC to request that partial paths should be link-direction diverse.

6. Optical Protection Applications

6.1. ASON Applicability

6.2. Multi-domain Restoration

7. Path and Service Protection Gaps

8. Manageability Considerations

8.1. Control of Function and Policy

   TBD

8.2. Information and Data Models

   TBD

8.3. Liveness Detection and Monitoring

   TBD

8.4. Verify Correct Operations

   TBD

8.5. Requirements On Other Protocols

   TBD

8.6. Impact On Network Operations

   TBD

9. Security Considerations

   This document does not introduce new security issues. However, the MP's node-id is carried as a subobject in the RRO across domains. This relaxation is required to find the MP in the case of boundary node (BN) protection. The security considerations pertaining to the [RFC3209], [RFC4090] and [RFC5440] protocols remain relevant.

10. IANA Considerations

   This document makes no requests for IANA action.

11. Acknowledgement

   We would like to thank Daniel King, Udayashree Palle, Sandeep Boina and Reeja Paul for their useful comments and suggestions.
12. References

12.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

12.2. Informative References

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute Extensions to RSVP-TE for LSP Tunnels", RFC 4090, May 2005.

   [RFC4561]  Vasseur, J., Ali, Z., and S. Sivabalan, "Definition of a Record Route Object (RRO) Node-Id Sub-Object", RFC 4561, June 2006.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation Element (PCE)-Based Architecture", RFC 4655, August 2006.

   [RFC4726]  Farrel, A., Vasseur, J., and A. Ayyangar, "A Framework for Inter-Domain Multiprotocol Label Switching Traffic Engineering", RFC 4726, November 2006.

   [RFC5152]  Vasseur, JP., Ayyangar, A., and R. Zhang, "A Per-Domain Path Computation Method for Establishing Inter-Domain Traffic Engineering (TE) Label Switched Paths (LSPs)", RFC 5152, February 2008.

   [RFC5316]  Chen, M., Zhang, R., and X. Duan, "ISIS Extensions in Support of Inter-Autonomous System (AS) MPLS and GMPLS Traffic Engineering", RFC 5316, December 2008.

   [RFC5392]  Chen, M., Zhang, R., and X. Duan, "OSPF Extensions in Support of Inter-Autonomous System (AS) MPLS and GMPLS Traffic Engineering", RFC 5392, January 2009.

   [RFC5440]  Vasseur, JP. and JL. Le Roux, "Path Computation Element (PCE) Communication Protocol (PCEP)", RFC 5440, March 2009.

   [RFC5441]  Vasseur, JP., Zhang, R., Bitar, N., and JL. Le Roux, "A Backward-Recursive PCE-Based Computation (BRPC) Procedure to Compute Shortest Constrained Inter-Domain Traffic Engineering Label Switched Paths", RFC 5441, April 2009.

   [RFC5520]  Bradford, R., Vasseur, JP., and A. Farrel, "Preserving Topology Confidentiality in Inter-Domain Path Computation Using a Path-Key-Based Mechanism", RFC 5520, April 2009.

   [RFC5553]  Farrel, A., Bradford, R., and JP. Vasseur, "Resource Reservation Protocol (RSVP) Extensions for Path Key Support", RFC 5553, May 2009.

   [RFC6006]  Zhao, Q., King, D., Verhaeghe, F., Takeda, T., Ali, Z., and J. Meuric, "Extensions to the Path Computation Element Communication Protocol (PCEP) for Point-to-Multipoint Traffic Engineering Label Switched Paths", RFC 6006, September 2010.

   [PCE-HIERARCHY-FWK]  King, D. and A. Farrel, "The Application of the Path Computation Element Architecture to the Determination of a Sequence of Domains in MPLS and GMPLS (draft-ietf-pce-hierarchy-fwk-05)", August 2012.

   [G-7715]   ITU-T, "ITU-T Recommendation G.7715 (2002), Architecture and Requirements for the Automatically Switched Optical Network (ASON)".

   [G-7715-2] ITU-T, "ITU-T Recommendation G.7715.2 (2007), ASON routing architecture and requirements for remote route query".

   [G-8080]   ITU-T, "ITU-T Recommendation G.8080/Y.1304, Architecture for the automatically switched optical network (ASON)".
Authors' Addresses

   Huaimo Chen
   Huawei Technologies
   Boston, MA
   USA

   EMail: huaimochen@huawei.com


   Venugopal Reddy Kondreddy
   Huawei Technologies
   Leela Palace
   Bangalore, Karnataka 560008
   INDIA

   EMail: venugopalreddyk@huawei.com


   Dhruv Dhody
   Huawei Technologies
   Leela Palace
   Bangalore, Karnataka 560008
   INDIA

   EMail: dhruv.dhody@huawei.com