1 Network Working Group Adrian Farrel 2 IETF Internet Draft Old Dog Consulting 3 Proposed Status: Informational 4 Expires: February 2007 Jean-Philippe Vasseur 5 Cisco Systems, Inc. 7 Arthi Ayyangar 8 Nuova Systems 10 August 2006

12 A Framework for Inter-Domain Multiprotocol Label Switching 13 Traffic Engineering

15 draft-ietf-ccamp-inter-domain-framework-06.txt

17 Status of this Memo

19 By submitting this Internet-Draft, each author represents that any 20 applicable patent or other IPR claims of which he or she is aware 21 have been or will be disclosed, and any of which he or she becomes 22 aware will be disclosed, in accordance with Section 6 of BCP 79.

24 Internet-Drafts are working documents of the Internet Engineering 25 Task Force (IETF), its areas, and its working groups. Note that 26 other groups may also distribute working documents as 27 Internet-Drafts.

29 Internet-Drafts are draft documents valid for a maximum of six months 30 and may be updated, replaced, or obsoleted by other documents at any 31 time. It is inappropriate to use Internet-Drafts as reference 32 material or to cite them other than as "work in progress."

34 The list of current Internet-Drafts can be accessed at 35 http://www.ietf.org/ietf/1id-abstracts.txt.

37 The list of Internet-Draft Shadow Directories can be accessed at 38 http://www.ietf.org/shadow.html.

40 Abstract

42 This document provides a framework for establishing and controlling 43 Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) 44 Traffic Engineered (TE) Label Switched Paths (LSPs) in multi-domain 45 networks.

47 For the purposes of this document, a domain is considered to be any 48 collection of network elements within a common sphere of address 49 management or path computational responsibility. Examples of such 50 domains include Interior Gateway Protocol (IGP) areas and Autonomous 51 Systems (ASs).

53 Contents

55 1. Introduction ............................................... 3 56 1.1. Nested Domains ......................................... 3 57 2. Signaling Options .......................................... 4 58 2.1. LSP Nesting ............................................ 4 59 2.2. Contiguous LSP ......................................... 5 60 2.3. LSP Stitching .......................................... 5 61 2.4. Hybrid Methods ......................................... 6 62 2.5. Control of Downstream Choice of Signaling Method ....... 6 63 3. Path Computation Techniques ................................ 6 64 3.1. Management Configuration ............................... 7 65 3.2. Head End Computation ................................... 7 66 3.2.1. Multi-Domain Visibility Computation ................ 7 67 3.2.2. Partial Visibility Computation ..................... 7 68 3.2.3. Local Domain Visibility Computation ................ 8 69 3.3. Domain Boundary Computation ............................ 8 70 3.4. Path Computation Element ............................... 9 71 3.4.1. Multi-Domain Visibility Computation ................ 9 72 3.4.2. Path Computation Use of PCE When Preserving 73 Confidentiality ................................... 10 74 3.4.3. Per-Domain Computation Elements ................... 10 75 3.5. Optimal Path Computation .............................. 10 76 4. Distributing Reachability and TE Information .............. 11 77 5.
Comments on Advanced Functions ............................ 12 78 5.1. LSP Re-Optimization ................................... 12 79 5.2. LSP Setup Failure ..................................... 13 80 5.3. LSP Repair ............................................ 13 81 5.4. Fast Reroute .......................................... 14 82 5.5. Comments on Path Diversity ............................ 15 83 5.6. Domain-Specific Constraints ........................... 15 84 5.7. Policy Control ........................................ 16 85 5.8. Inter-domain Operations and Management (OAM) .......... 16 86 5.9. Point-to-Multipoint ................................... 16 87 5.10. Applicability to Non-Packet Technologies ............. 17 88 6. Security Considerations ................................... 17 89 7. IANA Considerations ....................................... 18 90 8. Acknowledgements .......................................... 18 91 9. Intellectual Property Considerations ...................... 18 92 10. Normative References ..................................... 19 93 11. Informational References ................................. 19 94 12. Authors' Addresses ....................................... 21 95 13. Full Copyright Statement ................................. 21

97 1. Introduction

99 The Traffic Engineering Working Group has developed requirements for 100 inter-area and inter-AS Multiprotocol Label Switching (MPLS) Traffic 101 Engineering in [RFC4105] and [RFC4216].

103 Various proposals have subsequently been made to address some or all 104 of these requirements through extensions to the Resource Reservation 105 Protocol Traffic Engineering extensions (RSVP-TE) and to the Interior 106 Gateway Protocols (IGPs) (i.e., ISIS and OSPF).

108 This document introduces the techniques for establishing Traffic 109 Engineered (TE) Label Switched Paths (LSPs) across multiple domains. 110 In this context and within the remainder of this document, we 111 consider all source-based and constraint-based routed LSPs and refer 112 to them interchangeably as "TE LSPs" or "LSPs".

114 The functional components of these techniques are separated into the 115 mechanisms for discovering reachability and TE information, for 116 computing the paths of LSPs, and for signaling the LSPs. Note that 117 the aim of this document is not to detail each of those techniques, 118 which are covered in separate documents referenced from the sections 119 of this document that introduce them, but rather to propose 120 a framework for inter-domain MPLS Traffic Engineering.

122 Note that in the remainder of this document, the term "MPLS Traffic 123 Engineering" is used equally to apply to MPLS and Generalized MPLS 124 (GMPLS) traffic. Specific issues pertaining to the use of GMPLS in 125 inter-domain environments (for example, policy implications of the 126 use of the Link Management Protocol [LMP] on inter-domain links) 127 are covered in separate documents such as [GMPLS-AS].

129 For the purposes of this document, a domain is considered to be any 130 collection of network elements within a common sphere of address 131 management or path computational responsibility. Examples of such 132 domains include IGP areas and Autonomous Systems. Wholly or partially 133 overlapping domains (e.g. path computation sub-domains of areas or 134 ASs) are not within the scope of this document.

136 1.1. Nested Domains

138 Nested domains are outside the scope of this document.
It may be that 139 some domains that are nested administratively or for the purposes of 140 address space management can be considered as adjacent domains for 141 the purposes of this document, however the fact that the domains are 142 nested is then immaterial. In the context of MPLS TE, domain A is 143 considered to be nested within domain B if domain A is wholly 144 contained in Domain B, and domain B is fully or partially aware of 145 the TE characteristics and topology of domain A. 147 2. Signaling Options 149 Three distinct options for signaling TE LSPs across multiple domains 150 are identified. The choice of which options to use may be influenced 151 by the path computation technique used (see section 3), although some 152 path computation techniques may apply to multiple signaling options. 153 The choice may further depend on the application to which the TE LSPs 154 are put and the nature, topology and switching capabilities of the 155 network. 157 A comparison of the usages of the different signaling options is 158 beyond the scope of this document and should be the subject of a 159 separate applicability statement. 161 2.1. LSP Nesting 163 Hierarchical LSPs form a fundamental part of MPLS [RFC3031] and are 164 discussed in further detail in [RFC4206]. Hierarchical LSPs may 165 optionally be advertised as TE links. Note that a hierarchical LSP 166 that spans multiple domains cannot be advertised in this way because 167 there is no concept of TE information that spans domains. 169 Hierarchical LSPs can be used in support of inter-domain TE LSPs. 170 In particular, a hierarchical LSP may be used to achieve connectivity 171 between any pair of Label Switching Routers (LSRs) within a domain. 172 The ingress and egress of the hierarchical LSP could be the edge 173 nodes of the domain in which case connectivity is achieved across the 174 entire domain, or they could be any other pair of LSRs in the domain. 176 The technique of carrying one TE LSP within another is termed LSP 177 nesting. A hierarchical LSP may provide a TE LSP tunnel to transport 178 (i.e. nest) multiple TE LSPs along a common part of their paths. 179 Alternatively, a TE LSP may carry (i.e. nest) a single LSP in a 180 one-to-one mapping. 182 The signaling trigger for the establishment of a hierarchical LSP may 183 be the receipt of a signaling request for the TE LSP that it will 184 carry, or may be a management action to "pre-engineer" a domain to be 185 crossed by TE LSPs that would be used as hierarchical LSPs by the 186 traffic that has to traverse the domain. Furthermore, the mapping 187 (inheritance rules) between attributes of the nested and the 188 hierarchical LSPs (including bandwidth) may be statically 189 pre-configured or, for on-demand hierarchical LSPs, may be dynamic 190 according to the properties of the nested LSPs. Even in the dynamic 191 case inheritance from the properties of the nested LSP(s) can be 192 complemented by local or domain-wide policy rules. 194 Note that a hierarchical LSP may be constructed to span multiple 195 domains or parts of domains. However, such an LSP cannot be 196 advertised as a TE link that spans domains. The end points of a 197 hierarchical LSP are not necessarily on domain boundaries, so nesting 198 is not limited to domain boundaries. 200 Note also that the Interior/Exterior Gateway Protocol (IGP/EGP) 201 routing topology is maintained unaffected by the LSP connectivity and 202 TE links introduced by hierarchical LSPs even if they are advertised 203 as TE links. 
That is, the routing protocols do not exchange messages 204 over the hierarchical LSPs, and LSPs are not used to create routing 205 adjacencies between routers.

207 During the operation of establishing a nested LSP that uses a 208 hierarchical LSP, the SENDER_TEMPLATE and SESSION objects remain 209 unchanged along the entire length of the nested LSP, as do all other 210 objects that have end-to-end significance.

212 2.2. Contiguous LSP

214 A single contiguous LSP is established from ingress to egress in a 215 single signaling exchange. No further LSPs are required to be 216 established to support this LSP, so hierarchical or stitched LSPs 217 are not needed.

219 A contiguous LSP uses the same Session/LSP ID along the whole of its 220 path (that is, at each LSR). The notions of "splicing" together 221 different LSPs, or of "shuffling" Session or LSP identifiers, are not 222 considered.

224 2.3. LSP Stitching

226 LSP Stitching is described in [STITCH]. In the LSP stitching model 227 separate LSPs (referred to as TE LSP segments) are established and 228 are "stitched" together in the data plane so that a single end-to-end 229 label switched path is achieved. The distinction is that the 230 component LSP segments are signaled as distinct TE LSPs in the 231 control plane. Each signaled TE LSP segment has a different source 232 and destination.

234 LSP stitching can be used in support of inter-domain TE LSPs. In 235 particular, an LSP segment may be used to achieve connectivity 236 between any pair of LSRs within a domain. The ingress and egress of 237 the LSP segment could be the edge nodes of the domain in which case 238 connectivity is achieved across the entire domain, or they could be 239 any other pair of LSRs in the domain.

241 The signaling trigger for the establishment of a TE LSP segment may 242 be the establishment of the previous TE LSP segment, the receipt of 243 a setup request for a TE LSP that it plans to stitch to a local TE 244 LSP segment, or may be a management action.

246 LSP segments may be managed and advertised as TE links.

248 2.4. Hybrid Methods

250 There is nothing to prevent the mixture of signaling methods 251 described above when establishing a single, end-to-end, inter-domain 252 TE LSP. It may be desirable in this case for the choice of the 253 various methods to be reported along the path, perhaps through the 254 Record Route Object (RRO).

256 If there is a desire to restrict which methods are used, this must be 257 signaled as described in the next section.

259 2.5. Control of Downstream Choice of Signaling Method

261 Notwithstanding the previous section, an ingress LSR may wish to 262 restrict the signaling methods applied to a particular LSP at domain 263 boundaries across the network. Such control, where it is required, 264 may be achieved by the definition of appropriate new flags in the 265 SESSION-ATTRIBUTE object or the Attributes Flags TLV of the 266 LSP_ATTRIBUTES object [RFC4420]. Before defining a mechanism to 267 provide this level of control, the functional requirement to control 268 the way in which the network delivers a service must be established, 269 and due consideration must be given to the impact on 270 interoperability. New mechanisms must be backwards compatible, 271 and care must be taken to avoid a situation in which 272 standards-conformant implementations, each supporting a different 273 functional subset, are unable to establish LSPs with each other.

275 3.
Path Computation Techniques 277 The discussion of path computation techniques within this document is 278 limited significantly to the determination of where computation may 279 take place and what components of the full path may be determined. 281 The techniques used are closely tied to the signaling methodologies 282 described in the previous section in that certain computation 283 techniques may require the use of particular signaling approaches and 284 vice versa. 286 Any discussion of the appropriateness of a particular path 287 computation technique in any given circumstance is beyond the scope 288 of this document and should be described in a separate applicability 289 statement. 291 Path computation algorithms are firmly out of scope of this document. 293 3.1. Management Configuration 295 Path computation may be performed by offline tools or by a network 296 planner. The resultant path may be supplied to the ingress LSR as 297 part of the TE LSP or service request, and encoded by the ingress LSR 298 as an Explicit Route Object (ERO) on the Path message that is sent 299 out. 301 There is no reason why the path provided by the operator should not 302 span multiple domains if the relevant information is available to the 303 planner or the offline tool. The definition of what information is 304 needed to perform this operation and how that information is 305 gathered, is outside the scope of this document. 307 3.2. Head End Computation 309 The head end, or ingress, LSR may assume responsibility for path 310 computation when the operator supplies part or none of the explicit 311 path. The operator must, in any case, supply at least the destination 312 address (egress) of the LSP. 314 3.2.1. Multi-Domain Visibility Computation 316 If the ingress has sufficient visibility of the topology and TE 317 information for all of the domains across which it will route the LSP 318 to its destination then it may compute and provide the entire path. 319 The quality of this path (that is, its optimality as discussed in 320 section 3.5) can be better if the ingress has full visibility into 321 all relevant domains rather than just sufficient visibility to 322 provide some path to the destination. 324 Extreme caution must be exercised in consideration of the 325 distribution of the requisite TE information. See section 4. 327 3.2.2. Partial Visibility Computation 329 It may be that the ingress does not have full visibility of the 330 topology of all domains, but does have information about the 331 connectedness of the domains and the TE resource availability across 332 the domains. In this case, the ingress is not able to provide a fully 333 specified strict explicit path from ingress to egress. However, for 334 example, the ingress might supply an explicit path that comprises: 335 - explicit hops from ingress to the local domain boundary 336 - loose hops representing the domain entry points across the network 337 - a loose hop identifying the egress. 339 Alternatively, the explicit path might be expressed as: 340 - explicit hops from ingress to the local domain boundary 341 - strict hops giving abstract nodes representing each domain in turn 342 - a loose hop identifying the egress. 344 These two explicit path formats could be mixed according to the 345 information available resulting in different combinations of loose 346 hops and abstract nodes. 348 This form of explicit path relies on some further computation 349 technique being applied at the domain boundaries. See section 3.3. 
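As an illustration only, the following Python sketch shows how an ingress LSR with partial visibility might assemble an explicit route of the kind just described: strict hops within its own domain, followed by loose hops for the downstream domain entry points and for the egress. The hop structure and the addresses are invented for this example and are not taken from any protocol specification; the sketch simply mirrors the loose/strict distinction carried in the ERO.

   # Illustrative sketch only: assembling a partial explicit route as
   # described in section 3.2.2.  The classes and addresses are
   # hypothetical and do not correspond to any defined encoding.

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class EroHop:
       """One subobject of an explicit route (a node or abstract node)."""
       node: str            # router ID, or an abstract node such as an AS
       loose: bool = False  # True = loose hop, False = strict hop

   def build_partial_ero(strict_hops: List[str],
                         domain_entry_points: List[str],
                         egress: str) -> List[EroHop]:
       """Strict hops to the local domain boundary, then loose hops for
       each downstream domain entry point, then the egress (loose)."""
       ero = [EroHop(node) for node in strict_hops]
       ero += [EroHop(node, loose=True) for node in domain_entry_points]
       ero.append(EroHop(egress, loose=True))   # expanded later (section 3.3)
       return ero

   if __name__ == "__main__":
       # Hypothetical ingress in domain A, crossing domains B and C.
       for hop in build_partial_ero(
               strict_hops=["192.0.2.1", "192.0.2.7"],
               domain_entry_points=["198.51.100.1", "203.0.113.9"],
               egress="203.0.113.200"):
           print(("L " if hop.loose else "S ") + hop.node)

Each loose hop in such a route would be expanded by the computation performed at the corresponding domain boundary, as discussed in section 3.3.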
351 As with the multi-domain visibility option, extreme caution must be 352 exercised in consideration of the distribution of the requisite TE 353 information. See section 4. 355 3.2.3. Local Domain Visibility Computation 357 A final possibility for ingress-based computation is that the ingress 358 LSR has visibility only within its own domain, and connectivity 359 information only as far as determining one or more domain exit points 360 that may be suitable for carrying the LSP to its egress. 362 In this case the ingress builds an explicit path that comprises just: 363 - explicit hops from ingress to the local domain boundary 364 - a loose hop identifying the egress. 366 3.3. Domain Boundary Computation 368 If the partial explicit path methods described in sections 3.2.2 or 369 3.2.3 are applied then the LSR at each domain boundary is responsible 370 for ensuring that there is sufficient path information added to the 371 Path message to carry it at least to the next domain boundary (that 372 is, out of the new domain). 374 If the LSR at the domain boundary has full visibility to the egress 375 then it can supply the entire explicit path. Note however, that the 376 ERO processing rules of [RFC3209] state that it should only update 377 the ERO as far as the next specified hop (that is, the next domain 378 boundary if one was supplied in the original ERO) and, of course, 379 must not insert ERO subobjects immediately before a strict hop. 381 If the LSR at the domain boundary has only partial visibility (using 382 the definitions of section 3.2.2) it will fill in the path as far as 383 the next domain boundary, and will supply further domain/domain 384 boundary information if not already present in the ERO. 386 If the LSR at the domain boundary has only local visibility into the 387 immediate domain it will simply add information to the ERO to carry 388 the Path message as far as the next domain boundary. 390 Domain boundary path computations are performed independently from 391 each other. Domain boundary LSRs may have different computation 392 capabilities, run different path computation algorithms, apply 393 different sets of constraints and optimization criteria, and so 394 forth, which might result in path segment quality which is 395 unpredictable to and out of the control of the ingress LSR. A 396 solution to this issue lies in enhancing the information signaled 397 during LSP setup to include a larger set of constraints and to 398 include the paths of related LSPs (such as diverse protected LSPs) 399 as described in [GMPLS-E2E]. 401 It is also the case that paths generated on domain boundaries may 402 produce loops. Specifically, the paths computed may loop back into a 403 domain that has already been crossed by the LSP. This may, or may not 404 be a problem, and might even be desirable, but could also give rise 405 to real loops. This can be avoided by using the recorded route (RRO) 406 to provide exclusions within the path computation algorithm, but in 407 the case of lack of trust between domains it may be necessary for the 408 RRO to indicate the previously visited domains. Even this solution is 409 not available where the RRO is not available on a Path message. Note 410 that when an RRO is used to provide exclusions, and a loop-free path 411 is found to be not available by the computation at a downstream 412 border node, crankback [CRANKBACK] may enable an upstream border node 413 to select an alternate path. 415 3.4. 
Path Computation Element

417 The computation techniques in sections 3.2 and 3.3 rely on topology 418 and TE information being distributed to the ingress LSR and those 419 LSRs at domain boundaries. These LSRs are responsible for computing 420 paths. Note that there may be scaling concerns with distributing the 421 required information - see section 4.

423 An alternative technique places the responsibility for path 424 computation with a Path Computation Element (PCE) [PCE]. There may be 425 either a centralized PCE, or multiple PCEs (each having local 426 visibility and collaborating in a distributed fashion to compute an 427 end-to-end path) across the entire network and even within any one 428 domain. The PCE may collect topology and TE information from the same 429 sources as would be used by the LSRs in the previous paragraph, or 430 through other means.

432 Each LSR called upon to perform path computation (and even the 433 offline management tools described in section 3.1) may delegate the 434 task to a PCE of its choice. The selection of PCE(s) may be driven by 435 static configuration or by dynamic discovery.

437 3.4.1. Multi-Domain Visibility Computation

439 A PCE may have full visibility, perhaps through connectivity to 440 multiple domains. In this case it is able to supply a full explicit 441 path as in section 3.2.1.

443 3.4.2. Path Computation Use of PCE When Preserving Confidentiality

445 Note that although a centralized PCE or multiple collaborative PCEs 446 may have full visibility into one or more domains, it may be 447 desirable (e.g. to preserve topology confidentiality) that the full 448 path is not provided to the ingress LSR. Instead, a partial path is 449 supplied (as in section 3.2.2 or 3.2.3) and the LSRs at each domain 450 boundary are required to make further requests for each successive 451 segment of the path.

453 In this way an end-to-end path may be computed using the full network 454 capabilities, but confidentiality between domains may be preserved. 455 Optionally, the PCE(s) may compute the entire path at the first 456 request and hold it in storage for subsequent requests, or it may 457 recompute each leg of the path on each request or at regular 458 intervals until requested by the LSRs establishing the LSP.

460 It may be the case that the centralized PCE or the collaboration 461 between PCEs defines a trust relationship greater than that 462 normally operational between domains.

464 3.4.3. Per-Domain Computation Elements

466 A third way that PCEs may be used is simply to have one (or more) per 467 domain. Each LSR within a domain that wishes to derive a path across 468 the domain may consult its local PCE.

470 This mechanism could be used for all path computations within the 471 domain, or specifically limited to computations for LSPs that will 472 leave the domain, in which case external connectivity information can 473 be restricted to just the PCE.

475 3.5. Optimal Path Computation

477 There are many definitions of an optimal path depending on the 478 constraints applied to the path computation. In a multi-domain 479 environment the definitions are multiplied so that an optimal route 480 might be defined as the route that would be computed in the absence 481 of domain boundaries. Alternatively, another constraint might be 482 applied to the path computation to reduce or limit the number of 483 domains crossed by the LSP.
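As a purely illustrative example of applying such a constraint, the following Python sketch performs a shortest-path search in which each change of domain along the path counts against a configured limit. The topology, domain labels, and metrics are invented, and the algorithm (a Dijkstra-style search over (node, crossings) states) is only one possible way to express a limit on the number of domains crossed.

   # Illustrative sketch only: shortest path subject to a limit on the
   # number of domain crossings.  Topology and metrics are hypothetical.

   import heapq
   from typing import Dict, List, Tuple

   Graph = Dict[str, List[Tuple[str, int]]]   # node -> [(neighbor, TE metric)]

   def constrained_path(graph: Graph, domain_of: Dict[str, str],
                        src: str, dst: str, max_crossings: int):
       """Dijkstra over (node, crossings) states; a hop between nodes in
       different domains counts as one domain crossing."""
       best = {}                               # (node, crossings) -> best cost
       heap = [(0, 0, src, [src])]             # (cost, crossings, node, path)
       while heap:
           cost, crossings, node, path = heapq.heappop(heap)
           if node == dst:
               return cost, crossings, path
           if best.get((node, crossings), float("inf")) < cost:
               continue                        # stale heap entry
           for nxt, metric in graph.get(node, []):
               nxt_cross = crossings + (domain_of[node] != domain_of[nxt])
               if nxt_cross > max_crossings:
                   continue                    # would cross too many domains
               nxt_cost = cost + metric
               if nxt_cost < best.get((nxt, nxt_cross), float("inf")):
                   best[(nxt, nxt_cross)] = nxt_cost
                   heapq.heappush(heap, (nxt_cost, nxt_cross, nxt, path + [nxt]))
       return None                             # no path within the constraint

   if __name__ == "__main__":
       # Hypothetical three-domain topology: the cheapest route detours
       # through domain C, but limiting crossings to 1 forces the direct
       # (more expensive) route from domain A into domain B.
       graph = {"a1": [("a2", 1)], "a2": [("c1", 1), ("b1", 5)],
                "c1": [("b1", 1)], "b1": [("b2", 1)], "b2": []}
       domain_of = {"a1": "A", "a2": "A", "c1": "C", "b1": "B", "b2": "B"}
       print(constrained_path(graph, domain_of, "a1", "b2", max_crossings=1))
       print(constrained_path(graph, domain_of, "a1", "b2", max_crossings=2))

With the crossing limit relaxed, the search returns the cheaper route through the third domain, illustrating the trade-off between path cost and the number of domains traversed.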
485 It is easy to construct examples that show that partitioning a 486 network into domains, and the resulting loss or aggregation of 487 routing information may lead to the computation of routes that are 488 other than optimal. It is impossible to guarantee optimal routing in 489 the presence of aggregation / abstraction / summarization of routing 490 information. 492 It is beyond the scope of this document to define what is an optimum 493 path for an inter-domain TE LSP. This debate is abdicated in favor of 494 requirements documents and applicability statements for specific 495 deployment scenarios. Note, however, that the meaning of certain 496 computation metrics may differ between domains (see section 5.6). 498 4. Distributing Reachability and TE Information 500 Traffic Engineering information is collected into a TE Database (TED) 501 on which path computation algorithms operate either directly or by 502 first constructing a network graph. 504 The path computation techniques described in the previous section 505 make certain demands upon the distribution of reachability 506 information and the TE capabilities of nodes and links within domains 507 as well as the TE connectivity across domains. 509 Currently, TE information is distributed within domains by additions 510 to IGPs [RFC3630], [RFC3784]. 512 In cases where two domains are interconnected by one or more links 513 (that is, the domain boundary falls on a link rather than on a node), 514 there should be a mechanism to distribute the TE information 515 associated with the inter-domain links to the corresponding domains. 516 This would facilitate better path computation and reduce TE-related 517 crankbacks on these links. 519 Where a domain is a subset of an IGP area, filtering of TE 520 information may be applied at the domain boundary. This filtering may 521 be one way, or two way. 523 Where information needs to reach a PCE that spans multiple domains, 524 the PCE may snoop on the IGP traffic in each domain, or play an 525 active part as an IGP-capable node in each domain. The PCE might also 526 receive TED updates from a proxy within the domain. 528 It could be possible that an LSR that performs path computation (for 529 example, an ingress LSR) obtains the topology and TE information of 530 not just its own domain, but other domains as well. This information 531 may be subject to filtering applied by the advertising domain (for 532 example, the information may be limited to Forwarding Adjacencies 533 (FAs) across other domains, or the information may be aggregated or 534 abstracted). 536 Before starting work on any protocols or protocol extensions to 537 enable cross-domain reachability and TE advertisement in support of 538 inter-domain TE, the requirements and benefits must be clearly 539 established. This has not been done to date. Where any cross-domain 540 reachability and TE information needs to be advertised, consideration 541 must be given to TE extensions to existing protocols such as BGP, and 542 how the information advertised may be fed to the IGPs. It must be 543 noted that any extensions that cause a significant increase in the 544 amount of processing (such as aggregation computation) at domain 545 boundaries, or a significant increase in the amount of information 546 flooded (such as detailed TE information) need to be treated with 547 extreme caution and compared carefully with the scaling requirements 548 expressed in [RFC4105] and [RFC4216]. 550 5. 
Comments on Advanced Functions 552 This section provides some non-definitive comments on the constraints 553 placed on advanced MPLS TE functions by inter-domain MPLS. It does 554 not attempt to state the implications of using one inter-domain 555 technique or another. Such material is deferred to appropriate 556 applicability statements where statements about the capabilities of 557 existing or future signaling, routing and computation techniques to 558 deliver the functions listed should be made. 560 5.1. LSP Re-Optimization 562 Re-optimization is the process of moving a TE LSP from one path to 563 another, more preferable path (where no attempt is made in this 564 document to define "preferable" as no attempt was made to define 565 "optimal"). Make-before-break techniques are usually applied to 566 ensure that traffic is disrupted as little as possible. The Shared 567 Explicit style is usually used to avoid double booking of network 568 resources. 570 Re-optimization may be available within a single domain. 571 Alternatively, re-optimization may involve a change in route across 572 several domains or might involve a choice of different transit 573 domains. 575 Re-optimization requires that all or part of the path of the LSP be 576 re-computed. The techniques used may be selected as described in 577 section 3, and this will influence whether the whole or part of the 578 path is re-optimized. 580 The trigger for path computation and re-optimization may be an 581 operator request, a timer, information about a change in 582 availability of network resources, or a change in operational 583 parameters (for example bandwidth) of an LSP. This trigger must be 584 applied to the point in the network that requests re-computation and 585 controls re-optimization and may require additional signaling. 587 Note also that where multiple mutually-diverse paths are applied 588 end-to-end (i.e. not simply within protection domains - see section 589 5.5) the point of calculation for re-optimization (whether it is PCE, 590 ingress, or domain entry point) needs to know all such paths before 591 attempting re-optimization of any one path. Mutual diversity here 592 means that a set of computed paths have no commonality. Such 593 diversity might be link, node, Shared Risk Link Group (SRLG) or even 594 domain disjointedness according to circumstances and the service 595 being delivered. 597 It may be the case that re-optimization is best achieved by 598 recomputing the paths of multiple LSPs at once. Indeed, this can be 599 shown to be most efficient when the paths of all LSPs are known, not 600 simply those LSPs that originate at a particular ingress. While this 601 problem is inherited from single domain re-optimization and is out of 602 scope within this document, it should be noted that the problem grows 603 in complexity when LSPs wholly within one domain affect the 604 re-optimization path calculations performed in another domain. 606 5.2. LSP Setup Failure 608 When an inter-domain LSP setup fails in some domain other than the 609 first, various options are available for reporting and retrying the 610 LSP. 612 In the first instance, a retry may be attempted within the domain 613 that contains the failure. That retry may be attempted by nodes 614 wholly within the domain, or the failure may be referred back to the 615 LSR at the domain boundary. 
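Purely as an illustration of the retry behavior described above and continued below, the following Python sketch shows how a domain-border node might retry within its own domain, accumulating exclusions for the resources that failed, before referring the failure (together with the collected exclusions) back upstream. The helper functions and their signatures are hypothetical and do not correspond to any defined protocol interface.

   # Illustrative sketch only: retry an LSP segment within the local
   # domain, then refer the failure upstream if no alternate route exists.
   # compute_segment and signal_segment are hypothetical helpers.

   from typing import Callable, List, Optional

   def setup_across_domain(compute_segment: Callable[[List[str]], Optional[List[str]]],
                           signal_segment: Callable[[List[str]], Optional[str]],
                           max_retries: int = 3):
       """Return (path, exclusions) on success, or (None, exclusions) when the
       failure, and what was learned about it, must be reported upstream."""
       exclusions: List[str] = []              # links/nodes that failed so far
       for _ in range(max_retries):
           path = compute_segment(exclusions)  # path computation avoiding failures
           if path is None:
               break                           # no alternate route in this domain
           failed = signal_segment(path)       # None on success, else failed resource
           if failed is None:
               return path, exclusions
           exclusions.append(failed)           # remember the failure and retry
       return None, exclusions                 # refer the failure upstream

   if __name__ == "__main__":
       # Toy example: the first path fails on link "l2"; the recomputed
       # path avoids it and succeeds.
       candidates = {(): ["r1", "l2", "r3"], ("l2",): ["r1", "l4", "r3"]}
       compute = lambda excl: candidates.get(tuple(excl))
       signal = lambda path: "l2" if "l2" in path else None
       print(setup_across_domain(compute, signal))

The exclusions accumulated in this way correspond to the kind of failure information that crankback signaling [CRANKBACK] or explicit route exclusions [EXCLUDE] would carry between domains.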
617 If the failure cannot be bypassed within the domain where the failure 618 occurred (perhaps there is no suitable alternate route, perhaps 619 rerouting is not allowed by domain policy, or perhaps the Path 620 message specifically bans such action), the error must be reported 621 back to the previous or head-end domain.

623 Subsequent repair attempts may be made by domains further upstream, 624 but will only be properly effective if sufficient information about 625 the failure and other failed repair attempts is also passed back 626 upstream [CRANKBACK]. Note that there is a tension between this 627 requirement and that of topology confidentiality, although crankback 628 aggregation may be applicable at domain boundaries.

630 Further attempts to signal the failed LSP may apply the information 631 about the failures as constraints to path computation, or may signal 632 them as specific path exclusions [EXCLUDE].

634 When requested by signaling, the failure may also be systematically 635 reported to the head-end LSR.

637 5.3. LSP Repair

639 An LSP that fails after it has been established may be repaired 640 dynamically by re-routing. The behavior in this case is either like 641 that for re-optimization, or for handling setup failures (see 642 previous two sections). Fast Reroute may also be used (see below).

644 5.4. Fast Reroute

646 MPLS Traffic Engineering Fast Reroute ([RFC4090]) defines local 647 protection schemes intended to provide fast recovery (in tens of 648 milliseconds) of fast-reroutable packet-based TE LSPs upon 649 link/SRLG/node failure. A backup TE LSP is configured and signaled at 650 each hop, and activated upon detecting or being informed of a network 651 element failure. The node immediately upstream of the failure (called 652 the PLR - Point of Local Repair) reroutes the set of protected TE LSPs 653 onto the appropriate backup tunnel(s) and around the failed resource.

655 In the context of inter-domain TE, there are several different 656 failure scenarios that must be analyzed. Provision of suitable 657 solutions may be further complicated by the fact that [RFC4090] 658 specifies two distinct modes of operation referred to as the 659 "one-to-one mode" and the "facility backup mode".

661 The failure scenarios specific to inter-domain TE are as follows:

663 - Failure of a domain edge node that is present in both domains. 664 There are two sub-cases:

666 - The Point of Local Repair (PLR) and the Merge Point (MP) are in 667 the same domain

669 - The PLR and the MP are in different domains.

671 - Failure of a domain edge node that is only present in one of the 672 domains.

674 - Failure of an inter-domain link.

676 Although it may be possible to apply the same techniques for FRR to 677 the different methods of signaling inter-domain LSPs described in 678 section 2, the results of protection may be different when it is the 679 boundary nodes that need to be protected, and when they are the 680 ingress and egress of a hierarchical LSP or stitched LSP segment. In 681 particular, the choice of PLR and MP may be different, and the length 682 of the protection path may be greater. These uses of FRR techniques 683 should be explained further in applicability statements or, in the 684 case of a change in base behavior, in implementation guidelines 685 specific to the signaling techniques.

687 Note that after local repair has been performed, it may be desirable 688 to re-optimize the LSP (see section 5.1).
If the point of 689 re-optimization (for example the ingress LSR) lies in a different 690 domain to the failure, it may rely on the delivery of a PathErr or 691 Notify message to inform it of the local repair event. 693 It is important to note that Fast Reroute techniques are only 694 applicable to packet switching networks because other network 695 technologies cannot apply label stacking within the same switching 696 type. Segment protection [SEG-PROT] provides a suitable alternative 697 that is applicable to packet and non-packet networks. 699 5.5. Comments on Path Diversity 701 Diverse paths may be required in support of load sharing and/or 702 protection. Such diverse paths may be required to be node diverse, 703 link diverse, fully path diverse (that is, link and node diverse), or 704 SRLG diverse. 706 Diverse path computation is a classic problem familiar to all graph 707 theory majors. The problem is compounded when there are areas of 708 "private knowledge" such as when domains do not share topology 709 information. The problem can be resolved more efficiently (e.g. 710 avoiding the "trap problem") when mutually resource disjoint paths 711 can be computed "simultaneously" on the fullest set of information. 713 That being said, various techniques (out of the scope of this 714 document) exist to ensure end-to-end path diversity across multiple 715 domains. 717 Many network technologies utilize "protection domains" because they 718 fit well with the capabilities of the technology. As a result, many 719 domains are operated as protection domains. In this model, protection 720 paths converge at domain boundaries. 722 Note that the question of SRLG identification is not yet fully 723 answered. There are two classes of SRLG: 725 - those that indicate resources that are all contained within one 726 domain 728 - those that span domains. 730 The former might be identified using a combination of a globally 731 scoped domain ID, and an SRLG ID that is administered by the domain. 732 The latter requires a global scope to the SRLG ID. Both schemes, 733 therefore, require external administration. The former is able to 734 leverage existing domain ID administration (for example, area and AS 735 numbers), but the latter would require a new administrative policy. 737 5.6. Domain-Specific Constraints 739 While the meaning of certain constraints, like bandwidth, can be 740 assumed to be constant across different domains, other TE constraints 741 (such as resource affinity, color, metric, priority, etc.) may have 742 different meanings in different domains and this may impact the 743 ability to support DiffServ-aware MPLS, or to manage pre-emption. 745 In order to achieve consistent meaning and LSP establishment, this 746 fact must be considered when performing constraint-based path 747 computation or when signaling across domain boundaries. 749 A mapping function can be derived for most constraints based on 750 policy agreements between the Domain administrators. The details of 751 such a mapping function are outside the scope of this document, but 752 it is important to note that the default behavior must either be 753 that a constant mapping is applied or that any requirement to apply 754 these constraints across a domain boundary must fail in the absence 755 of explicit mapping rules. 757 5.7. Policy Control 759 Domain boundaries are natural points for policy control. 
There is 760 little to add on this subject except to note that a TE LSP that 761 cannot be established on a path through one domain because of a 762 policy applied at the domain boundary, may be satisfactorily 763 established using a path that avoids the demurring domain. In any 764 case, when a TE LSP signaling attempt is rejected due to 765 non-compliance with some policy constraint, this should be reflected 766 to the ingress LSR. 768 5.8. Inter-domain Operations and Management (OAM) 770 Some elements of OAM may be intentionally confined within a domain. 771 Others (such as end-to-end liveness and connectivity testing) clearly 772 need to span the entire multi-domain TE LSP. Where issues of 773 topology confidentiality are strong, collaboration between PCEs or 774 domain boundary nodes might be required in order to provide 775 end-to-end OAM, and a significant issue to be resolved is to ensure 776 that the end-points support the various OAM capabilities. 778 The different signaling mechanisms described above may need 779 refinements to [RFC4379], and [BFD-MPLS], etc., to gain full 780 end-to-end visibility. These protocols should, however, be considered 781 in the light of topology confidentiality requirements. 783 Route recording is a commonly used feature of signaling that provides 784 OAM information about the path of an established LSP. When an LSP 785 traverses a domain boundary, the border node may remove or aggregate 786 some of the recorded information for topology confidentiality or 787 other policy reasons. 789 5.9. Point-to-Multipoint 791 Inter-domain point-to-multipoint (P2MP) requirements are explicitly 792 out of scope of this document. They may be covered by other documents 793 dependent on the details of MPLS TE P2MP solutions. 795 5.10. Applicability to Non-Packet Technologies 797 Non-packet switching technologies may present particular issues for 798 inter-domain LSPs. While packet switching networks may utilize 799 control planes built on MPLS or GMPLS technology, non-packet networks 800 are limited to GMPLS. 802 On the other hand, some problems such as Fast Re-Route on domain 803 boundaries (see section 5.4) may be handled by the GMPLS technique of 804 segment protection [GMPLS-SEG] that is applicable to both packet and 805 non-packet switching technologies. 807 The specific architectural considerations and requirements for 808 inter-domain LSP setup in non-packet networks are covered in a 809 separate document [GMPLS-AS]. 811 6. Security Considerations 813 Requirements for security within domains are unchanged from [RFC3209] 814 and [RFC3473], and from [RFC3630] and [RFC3784]. That is, all 815 security procedures for existing protocols in the MPLS context 816 continue to apply for the intra-domain cases. 818 Inter-domain security may be considered as a more important and more 819 sensitive issue than intra-domain security since in inter-domain 820 traffic engineering control and information may be passed across 821 administrative boundaries. The most obvious, and most sensitive case 822 is inter-AS TE. 824 All of the intra-domain security measures for the signaling and 825 routing protocols are equally applicable in the inter-domain case. 826 There is, however, a greater likelihood of them being applied in the 827 inter-domain case. 829 Security for inter-domain MPLS TE is the subject of a separate 830 document that analyses the security deployment models and risks. 
This 831 separate document must be completed before inter-domain MPLS TE 832 solution documents can be advanced.

834 Similarly, the PCE procedures [PCE] are subject to security measures 835 for the exchange of computation information between PCEs, and for 836 LSRs that request path computations from a PCE. The requirements for 837 this security (set out in [PCE-REQ]) apply whether the LSR and PCE 838 (or the cooperating PCEs) are in the same domain or lie across domain 839 boundaries.

841 It should be noted, however, that techniques used for (for example) 842 authentication require coordination of secrets, keys, or passwords 843 between sender and receiver. Where sender and receiver lie within a 844 single administrative domain, this process may be simple. But where 845 sender and receiver lie in different administrative domains, 846 cross-domain coordination between network administrators will be 847 required in order to provide adequate security. At this stage, it is 848 not proposed that this coordination be provided through an automatic 849 process or through the use of a protocol. Human-to-human 850 coordination is more likely to provide the required level of 851 confidence in the inter-domain security.

853 One new security concept is introduced by inter-domain MPLS TE. This 854 is the preservation of confidentiality of topology information. That 855 is, one domain may wish to keep secret the way that its network is 856 constructed and the availability (or otherwise) of end-to-end network 857 resources. This issue is discussed in sections 3.4.2, 5.2, and 5.8 of 858 this document. When there is a requirement to preserve inter-domain 859 topology confidentiality, policy filters must be applied at the 860 domain boundaries to avoid distributing such information. This is the 861 responsibility of the domain that distributes information, and may be 862 adequately addressed by aggregation of information as described in 863 the referenced sections.

865 Applicability statements for particular combinations of signaling, 866 routing, and path computation techniques to provide inter-domain MPLS 867 TE solutions are expected to contain detailed security sections.

869 7. IANA Considerations

871 This document makes no requests for any IANA action.

873 8. Acknowledgements

875 The authors would like to extend their warmest thanks to Kireeti 876 Kompella for convincing them to expend effort on this document.

878 Grateful thanks to Dimitri Papadimitriou, Tomohiro Otani and Igor 879 Bryskin for their review and suggestions on the text.

881 Thanks to Jari Arkko, Gonzalo Camarillo, Brian Carpenter, 882 Lisa Dusseault, Sam Hartman, Russ Housley, and Dan Romascanu for 883 final review of the text.

885 9. Intellectual Property Considerations

887 The IETF takes no position regarding the validity or scope of any 888 Intellectual Property Rights or other rights that might be claimed to 889 pertain to the implementation or use of the technology described in 890 this document or the extent to which any license under such rights 891 might or might not be available; nor does it represent that it has 892 made any independent effort to identify any such rights. Information 893 on the procedures with respect to rights in RFC documents can be 894 found in BCP 78 and BCP 79.
896 Copies of IPR disclosures made to the IETF Secretariat and any 897 assurances of licenses to be made available, or the result of an 898 attempt made to obtain a general license or permission for the use of 899 such proprietary rights by implementers or users of this 900 specification can be obtained from the IETF on-line IPR repository at 901 http://www.ietf.org/ipr.

903 The IETF invites any interested party to bring to its attention any 904 copyrights, patents or patent applications, or other proprietary 905 rights that may cover technology that may be required to implement 906 this standard. Please address the information to the IETF at 907 ietf-ipr@ietf.org.

909 10. Normative References

911 [RFC3031] Rosen, E., Viswanathan, A., and Callon, R., 912 "Multiprotocol Label Switching Architecture", RFC 3031, 913 January 2001.

915 [RFC3209] Awduche, D., et al., "Extensions to RSVP for LSP 916 Tunnels", RFC 3209, December 2001.

918 [RFC3473] Berger, L., Editor, "Generalized Multi-Protocol Label 919 Switching (GMPLS) Signaling - Resource ReserVation 920 Protocol-Traffic Engineering (RSVP-TE) Extensions", 921 RFC 3473, January 2003.

923 [RFC3667] Bradner, S., "IETF Rights in Contributions", BCP 78, 924 RFC 3667, February 2004.

926 [RFC3668] Bradner, S., Ed., "Intellectual Property Rights in IETF 927 Technology", BCP 79, RFC 3668, February 2004.

929 [RFC3630] Katz, D., Yeung, D., and Kompella, K., "Traffic Engineering 930 Extensions to OSPF Version 2", RFC 3630, September 2003.

932 [RFC3784] Li, T., and Smit, H., "IS-IS extensions for Traffic 933 Engineering", RFC 3784, June 2004.

935 11. Informational References

937 [RFC4420] Farrel, A., Papadimitriou, D., and Vasseur, JP., "Encoding of 938 Attributes for Multiprotocol Label Switching (MPLS) 939 Label Switched Path (LSP) Establishment Using RSVP-TE", 940 RFC 4420, February 2006.

942 [BFD-MPLS] Aggarwal, R., and Kompella, K., "BFD For MPLS LSPs", work 943 in progress.

945 [CRANKBACK] Farrel, A., et al., "Crankback Signaling Extensions for 946 MPLS Signaling", draft-ietf-ccamp-crankback, 947 work in progress.

949 [EXCLUDE] Lee, et al., "Exclude Routes - Extension to RSVP-TE", 950 draft-ietf-ccamp-rsvp-te-exclude-route, work in 951 progress.

953 [RFC4090] Pan, P., et al., "Fast Reroute Extensions to RSVP-TE 954 for LSP Tunnels", RFC 4090, May 2005.

956 [GMPLS-AS] Otani, T., Kumaki, K., and Okamoto, S., "GMPLS Inter-AS 957 Traffic Engineering Requirements", 958 draft-otani-ccamp-interas-GMPLS-TE, work in progress.

960 [GMPLS-E2E] Lang, J.P., Rekhter, Y., and Papadimitriou, D., Editors, 961 "RSVP-TE Extensions in support of End-to-End 962 GMPLS-based Recovery", 963 draft-lang-ccamp-gmpls-recovery-e2e-signaling, work in 964 progress.

966 [RFC4206] Kompella, K., and Rekhter, Y., "LSP Hierarchy with 967 Generalized MPLS TE", RFC 4206, October 2005.

969 [RFC4105] Le Roux, J.L., Vasseur, JP., and Boyle, J., "Requirements 970 for Inter-Area MPLS Traffic Engineering", RFC 4105, June 2005.

972 [RFC4216] Zhang, R., Vasseur, JP., et al., "MPLS Inter-Autonomous 973 System (AS) Traffic Engineering (TE) Requirements", 974 RFC 4216, November 2005.

976 [RFC4379] Kompella, K., and Swallow, G., "Detecting Multi- 977 Protocol Label Switched (MPLS) Data Plane Failures", 978 RFC 4379, February 2006.

980 [PCE] Ash, G., Farrel, A., and Vasseur, JP., "Path 981 Computation Element (PCE) Architecture", 982 draft-ietf-pce-architecture, work in progress.

984 [PCE-REQ] Ash, G., and Le Roux, J.L., "PCE Communication Protocol 985 Generic Requirements", 986 draft-ietf-pce-comm-protocol-gen-reqs, work in 987 progress.
989 [SEG-PROT] Berger, L., Bryskin, I., Papadimitriou, D. and Farrel, 990 A., "GMPLS Based Segment Recovery", 991 draft-ietf-ccamp-gmpls-segment-recovery, work in 992 progress. 994 [STITCH] Ayyangar, A., and Vasseur, JP., "LSP Stitching with 995 Generalized MPLS TE", 996 draft-ietf-ccamp-lsp-stitching, work in progress. 998 12. Authors' Addresses 1000 Adrian Farrel 1001 Old Dog Consulting 1002 EMail: adrian@olddog.co.uk 1004 JP Vasseur 1005 Cisco Systems, Inc 1006 1414 Massachusetts Avenue 1007 Boxborough, MA 01719 1008 USA 1009 Email: jpv@cisco.com 1011 Arthi Ayyangar 1012 Nuova Systems 1013 Email: arthi@nuovasystems.com 1015 13. Full Copyright Statement 1017 Copyright (C) The Internet Society (2006). This document is subject 1018 to the rights, licenses and restrictions contained in BCP 78, and 1019 except as set forth therein, the authors retain all their rights. 1021 This document and the information contained herein are provided on an 1022 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1023 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET 1024 ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, 1025 INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE 1026 INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1027 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.