2 Network Working Group M. Shand 3 Internet-Draft S. Bryant 4 Intended status: Informational Cisco Systems 5 Expires: August 28, 2008 February 25, 2008 7 IP Fast Reroute Framework 8 draft-ietf-rtgwg-ipfrr-framework-08.txt 10 Status of this Memo 12 By submitting this Internet-Draft, each author represents that any 13 applicable patent or other IPR claims of which he or she is aware 14 have been or will be disclosed, and any of which he or she becomes 15 aware will be disclosed, in accordance with Section 6 of BCP 79. 17 Internet-Drafts are working documents of the Internet Engineering 18 Task Force (IETF), its areas, and its working groups. Note that 19 other groups may also distribute working documents as Internet- 20 Drafts. 22 Internet-Drafts are draft documents valid for a maximum of six months 23 and may be updated, replaced, or obsoleted by other documents at any 24 time. It is inappropriate to use Internet-Drafts as reference 25 material or to cite them other than as "work in progress." 27 The list of current Internet-Drafts can be accessed at 28 http://www.ietf.org/ietf/1id-abstracts.txt. 30 The list of Internet-Draft Shadow Directories can be accessed at 31 http://www.ietf.org/shadow.html. 
33 This Internet-Draft will expire on August 28, 2008. 35 Copyright Notice 37 Copyright (C) The IETF Trust (2008). 39 Abstract 41 This document provides a framework for the development of IP fast- 42 reroute mechanisms which provide protection against link or router 43 failure by invoking locally determined repair paths. Unlike MPLS 44 Fast-reroute, the mechanisms are applicable to a network employing 45 conventional IP routing and forwarding. An essential part of such 46 mechanisms is the prevention of packet loss caused by the loops which 47 normally occur during the re-convergence of the network following a 48 failure. 50 Table of Contents 52 1. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3 53 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 54 3. Problem Analysis . . . . . . . . . . . . . . . . . . . . . . . 6 55 4. Mechanisms for IP Fast-reroute . . . . . . . . . . . . . . . . 7 56 4.1. Mechanisms for fast failure detection . . . . . . . . . . 7 57 4.2. Mechanisms for repair paths . . . . . . . . . . . . . . . 8 58 4.2.1. Scope of repair paths . . . . . . . . . . . . . . . . 9 59 4.2.2. Analysis of repair coverage . . . . . . . . . . . . . 9 60 4.2.3. Link or node repair . . . . . . . . . . . . . . . . . 10 61 4.2.4. Maintenance of Repair paths . . . . . . . . . . . . . 11 62 4.2.5. Multiple failures and Shared Risk Link Groups . . . . 11 63 4.3. Local Area Networks . . . . . . . . . . . . . . . . . . . 12 64 4.4. Mechanisms for micro-loop prevention . . . . . . . . . . . 12 65 5. Management Considerations . . . . . . . . . . . . . . . . . . 12 66 6. Scope and applicability . . . . . . . . . . . . . . . . . . . 13 67 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 68 8. Security Considerations . . . . . . . . . . . . . . . . . . . 13 69 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 13 70 10. Informative References . . . . . . . . . . . . . . . . . . . . 14 71 Authors' Addresses . . . 
. . . . . . . . . . . . . . . . . . . . . 15 72 Intellectual Property and Copyright Statements . . . . . . . . . . 16 74 1. Terminology 76 This section defines words and acronyms used in this draft and other 77 drafts discussing IP Fast-reroute. 79 D Used to denote the destination router under 80 discussion. 82 Distance_opt(A,B) The distance of the shortest path from A to B. 84 Downstream Path This is a subset of the loop-free alternates 85 where the neighbor N meets the following 86 condition: 88 Distance_opt(N, D) < Distance_opt(S, D) 90 E Used to denote the router which is the primary 91 next-hop neighbor to get from S to the 92 destination D. Where there is an ECMP set for the 93 shortest path from S to D, these are referred to 94 as E_1, E_2, etc. 96 ECMP Equal cost multi-path: Where, for a particular 97 destination D, multiple primary next-hops are 98 used to forward traffic because there exist 99 multiple shortest paths from S via different 100 output layer-3 interfaces. 102 FIB Forwarding Information Base. The database used 103 by the packet forwarder to determine what actions 104 to perform on a packet. 106 IPFRR IP fast-reroute. 108 Link(A->B) A link connecting router A to router B. 110 LFA Loop Free Alternate. This is a neighbor N, that 111 is not a primary next-hop neighbor E, whose 112 shortest path to the destination D does not go 113 back through the router S. The neighbor N must 114 meet the following condition: 116 Distance_opt(N, D) < Distance_opt(N, S) + 117 Distance_opt(S, D) 119 Loop Free Neighbor A neighbor N_i, which is not the particular 120 primary neighbor E_k under discussion, and whose 121 shortest path to D does not traverse S. For 122 example, if there are two primary neighbors E_1 123 and E_2, E_1 is a loop-free neighbor with regard 124 to E_2 and vice versa. 
126 Loop Free Link Protecting Alternate 127 This is a path via a Loop-Free Neighbor N_i which 128 does not go through the particular link of S 129 which is being protected to reach the destination 130 D. 132 Loop Free Node-protecting Alternate 133 This is a path via a Loop-Free Neighbor N_i which 134 does not go through the particular primary 135 neighbor of S which is being protected to reach 136 the destination D. 138 N_i The ith neighbor of S. 140 Primary Neighbor A neighbor N_i of S which is one of the next hops 141 for destination D in S's FIB prior to any 142 failure. 144 R_i_j The jth neighbor of N_i. 146 Routing Transition The process whereby routers converge on a new 147 topology. In conventional networks this process 148 frequently causes some disruption to packet 149 delivery. 151 RPF Reverse Path Forwarding. I.e. checking that a 152 packet is received over the interface which would 153 be used to send packets addressed to the source 154 address of the packet. 156 S Used to denote a router that is the source of a 157 repair that is computed in anticipation of the 158 failure of a neighboring router denoted as E, or 159 of the link between S and E. It is the viewpoint 160 from which IP Fast-Reroute is described. 162 S_i The set of neighbors of E, in addition to S, 163 which will independently take the role of S for 164 the traffic they carry. 166 SPF Shortest Path First, e.g. Dijkstra's algorithm. 168 SPT Shortest path tree 170 Upstream Forwarding Loop 171 This is a forwarding loop which involves a set of 172 routers, none of which are directly connected to 173 the link which has caused the topology change 174 that triggered a new SPF in any of the routers. 176 2. Introduction 178 When a link or node failure occurs in a routed network, there is 179 inevitably a period of disruption to the delivery of traffic until 180 the network re-converges on the new topology. 
Packets for 181 destinations which were previously reached by traversing the failed 182 component may be dropped or may suffer looping. Traditionally such 183 disruptions have lasted for periods of at least several seconds, and 184 most applications have been constructed to tolerate such a quality of 185 service. 187 Recent advances in routers have reduced this interval to under a 188 second for carefully configured networks using link state IGPs. 189 However, new Internet services are emerging which may be sensitive to 190 periods of traffic loss which are orders of magnitude shorter than 191 this. 193 Addressing these issues is difficult because the distributed nature 194 of the network imposes an intrinsic limit on the minimum convergence 195 time which can be achieved. 197 However, there is an alternative approach, which is to compute backup 198 routes that allow the failure to be repaired locally by the router(s) 199 detecting the failure without the immediate need to inform other 200 routers of the failure. In this case, the disruption time can be 201 limited to the small time taken to detect the adjacent failure and 202 invoke the backup routes. This is analogous to the technique 203 employed by MPLS Fast-Reroute [RFC4090], but the mechanisms employed 204 for the backup routes in pure IP networks are necessarily very 205 different. 207 This document provides a framework for the development of this 208 approach. 210 3. Problem Analysis 212 The duration of the packet delivery disruption caused by a 213 conventional routing transition is determined by a number of factors: 215 1. The time taken to detect the failure. This may be of the order 216 of a few ms when it can be detected at the physical layer, up to 217 several tens of seconds when a routing protocol hello is 218 employed. During this period packets will be unavoidably lost. 220 2. The time taken for the local router to react to the failure. 
221 This will typically involve generating and flooding new routing 222 updates, perhaps after some hold-down delay, and re-computing the 223 router's FIB. 225 3. The time taken to pass the information about the failure to other 226 routers in the network. In the absence of routing protocol 227 packet loss, this is typically between 10 ms and 100 ms per hop. 229 4. The time taken to re-compute the forwarding tables. This is 230 typically a few ms for a link state protocol using Dijkstra's 231 algorithm. 233 5. The time taken to load the revised forwarding tables into the 234 forwarding hardware. This time is very implementation dependent 235 and also depends on the number of prefixes affected by the 236 failure, but may be several hundred ms. 238 The disruption will last until the routers adjacent to the failure 239 have completed steps 1 and 2, and then all the routers in the network 240 whose paths are affected by the failure have completed the remaining 241 steps. 243 The initial packet loss is caused by the router(s) adjacent to the 244 failure continuing to attempt to transmit packets across the failure 245 until it is detected. This loss is unavoidable, but the detection 246 time can be reduced to a few tens of ms as described in Section 4.1. 248 Subsequent packet loss is caused by the "micro-loops" which form 249 because of temporary inconsistencies between routers' forwarding 250 tables. These occur as a result of the different times at which 251 routers update their forwarding tables to reflect the failure. These 252 variable delays are caused by steps 3, 4 and 5 above and in many 253 routers it is step 5 which is both the largest factor and which has 254 the greatest variance between routers. The large variance arises 255 from implementation differences and from the differing impact that a 256 failure has on each individual router. For example, the number of 257 prefixes affected by the failure may vary dramatically from one 258 router to another. 
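The formation of a micro-loop from such temporarily inconsistent forwarding tables can be illustrated with a short simulation. The three-node topology, costs, and function names below are invented for this sketch and are not part of the framework itself:

```python
# Illustrative micro-loop simulation (invented topology, not from this draft).
# Link costs: A-B = 1, B-D = 3, A-D = 1 (the A-D link fails).
# Before the failure, B reaches D via A (cost 2 < 3 direct).
# After re-convergence, A's shortest path to D is via B.

fib_old = {"A": "D", "B": "A"}   # next hops toward D before the failure
fib_new = {"A": "B", "B": "D"}   # next hops toward D after the failure

def forward(updated, start="B", max_hops=8):
    """Forward a packet destined for D hop by hop; `updated` is the set of
    routers that have already installed the post-failure FIB."""
    node, path = start, [start]
    while node != "D" and len(path) <= max_hops:
        fib = fib_new if node in updated else fib_old
        node = fib[node]
        path.append(node)
    return path

# A has updated but B has not: the packet bounces A <-> B (a micro-loop).
print(forward(updated={"A"}))
# Both routers updated: the packet is delivered to D.
print(forward(updated={"A", "B"}))
```

Before either router updates, traffic is simply blackholed at the failed A-D link (not modeled here); the loop exists only during the window in which the two FIBs disagree.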
260 In order to achieve packet disruption times which are commensurate 261 with the failure detection times it is necessary to perform two 262 distinct tasks: 264 1. Provide a mechanism for the router(s) adjacent to the failure to 265 rapidly invoke a repair path, which is unaffected by any 266 subsequent re-convergence. 268 2. Provide a mechanism to prevent the effects of micro-loops during 269 subsequent re-convergence. 271 Performing the first task without the second will result in the 272 repair path being starved of traffic and hence being redundant. 273 Performing the second without the first will result in traffic being 274 discarded by the router(s) adjacent to the failure. Both tasks are 275 necessary for an effective solution to the problem. 277 However, repair paths can be used in isolation where the failure is 278 short-lived. The repair paths can be kept in place until the failure 279 is repaired and there is no need to advertise the failure to other 280 routers. 282 Similarly, micro-loop avoidance can be used in isolation to prevent 283 loops arising from pre-planned management action, because the link or 284 node being shut down can remain in service for a short time after its 285 removal has been announced into the network, and hence it can 286 function as its own "repair path". 288 Note that micro-loops can also occur when a link or node is restored 289 to service and thus a micro-loop avoidance mechanism is required for 290 both link up and link down cases. 292 4. Mechanisms for IP Fast-reroute 294 The set of mechanisms required for an effective solution to the 295 problem can be broken down into the following sub-problems. 297 4.1. Mechanisms for fast failure detection 299 It is critical that the failure detection time is minimized. A 300 number of approaches are possible, such as: 302 1. Physical detection; for example, loss of light. 304 2. 
Routing protocol independent protocol detection; for example, the 305 Bidirectional Forwarding Detection protocol [I-D.ietf-bfd-base]. 307 3. Routing protocol detection; for example, use of "fast hellos". 309 4.2. Mechanisms for repair paths 311 Once a failure has been detected by one of the above mechanisms, 312 traffic which previously traversed the failure is transmitted over 313 one or more repair paths. The design of the repair paths should be 314 such that they can be pre-calculated in anticipation of each local 315 failure and made available for invocation with minimal delay. There 316 are three basic categories of repair paths: 318 1. Equal cost multi-paths (ECMP). Where such paths exist, and one 319 or more of the alternate paths do not traverse the failure, they 320 may trivially be used as repair paths. 322 2. Loop free alternate paths. Such a path exists when a direct 323 neighbor of the router adjacent to the failure has a path to the 324 destination which can be guaranteed not to traverse the failure. 326 3. Multi-hop repair paths. When there is no feasible loop free 327 alternate path it may still be possible to locate a router, which 328 is more than one hop away from the router adjacent to the 329 failure, from which traffic will be forwarded to the destination 330 without traversing the failure. 332 ECMP and loop free alternate paths (as described in 333 [I-D.ietf-rtgwg-ipfrr-spec-base]) offer the simplest repair paths and 334 would normally be used when they are available. It is anticipated 335 that around 80% of failures (see Section 4.2.2) can be repaired using 336 these basic methods alone. 338 Multi-hop repair paths are more complex, both in the computations 339 required to determine their existence, and in the mechanisms required 340 to invoke them. They can be further classified as: 342 1. 
Mechanisms where one or more alternate FIBs are pre-computed in 343 all routers and the repaired packet is instructed to be forwarded 344 using a "repair FIB" by some method of per packet signaling such 345 as detecting a "U-turn" [I-D.atlas-ip-local-protect-uturn], 346 [FIFR] or by marking the packet [SIMULA]. 348 2. Mechanisms functionally equivalent to a loose source route which 349 is invoked using the normal FIB. These include tunnels 350 [I-D.bryant-ipfrr-tunnels], alternative shortest paths 351 [I-D.tian-frr-alt-shortest-path] and label based mechanisms. 353 3. Mechanisms employing special addresses or labels which are 354 installed in the FIBs of all routers with routes pre-computed to 355 avoid certain components of the network. For example, 356 [I-D.ietf-rtgwg-ipfrr-notvia-addresses]. 358 In many cases a repair path which reaches two hops away from the 359 router detecting the failure will suffice, and it is anticipated that 360 around 98% of failures (see Section 4.2.2) can be repaired by this 361 method. However, to provide complete repair coverage some use of 362 longer multi-hop repair paths is generally necessary. 364 4.2.1. Scope of repair paths 366 A particular repair path may be valid for all destinations which 367 require repair or may only be valid for a subset of destinations. If 368 a repair path is valid for a node immediately downstream of the 369 failure, then it will be valid for all destinations previously 370 reachable by traversing the failure. However, in cases where such a 371 repair path is difficult to achieve because it requires a high order 372 multi-hop repair path, it may still be possible to identify lower 373 order repair paths (possibly even loop free alternate paths) which 374 allow the majority of destinations to be repaired. When IPFRR is 375 unable to provide complete repair, it is desirable that the extent of 376 the repair coverage can be determined and reported via network 377 management. 
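As an illustration of how a router might determine which destinations a loop-free alternate can repair, the sketch below computes shortest-path distances on an invented four-node topology and applies the LFA condition from the Terminology section. The topology, costs, and function names are assumptions for this example only:

```python
# Sketch: checking the loop-free alternate condition from the Terminology
# section on an invented topology:
#   Distance_opt(N, D) < Distance_opt(N, S) + Distance_opt(S, D)
import heapq

GRAPH = {  # undirected links with costs (illustrative only)
    "S": {"E": 1, "N": 2},
    "E": {"S": 1, "D": 1, "N": 3},
    "N": {"S": 2, "E": 3, "D": 1},
    "D": {"E": 1, "N": 1},
}

def dist(src):
    """Plain Dijkstra SPF: shortest distance from src to every node."""
    d, pq = {src: 0}, [(0, src)]
    while pq:
        c, u = heapq.heappop(pq)
        if c > d.get(u, float("inf")):
            continue
        for v, w in GRAPH[u].items():
            if c + w < d.get(v, float("inf")):
                d[v] = c + w
                heapq.heappush(pq, (c + w, v))
    return d

D = {n: dist(n) for n in GRAPH}  # all-pairs distances, one SPF per node

def has_lfa(s, e, dest):
    """Does s have a neighbor, other than primary next hop e, satisfying
    the loop-free condition for dest?"""
    return any(D[n][dest] < D[n][s] + D[s][dest]
               for n in GRAPH[s] if n != e)

# S's primary path to D is S->E->D (cost 2); neighbor N (at distance 1
# from D) satisfies the loop-free condition, so link S-E is repairable.
print(has_lfa("S", "E", "D"))
```

Iterating `has_lfa` over every (router, failed component, destination) triple on a real topology is one way to produce the coverage percentages discussed in Section 4.2.2.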
379 There is a tradeoff to be achieved between minimizing the number of 380 repair paths to be computed, and minimizing the overheads incurred in 381 using higher order multi-hop repair paths for destinations for which 382 they are not strictly necessary. However, the computational cost of 383 determining repair paths on an individual destination basis can be 384 very high. 386 It will frequently be the case that the majority of destinations may 387 be repaired using only the "basic" repair mechanism, leaving a 388 smaller subset of the destinations to be repaired using one of the 389 more complex multi-hop methods. Such a hybrid approach may go some 390 way to resolving the conflict between completeness and complexity. 392 The use of repair paths may result in excessive traffic passing over 393 a link, resulting in congestion discard. This reduces the 394 effectiveness of IPFRR. Mechanisms to influence the distribution of 395 repaired traffic to minimize this effect are therefore desirable. 397 4.2.2. Analysis of repair coverage 399 In some cases the repair strategy will permit the repair of all 400 single link or node failures in the network for all possible 401 destinations. This can be defined as 100% coverage. However, where 402 the coverage is less than 100% it is important for the purposes of 403 comparisons between different proposed repair strategies to define 404 what is meant by such a percentage. There are four possibilities: 406 1. The percentage of links (or nodes) which can be fully protected 407 for all destinations. This is appropriate where the requirement 408 is to protect all traffic, but some percentage of the possible 409 failures may be identified as being un-protectable. 411 2. The percentage of destinations which can be fully protected for 412 all link (or node) failures. 
This is appropriate where the 413 requirement is to protect against all possible failures, but some 414 percentage of destinations may be identified as being un- 415 protectable. 417 3. For all destinations (d) and for all failures (f), the percentage 418 of the total potential failure cases (d*f) which are protected. 419 This is appropriate where the requirement is an overall "best 420 effort" protection. 422 4. The percentage of packets normally passing through the network 423 that will continue to reach their destination. This requires a 424 traffic matrix for the network as part of the analysis. 426 The coverage obtained is dependent on the repair strategy and highly 427 dependent on the detailed topology and metrics. Any figures quoted 428 in this document are for illustrative purposes only. 430 4.2.3. Link or node repair 432 A repair path may be computed to protect against failure of an 433 adjacent link, or failure of an adjacent node. In general, link 434 protection is simpler to achieve. A repair which protects against 435 node failure will also protect against link failure for all 436 destinations except those for which the adjacent node is a single 437 point of failure. 439 In some cases it may be necessary to distinguish between a link or 440 node failure in order that the optimal repair strategy is invoked. 441 Methods for link/node failure determination may be based on 442 techniques such as BFD [I-D.ietf-bfd-base]. This determination may be 443 made prior to invoking any repairs, but this will increase the period 444 of packet loss following a failure unless the determination can be 445 performed as part of the failure detection mechanism itself. 446 Alternatively, a subsequent determination can be used to optimize an 447 already invoked default strategy. 449 4.2.4. 
Maintenance of Repair paths 451 In order to meet the response time goals, it is expected (though not 452 required) that repair paths, and their associated FIB entries, will 453 be pre-computed and installed ready for invocation when a failure is 454 detected. Following invocation the repair paths remain in effect 455 until they are no longer required. This will normally be when the 456 routing protocol has re-converged on the new topology taking into 457 account the failure, and traffic will no longer be using the repair 458 paths. 460 The repair paths have the property that they are unaffected by any 461 topology changes resulting from the failure which caused their 462 instantiation. Therefore there is no need to re-compute them during 463 the convergence period. They may be affected by an unrelated 464 simultaneous topology change, but such events are out of scope of 465 this work (see Section 4.2.5). 467 Once the routing protocol has re-converged it is necessary for all 468 repair paths to take account of the new topology. Various 469 optimizations may permit the efficient identification of repair paths 470 which are unaffected by the change, and hence do not require full re- 471 computation. Since the new repair paths will not be required until 472 the next failure occurs, the re-computation may be performed as a 473 background task and be subject to a hold-down, but excessive delay in 474 completing this operation will increase the risk of a new failure 475 occurring before the repair paths are in place. 477 4.2.5. Multiple failures and Shared Risk Link Groups 479 Complete protection against multiple unrelated failures is out of 480 scope of this work. However, it is important that the occurrence of 481 a second failure while one failure is undergoing repair should not 482 result in a level of service which is significantly worse than that 483 which would have been achieved in the absence of any repair strategy. 
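The repair-path lifecycle described in Section 4.2.4 can be sketched as a small state machine. The state names and transitions below are invented for illustration; a real implementation would also handle hold-down timers and unrelated simultaneous topology changes:

```python
# Illustrative lifecycle of a pre-computed repair path (Section 4.2.4).
# State names are invented for this sketch.
from enum import Enum, auto

class RepairState(Enum):
    PRECOMPUTED = auto()  # installed in the FIB, ready for invocation
    ACTIVE = auto()       # failure detected, repair path carrying traffic
    STALE = auto()        # IGP re-converged; re-computation pending

class RepairPath:
    def __init__(self):
        self.state = RepairState.PRECOMPUTED

    def failure_detected(self):
        # Invoked with minimal delay on detecting the adjacent failure.
        assert self.state is RepairState.PRECOMPUTED
        self.state = RepairState.ACTIVE

    def igp_reconverged(self):
        # Traffic has moved to the new shortest paths; the repair is no
        # longer carrying traffic but must reflect the new topology.
        assert self.state is RepairState.ACTIVE
        self.state = RepairState.STALE

    def recomputed(self):
        # May run as a background task, subject to a hold-down; delay here
        # risks a new failure arriving before repairs are back in place.
        assert self.state is RepairState.STALE
        self.state = RepairState.PRECOMPUTED
```

The key property captured by the `ACTIVE` state is that the repair path is, by construction, unaffected by the topology change that triggered it, so no re-computation is needed until after convergence.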
485 Shared Risk Link Groups are an example of multiple related failures, 486 and the more complex aspects of their protection are a matter for 487 further study. 489 One specific example of an SRLG which is clearly within the scope of 490 this work is a node failure. This causes the simultaneous failure of 491 multiple links, but their closely defined topological relationship 492 makes the problem more tractable. 494 4.3. Local Area Networks 496 Protection against partial or complete failure of LANs is more 497 complex than the point-to-point case. In general there is a tradeoff 498 between the simplicity of the repair and the ability to provide 499 complete and optimal repair coverage. 501 4.4. Mechanisms for micro-loop prevention 503 Control of micro-loops is important not only because they can cause 504 packet loss in traffic which is affected by the failure, but because 505 by saturating a link with looping packets they can also cause 506 congestion loss of traffic flowing over that link which would 507 otherwise be unaffected by the failure. 509 A number of solutions to the problem of micro-loop formation have 510 been proposed and are summarized in [I-D.ietf-rtgwg-lf-conv-frmwk]. 511 The following factors are significant in their classification: 513 1. Partial or complete protection against micro-loops. 515 2. Delay imposed upon convergence. 517 3. Tolerance of multiple failures (from node failures, and in 518 general). 520 4. Computational complexity (pre-computed or real time). 522 5. Applicability to scheduled events. 524 6. Applicability to link/node reinstatement. 526 5. Management Considerations 528 While many of the management requirements will be specific to 529 particular IPFRR solutions, the following general aspects need to be 530 addressed: 532 1. Configuration 534 A. Enabling/disabling IPFRR support. 536 B. Enabling/disabling protection on a per link/node basis. 538 C. Expressing preferences regarding the links/nodes used for 539 repair paths. 
541 D. Configuration of failure detection mechanisms. 543 E. Configuration of loop avoidance strategies. 545 2. Monitoring 547 A. Notification of links/nodes/destinations which cannot be 548 protected. 550 B. Notification of pre-computed repair paths, and anticipated 551 traffic patterns. 553 C. Counts of failure detections, protection invocations and 554 packets forwarded over repair paths. 556 6. Scope and applicability 558 The initial scope of this work is in the context of link state IGPs. 559 Link state protocols provide ubiquitous topology information, which 560 facilitates the computation of repair paths. 562 Provision of similar facilities in non-link state IGPs and BGP is a 563 matter for further study, but the correct operation of the repair 564 mechanisms for traffic with a destination outside the IGP domain is 565 an important consideration for solutions based on this framework. 567 7. IANA Considerations 569 There are no IANA considerations that arise from this framework 570 document. 572 8. Security Considerations 574 This framework document does not itself introduce any security 575 issues, but attention must be paid to the security implications of 576 any proposed solutions to the problem. 578 9. Acknowledgements 580 The authors would like to acknowledge contributions made by Alia 581 Atlas, Clarence Filsfils, Pierre Francois, Joel Halpern, Stefano 582 Previdi and Alex Zinin. 584 10. Informative References 586 [FIFR] Nelakuditi, S., Lee, S., Lu, Y., Zhang, Z., and C. Chuah, 587 "Fast local rerouting for handling transient link 588 failures", Tech. Rep. TR-2004-004, 2004. 590 [I-D.atlas-ip-local-protect-uturn] 591 Atlas, A., "U-turn Alternates for IP/LDP Fast-Reroute", 592 draft-atlas-ip-local-protect-uturn-03 (work in progress), 593 March 2006. 595 [I-D.bryant-ipfrr-tunnels] 596 Bryant, S., Filsfils, C., Previdi, S., and M. Shand, "IP 597 Fast Reroute using tunnels", draft-bryant-ipfrr-tunnels-03 598 (work in progress), November 2007. 
600 [I-D.ietf-bfd-base] 601 Katz, D. and D. Ward, "Bidirectional Forwarding 602 Detection", draft-ietf-bfd-base-07 (work in progress), 603 January 2008. 605 [I-D.ietf-rtgwg-ipfrr-notvia-addresses] 606 Bryant, S., "IP Fast Reroute Using Not-via Addresses", 607 draft-ietf-rtgwg-ipfrr-notvia-addresses-01 (work in 608 progress), July 2007. 610 [I-D.ietf-rtgwg-ipfrr-spec-base] 611 Atlas, A., Zinin, A., Torvi, R., Choudhury, G., Martin, 612 C., Imhoff, B., and D. Fedyk, "Basic Specification for IP 613 Fast-Reroute: Loop-free Alternates", 614 draft-ietf-rtgwg-ipfrr-spec-base-10 (work in progress), 615 November 2007. 617 [I-D.ietf-rtgwg-lf-conv-frmwk] 618 Shand, M. and S. Bryant, "A Framework for Loop-free 619 Convergence", draft-ietf-rtgwg-lf-conv-frmwk-02 (work in 620 progress), February 2008. 622 [I-D.tian-frr-alt-shortest-path] 623 Tian, A., "Fast Reroute using Alternative Shortest Paths", 624 draft-tian-frr-alt-shortest-path-01 (work in progress), 625 July 2004. 627 [RFC4090] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute 628 Extensions to RSVP-TE for LSP Tunnels", RFC 4090, 629 May 2005. 631 [SIMULA] Lysne, O., Kvalbein, A., Cicic, T., Gjessing, S., and A. 633 Hansen, "Fast IP Network Recovery using Multiple Routing 634 Configurations", Infocom, DOI 10.1109/INFOCOM.2006.227, 2006. 637 Authors' Addresses 639 Mike Shand 640 Cisco Systems 641 250, Longwater Avenue. 642 Reading, Berks RG2 6GB 643 UK 645 Email: mshand@cisco.com 647 Stewart Bryant 648 Cisco Systems 649 250, Longwater Avenue. 650 Reading, Berks RG2 6GB 651 UK 653 Email: stbryant@cisco.com 655 Full Copyright Statement 657 Copyright (C) The IETF Trust (2008). 659 This document is subject to the rights, licenses and restrictions 660 contained in BCP 78, and except as set forth therein, the authors 661 retain all their rights. 
663 This document and the information contained herein are provided on an 664 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 665 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 666 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 667 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 668 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 669 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 671 Intellectual Property 673 The IETF takes no position regarding the validity or scope of any 674 Intellectual Property Rights or other rights that might be claimed to 675 pertain to the implementation or use of the technology described in 676 this document or the extent to which any license under such rights 677 might or might not be available; nor does it represent that it has 678 made any independent effort to identify any such rights. Information 679 on the procedures with respect to rights in RFC documents can be 680 found in BCP 78 and BCP 79. 682 Copies of IPR disclosures made to the IETF Secretariat and any 683 assurances of licenses to be made available, or the result of an 684 attempt made to obtain a general license or permission for the use of 685 such proprietary rights by implementers or users of this 686 specification can be obtained from the IETF on-line IPR repository at 687 http://www.ietf.org/ipr. 689 The IETF invites any interested party to bring to its attention any 690 copyrights, patents or patent applications, or other proprietary 691 rights that may cover technology that may be required to implement 692 this standard. Please address the information to the IETF at 693 ietf-ipr@ietf.org. 695 Acknowledgment 697 Funding for the RFC Editor function is provided by the IETF 698 Administrative Support Activity (IASA).