Network Working Group                                          M. Shand
Internet-Draft                                                S. Bryant
Intended status: Informational                             Cisco Systems
Expires: April 26, 2010                                 October 23, 2009


                        IP Fast Reroute Framework
                  draft-ietf-rtgwg-ipfrr-framework-13

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 26, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info).  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

   This document provides a framework for the development of IP fast-reroute mechanisms which provide protection against link or router failure by invoking locally determined repair paths.  Unlike MPLS fast-reroute, the mechanisms are applicable to a network employing conventional IP routing and forwarding.

Table of Contents

   1.  Terminology
   2.  Introduction
   3.  Scope and applicability
   4.  Problem Analysis
   5.  Mechanisms for IP Fast-reroute
     5.1.  Mechanisms for fast failure detection
     5.2.  Mechanisms for repair paths
       5.2.1.  Scope of repair paths
       5.2.2.  Analysis of repair coverage
       5.2.3.  Link or node repair
       5.2.4.  Maintenance of Repair paths
       5.2.5.  Local Area Networks
       5.2.6.  Multiple failures and Shared Risk Link Groups
     5.3.  Mechanisms for micro-loop prevention
   6.  Management Considerations
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. Informative References
   Authors' Addresses

1.  Terminology

   This section defines words and acronyms used in this draft and other drafts discussing IP fast-reroute.

   D                   Used to denote the destination router under discussion.

   Distance_opt(A,B)   The metric sum of the shortest path from A to B.

   Downstream Path     This is a subset of the loop-free alternates where the neighbor N meets the following condition:

                          Distance_opt(N, D) < Distance_opt(S, D)

   E                   Used to denote the router which is the primary neighbor to get from S to the destination D.  Where there is an ECMP set for the shortest path from S to D, these are referred to as E_1, E_2, etc.

   ECMP                Equal cost multi-path: where, for a particular destination D, multiple primary next-hops are used to forward traffic because there exist multiple shortest paths from S via different output layer-3 interfaces.

   FIB                 Forwarding Information Base.  The database used by the packet forwarder to determine what actions to perform on a packet.

   IPFRR               IP fast-reroute.

   Link(A->B)          A link connecting router A to router B.

   LFA                 Loop Free Alternate.  A neighbor N, that is not a primary neighbor E, whose shortest path to the destination D does not go back through the router S.
                       The neighbor N must meet the following condition:

                          Distance_opt(N, D) < Distance_opt(N, S) + Distance_opt(S, D)

   Loop Free Neighbor  A neighbor N_i, which is not the particular primary neighbor E_k under discussion, and whose shortest path to D does not traverse S.  For example, if there are two primary neighbors E_1 and E_2, E_1 is a loop-free neighbor with regard to E_2 and vice versa.

   Loop Free Link-protecting Alternate
                       A path via a Loop-Free Neighbor N_i that reaches destination D without going through the particular link of S that is being protected.  In some cases the path to D may go through the primary neighbor E.

   Loop Free Node-protecting Alternate
                       A path via a Loop-Free Neighbor N_i that reaches destination D without going through the particular primary neighbor (E) of S which is being protected.

   N_i                 The ith neighbor of S.

   Primary Neighbor    A neighbor N_i of S which is one of the next hops for destination D in S's FIB prior to any failure.

   R_i_j               The jth neighbor of N_i.

   Repair Path         The path used by a repairing node to send traffic that it is unable to send via the normal path owing to a failure.

   Routing Transition  The process whereby routers converge on a new topology.  In conventional networks this process frequently causes some disruption to packet delivery.

   RPF                 Reverse Path Forwarding, i.e., checking that a packet is received over the interface which would be used to send packets addressed to the source address of the packet.

   S                   Used to denote a router that is the source of a repair that is computed in anticipation of the failure of a neighboring router denoted as E, or of the link between S and E.  It is the viewpoint from which IP fast-reroute is described.

   SPF                 Shortest Path First, e.g., Dijkstra's algorithm.

   SPT                 Shortest path tree.

   Upstream Forwarding Loop
                       A forwarding loop that involves a set of routers, none of which is directly connected to the link that has caused the topology change that triggered a new SPF in any of the routers.

2.  Introduction

   When a link or node failure occurs in a routed network, there is inevitably a period of disruption to the delivery of traffic until the network re-converges on the new topology.  Packets for destinations which were previously reached by traversing the failed component may be dropped or may suffer looping.  Traditionally such disruptions have lasted for periods of at least several seconds, and most applications have been constructed to tolerate such a quality of service.

   Recent advances in routers have reduced this interval to under a second for carefully configured networks using link state IGPs.  However, new Internet services are emerging which may be sensitive to periods of traffic loss which are orders of magnitude shorter than this.

   Addressing these issues is difficult because the distributed nature of the network imposes an intrinsic limit on the minimum convergence time which can be achieved.

   However, there is an alternative approach, which is to compute backup routes that allow the failure to be repaired locally by the router(s) detecting the failure without the immediate need to inform other routers of the failure.  In this case, the disruption time can be limited to the small time taken to detect the adjacent failure and invoke the backup routes.
   This is analogous to the technique employed by MPLS fast-reroute [RFC4090], but the mechanisms employed for the backup routes in pure IP networks are necessarily very different.

   This document provides a framework for the development of this approach.

   Note that in order to further minimize the impact on user applications, it may be necessary to design the network such that backup paths with suitable characteristics, for example capacity and/or delay, are available for the algorithms to select.  Such considerations are outside the scope of this document.

3.  Scope and applicability

   The initial scope of this work is in the context of link state IGPs.  Link state protocols provide ubiquitous topology information, which facilitates the computation of repair paths.

   Provision of similar facilities in non-link state IGPs and BGP is a matter for further study, but the correct operation of the repair mechanisms for traffic with a destination outside the IGP domain is an important consideration for solutions based on this framework.

   Complete protection against multiple unrelated failures is out of scope of this work.

4.  Problem Analysis

   The duration of the packet delivery disruption caused by a conventional routing transition is determined by a number of factors:

   1.  The time taken to detect the failure.  This may be of the order of a few milliseconds when it can be detected at the physical layer, up to several tens of seconds when a routing protocol Hello is employed.  During this period packets will be unavoidably lost.

   2.  The time taken for the local router to react to the failure.  This will typically involve generating and flooding new routing updates, perhaps after some hold-down delay, and re-computing the router's FIB.

   3.  The time taken to pass the information about the failure to other routers in the network.  In the absence of routing protocol packet loss, this is typically between 10 milliseconds and 100 milliseconds per hop.

   4.  The time taken to re-compute the forwarding tables.  This is typically a few milliseconds for a link state protocol using Dijkstra's algorithm.

   5.  The time taken to load the revised forwarding tables into the forwarding hardware.  This time is very implementation dependent and also depends on the number of prefixes affected by the failure, but may be several hundred milliseconds.

   The disruption will last until the routers adjacent to the failure have completed steps 1 and 2, and then all the routers in the network whose paths are affected by the failure have completed the remaining steps.

   The initial packet loss is caused by the router(s) adjacent to the failure continuing to attempt to transmit packets across the failure until it is detected.  This loss is unavoidable, but the detection time can be reduced to a few tens of milliseconds as described in Section 5.1.

   In some topologies subsequent packet loss may be caused by the "micro-loops" which may form as a result of temporary inconsistencies between routers' forwarding tables [I-D.ietf-rtgwg-lf-conv-frmwk].  These inconsistencies are caused by steps 3, 4 and 5 above, and in many routers it is step 5 which is both the largest factor and the one with the greatest variance between routers.  The large variance arises from implementation differences and from the differing impact that a failure has on each individual router.  For example, the number of prefixes affected by the failure may vary dramatically from one router to another.
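   The way in which such transient inconsistencies can give rise to a micro-loop can be pictured with a small, purely hypothetical example.  The sketch below (Python, illustrative only; the topology, metrics and function names are invented for this example and are not part of the framework) models an old and a new FIB for a destination D and traces a packet while only the router adjacent to the failure has installed its new FIB:

      # Hypothetical topology: links A-B (metric 1), B-D (1), A-E (2), E-D (1).
      # Before the failure of link B-D, A reaches D via B and B reaches D
      # directly; after re-convergence both reach D via E (B via A).
      fib_old = {"A": "B", "B": "D", "E": "D"}   # next hop towards D
      fib_new = {"A": "E", "B": "A", "E": "D"}   # after SPF on the new topology

      def trace(start, updated, max_hops=6):
          # Follow a packet addressed to D, given the set of routers that
          # have already installed the new FIB.
          path, node = [start], start
          while node != "D" and len(path) <= max_hops:
              node = (fib_new if node in updated else fib_old)[node]
              path.append(node)
          return path

      # B detects the failure and updates first; A still uses its old FIB,
      # so packets entering at B bounce between B and A (a micro-loop).
      print(trace("B", updated={"B"}))          # ['B', 'A', 'B', 'A', ...]
      # Once A has also re-converged, forwarding is consistent again.
      print(trace("B", updated={"A", "B"}))     # ['B', 'A', 'E', 'D']

   A micro-loop control mechanism, as discussed in Section 5.3 and in [I-D.ietf-rtgwg-lf-conv-frmwk], aims to prevent exactly this transient state.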
   In order to reduce packet disruption times to a duration commensurate with the failure detection times, two mechanisms may be required:

   a.  A mechanism for the router(s) adjacent to the failure to rapidly invoke a repair path, which is unaffected by any subsequent re-convergence.

   b.  In topologies that are susceptible to micro-loops, a micro-loop control mechanism may be required [I-D.ietf-rtgwg-lf-conv-frmwk].

   Performing the first task without the second may result in the repair path being starved of traffic and hence being redundant.  Performing the second without the first will result in traffic being discarded by the router(s) adjacent to the failure.

   Repair paths may always be used in isolation where the failure is short-lived.  In this case the repair paths can be kept in place until the failure is repaired, and there is then no need to advertise the failure to other routers.

   Similarly, micro-loop avoidance may be used in isolation to prevent loops arising from pre-planned management action, in which case the link or node being shut down can remain in service for a short time after its removal has been announced into the network, and hence it can function as its own "repair path".

   Note that micro-loops may also occur when a link or node is restored to service and thus a micro-loop avoidance mechanism may be required for both link up and link down cases.

5.  Mechanisms for IP Fast-reroute

   The set of mechanisms required for an effective solution to the problem can be broken down into the sub-problems described in this section.

5.1.  Mechanisms for fast failure detection

   It is critical that the failure detection time is minimized.  A number of well-documented approaches are possible, such as:

   1.  Physical detection; for example, loss of light.

   2.  Routing protocol independent protocol detection; for example, the Bidirectional Forwarding Detection (BFD) protocol [I-D.ietf-bfd-base].

   3.  Routing protocol detection; for example, use of "fast Hellos".

   When configuring packet-based failure detection mechanisms it is important that consideration be given to the likelihood and consequences of false indications of failure.  The incidence of false indication of failure may be minimized by appropriate prioritization of the transmission, reception and processing of the packets used to detect link or node failure.  Note that this is not an issue that is specific to IPFRR.

5.2.  Mechanisms for repair paths

   Once a failure has been detected by one of the above mechanisms, traffic which previously traversed the failure is transmitted over one or more repair paths.  The design of the repair paths should be such that they can be pre-calculated in anticipation of each local failure and made available for invocation with minimal delay.  There are three basic categories of repair paths:

   1.  Equal cost multi-paths (ECMP).  Where such paths exist, and one or more of the alternate paths do not traverse the failure, they may trivially be used as repair paths.

   2.  Loop free alternate paths.  Such a path exists when a direct neighbor of the router adjacent to the failure has a path to the destination which can be guaranteed not to traverse the failure.

   3.  Multi-hop repair paths.  When there is no feasible loop free alternate path it may still be possible to locate a router, which is more than one hop away from the router adjacent to the failure, from which traffic will be forwarded to the destination without traversing the failure.

   ECMP and loop free alternate paths (as described in [RFC5286]) offer the simplest repair paths and would normally be used when they are available.  It is anticipated that around 80% of failures (see Section 5.2.2) can be repaired using these basic methods alone.
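   Purely as an illustration of how a repairing router S might identify these basic repair paths, the following sketch applies the Distance_opt, Downstream Path and LFA conditions defined in Section 1 to an invented topology.  The Python code, the graph representation and the function names are assumptions made for this example only; they are not part of the framework.

      import heapq

      def spf(graph, root):
          # Dijkstra's SPF: returns Distance_opt(root, x) for every router x.
          dist, heap = {root: 0}, [(0, root)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue
              for v, metric in graph[u].items():
                  if d + metric < dist.get(v, float("inf")):
                      dist[v] = d + metric
                      heapq.heappush(heap, (d + metric, v))
          return dist

      def classify_neighbors(graph, s, d):
          # Classify each neighbor N of S with respect to destination D,
          # using the conditions given in Section 1.
          dist = {x: spf(graph, x) for x in [s] + list(graph[s])}
          result = {}
          for n, metric in graph[s].items():
              if metric + dist[n][d] == dist[s][d]:
                  result[n] = "primary next hop (ECMP member if not unique)"
              elif dist[n][d] < dist[n][s] + dist[s][d]:    # LFA condition
                  if dist[n][d] < dist[s][d]:               # downstream condition
                      result[n] = "downstream path"
                  else:
                      result[n] = "loop-free alternate"
              else:
                  result[n] = "not loop-free (path to D goes back through S)"
          return result

      # Invented topology: S-E=1, E-D=1, S-N=1, N-D=4, S-M=1, M-D=2, S-P=5, P-D=1.
      graph = {
          "S": {"E": 1, "N": 1, "M": 1, "P": 5},
          "E": {"S": 1, "D": 1},
          "N": {"S": 1, "D": 4},
          "M": {"S": 1, "D": 2},
          "P": {"S": 5, "D": 1},
          "D": {"E": 1, "N": 4, "M": 2, "P": 1},
      }
      print(classify_neighbors(graph, "S", "D"))
      # E: primary next hop, M: loop-free alternate, P: downstream path,
      # N: not loop-free (its shortest path to D runs back through S).

   Where more than one neighbor satisfies the primary condition, the remaining members of the ECMP set that avoid the failure can be used directly as repairs; neighbors satisfying only the loop-free condition correspond to the loop free alternate paths of [RFC5286].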
   Multi-hop repair paths are more complex, both in the computations required to determine their existence, and in the mechanisms required to invoke them.  They can be further classified as:

   a.  Mechanisms where one or more alternate FIBs are pre-computed in all routers and the repaired packet is instructed to be forwarded using a "repair FIB" by some method of per-packet signaling such as detecting a "U-turn" [I-D.atlas-ip-local-protect-uturn], [FIFR] or by marking the packet [SIMULA].

   b.  Mechanisms functionally equivalent to a loose source route which is invoked using the normal FIB.  These include tunnels [I-D.bryant-ipfrr-tunnels], alternative shortest paths [I-D.tian-frr-alt-shortest-path] and label based mechanisms.

   c.  Mechanisms employing special addresses or labels which are installed in the FIBs of all routers with routes pre-computed to avoid certain components of the network.  For example [I-D.ietf-rtgwg-ipfrr-notvia-addresses].

   In many cases a repair path which reaches two hops away from the router detecting the failure will suffice, and it is anticipated that around 98% of failures (see Section 5.2.2) can be repaired by this method.  However, to provide complete repair coverage some use of longer multi-hop repair paths is generally necessary.

5.2.1.  Scope of repair paths

   A particular repair path may be valid for all destinations which require repair or may only be valid for a subset of destinations.  If a repair path is valid for a node immediately downstream of the failure, then it will be valid for all destinations previously reachable by traversing the failure.  However, in cases where such a repair path is difficult to achieve because it requires a high order multi-hop repair path, it may still be possible to identify lower order repair paths (possibly even loop free alternate paths) which allow the majority of destinations to be repaired.  When IPFRR is unable to provide complete repair, it is desirable that the extent of the repair coverage can be determined and reported via network management.

   There is a trade-off to be achieved between minimizing the number of repair paths to be computed, and minimizing the overheads incurred in using higher order multi-hop repair paths for destinations for which they are not strictly necessary.  However, the computational cost of determining repair paths on an individual destination basis can be very high.

   It will frequently be the case that the majority of destinations may be repaired using only the "basic" repair mechanism, leaving a smaller subset of the destinations to be repaired using one of the more complex multi-hop methods.  Such a hybrid approach may go some way to resolving the conflict between completeness and complexity.

   The use of repair paths may result in excessive traffic passing over a link, resulting in congestion discard.  This reduces the effectiveness of IPFRR.  Mechanisms to influence the distribution of repaired traffic to minimize this effect are therefore desirable.
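   One way to picture the per-destination scope of pre-computed repairs, and the reporting of destinations that cannot be protected, is the following sketch.  The prefixes, next-hop names and data structure are invented for illustration (see also Section 5.2.4 on pre-computation of repairs); they do not describe any particular implementation.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class FibEntry:
          primary: str             # normal next hop for this prefix
          repair: Optional[str]    # pre-computed repair next hop, if any

      # Invented FIB on the repairing router S; E is the protected next hop.
      fib = {
          "192.0.2.0/24":    FibEntry(primary="E", repair="M"),
          "198.51.100.0/24": FibEntry(primary="E", repair="T"),
          "203.0.113.0/24":  FibEntry(primary="E", repair=None),  # un-protectable
          "192.0.2.128/25":  FibEntry(primary="N", repair=None),  # unaffected by E
      }

      def invoke_repairs(fib, failed_next_hop):
          # On detecting the failure, switch every affected prefix to its
          # pre-installed repair and report those left unprotected.
          unprotected = []
          for prefix, entry in fib.items():
              if entry.primary != failed_next_hop:
                  continue                      # outside the scope of this repair
              if entry.repair is not None:
                  entry.primary = entry.repair  # invoke the pre-computed repair
              else:
                  unprotected.append(prefix)    # report via network management
          return unprotected

      print(invoke_repairs(fib, "E"))           # ['203.0.113.0/24']

   In this picture a single repair (here via M) may serve many prefixes, while more expensive per-destination repairs are only computed where the cheaper repair does not reach the destination, reflecting the trade-off described above.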
5.2.2.  Analysis of repair coverage

   The repair coverage obtained is dependent on the repair strategy and highly dependent on the detailed topology and metrics.  Estimates of the repair coverage quoted in this document are for illustrative purposes only and may not always be achievable.

   In some cases the repair strategy will permit the repair of all single link or node failures in the network for all possible destinations.  This can be defined as 100% coverage.  However, where the coverage is less than 100% it is important for the purposes of comparisons between different proposed repair strategies to define what is meant by such a percentage.  There are four possibilities:

   1.  The percentage of links (or nodes) which can be fully protected (i.e., for all destinations).  This is appropriate where the requirement is to protect all traffic, but some percentage of the possible failures may be identified as being un-protectable.

   2.  The percentage of destinations which can be protected for all link (or node) failures.  This is appropriate where the requirement is to protect against all possible failures, but some percentage of destinations may be identified as being un-protectable.

   3.  For all destinations (d) and for all failures (f), the percentage of the total potential failure cases (d*f) which are protected.  This is appropriate where the requirement is an overall "best effort" protection.

   4.  The percentage of packets normally passing through the network that will continue to reach their destination.  This requires a traffic matrix for the network as part of the analysis.
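   The first three of these definitions can be computed mechanically from a per-failure, per-destination protection matrix, as the following illustrative sketch shows.  The protection values are invented for the example; definition 4 additionally requires a traffic matrix and is omitted.

      # protected[f][d] is True if traffic to destination d is still delivered
      # when component f fails.  The values below are purely illustrative.
      protected = {
          "link S-E": {"D1": True,  "D2": True,  "D3": True},
          "link S-N": {"D1": True,  "D2": False, "D3": True},
          "node E":   {"D1": False, "D2": False, "D3": True},
      }
      failures = list(protected)
      destinations = list(next(iter(protected.values())))

      # Definition 1: percentage of failures protected for ALL destinations.
      pct_failures = 100 * sum(
          all(protected[f][d] for d in destinations) for f in failures
      ) / len(failures)

      # Definition 2: percentage of destinations protected for ALL failures.
      pct_destinations = 100 * sum(
          all(protected[f][d] for f in failures) for d in destinations
      ) / len(destinations)

      # Definition 3: percentage of the d*f individual cases that are protected.
      pct_cases = 100 * sum(
          protected[f][d] for f in failures for d in destinations
      ) / (len(failures) * len(destinations))

      print(pct_failures, pct_destinations, pct_cases)
      # Roughly 33% of failures and 33% of destinations are fully protected,
      # while about 67% of the individual (failure, destination) cases are.

   The three figures can differ substantially for the same network, which is why a proposal should state which definition its quoted coverage refers to.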
5.2.3.  Link or node repair

   A repair path may be computed to protect against failure of an adjacent link, or failure of an adjacent node.  In general, link protection is simpler to achieve.  A repair which protects against node failure will also protect against link failure for all destinations except those for which the adjacent node is a single point of failure.

   In some cases it may be necessary to distinguish between a link or node failure in order that the optimal repair strategy is invoked.  Methods for link/node failure determination may be based on techniques such as BFD [I-D.ietf-bfd-base].  This determination may be made prior to invoking any repairs, but this will increase the period of packet loss following a failure unless the determination can be performed as part of the failure detection mechanism itself.  Alternatively, a subsequent determination can be used to optimize an already invoked default strategy.

5.2.4.  Maintenance of Repair paths

   In order to meet the response time goals, it is expected (though not required) that repair paths, and their associated FIB entries, will be pre-computed and installed ready for invocation when a failure is detected.  Following invocation the repair paths remain in effect until they are no longer required.  This will normally be when the routing protocol has re-converged on the new topology taking into account the failure, and traffic will no longer be using the repair paths.

   The repair paths have the property that they are unaffected by any topology changes resulting from the failure which caused their instantiation.  Therefore there is no need to re-compute them during the convergence period.  They may be affected by an unrelated simultaneous topology change, but such events are out of scope of this work (see Section 5.2.6).

   Once the routing protocol has re-converged it is necessary for all repair paths to take account of the new topology.  Various optimizations may permit the efficient identification of repair paths which are unaffected by the change, and hence do not require full re-computation.  Since the new repair paths will not be required until the next failure occurs, the re-computation may be performed as a background task and be subject to a hold-down, but excessive delay in completing this operation will increase the risk of a new failure occurring before the repair paths are in place.

5.2.5.  Local Area Networks

   Protection against partial or complete failure of LANs is more complex than the point-to-point case.  In general there is a trade-off between the simplicity of the repair and the ability to provide complete and optimal repair coverage.

5.2.6.  Multiple failures and Shared Risk Link Groups

   Complete protection against multiple unrelated failures is out of scope of this work.  However, it is important that the occurrence of a second failure while one failure is undergoing repair should not result in a level of service which is significantly worse than that which would have been achieved in the absence of any repair strategy.

   Shared Risk Link Groups (SRLGs) are an example of multiple related failures, and the more complex aspects of their protection are a matter for further study.

   One specific example of an SRLG which is clearly within the scope of this work is a node failure.  This causes the simultaneous failure of multiple links, but their closely defined topological relationship makes the problem more tractable.

5.3.  Mechanisms for micro-loop prevention

   Ensuring the absence of micro-loops is important not only because they can cause packet loss in traffic which is affected by the failure, but because by saturating a link with looping packets they can also cause congestion loss of traffic flowing over that link which would otherwise be unaffected by the failure.

   A number of solutions to the problem of micro-loop formation have been proposed and are summarized in [I-D.ietf-rtgwg-lf-conv-frmwk].  The following factors are significant in their classification:

   1.  Partial or complete protection against micro-loops.

   2.  Delay imposed upon convergence.

   3.  Tolerance of multiple failures (from node failures, and in general).

   4.  Computational complexity (pre-computed or real time).

   5.  Applicability to scheduled events.

   6.  Applicability to link/node reinstatement.

   7.  Topological constraints.

6.  Management Considerations

   While many of the management requirements will be specific to particular IPFRR solutions, the following general aspects need to be addressed:

   1.  Configuration

       A.  Enabling/disabling IPFRR support.
       B.  Enabling/disabling protection on a per link/node basis.

       C.  Expressing preferences regarding the links/nodes used for repair paths.

       D.  Configuration of failure detection mechanisms.

       E.  Configuration of loop avoidance strategies.

   2.  Monitoring and operational support

       A.  Notification of links/nodes/destinations which cannot be protected.

       B.  Notification of pre-computed repair paths, and anticipated traffic patterns.

       C.  Counts of failure detections, protection invocations and packets forwarded over repair paths.

       D.  Testing repairs.

7.  IANA Considerations

   There are no IANA considerations that arise from this framework document.

8.  Security Considerations

   This framework document does not itself introduce any security issues, but attention must be paid to the security implications of any proposed solutions to the problem.

   Where the chosen solution uses tunnels it is necessary to ensure that the tunnel is not used as an attack vector.  One method of addressing this is to use a set of tunnel endpoint addresses that are excluded from use by user traffic.

   There is a compatibility issue between IPFRR and reverse path forwarding (RPF) checking.  Many of the solutions described in this document result in traffic arriving from a direction inconsistent with a standard RPF check.  When a network relies on RPF checking for security purposes, an alternative security mechanism will need to be deployed in order to permit IPFRR to be used.

   Because the repair path will often be of a different length to the pre-failure path, security mechanisms which rely on specific TTL values will be adversely affected.

9.  Acknowledgements

   The authors would like to acknowledge contributions made by Alia Atlas, Clarence Filsfils, Pierre Francois, Joel Halpern, Stefano Previdi and Alex Zinin.

10.  Informative References

   [FIFR]     Nelakuditi, S., Lee, S., Lu, Y., Zhang, Z., and C. Chuah, "Fast local rerouting for handling transient link failures", Tech. Rep. TR-2004-004, 2004.

   [I-D.atlas-ip-local-protect-uturn]
              Atlas, A., "U-turn Alternates for IP/LDP Fast-Reroute", draft-atlas-ip-local-protect-uturn-03 (work in progress), March 2006.

   [I-D.bryant-ipfrr-tunnels]
              Bryant, S., Filsfils, C., Previdi, S., and M. Shand, "IP Fast Reroute using tunnels", draft-bryant-ipfrr-tunnels-03 (work in progress), November 2007.

   [I-D.ietf-bfd-base]
              Katz, D. and D. Ward, "Bidirectional Forwarding Detection", draft-ietf-bfd-base-09 (work in progress), February 2009.

   [I-D.ietf-rtgwg-ipfrr-notvia-addresses]
              Shand, M., Bryant, S., and S. Previdi, "IP Fast Reroute Using Not-via Addresses", draft-ietf-rtgwg-ipfrr-notvia-addresses-04 (work in progress), July 2009.

   [I-D.ietf-rtgwg-lf-conv-frmwk]
              Shand, M. and S. Bryant, "A Framework for Loop-free Convergence", draft-ietf-rtgwg-lf-conv-frmwk-07 (work in progress), October 2009.

   [I-D.tian-frr-alt-shortest-path]
              Tian, A., "Fast Reroute using Alternative Shortest Paths", draft-tian-frr-alt-shortest-path-01 (work in progress), July 2004.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute Extensions to RSVP-TE for LSP Tunnels", RFC 4090, May 2005.

   [RFC5286]  Atlas, A. and A. Zinin, "Basic Specification for IP Fast Reroute: Loop-Free Alternates", RFC 5286, September 2008.
   [SIMULA]   Lysne, O., Kvalbein, A., Cicic, T., Gjessing, S., and A. Hansen, "Fast IP Network Recovery using Multiple Routing Configurations", Infocom, DOI 10.1109/INFOCOM.2006.227, 2006.

Authors' Addresses

   Mike Shand
   Cisco Systems
   250, Longwater Avenue.
   Reading, Berks  RG2 6GB
   UK

   Email: mshand@cisco.com


   Stewart Bryant
   Cisco Systems
   250, Longwater Avenue.
   Reading, Berks  RG2 6GB
   UK

   Email: stbryant@cisco.com