Network Working Group                                           M. Shand
Internet-Draft                                                 S. Bryant
Intended status: Informational                             Cisco Systems
Expires: March 22, 2010                               September 18, 2009

                       IP Fast Reroute Framework
                  draft-ietf-rtgwg-ipfrr-framework-12

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on March 22, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

Abstract

   This document provides a framework for the development of IP fast-
   reroute mechanisms which provide protection against link or router
   failure by invoking locally determined repair paths.  Unlike MPLS
   fast-reroute, the mechanisms are applicable to a network employing
   conventional IP routing and forwarding.

Table of Contents

   1.  Terminology
   2.  Introduction
   3.  Problem Analysis
   4.  Mechanisms for IP Fast-reroute
     4.1.  Mechanisms for fast failure detection
     4.2.  Mechanisms for repair paths
       4.2.1.  Scope of repair paths
       4.2.2.  Analysis of repair coverage
       4.2.3.  Link or node repair
       4.2.4.  Maintenance of Repair paths
       4.2.5.  Multiple failures and Shared Risk Link Groups
     4.3.  Local Area Networks
     4.4.  Mechanisms for micro-loop prevention
   5.  Management Considerations
   6.  Scope and applicability
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. Informative References
   Authors' Addresses

1.  Terminology

   This section defines words and acronyms used in this draft and other
   drafts discussing IP fast-reroute.

   D                  Used to denote the destination router under
                      discussion.

   Distance_opt(A,B)  The distance of the shortest path from A to B.

   Downstream Path    This is a subset of the loop-free alternates
                      where the neighbor N meets the following
                      condition:

                      Distance_opt(N, D) < Distance_opt(S, D)

   E                  Used to denote the router which is the primary
                      next-hop neighbor to get from S to the
                      destination D.  Where there is an ECMP set for
                      the shortest path from S to D, these are referred
                      to as E_1, E_2, etc.

   ECMP               Equal cost multi-path: where, for a particular
                      destination D, multiple primary next-hops are
                      used to forward traffic because there exist
                      multiple shortest paths from S via different
                      output layer-3 interfaces.

   FIB                Forwarding Information Base.  The database used
                      by the packet forwarder to determine what actions
                      to perform on a packet.

   IPFRR              IP fast-reroute.

   Link(A->B)         A link connecting router A to router B.

   LFA                Loop-Free Alternate.  A neighbor N, that is not a
                      primary next-hop neighbor E, whose shortest path
                      to the destination D does not go back through the
                      router S.  The neighbor N must meet the following
                      condition:

                      Distance_opt(N, D) < Distance_opt(N, S) +
                      Distance_opt(S, D)

   Loop-Free Neighbor A neighbor N_i, which is not the particular
                      primary neighbor E_k under discussion, and whose
                      shortest path to D does not traverse S.  For
                      example, if there are two primary neighbors E_1
                      and E_2, E_1 is a loop-free neighbor with regard
                      to E_2, and vice versa.

   Loop-Free Link-Protecting Alternate
                      A path via a Loop-Free Neighbor N_i that reaches
                      destination D without going through the
                      particular link of S that is being protected.
                      In some cases the path to D may go through the
                      primary neighbor E.

   Loop-Free Node-Protecting Alternate
                      A path via a Loop-Free Neighbor N_i that reaches
                      destination D without going through the
                      particular primary neighbor (E) of S which is
                      being protected.

   N_i                The ith neighbor of S.

   Primary Neighbor   A neighbor N_i of S which is one of the next hops
                      for destination D in S's FIB prior to any
                      failure.

   R_i_j              The jth neighbor of N_i.

   Routing Transition The process whereby routers converge on a new
                      topology.  In conventional networks this process
                      frequently causes some disruption to packet
                      delivery.

   RPF                Reverse Path Forwarding, i.e. checking that a
                      packet is received over the interface which would
                      be used to send packets addressed to the source
                      address of the packet.

   S                  Used to denote a router that is the source of a
                      repair that is computed in anticipation of the
                      failure of a neighboring router denoted as E, or
                      of the link between S and E.  It is the viewpoint
                      from which IP fast-reroute is described.

   SPF                Shortest Path First, e.g. Dijkstra's algorithm.

   SPT                Shortest path tree.

   Upstream Forwarding Loop
                      A forwarding loop that involves a set of routers,
                      none of which are directly connected to the link
                      that has caused the topology change that
                      triggered a new SPF in any of the routers.

2.  Introduction

   When a link or node failure occurs in a routed network, there is
   inevitably a period of disruption to the delivery of traffic until
   the network re-converges on the new topology.  Packets for
   destinations which were previously reached by traversing the failed
   component may be dropped or may suffer looping.  Traditionally such
   disruptions have lasted for periods of at least several seconds, and
   most applications have been constructed to tolerate such a quality
   of service.
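   The Distance_opt conditions defined in the Terminology section can
   be evaluated directly from ordinary SPF runs.  The following sketch
   is purely illustrative and not part of the framework; the dict-of-
   dicts graph representation and symmetric link metrics are
   assumptions made for brevity:

```python
import heapq

def dijkstra(graph, src):
    """Distance_opt(src, *) over a graph given as {node: {neighbor: metric}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def is_lfa(graph, s, n, d):
    """LFA condition: Distance_opt(N,D) < Distance_opt(N,S) + Distance_opt(S,D)."""
    dist_n, dist_s = dijkstra(graph, n), dijkstra(graph, s)
    return dist_n[d] < dist_n[s] + dist_s[d]

def is_downstream(graph, s, n, d):
    """Downstream-path condition: Distance_opt(N,D) < Distance_opt(S,D)."""
    return dijkstra(graph, n)[d] < dijkstra(graph, s)[d]
```

   For example, in a unit-metric square topology S-N-D-E-S, neighbor N
   satisfies both conditions for destination D, so it can serve as a
   repair when the link S->E fails.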
   Recent advances in routers have reduced this interval to under a
   second for carefully configured networks using link state IGPs.
   However, new Internet services are emerging which may be sensitive
   to periods of traffic loss which are orders of magnitude shorter
   than this.

   Addressing these issues is difficult because the distributed nature
   of the network imposes an intrinsic limit on the minimum convergence
   time which can be achieved.

   However, there is an alternative approach, which is to compute
   backup routes that allow the failure to be repaired locally by the
   router(s) detecting the failure without the immediate need to
   inform other routers of the failure.  In this case, the disruption
   time can be limited to the small time taken to detect the adjacent
   failure and invoke the backup routes.  This is analogous to the
   technique employed by MPLS fast-reroute [RFC4090], but the
   mechanisms employed for the backup routes in pure IP networks are
   necessarily very different.

   This document provides a framework for the development of this
   approach.

3.  Problem Analysis

   The duration of the packet delivery disruption caused by a
   conventional routing transition is determined by a number of
   factors:

   1.  The time taken to detect the failure.  This may be of the order
       of a few milliseconds when it can be detected at the physical
       layer, up to several tens of seconds when a routing protocol
       Hello is employed.  During this period packets will be
       unavoidably lost.

   2.  The time taken for the local router to react to the failure.
       This will typically involve generating and flooding new routing
       updates, perhaps after some hold-down delay, and re-computing
       the router's FIB.

   3.  The time taken to pass the information about the failure to
       other routers in the network.
       In the absence of routing protocol packet loss, this is
       typically between 10 milliseconds and 100 milliseconds per hop.

   4.  The time taken to re-compute the forwarding tables.  This is
       typically a few milliseconds for a link state protocol using
       Dijkstra's algorithm.

   5.  The time taken to load the revised forwarding tables into the
       forwarding hardware.  This time is very implementation dependent
       and also depends on the number of prefixes affected by the
       failure, but may be several hundred milliseconds.

   The disruption will last until the routers adjacent to the failure
   have completed steps 1 and 2, and then all the routers in the
   network whose paths are affected by the failure have completed the
   remaining steps.

   The initial packet loss is caused by the router(s) adjacent to the
   failure continuing to attempt to transmit packets across the failure
   until it is detected.  This loss is unavoidable, but the detection
   time can be reduced to a few tens of milliseconds as described in
   Section 4.1.

   In some topologies subsequent packet loss may be caused by the
   "micro-loops" which may form as a result of temporary
   inconsistencies between routers' forwarding tables
   [I-D.ietf-rtgwg-lf-conv-frmwk].  These inconsistencies are caused by
   steps 3, 4 and 5 above, and in many routers it is step 5 which is
   both the largest factor and which has the greatest variance between
   routers.  The large variance arises from implementation differences
   and from the differing impact that a failure has on each individual
   router.  For example, the number of prefixes affected by the
   failure may vary dramatically from one router to another.

   In order to achieve packet disruption times which are commensurate
   with the failure detection times, two mechanisms may be required:

   1.  A mechanism for the router(s) adjacent to the failure to rapidly
       invoke a repair path, which is unaffected by any subsequent re-
       convergence.

   2.  In topologies that are susceptible to micro-loops, a mechanism
       to prevent the effects of any micro-loops during subsequent re-
       convergence.

   Performing the first task without the second may result in the
   repair path being starved of traffic and hence being redundant.
   Performing the second without the first will result in traffic
   being discarded by the router(s) adjacent to the failure.

   Repair paths may always be used in isolation where the failure is
   short-lived.  In this case, the repair paths can be kept in place
   until the failure is repaired, in which case there is no need to
   advertise the failure to other routers.

   Similarly, micro-loop avoidance may be used in isolation to prevent
   loops arising from pre-planned management action.  In this case, the
   link or node being shut down can remain in service for a short time
   after its removal has been announced into the network, and hence it
   can function as its own "repair path".

   Note that micro-loops may also occur when a link or node is restored
   to service, and thus a micro-loop avoidance mechanism may be
   required for both link up and link down cases.

4.  Mechanisms for IP Fast-reroute

   The set of mechanisms required for an effective solution to the
   problem can be broken down into the sub-problems described in this
   section.

4.1.  Mechanisms for fast failure detection

   It is critical that the failure detection time is minimized.  A
   number of well documented approaches are possible, such as:

   1.  Physical detection; for example, loss of light.

   2.  Routing protocol independent protocol detection; for example,
       the Bidirectional Forwarding Detection protocol
       [I-D.ietf-bfd-base].

   3.  Routing protocol detection; for example, use of "fast Hellos".

4.2.  Mechanisms for repair paths

   Once a failure has been detected by one of the above mechanisms,
   traffic which previously traversed the failure is transmitted over
   one or more repair paths.  The design of the repair paths should be
   such that they can be pre-calculated in anticipation of each local
   failure and made available for invocation with minimal delay.
   There are three basic categories of repair paths:

   1.  Equal cost multi-paths (ECMP).  Where such paths exist, and one
       or more of the alternate paths do not traverse the failure,
       they may trivially be used as repair paths.

   2.  Loop-free alternate paths.  Such a path exists when a direct
       neighbor of the router adjacent to the failure has a path to
       the destination which can be guaranteed not to traverse the
       failure.

   3.  Multi-hop repair paths.  When there is no feasible loop-free
       alternate path it may still be possible to locate a router,
       which is more than one hop away from the router adjacent to the
       failure, from which traffic will be forwarded to the
       destination without traversing the failure.

   ECMP and loop-free alternate paths (as described in [RFC5286])
   offer the simplest repair paths and would normally be used when
   they are available.  It is anticipated that around 80% of failures
   (see Section 4.2.2) can be repaired using these basic methods
   alone.

   Multi-hop repair paths are more complex, both in the computations
   required to determine their existence, and in the mechanisms
   required to invoke them.  They can be further classified as:

   1.  Mechanisms where one or more alternate FIBs are pre-computed in
       all routers, and the repaired packet is instructed to be
       forwarded using a "repair FIB" by some method of per-packet
       signaling, such as detecting a "U-turn"
       [I-D.atlas-ip-local-protect-uturn], [FIFR], or by marking the
       packet [SIMULA].

   2.  Mechanisms functionally equivalent to a loose source route
       which is invoked using the normal FIB.  These include tunnels
       [I-D.bryant-ipfrr-tunnels], alternative shortest paths
       [I-D.tian-frr-alt-shortest-path] and label based mechanisms.

   3.  Mechanisms employing special addresses or labels which are
       installed in the FIBs of all routers with routes pre-computed
       to avoid certain components of the network.  For example,
       [I-D.ietf-rtgwg-ipfrr-notvia-addresses].

   In many cases a repair path which reaches two hops away from the
   router detecting the failure will suffice, and it is anticipated
   that around 98% of failures (see Section 4.2.2) can be repaired by
   this method.  However, to provide complete repair coverage some use
   of longer multi-hop repair paths is generally necessary.

4.2.1.  Scope of repair paths

   A particular repair path may be valid for all destinations which
   require repair or may only be valid for a subset of destinations.
   If a repair path is valid for a node immediately downstream of the
   failure, then it will be valid for all destinations previously
   reachable by traversing the failure.  However, in cases where such
   a repair path is difficult to achieve because it requires a high
   order multi-hop repair path, it may still be possible to identify
   lower order repair paths (possibly even loop-free alternate paths)
   which allow the majority of destinations to be repaired.  When
   IPFRR is unable to provide complete repair, it is desirable that
   the extent of the repair coverage can be determined and reported
   via network management.

   There is a trade-off to be achieved between minimizing the number
   of repair paths to be computed, and minimizing the overheads
   incurred in using higher order multi-hop repair paths for
   destinations for which they are not strictly necessary.
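   The per-destination coverage reporting discussed above can be made
   concrete by counting, for each destination, whether a "basic" (ECMP
   or LFA) repair exists at S.  The sketch below is illustrative only;
   it assumes a connected topology with symmetric metrics, and treats
   an ECMP set as providing its own repair:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over {node: {neighbor: metric}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def basic_coverage(graph, s):
    """Fraction of destinations for which S has a basic repair:
    either an ECMP set (more than one equal-cost primary next-hop) or
    a non-primary neighbor N satisfying the LFA inequality."""
    dist_s = dijkstra(graph, s)
    dist_n = {n: dijkstra(graph, n) for n in graph[s]}
    dests = [d for d in graph if d != s]
    covered = 0
    for d in dests:
        primaries = {n for n in graph[s]
                     if graph[s][n] + dist_n[n][d] == dist_s[d]}
        has_lfa = any(dist_n[n][d] < dist_n[n][s] + dist_s[d]
                      for n in graph[s] if n not in primaries)
        if len(primaries) > 1 or has_lfa:
            covered += 1
    return covered / len(dests)
```

   On a unit-metric square S-N-D-E-S, destination D is covered by
   ECMP, while destinations N and E have no non-primary neighbor
   meeting the strict LFA inequality, giving coverage of 1/3 and
   illustrating why coverage below 100% needs a precise definition.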
   However, the computational cost of determining repair paths on an
   individual destination basis can be very high.

   It will frequently be the case that the majority of destinations
   may be repaired using only the "basic" repair mechanism, leaving a
   smaller subset of the destinations to be repaired using one of the
   more complex multi-hop methods.  Such a hybrid approach may go some
   way to resolving the conflict between completeness and complexity.

   The use of repair paths may result in excessive traffic passing
   over a link, resulting in congestion discard.  This reduces the
   effectiveness of IPFRR.  Mechanisms to influence the distribution
   of repaired traffic to minimize this effect are therefore
   desirable.

4.2.2.  Analysis of repair coverage

   The repair coverage obtained is dependent on the repair strategy
   and highly dependent on the detailed topology and metrics.
   Estimates of the repair coverage quoted in this document are for
   illustrative purposes only and may not always be achievable.

   In some cases the repair strategy will permit the repair of all
   single link or node failures in the network for all possible
   destinations.  This can be defined as 100% coverage.  However,
   where the coverage is less than 100% it is important for the
   purposes of comparison between different proposed repair strategies
   to define what is meant by such a percentage.  There are four
   possibilities:

   1.  The percentage of links (or nodes) which can be fully protected
       for all destinations.  This is appropriate where the
       requirement is to protect all traffic, but some percentage of
       the possible failures may be identified as being
       un-protectable.

   2.  The percentage of destinations which can be fully protected for
       all link (or node) failures.
       This is appropriate where the requirement is to protect against
       all possible failures, but some percentage of destinations may
       be identified as being un-protectable.

   3.  For all destinations (d) and for all failures (f), the
       percentage of the total potential failure cases (d*f) which are
       protected.  This is appropriate where the requirement is an
       overall "best effort" protection.

   4.  The percentage of packets normally passing through the network
       that will continue to reach their destination.  This requires a
       traffic matrix for the network as part of the analysis.

4.2.3.  Link or node repair

   A repair path may be computed to protect against failure of an
   adjacent link, or failure of an adjacent node.  In general, link
   protection is simpler to achieve.  A repair which protects against
   node failure will also protect against link failure for all
   destinations except those for which the adjacent node is a single
   point of failure.

   In some cases it may be necessary to distinguish between a link or
   node failure in order that the optimal repair strategy is invoked.
   Methods for link/node failure determination may be based on
   techniques such as BFD [I-D.ietf-bfd-base].  This determination may
   be made prior to invoking any repairs, but this will increase the
   period of packet loss following a failure unless the determination
   can be performed as part of the failure detection mechanism itself.
   Alternatively, a subsequent determination can be used to optimize
   an already invoked default strategy.

4.2.4.  Maintenance of Repair paths

   In order to meet the response time goals, it is expected (though
   not required) that repair paths, and their associated FIB entries,
   will be pre-computed and installed ready for invocation when a
   failure is detected.  Following invocation the repair paths remain
   in effect until they are no longer required.
   This will normally be when the routing protocol has re-converged on
   the new topology taking into account the failure, and traffic will
   no longer be using the repair paths.

   The repair paths have the property that they are unaffected by any
   topology changes resulting from the failure which caused their
   instantiation.  Therefore there is no need to re-compute them
   during the convergence period.  They may be affected by an
   unrelated simultaneous topology change, but such events are out of
   scope of this work (see Section 4.2.5).

   Once the routing protocol has re-converged, it is necessary for all
   repair paths to take account of the new topology.  Various
   optimizations may permit the efficient identification of repair
   paths which are unaffected by the change, and hence do not require
   full re-computation.  Since the new repair paths will not be
   required until the next failure occurs, the re-computation may be
   performed as a background task and be subject to a hold-down, but
   excessive delay in completing this operation will increase the risk
   of a new failure occurring before the repair paths are in place.

4.2.5.  Multiple failures and Shared Risk Link Groups

   Complete protection against multiple unrelated failures is out of
   scope of this work.  However, it is important that the occurrence
   of a second failure while one failure is undergoing repair should
   not result in a level of service which is significantly worse than
   that which would have been achieved in the absence of any repair
   strategy.

   Shared Risk Link Groups (SRLGs) are an example of multiple related
   failures, and the more complex aspects of their protection are a
   matter for further study.

   One specific example of an SRLG which is clearly within the scope
   of this work is a node failure.
   This causes the simultaneous failure of multiple links, but their
   closely defined topological relationship makes the problem more
   tractable.

4.3.  Local Area Networks

   Protection against partial or complete failure of LANs is more
   complex than the point-to-point case.  In general there is a trade-
   off between the simplicity of the repair and the ability to provide
   complete and optimal repair coverage.

4.4.  Mechanisms for micro-loop prevention

   Ensuring the absence of micro-loops is important not only because
   they can cause packet loss in traffic which is affected by the
   failure, but because, by saturating a link with looping packets,
   they can also cause congestion loss of traffic flowing over that
   link which would otherwise be unaffected by the failure.

   A number of solutions to the problem of micro-loop formation have
   been proposed and are summarized in [I-D.ietf-rtgwg-lf-conv-frmwk].
   The following factors are significant in their classification:

   1.  Partial or complete protection against micro-loops.

   2.  Delay imposed upon convergence.

   3.  Tolerance of multiple failures (from node failures, and in
       general).

   4.  Computational complexity (pre-computed or real time).

   5.  Applicability to scheduled events.

   6.  Applicability to link/node reinstatement.

   7.  Topological constraints.

5.  Management Considerations

   While many of the management requirements will be specific to
   particular IPFRR solutions, the following general aspects need to
   be addressed:

   1.  Configuration

       A.  Enabling/disabling IPFRR support.

       B.  Enabling/disabling protection on a per link/node basis.

       C.  Expressing preferences regarding the links/nodes used for
           repair paths.

       D.  Configuration of failure detection mechanisms.

       E.  Configuration of loop avoidance strategies.

   2.  Monitoring and operational support

       A.  Notification of links/nodes/destinations which cannot be
           protected.

       B.  Notification of pre-computed repair paths, and anticipated
           traffic patterns.

       C.  Counts of failure detections, protection invocations, and
           packets forwarded over repair paths.

       D.  Testing repairs.

6.  Scope and applicability

   The initial scope of this work is in the context of link state
   IGPs.  Link state protocols provide ubiquitous topology
   information, which facilitates the computation of repair paths.

   Provision of similar facilities in non-link state IGPs and BGP is a
   matter for further study, but the correct operation of the repair
   mechanisms for traffic with a destination outside the IGP domain is
   an important consideration for solutions based on this framework.

7.  IANA Considerations

   There are no IANA considerations that arise from this framework
   document.

8.  Security Considerations

   This framework document does not itself introduce any security
   issues, but attention must be paid to the security implications of
   any proposed solutions to the problem.

   Where the chosen solution uses tunnels, it is necessary to ensure
   that a tunnel is not used as an attack vector.  One method of
   addressing this is to use a set of tunnel endpoint addresses that
   are excluded from use by user traffic.

   There is a compatibility issue between IPFRR and reverse path
   forwarding (RPF) checking.  Many of the solutions described in this
   document result in traffic arriving from a direction inconsistent
   with a standard RPF check.  When a network relies on RPF checking
   for security purposes, an alternative security mechanism will need
   to be deployed in order to permit IPFRR to be used.

   Because the repair path will often be of a different length to the
   pre-failure path, security mechanisms which rely on specific TTL
   values will be adversely affected.

9.  Acknowledgements

   The authors would like to acknowledge contributions made by Alia
   Atlas, Clarence Filsfils, Pierre Francois, Joel Halpern, Stefano
   Previdi, and Alex Zinin.

10.  Informative References

   [FIFR]     Nelakuditi, S., Lee, S., Lu, Y., Zhang, Z., and C.
              Chuah, "Fast local rerouting for handling transient link
              failures", Tech. Rep. TR-2004-004, 2004.

   [I-D.atlas-ip-local-protect-uturn]
              Atlas, A., "U-turn Alternates for IP/LDP Fast-Reroute",
              draft-atlas-ip-local-protect-uturn-03 (work in
              progress), March 2006.

   [I-D.bryant-ipfrr-tunnels]
              Bryant, S., Filsfils, C., Previdi, S., and M. Shand, "IP
              Fast Reroute using tunnels",
              draft-bryant-ipfrr-tunnels-03 (work in progress),
              November 2007.

   [I-D.ietf-bfd-base]
              Katz, D. and D. Ward, "Bidirectional Forwarding
              Detection", draft-ietf-bfd-base-09 (work in progress),
              February 2009.

   [I-D.ietf-rtgwg-ipfrr-notvia-addresses]
              Shand, M., Bryant, S., and S. Previdi, "IP Fast Reroute
              Using Not-via Addresses",
              draft-ietf-rtgwg-ipfrr-notvia-addresses-04 (work in
              progress), July 2009.

   [I-D.ietf-rtgwg-lf-conv-frmwk]
              Shand, M. and S. Bryant, "A Framework for Loop-free
              Convergence", draft-ietf-rtgwg-lf-conv-frmwk-05 (work in
              progress), June 2009.

   [I-D.tian-frr-alt-shortest-path]
              Tian, A., "Fast Reroute using Alternative Shortest
              Paths", draft-tian-frr-alt-shortest-path-01 (work in
              progress), July 2004.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.

   [RFC5286]  Atlas, A. and A. Zinin, "Basic Specification for IP Fast
              Reroute: Loop-Free Alternates", RFC 5286,
              September 2008.

   [SIMULA]   Lysne, O., Kvalbein, A., Cicic, T., Gjessing, S., and A.
              Hansen, "Fast IP Network Recovery using Multiple Routing
              Configurations", Infocom, DOI 10.1109/INFOCOM.2006.227,
              2006.
Authors' Addresses

   Mike Shand
   Cisco Systems
   250, Longwater Avenue.
   Reading, Berks  RG2 6GB
   UK

   Email: mshand@cisco.com

   Stewart Bryant
   Cisco Systems
   250, Longwater Avenue.
   Reading, Berks  RG2 6GB
   UK

   Email: stbryant@cisco.com