Network Working Group                                           M. Shand
Internet-Draft                                                 S. Bryant
Intended status: Informational                             Cisco Systems
Expires: December 31, 2009                                 June 29, 2009

                       IP Fast Reroute Framework
                  draft-ietf-rtgwg-ipfrr-framework-11

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on December 31, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.

Abstract

   This document provides a framework for the development of IP fast-
   reroute mechanisms which provide protection against link or router
   failure by invoking locally determined repair paths.  Unlike MPLS
   fast-reroute, the mechanisms are applicable to a network employing
   conventional IP routing and forwarding.

Table of Contents

   1.  Terminology
   2.  Introduction
   3.  Problem Analysis
   4.  Mechanisms for IP Fast-reroute
     4.1.  Mechanisms for fast failure detection
     4.2.  Mechanisms for repair paths
       4.2.1.  Scope of repair paths
       4.2.2.  Analysis of repair coverage
       4.2.3.  Link or node repair
       4.2.4.  Maintenance of Repair paths
       4.2.5.  Multiple failures and Shared Risk Link Groups
     4.3.  Local Area Networks
     4.4.  Mechanisms for micro-loop prevention
   5.  Management Considerations
   6.  Scope and applicability
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. Informative References
   Authors' Addresses

1.  Terminology

   This section defines words and acronyms used in this draft and other
   drafts discussing IP fast-reroute.

   D                 Used to denote the destination router under
                     discussion.

   Distance_opt(A,B) The distance of the shortest path from A to B.

   Downstream Path   This is a subset of the loop-free alternates where
                     the neighbor N meets the following condition:

                     Distance_opt(N, D) < Distance_opt(S, D)

   E                 Used to denote the router which is the primary
                     next-hop neighbor to get from S to the destination
                     D.  Where there is an ECMP set for the shortest
                     path from S to D, these are referred to as E_1,
                     E_2, etc.

   ECMP              Equal cost multi-path: where, for a particular
                     destination D, multiple primary next-hops are used
                     to forward traffic because there exist multiple
                     shortest paths from S via different output layer-3
                     interfaces.

   FIB               Forwarding Information Base.  The database used by
                     the packet forwarder to determine what actions to
                     perform on a packet.

   IPFRR             IP fast-reroute.

   Link(A->B)        A link connecting router A to router B.

   LFA               Loop Free Alternate.  This is a neighbor N that is
                     not a primary next-hop neighbor E, whose shortest
                     path to the destination D does not go back through
                     the router S.  The neighbor N must meet the
                     following condition:

                     Distance_opt(N, D) < Distance_opt(N, S) +
                     Distance_opt(S, D)

   Loop Free Neighbor
                     A neighbor N_i, which is not the particular primary
                     neighbor E_k under discussion, and whose shortest
                     path to D does not traverse S.  For example, if
                     there are two primary neighbors E_1 and E_2, E_1 is
                     a loop-free neighbor with regard to E_2, and vice
                     versa.

   Loop Free Link Protecting Alternate
                     This is a path via a Loop-Free Neighbor N_i which
                     does not go through the particular link of S which
                     is being protected to reach the destination D.
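The downstream-path and LFA conditions above differ only in whether the neighbor's own distance back to S is taken into account. As a minimal illustration (the function names and numeric distances below are invented for this sketch and are not part of the draft), the two conditions can be written as predicates over pre-computed Distance_opt values:

```python
# The downstream-path and LFA conditions from the definitions above,
# written as predicates over pre-computed Distance_opt values.
# Illustrative sketch only; all names and values are hypothetical.

def is_downstream_path(dist_n_d, dist_s_d):
    """Downstream path: Distance_opt(N, D) < Distance_opt(S, D)."""
    return dist_n_d < dist_s_d

def is_loop_free_alternate(dist_n_d, dist_n_s, dist_s_d):
    """LFA: Distance_opt(N, D) < Distance_opt(N, S) + Distance_opt(S, D)."""
    return dist_n_d < dist_n_s + dist_s_d

# A neighbor can satisfy the LFA condition without being a downstream
# path.  Here Distance_opt(N, D) = 3, Distance_opt(N, S) = 1, and
# Distance_opt(S, D) = 3:
print(is_loop_free_alternate(3, 1, 3))  # True:  3 < 1 + 3
print(is_downstream_path(3, 3))         # False: 3 is not < 3
```

Note that every downstream path also satisfies the LFA condition, since Distance_opt(N, S) is never negative, but the converse does not hold, as the example values show.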
   Loop Free Node-protecting Alternate
                     This is a path via a Loop-Free Neighbor N_i which
                     does not go through the particular primary neighbor
                     of S which is being protected to reach the
                     destination D.

   N_i               The ith neighbor of S.

   Primary Neighbor  A neighbor N_i of S which is one of the next hops
                     for destination D in S's FIB prior to any failure.

   R_i_j             The jth neighbor of N_i.

   Routing Transition
                     The process whereby routers converge on a new
                     topology.  In conventional networks this process
                     frequently causes some disruption to packet
                     delivery.

   RPF               Reverse Path Forwarding, i.e., checking that a
                     packet is received over the interface which would
                     be used to send packets addressed to the source
                     address of the packet.

   S                 Used to denote a router that is the source of a
                     repair that is computed in anticipation of the
                     failure of a neighboring router denoted as E, or of
                     the link between S and E.  It is the viewpoint from
                     which IP fast-reroute is described.

   SPF               Shortest Path First, e.g., Dijkstra's algorithm.

   SPT               Shortest path tree.

   Upstream Forwarding Loop
                     This is a forwarding loop which involves a set of
                     routers, none of which are directly connected to
                     the link which has caused the topology change that
                     triggered a new SPF in any of the routers.

2.  Introduction

   When a link or node failure occurs in a routed network, there is
   inevitably a period of disruption to the delivery of traffic until
   the network re-converges on the new topology.  Packets for
   destinations which were previously reached by traversing the failed
   component may be dropped or may suffer looping.  Traditionally such
   disruptions have lasted for periods of at least several seconds, and
   most applications have been constructed to tolerate such a quality
   of service.
   Recent advances in routers have reduced this interval to under a
   second for carefully configured networks using link state IGPs.
   However, new Internet services are emerging which may be sensitive to
   periods of traffic loss which are orders of magnitude shorter than
   this.

   Addressing these issues is difficult because the distributed nature
   of the network imposes an intrinsic limit on the minimum convergence
   time which can be achieved.

   However, there is an alternative approach, which is to compute backup
   routes that allow the failure to be repaired locally by the router(s)
   detecting the failure without the immediate need to inform other
   routers of the failure.  In this case, the disruption time can be
   limited to the small time taken to detect the adjacent failure and
   invoke the backup routes.  This is analogous to the technique
   employed by MPLS fast-reroute [RFC4090], but the mechanisms employed
   for the backup routes in pure IP networks are necessarily very
   different.

   This document provides a framework for the development of this
   approach.

3.  Problem Analysis

   The duration of the packet delivery disruption caused by a
   conventional routing transition is determined by a number of
   factors:

   1.  The time taken to detect the failure.  This may be of the order
       of a few milliseconds when it can be detected at the physical
       layer, up to several tens of seconds when a routing protocol
       hello is employed.  During this period packets will be
       unavoidably lost.

   2.  The time taken for the local router to react to the failure.
       This will typically involve generating and flooding new routing
       updates, perhaps after some hold-down delay, and re-computing
       the router's FIB.

   3.  The time taken to pass the information about the failure to
       other routers in the network.
       In the absence of routing protocol packet loss, this is
       typically between 10 milliseconds and 100 milliseconds per hop.

   4.  The time taken to re-compute the forwarding tables.  This is
       typically a few milliseconds for a link state protocol using
       Dijkstra's algorithm.

   5.  The time taken to load the revised forwarding tables into the
       forwarding hardware.  This time is very implementation dependent
       and also depends on the number of prefixes affected by the
       failure, but may be several hundred milliseconds.

   The disruption will last until the routers adjacent to the failure
   have completed steps 1 and 2, and then all the routers in the
   network whose paths are affected by the failure have completed the
   remaining steps.

   The initial packet loss is caused by the router(s) adjacent to the
   failure continuing to attempt to transmit packets across the failure
   until it is detected.  This loss is unavoidable, but the detection
   time can be reduced to a few tens of milliseconds as described in
   Section 4.1.

   In some topologies, subsequent packet loss may be caused by the
   "micro-loops" which may form as a result of temporary
   inconsistencies between routers' forwarding tables
   [I-D.ietf-rtgwg-lf-conv-frmwk].  Micro-loops occur as a result of
   the different times at which routers update their forwarding tables
   to reflect the failure.  These variable delays are caused by steps
   3, 4, and 5 above, and in many routers it is step 5 that is both the
   largest factor and the one with the greatest variance between
   routers.  The large variance arises from implementation differences
   and from the differing impact that a failure has on each individual
   router.  For example, the number of prefixes affected by the failure
   may vary dramatically from one router to another.
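As a toy illustration of such a micro-loop (the topology, router names, and code are hypothetical, not part of this framework), suppose router A has already updated its FIB to route around the failure but its neighbor B has not; traffic for destination D then bounces between the two until its TTL expires:

```python
# Toy illustration of a micro-loop: after a failure, router A has
# updated its FIB but router B has not, so each forwards packets for
# destination D to the other until the TTL runs out.
# All names and the topology are hypothetical.

def forward(packet, fibs, start):
    """Follow next-hops until delivery, drop, or TTL expiry."""
    node, ttl = start, packet["ttl"]
    while ttl > 0:
        nxt = fibs[node].get(packet["dst"])
        if nxt is None:
            return ("dropped", node)
        if nxt == packet["dst"]:
            return ("delivered", packet["dst"])
        node, ttl = nxt, ttl - 1
    return ("ttl_expired", node)

# During the transition: A already routes D-bound traffic via B (its
# post-failure path), while B still points back at A (its pre-failure
# path went through A and the failed link).
fibs_during_transition = {"A": {"D": "B"}, "B": {"D": "A"}}
print(forward({"dst": "D", "ttl": 16}, fibs_during_transition, "A"))
# -> ('ttl_expired', 'A')

# Once both routers have converged, B forwards directly to D.
fibs_after_convergence = {"A": {"D": "B"}, "B": {"D": "D"}}
print(forward({"dst": "D", "ttl": 16}, fibs_after_convergence, "A"))
# -> ('delivered', 'D')
```

Looping packets like these not only fail to reach D; while circulating they also consume capacity on the A-B link, which is how micro-loops cause congestion loss for otherwise unaffected traffic.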
   In order to achieve packet disruption times which are commensurate
   with the failure detection times, two factors must be considered:

   1.  The provision of a mechanism for the router(s) adjacent to the
       failure to rapidly invoke a repair path which is unaffected by
       any subsequent re-convergence.

   2.  In topologies that are susceptible to micro-loops, the provision
       of a mechanism to prevent the effects of any micro-loops during
       subsequent re-convergence.

   Performing the first task without the second may result in the
   repair path being starved of traffic and hence being redundant.
   Performing the second without the first will result in traffic being
   discarded by the router(s) adjacent to the failure.

   Repair paths may always be used in isolation where the failure is
   short-lived.  In this case, the repair paths can be kept in place
   until the failure is repaired, and there is no need to advertise the
   failure to other routers.

   Similarly, micro-loop avoidance may be used in isolation to prevent
   loops arising from pre-planned management action, in which case the
   link or node being shut down can remain in service for a short time
   after its removal has been announced into the network, and hence it
   can function as its own "repair path".

   Note that micro-loops may also occur when a link or node is restored
   to service, and thus a micro-loop avoidance mechanism may be
   required for both link up and link down cases.

4.  Mechanisms for IP Fast-reroute

   The set of mechanisms required for an effective solution to the
   problem can be broken down into the sub-problems described in this
   section.

4.1.  Mechanisms for fast failure detection

   It is critical that the failure detection time is minimized.  A
   number of well documented approaches are possible, such as:

   1.  Physical detection; for example, loss of light.

   2.
       Routing protocol independent protocol detection; for example,
       the Bidirectional Forwarding Detection protocol
       [I-D.ietf-bfd-base].

   3.  Routing protocol detection; for example, use of "fast hellos".

4.2.  Mechanisms for repair paths

   Once a failure has been detected by one of the above mechanisms,
   traffic which previously traversed the failure is transmitted over
   one or more repair paths.  The design of the repair paths should be
   such that they can be pre-calculated in anticipation of each local
   failure and made available for invocation with minimal delay.  There
   are three basic categories of repair paths:

   1.  Equal cost multi-paths (ECMP).  Where such paths exist, and one
       or more of the alternate paths do not traverse the failure, they
       may trivially be used as repair paths.

   2.  Loop free alternate paths.  Such a path exists when a direct
       neighbor of the router adjacent to the failure has a path to the
       destination which can be guaranteed not to traverse the failure.

   3.  Multi-hop repair paths.  When there is no feasible loop free
       alternate path, it may still be possible to locate a router,
       more than one hop away from the router adjacent to the failure,
       from which traffic will be forwarded to the destination without
       traversing the failure.

   ECMP and loop free alternate paths (as described in [RFC5286]) offer
   the simplest repair paths and would normally be used when they are
   available.  It is anticipated that around 80% of failures (see
   Section 4.2.2) can be repaired using these basic methods alone.

   Multi-hop repair paths are more complex, both in the computations
   required to determine their existence, and in the mechanisms
   required to invoke them.  They can be further classified as:

   1.
       Mechanisms where one or more alternate FIBs are pre-computed in
       all routers, and the repaired packet is instructed to be
       forwarded using a "repair FIB" by some method of per-packet
       signaling, such as detecting a "U-turn"
       [I-D.atlas-ip-local-protect-uturn], [FIFR], or by marking the
       packet [SIMULA].

   2.  Mechanisms functionally equivalent to a loose source route which
       is invoked using the normal FIB.  These include tunnels
       [I-D.bryant-ipfrr-tunnels], alternative shortest paths
       [I-D.tian-frr-alt-shortest-path], and label based mechanisms.

   3.  Mechanisms employing special addresses or labels which are
       installed in the FIBs of all routers, with routes pre-computed
       to avoid certain components of the network; for example,
       [I-D.ietf-rtgwg-ipfrr-notvia-addresses].

   In many cases a repair path which reaches two hops away from the
   router detecting the failure will suffice, and it is anticipated
   that around 98% of failures (see Section 4.2.2) can be repaired by
   this method.  However, to provide complete repair coverage, some use
   of longer multi-hop repair paths is generally necessary.

4.2.1.  Scope of repair paths

   A particular repair path may be valid for all destinations which
   require repair or may only be valid for a subset of destinations.
   If a repair path is valid for a node immediately downstream of the
   failure, then it will be valid for all destinations previously
   reachable by traversing the failure.  However, in cases where such a
   repair path is difficult to achieve because it requires a high order
   multi-hop repair path, it may still be possible to identify lower
   order repair paths (possibly even loop free alternate paths) which
   allow the majority of destinations to be repaired.  When IPFRR is
   unable to provide complete repair, it is desirable that the extent
   of the repair coverage can be determined and reported via network
   management.
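The "basic" repairs (ECMP and loop free alternates) can be identified directly from shortest-path distances. The following sketch, on a toy topology whose router names and metrics are invented for illustration only, pre-computes Distance_opt with Dijkstra's algorithm and then tests the LFA and downstream-path conditions from the Terminology section:

```python
# Sketch of per-destination LFA checking on a toy four-router topology,
# using the condition from the Terminology section:
#   Distance_opt(N, D) < Distance_opt(N, S) + Distance_opt(S, D)
# Topology, names, and metrics are invented for illustration.
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over an undirected weighted graph."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# S's primary next-hop toward D is E (cost 2 via E, versus 3 via N).
graph = {
    "S": {"E": 1, "N": 1},
    "E": {"S": 1, "D": 1},
    "N": {"S": 1, "D": 2},
    "D": {"E": 1, "N": 2},
}
dist = {router: dijkstra(graph, router) for router in graph}

def is_lfa(n, s, d):
    """LFA condition: N's shortest path to D does not go back through S."""
    return dist[n][d] < dist[n][s] + dist[s][d]

def is_downstream(n, s, d):
    """Downstream-path condition: N is strictly closer to D than S is."""
    return dist[n][d] < dist[s][d]

print(is_lfa("N", "S", "D"))         # True:  2 < 1 + 2
print(is_downstream("N", "S", "D"))  # False: 2 is not < 2
```

In this example, N satisfies the LFA condition for destination D (so S may pre-install N as a repair for the failure of link S-E) but not the stricter downstream-path condition, since N is exactly as far from D as S is.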
   There is a tradeoff to be achieved between minimizing the number of
   repair paths to be computed and minimizing the overheads incurred in
   using higher order multi-hop repair paths for destinations for which
   they are not strictly necessary.  However, the computational cost of
   determining repair paths on an individual destination basis can be
   very high.

   It will frequently be the case that the majority of destinations may
   be repaired using only the "basic" repair mechanism, leaving a
   smaller subset of the destinations to be repaired using one of the
   more complex multi-hop methods.  Such a hybrid approach may go some
   way to resolving the conflict between completeness and complexity.

   The use of repair paths may result in excessive traffic passing over
   a link, resulting in congestion discard.  This reduces the
   effectiveness of IPFRR.  Mechanisms to influence the distribution of
   repaired traffic to minimize this effect are therefore desirable.

4.2.2.  Analysis of repair coverage

   In some cases the repair strategy will permit the repair of all
   single link or node failures in the network for all possible
   destinations.  This can be defined as 100% coverage.  However, where
   the coverage is less than 100%, it is important for the purposes of
   comparison between different proposed repair strategies to define
   what is meant by such a percentage.  There are four possibilities:

   1.  The percentage of links (or nodes) which can be fully protected
       for all destinations.  This is appropriate where the requirement
       is to protect all traffic, but some percentage of the possible
       failures may be identified as being un-protectable.

   2.  The percentage of destinations which can be fully protected for
       all link (or node) failures.
       This is appropriate where the requirement is to protect against
       all possible failures, but some percentage of destinations may
       be identified as being un-protectable.

   3.  For all destinations (d) and for all failures (f), the
       percentage of the total potential failure cases (d*f) which are
       protected.  This is appropriate where the requirement is an
       overall "best effort" protection.

   4.  The percentage of packets normally passing through the network
       that will continue to reach their destination.  This requires a
       traffic matrix for the network as part of the analysis.

   The coverage obtained is dependent on the repair strategy and highly
   dependent on the detailed topology and metrics.  Any figures quoted
   in this document are for illustrative purposes only.

4.2.3.  Link or node repair

   A repair path may be computed to protect against failure of an
   adjacent link, or failure of an adjacent node.  In general, link
   protection is simpler to achieve.  A repair which protects against
   node failure will also protect against link failure for all
   destinations except those for which the adjacent node is a single
   point of failure.

   In some cases it may be necessary to distinguish between a link or
   node failure in order that the optimal repair strategy is invoked.
   Methods for link/node failure determination may be based on
   techniques such as BFD [I-D.ietf-bfd-base].  This determination may
   be made prior to invoking any repairs, but this will increase the
   period of packet loss following a failure unless the determination
   can be performed as part of the failure detection mechanism itself.
   Alternatively, a subsequent determination can be used to optimize an
   already invoked default strategy.

4.2.4.  Maintenance of Repair paths

   In order to meet the response time goals, it is expected (though not
   required) that repair paths, and their associated FIB entries, will
   be pre-computed and installed ready for invocation when a failure is
   detected.  Following invocation, the repair paths remain in effect
   until they are no longer required.  This will normally be when the
   routing protocol has re-converged on the new topology taking into
   account the failure, and traffic is no longer using the repair
   paths.

   The repair paths have the property that they are unaffected by any
   topology changes resulting from the failure which caused their
   instantiation.  Therefore there is no need to re-compute them during
   the convergence period.  They may be affected by an unrelated
   simultaneous topology change, but such events are out of scope of
   this work (see Section 4.2.5).

   Once the routing protocol has re-converged, it is necessary for all
   repair paths to take account of the new topology.  Various
   optimizations may permit the efficient identification of repair
   paths which are unaffected by the change, and hence do not require
   full re-computation.  Since the new repair paths will not be
   required until the next failure occurs, the re-computation may be
   performed as a background task and be subject to a hold-down, but
   excessive delay in completing this operation will increase the risk
   of a new failure occurring before the repair paths are in place.

4.2.5.  Multiple failures and Shared Risk Link Groups

   Complete protection against multiple unrelated failures is out of
   scope of this work.  However, it is important that the occurrence of
   a second failure while one failure is undergoing repair should not
   result in a level of service which is significantly worse than that
   which would have been achieved in the absence of any repair
   strategy.
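A node failure can be modeled as an SRLG containing every link incident on the failed node. One simple way to pre-compute a node-protecting repair (a hypothetical sketch on an invented topology, using a hop-count metric rather than any particular IPFRR mechanism) is to search the topology with the protected node removed; if no path remains, the node is a single point of failure for that destination:

```python
# A node failure behaves like an SRLG containing every link incident on
# the failed node.  This sketch pre-computes a node-protecting repair by
# breadth-first search over the topology with the protected node removed.
# Topology and router names are hypothetical; metric is hop count.
from collections import deque

def repair_path(graph, src, dst, failed_node):
    """Shortest hop-count path from src to dst that avoids failed_node,
    or None if failed_node is a single point of failure for dst."""
    queue = deque([[src]])
    seen = {src, failed_node}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in graph[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

# S normally reaches D through E; the repair must avoid E entirely,
# and with it both of the links S-E and E-D at once.
graph = {
    "S": ["E", "A"],
    "E": ["S", "D", "A"],
    "A": ["S", "E", "B"],
    "B": ["A", "D"],
    "D": ["E", "B"],
}
print(repair_path(graph, "S", "D", "E"))  # ['S', 'A', 'B', 'D']
```

Because the whole node is removed from the search, the resulting path is guaranteed to avoid every member link of this SRLG, which is what makes the node-failure case more tractable than an arbitrary SRLG.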
   Shared Risk Link Groups (SRLGs) are an example of multiple related
   failures, and the more complex aspects of their protection are a
   matter for further study.

   One specific example of an SRLG which is clearly within the scope of
   this work is a node failure.  This causes the simultaneous failure
   of multiple links, but their closely defined topological
   relationship makes the problem more tractable.

4.3.  Local Area Networks

   Protection against partial or complete failure of LANs is more
   complex than the point-to-point case.  In general, there is a
   tradeoff between the simplicity of the repair and the ability to
   provide complete and optimal repair coverage.

4.4.  Mechanisms for micro-loop prevention

   Ensuring the absence of micro-loops is important not only because
   they can cause packet loss in traffic which is affected by the
   failure, but also because, by saturating a link with looping
   packets, they can cause congestion loss of traffic flowing over
   that link which would otherwise be unaffected by the failure.

   A number of solutions to the problem of micro-loop formation have
   been proposed and are summarized in [I-D.ietf-rtgwg-lf-conv-frmwk].
   The following factors are significant in their classification:

   1.  Partial or complete protection against micro-loops.

   2.  Delay imposed upon convergence.

   3.  Tolerance of multiple failures (from node failures, and in
       general).

   4.  Computational complexity (pre-computed or real time).

   5.  Applicability to scheduled events.

   6.  Applicability to link/node reinstatement.

   7.  Topological constraints.

5.  Management Considerations

   While many of the management requirements will be specific to
   particular IPFRR solutions, the following general aspects need to be
   addressed:

   1.  Configuration

       A.  Enabling/disabling IPFRR support.

       B.  Enabling/disabling protection on a per link/node basis.

       C.
           Expressing preferences regarding the links/nodes used for
           repair paths.

       D.  Configuration of failure detection mechanisms.

       E.  Configuration of loop avoidance strategies.

   2.  Monitoring and operational support

       A.  Notification of links/nodes/destinations which cannot be
           protected.

       B.  Notification of pre-computed repair paths, and anticipated
           traffic patterns.

       C.  Counts of failure detections, protection invocations, and
           packets forwarded over repair paths.

       D.  Testing repairs.

6.  Scope and applicability

   The initial scope of this work is in the context of link state IGPs.
   Link state protocols provide ubiquitous topology information, which
   facilitates the computation of repair paths.

   Provision of similar facilities in non-link state IGPs and BGP is a
   matter for further study, but the correct operation of the repair
   mechanisms for traffic with a destination outside the IGP domain is
   an important consideration for solutions based on this framework.

7.  IANA Considerations

   There are no IANA considerations that arise from this framework
   document.

8.  Security Considerations

   This framework document does not itself introduce any security
   issues, but attention must be paid to the security implications of
   any proposed solutions to the problem.

9.  Acknowledgements

   The authors would like to acknowledge contributions made by Alia
   Atlas, Clarence Filsfils, Pierre Francois, Joel Halpern, Stefano
   Previdi, and Alex Zinin.

10.  Informative References

   [FIFR]     Nelakuditi, S., Lee, S., Lu, Y., Zhang, Z., and C. Chuah,
              "Fast local rerouting for handling transient link
              failures", Tech. Rep. TR-2004-004, 2004.

   [I-D.atlas-ip-local-protect-uturn]
              Atlas, A., "U-turn Alternates for IP/LDP Fast-Reroute",
              draft-atlas-ip-local-protect-uturn-03 (work in progress),
              March 2006.

   [I-D.bryant-ipfrr-tunnels]
              Bryant, S., Filsfils, C., Previdi, S., and M.
              Shand, "IP Fast Reroute using tunnels",
              draft-bryant-ipfrr-tunnels-03 (work in progress),
              November 2007.

   [I-D.ietf-bfd-base]
              Katz, D. and D. Ward, "Bidirectional Forwarding
              Detection", draft-ietf-bfd-base-09 (work in progress),
              February 2009.

   [I-D.ietf-rtgwg-ipfrr-notvia-addresses]
              Shand, M., Bryant, S., and S. Previdi, "IP Fast Reroute
              Using Not-via Addresses",
              draft-ietf-rtgwg-ipfrr-notvia-addresses-03 (work in
              progress), October 2008.

   [I-D.ietf-rtgwg-lf-conv-frmwk]
              Shand, M. and S. Bryant, "A Framework for Loop-free
              Convergence", draft-ietf-rtgwg-lf-conv-frmwk-05 (work in
              progress), June 2009.

   [I-D.tian-frr-alt-shortest-path]
              Tian, A., "Fast Reroute using Alternative Shortest
              Paths", draft-tian-frr-alt-shortest-path-01 (work in
              progress), July 2004.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.

   [RFC5286]  Atlas, A. and A. Zinin, "Basic Specification for IP Fast
              Reroute: Loop-Free Alternates", RFC 5286, September 2008.

   [SIMULA]   Lysne, O., Kvalbein, A., Cicic, T., Gjessing, S., and A.
              Hansen, "Fast IP Network Recovery using Multiple Routing
              Configurations", Infocom, DOI 10.1109/INFOCOM.2006.227,
              2006.

Authors' Addresses

   Mike Shand
   Cisco Systems
   250, Longwater Avenue.
   Reading, Berks  RG2 6GB
   UK

   Email: mshand@cisco.com

   Stewart Bryant
   Cisco Systems
   250, Longwater Avenue.
   Reading, Berks  RG2 6GB
   UK

   Email: stbryant@cisco.com