2 Network Working Group M. Shand 3 Internet-Draft S. Bryant 4 Intended status: Informational Cisco Systems 5 Expires: May 3, 2009 October 30, 2008 7 IP Fast Reroute Framework 8 draft-ietf-rtgwg-ipfrr-framework-09 10 Status of this Memo 12 By submitting this Internet-Draft, each author represents that any 13 applicable patent or other IPR claims of which he or she is aware 14 have been or will be disclosed, and any of which he or she becomes 15 aware will be disclosed, in accordance with Section 6 of BCP 79. 17 Internet-Drafts are working documents of the Internet Engineering 18 Task Force (IETF), its areas, and its working groups. Note that 19 other groups may also distribute working documents as Internet- 20 Drafts. 22 Internet-Drafts are draft documents valid for a maximum of six months 23 and may be updated, replaced, or obsoleted by other documents at any 24 time. It is inappropriate to use Internet-Drafts as reference 25 material or to cite them other than as "work in progress." 27 The list of current Internet-Drafts can be accessed at 28 http://www.ietf.org/ietf/1id-abstracts.txt. 30 The list of Internet-Draft Shadow Directories can be accessed at 31 http://www.ietf.org/shadow.html. 33 This Internet-Draft will expire on May 3, 2009. 
35 Abstract 37 This document provides a framework for the development of IP fast- 38 reroute mechanisms which provide protection against link or router 39 failure by invoking locally determined repair paths. Unlike MPLS 40 Fast-reroute, the mechanisms are applicable to a network employing 41 conventional IP routing and forwarding. An essential part of such 42 mechanisms is the prevention of packet loss caused by the loops which 43 normally occur during the re-convergence of the network following a 44 failure. 46 Table of Contents 48 1. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3 49 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 50 3. Problem Analysis . . . . . . . . . . . . . . . . . . . . . . . 6 51 4. Mechanisms for IP Fast-reroute . . . . . . . . . . . . . . . . 7 52 4.1. Mechanisms for fast failure detection . . . . . . . . . . 7 53 4.2. Mechanisms for repair paths . . . . . . . . . . . . . . . 8 54 4.2.1. Scope of repair paths . . . . . . . . . . . . . . . . 9 55 4.2.2. Analysis of repair coverage . . . . . . . . . . . . . 9 56 4.2.3. Link or node repair . . . . . . . . . . . . . . . . . 10 57 4.2.4. Maintenance of Repair paths . . . . . . . . . . . . . 11 58 4.2.5. Multiple failures and Shared Risk Link Groups . . . . 11 59 4.3. Local Area Networks . . . . . . . . . . . . . . . . . . . 12 60 4.4. Mechanisms for micro-loop prevention . . . . . . . . . . . 12 61 5. Management Considerations . . . . . . . . . . . . . . . . . . 12 62 6. Scope and applicability . . . . . . . . . . . . . . . . . . . 13 63 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 64 8. Security Considerations . . . . . . . . . . . . . . . . . . . 13 65 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 13 66 10. Informative References . . . . . . . . . . . . . . . . . . . . 14 67 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 15 68 Intellectual Property and Copyright Statements . . . . . . . . . . 
16 70 1. Terminology 72 This section defines words and acronyms used in this draft and other 73 drafts discussing IP Fast-reroute. 75 D Used to denote the destination router under 76 discussion. 78 Distance_opt(A,B) The distance of the shortest path from A to B. 80 Downstream Path This is a subset of the loop-free alternates 81 where the neighbor N meets the following 82 condition: 84 Distance_opt(N, D) < Distance_opt(S, D) 86 E Used to denote the router which is the primary 87 next-hop neighbor to get from S to the 88 destination D. Where there is an ECMP set for the 89 shortest path from S to D, these are referred to 90 as E_1, E_2, etc. 92 ECMP Equal cost multi-path: Where, for a particular 93 destination D, multiple primary next-hops are 94 used to forward traffic because there exist 95 multiple shortest paths from S via different 96 output layer-3 interfaces. 98 FIB Forwarding Information Base. The database used 99 by the packet forwarder to determine what actions 100 to perform on a packet. 102 IPFRR IP fast-reroute. 104 Link(A->B) A link connecting router A to router B. 106 LFA Loop Free Alternate. This is a neighbor N that 107 is not a primary next-hop neighbor E, whose 108 shortest path to the destination D does not go 109 back through the router S. The neighbor N must 110 meet the following condition: 112 Distance_opt(N, D) < Distance_opt(N, S) + 113 Distance_opt(S, D) 115 Loop Free Neighbor A neighbor N_i, which is not the particular 116 primary neighbor E_k under discussion, and whose 117 shortest path to D does not traverse S. For 118 example, if there are two primary neighbors E_1 119 and E_2, E_1 is a loop-free neighbor with regard 120 to E_2 and vice versa. 122 Loop Free Link Protecting Alternate 123 This is a path via a Loop-Free Neighbor N_i which 124 does not go through the particular link of S 125 which is being protected to reach the destination 126 D. 
128 Loop Free Node-protecting Alternate 129 This is a path via a Loop-Free Neighbor N_i which 130 does not go through the particular primary 131 neighbor of S which is being protected to reach 132 the destination D. 134 N_i The ith neighbor of S. 136 Primary Neighbor A neighbor N_i of S which is one of the next hops 137 for destination D in S's FIB prior to any 138 failure. 140 R_i_j The jth neighbor of N_i. 142 Routing Transition The process whereby routers converge on a new 143 topology. In conventional networks this process 144 frequently causes some disruption to packet 145 delivery. 147 RPF Reverse Path Forwarding, i.e., checking that a 148 packet is received over the interface which would 149 be used to send packets addressed to the source 150 address of the packet. 152 S Used to denote a router that is the source of a 153 repair that is computed in anticipation of the 154 failure of a neighboring router denoted as E, or 155 of the link between S and E. It is the viewpoint 156 from which IP Fast-Reroute is described. 158 S_i The set of neighbors of E, in addition to S, 159 which will independently take the role of S for 160 the traffic they carry. 162 SPF Shortest Path First, e.g., Dijkstra's algorithm. 164 SPT Shortest path tree. 166 Upstream Forwarding Loop 167 This is a forwarding loop which involves a set of 168 routers, none of which are directly connected to 169 the link which has caused the topology change 170 that triggered a new SPF in any of the routers. 172 2. Introduction 174 When a link or node failure occurs in a routed network, there is 175 inevitably a period of disruption to the delivery of traffic until 176 the network re-converges on the new topology. Packets for 177 destinations which were previously reached by traversing the failed 178 component may be dropped or may suffer looping. 
Traditionally such 179 disruptions have lasted for periods of at least several seconds, and 180 most applications have been constructed to tolerate such a quality of 181 service. 183 Recent advances in routers have reduced this interval to under a 184 second for carefully configured networks using link state IGPs. 185 However, new Internet services are emerging which may be sensitive to 186 periods of traffic loss which are orders of magnitude shorter than 187 this. 189 Addressing these issues is difficult because the distributed nature 190 of the network imposes an intrinsic limit on the minimum convergence 191 time which can be achieved. 193 However, there is an alternative approach, which is to compute backup 194 routes that allow the failure to be repaired locally by the router(s) 195 detecting the failure without the immediate need to inform other 196 routers of the failure. In this case, the disruption time can be 197 limited to the small time taken to detect the adjacent failure and 198 invoke the backup routes. This is analogous to the technique 199 employed by MPLS Fast-Reroute [RFC4090], but the mechanisms employed 200 for the backup routes in pure IP networks are necessarily very 201 different. 203 This document provides a framework for the development of this 204 approach. 206 3. Problem Analysis 208 The duration of the packet delivery disruption caused by a 209 conventional routing transition is determined by a number of factors: 211 1. The time taken to detect the failure. This may be of the order 212 of a few milliseconds (ms) when it can be detected at the physical 213 layer, up to several tens of seconds when a routing protocol hello is 214 employed. During this period packets will be unavoidably lost. 216 2. The time taken for the local router to react to the failure. 217 This will typically involve generating and flooding new routing 218 updates, perhaps after some hold-down delay, and re-computing the 219 router's FIB. 221 3. 
The time taken to pass the information about the failure to other 222 routers in the network. In the absence of routing protocol 223 packet loss, this is typically between 10 ms and 100 ms per hop. 225 4. The time taken to re-compute the forwarding tables. This is 226 typically a few ms for a link state protocol using Dijkstra's 227 algorithm. 229 5. The time taken to load the revised forwarding tables into the 230 forwarding hardware. This time is very implementation dependent 231 and also depends on the number of prefixes affected by the 232 failure, but may be several hundred ms. 234 The disruption will last until the routers adjacent to the failure 235 have completed steps 1 and 2, and then all the routers in the network 236 whose paths are affected by the failure have completed the remaining 237 steps. 239 The initial packet loss is caused by the router(s) adjacent to the 240 failure continuing to attempt to transmit packets across the failure 241 until it is detected. This loss is unavoidable, but the detection 242 time can be reduced to a few tens of ms as described in Section 4.1. 244 Subsequent packet loss is caused by the "micro-loops" which form 245 because of temporary inconsistencies between routers' forwarding 246 tables. These occur as a result of the different times at which 247 routers update their forwarding tables to reflect the failure. These 248 variable delays are caused by steps 3, 4 and 5 above and in many 249 routers it is step 5 which is both the largest factor and which has 250 the greatest variance between routers. The large variance arises 251 from implementation differences and from the differing impact that a 252 failure has on each individual router. For example, the number of 253 prefixes affected by the failure may vary dramatically from one 254 router to another. 256 In order to achieve packet disruption times which are commensurate 257 with the failure detection times it is necessary to perform two 258 distinct tasks: 260 1. 
Provide a mechanism for the router(s) adjacent to the failure to 261 rapidly invoke a repair path, which is unaffected by any 262 subsequent re-convergence. 264 2. Provide a mechanism to prevent the effects of micro-loops during 265 subsequent re-convergence. 267 Performing the first task without the second will result in the 268 repair path being starved of traffic and hence being redundant. 269 Performing the second without the first will result in traffic being 270 discarded by the router(s) adjacent to the failure. Both tasks are 271 necessary for an effective solution to the problem. 273 However, repair paths can be used in isolation where the failure is 274 short-lived. The repair paths can be kept in place until the failure 275 is repaired and there is no need to advertise the failure to other 276 routers. 278 Similarly, micro-loop avoidance can be used in isolation to prevent 279 loops arising from pre-planned management action, because the link or 280 node being shut down can remain in service for a short time after its 281 removal has been announced into the network, and hence it can 282 function as its own "repair path". 284 Note that micro-loops can also occur when a link or node is restored 285 to service and thus a micro-loop avoidance mechanism is required for 286 both link up and link down cases. 288 4. Mechanisms for IP Fast-reroute 290 The set of mechanisms required for an effective solution to the 291 problem can be broken down into the following sub-problems. 293 4.1. Mechanisms for fast failure detection 295 It is critical that the failure detection time is minimized. A 296 number of approaches are possible, such as: 298 1. Physical detection; for example, loss of light. 300 2. Routing-protocol-independent detection; for example, the 301 Bidirectional Forwarding Detection (BFD) protocol [I-D.ietf-bfd-base]. 303 3. Routing protocol detection; for example, use of "fast hellos". 305 4.2. 
Mechanisms for repair paths 307 Once a failure has been detected by one of the above mechanisms, 308 traffic which previously traversed the failure is transmitted over 309 one or more repair paths. The design of the repair paths should be 310 such that they can be pre-calculated in anticipation of each local 311 failure and made available for invocation with minimal delay. There 312 are three basic categories of repair paths: 314 1. Equal cost multi-paths (ECMP). Where such paths exist, and one 315 or more of the alternate paths do not traverse the failure, they 316 may trivially be used as repair paths. 318 2. Loop free alternate paths. Such a path exists when a direct 319 neighbor of the router adjacent to the failure has a path to the 320 destination which can be guaranteed not to traverse the failure. 322 3. Multi-hop repair paths. When there is no feasible loop free 323 alternate path it may still be possible to locate a router, which 324 is more than one hop away from the router adjacent to the 325 failure, from which traffic will be forwarded to the destination 326 without traversing the failure. 328 ECMP and loop free alternate paths (as described in [RFC5286]) offer 329 the simplest repair paths and would normally be used when they are 330 available. It is anticipated that around 80% of failures (see 331 Section 4.2.2) can be repaired using these basic methods alone. 333 Multi-hop repair paths are more complex, both in the computations 334 required to determine their existence, and in the mechanisms required 335 to invoke them. They can be further classified as: 337 1. Mechanisms where one or more alternate FIBs are pre-computed in 338 all routers and the repaired packet is instructed to be forwarded 339 using a "repair FIB" by some method of per packet signaling such 340 as detecting a "U-turn" [I-D.atlas-ip-local-protect-uturn], 341 [FIFR], or by marking the packet [SIMULA]. 343 2. 
Mechanisms functionally equivalent to a loose source route which 344 is invoked using the normal FIB. These include tunnels 345 [I-D.bryant-ipfrr-tunnels], alternative shortest paths 346 [I-D.tian-frr-alt-shortest-path] and label-based mechanisms. 348 3. Mechanisms employing special addresses or labels which are 349 installed in the FIBs of all routers with routes pre-computed to 350 avoid certain components of the network. For example, 351 [I-D.ietf-rtgwg-ipfrr-notvia-addresses]. 353 In many cases a repair path which reaches two hops away from the 354 router detecting the failure will suffice, and it is anticipated that 355 around 98% of failures (see Section 4.2.2) can be repaired by this 356 method. However, to provide complete repair coverage some use of 357 longer multi-hop repair paths is generally necessary. 359 4.2.1. Scope of repair paths 361 A particular repair path may be valid for all destinations which 362 require repair or may only be valid for a subset of destinations. If 363 a repair path is valid for a node immediately downstream of the 364 failure, then it will be valid for all destinations previously 365 reachable by traversing the failure. However, in cases where such a 366 repair path is difficult to achieve because it requires a high-order 367 multi-hop repair path, it may still be possible to identify 368 lower-order repair paths (possibly even loop free alternate paths) which 369 allow the majority of destinations to be repaired. When IPFRR is 370 unable to provide complete repair, it is desirable that the extent of 371 the repair coverage can be determined and reported via network 372 management. 374 There is a tradeoff to be achieved between minimizing the number of 375 repair paths to be computed, and minimizing the overheads incurred in 376 using higher-order multi-hop repair paths for destinations for which 377 they are not strictly necessary. 
However, the computational cost of 378 determining repair paths on an individual destination basis can be 379 very high. 381 It will frequently be the case that the majority of destinations may 382 be repaired using only the "basic" repair mechanism, leaving a 383 smaller subset of the destinations to be repaired using one of the 384 more complex multi-hop methods. Such a hybrid approach may go some 385 way to resolving the conflict between completeness and complexity. 387 The use of repair paths may result in excessive traffic passing over 388 a link, resulting in congestion discard. This reduces the 389 effectiveness of IPFRR. Mechanisms to influence the distribution of 390 repaired traffic to minimize this effect are therefore desirable. 392 4.2.2. Analysis of repair coverage 394 In some cases the repair strategy will permit the repair of all 395 single link or node failures in the network for all possible 396 destinations. This can be defined as 100% coverage. However, where 397 the coverage is less than 100% it is important for the purposes of 398 comparisons between different proposed repair strategies to define 399 what is meant by such a percentage. There are four possibilities: 401 1. The percentage of links (or nodes) which can be fully protected 402 for all destinations. This is appropriate where the requirement 403 is to protect all traffic, but some percentage of the possible 404 failures may be identified as being un-protectable. 406 2. The percentage of destinations which can be fully protected for 407 all link (or node) failures. This is appropriate where the 408 requirement is to protect against all possible failures, but some 409 percentage of destinations may be identified as being 410 un-protectable. 412 3. For all destinations (d) and for all failures (f), the percentage 413 of the total potential failure cases (d*f) which are protected. 414 This is appropriate where the requirement is an overall "best 415 effort" protection. 417 4. 
The percentage of packets normally passing through the network 418 that will continue to reach their destination. This requires a 419 traffic matrix for the network as part of the analysis. 421 The coverage obtained is dependent on the repair strategy and highly 422 dependent on the detailed topology and metrics. Any figures quoted 423 in this document are for illustrative purposes only. 425 4.2.3. Link or node repair 427 A repair path may be computed to protect against failure of an 428 adjacent link, or failure of an adjacent node. In general, link 429 protection is simpler to achieve. A repair which protects against 430 node failure will also protect against link failure for all 431 destinations except those for which the adjacent node is a single 432 point of failure. 434 In some cases it may be necessary to distinguish between a link or 435 node failure in order that the optimal repair strategy is invoked. 436 Methods for link/node failure determination may be based on 437 techniques such as BFD [I-D.ietf-bfd-base]. This determination may be 438 made prior to invoking any repairs, but this will increase the period 439 of packet loss following a failure unless the determination can be 440 performed as part of the failure detection mechanism itself. 441 Alternatively, a subsequent determination can be used to optimize an 442 already invoked default strategy. 444 4.2.4. Maintenance of Repair paths 446 In order to meet the response time goals, it is expected (though not 447 required) that repair paths, and their associated FIB entries, will 448 be pre-computed and installed ready for invocation when a failure is 449 detected. Following invocation the repair paths remain in effect 450 until they are no longer required. This will normally be when the 451 routing protocol has re-converged on the new topology taking into 452 account the failure, and traffic will no longer be using the repair 453 paths. 
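The shape of this pre-computation can be sketched in a few lines. The following Python fragment is purely illustrative and is not part of the framework: the topology, router names, and function names are invented for the example. It runs one SPF rooted at S and one per neighbor, then applies the loop-free condition from Section 1, Distance_opt(N, D) < Distance_opt(N, S) + Distance_opt(S, D), to each destination, excluding neighbors that are already primary next hops:

```python
import heapq

def distance_opt(graph, src):
    # Dijkstra SPF: shortest-path distance from src to every reachable node.
    # graph: {router: {neighbor: link_cost}}
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def precompute_lfas(graph, s):
    # For each destination, list the neighbors N of S that satisfy
    #   Distance_opt(N, D) < Distance_opt(N, S) + Distance_opt(S, D)
    # and are not themselves primary next hops for that destination.
    inf = float("inf")
    d_s = distance_opt(graph, s)                          # SPF rooted at S
    d_n = {n: distance_opt(graph, n) for n in graph[s]}   # one SPF per neighbor
    lfas = {}
    for dest in graph:
        if dest == s:
            continue
        # Primary next hops: neighbors lying on a shortest path from S to dest.
        primaries = {n for n in graph[s]
                     if graph[s][n] + d_n[n].get(dest, inf) == d_s[dest]}
        lfas[dest] = [n for n in graph[s]
                      if n not in primaries
                      and d_n[n].get(dest, inf) < d_n[n][s] + d_s[dest]]
    return lfas

# Invented example topology: S reaches D via primary neighbor E (cost 2).
# Neighbor N's own shortest path to D runs back through S, so it is not
# loop-free; neighbor M's does not, so M is an LFA for destination D.
topology = {
    "S": {"E": 1, "N": 1, "M": 2},
    "E": {"S": 1, "D": 1},
    "N": {"S": 1, "D": 4},
    "M": {"S": 2, "D": 1},
    "D": {"E": 1, "N": 4, "M": 1},
}
print(precompute_lfas(topology, "S")["D"])  # ['M']
```

A real implementation would additionally have to handle ECMP sets, node protection, and incremental recomputation after re-convergence; the sketch only shows the per-neighbor SPF structure of the computation.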
455 The repair paths have the property that they are unaffected by any 456 topology changes resulting from the failure which caused their 457 instantiation. Therefore there is no need to re-compute them during 458 the convergence period. They may be affected by an unrelated 459 simultaneous topology change, but such events are out of scope of 460 this work (see Section 4.2.5). 462 Once the routing protocol has re-converged it is necessary for all 463 repair paths to take account of the new topology. Various 464 optimizations may permit the efficient identification of repair paths 465 which are unaffected by the change, and hence do not require full 466 re-computation. Since the new repair paths will not be required until 467 the next failure occurs, the re-computation may be performed as a 468 background task and be subject to a hold-down, but excessive delay in 469 completing this operation will increase the risk of a new failure 470 occurring before the repair paths are in place. 472 4.2.5. Multiple failures and Shared Risk Link Groups 474 Complete protection against multiple unrelated failures is out of 475 scope of this work. However, it is important that the occurrence of 476 a second failure while one failure is undergoing repair should not 477 result in a level of service which is significantly worse than that 478 which would have been achieved in the absence of any repair strategy. 480 Shared Risk Link Groups (SRLGs) are an example of multiple related 481 failures, and the more complex aspects of their protection are a matter 482 for further study. 484 One specific example of an SRLG which is clearly within the scope of 485 this work is a node failure. This causes the simultaneous failure of 486 multiple links, but their closely defined topological relationship 487 makes the problem more tractable. 489 4.3. Local Area Networks 491 Protection against partial or complete failure of LANs is more 492 complex than the point-to-point case. 
In general there is a tradeoff 493 between the simplicity of the repair and the ability to provide 494 complete and optimal repair coverage. 496 4.4. Mechanisms for micro-loop prevention 498 Control of micro-loops is important not only because they can cause 499 packet loss in traffic which is affected by the failure, but because 500 by saturating a link with looping packets they can also cause 501 congestion loss of traffic flowing over that link which would 502 otherwise be unaffected by the failure. 504 A number of solutions to the problem of micro-loop formation have 505 been proposed and are summarized in [I-D.ietf-rtgwg-lf-conv-frmwk]. 506 The following factors are significant in their classification: 508 1. Partial or complete protection against micro-loops. 510 2. Delay imposed upon convergence. 512 3. Tolerance of multiple failures (from node failures, and in 513 general). 515 4. Computational complexity (pre-computed or real time). 517 5. Applicability to scheduled events. 519 6. Applicability to link/node reinstatement. 521 5. Management Considerations 523 While many of the management requirements will be specific to 524 particular IPFRR solutions, the following general aspects need to be 525 addressed: 527 1. Configuration 529 A. Enabling/disabling IPFRR support. 531 B. Enabling/disabling protection on a per link/node basis. 533 C. Expressing preferences regarding the links/nodes used for 534 repair paths. 536 D. Configuration of failure detection mechanisms. 538 E. Configuration of loop avoidance strategies. 540 2. Monitoring 542 A. Notification of links/nodes/destinations which cannot be 543 protected. 545 B. Notification of pre-computed repair paths, and anticipated 546 traffic patterns. 548 C. Counts of failure detections, protection invocations and 549 packets forwarded over repair paths. 551 6. Scope and applicability 553 The initial scope of this work is in the context of link state IGPs. 
554 Link state protocols provide ubiquitous topology information, which 555 facilitates the computation of repair paths. 557 Provision of similar facilities in non-link state IGPs and BGP is a 558 matter for further study, but the correct operation of the repair 559 mechanisms for traffic with a destination outside the IGP domain is 560 an important consideration for solutions based on this framework. 562 7. IANA Considerations 564 There are no IANA considerations that arise from this framework 565 document. 567 8. Security Considerations 569 This framework document does not itself introduce any security 570 issues, but attention must be paid to the security implications of 571 any proposed solutions to the problem. 573 9. Acknowledgements 575 The authors would like to acknowledge contributions made by Alia 576 Atlas, Clarence Filsfils, Pierre Francois, Joel Halpern, Stefano 577 Previdi and Alex Zinin. 579 10. Informative References 581 [FIFR] Nelakuditi, S., Lee, S., Lu, Y., Zhang, Z., and C. Chuah, 582 "Fast local rerouting for handling transient link 583 failures", Tech. Rep. TR-2004-004, 2004. 585 [I-D.atlas-ip-local-protect-uturn] 586 Atlas, A., "U-turn Alternates for IP/LDP Fast-Reroute", 587 draft-atlas-ip-local-protect-uturn-03 (work in progress), 588 March 2006. 590 [I-D.bryant-ipfrr-tunnels] 591 Bryant, S., Filsfils, C., Previdi, S., and M. Shand, "IP 592 Fast Reroute using tunnels", draft-bryant-ipfrr-tunnels-03 593 (work in progress), November 2007. 595 [I-D.ietf-bfd-base] 596 Katz, D. and D. Ward, "Bidirectional Forwarding 597 Detection", draft-ietf-bfd-base-08 (work in progress), 598 March 2008. 600 [I-D.ietf-rtgwg-ipfrr-notvia-addresses] 601 Shand, M., Bryant, S., and S. Previdi, "IP Fast Reroute 602 Using Not-via Addresses", 603 draft-ietf-rtgwg-ipfrr-notvia-addresses-02 (work in 604 progress), February 2008. 606 [I-D.ietf-rtgwg-lf-conv-frmwk] 607 Shand, M. and S. 
Bryant, "A Framework for Loop-free 608 Convergence", draft-ietf-rtgwg-lf-conv-frmwk-02 (work in 609 progress), February 2008. 611 [I-D.tian-frr-alt-shortest-path] 612 Tian, A., "Fast Reroute using Alternative Shortest Paths", 613 draft-tian-frr-alt-shortest-path-01 (work in progress), 614 July 2004. 616 [RFC4090] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute 617 Extensions to RSVP-TE for LSP Tunnels", RFC 4090, 618 May 2005. 620 [RFC5286] Atlas, A. and A. Zinin, "Basic Specification for IP Fast 621 Reroute: Loop-Free Alternates", RFC 5286, September 2008. 623 [SIMULA] Lysne, O., Kvalbein, A., Cicic, T., Gjessing, S., and A. 624 Hansen, "Fast IP Network Recovery using Multiple Routing 625 Configurations", INFOCOM, DOI 10.1109/INFOCOM.2006.227, 626 2006. 628 Authors' Addresses 630 Mike Shand 631 Cisco Systems 632 250, Longwater Avenue. 633 Reading, Berks RG2 6GB 634 UK 636 Email: mshand@cisco.com 638 Stewart Bryant 639 Cisco Systems 640 250, Longwater Avenue. 641 Reading, Berks RG2 6GB 642 UK 644 Email: stbryant@cisco.com 646 Full Copyright Statement 648 Copyright (C) The IETF Trust (2008). 650 This document is subject to the rights, licenses and restrictions 651 contained in BCP 78, and except as set forth therein, the authors 652 retain all their rights. 654 This document and the information contained herein are provided on an 655 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 656 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 657 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 658 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 659 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 660 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 
662 Intellectual Property 664 The IETF takes no position regarding the validity or scope of any 665 Intellectual Property Rights or other rights that might be claimed to 666 pertain to the implementation or use of the technology described in 667 this document or the extent to which any license under such rights 668 might or might not be available; nor does it represent that it has 669 made any independent effort to identify any such rights. Information 670 on the procedures with respect to rights in RFC documents can be 671 found in BCP 78 and BCP 79. 673 Copies of IPR disclosures made to the IETF Secretariat and any 674 assurances of licenses to be made available, or the result of an 675 attempt made to obtain a general license or permission for the use of 676 such proprietary rights by implementers or users of this 677 specification can be obtained from the IETF on-line IPR repository at 678 http://www.ietf.org/ipr. 680 The IETF invites any interested party to bring to its attention any 681 copyrights, patents or patent applications, or other proprietary 682 rights that may cover technology that may be required to implement 683 this standard. Please address the information to the IETF at 684 ietf-ipr@ietf.org.