Network Working Group                                         R. Papneja
Internet-Draft                                                   Isocore
Intended status: Standards Track                             S. Vapiwala
Expires: March 15, 2012                                       J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                     Allot Communications
                                                                  S. Rao
                                                    Qwest Communications
                                                                 J. Roux
                                                          France Telecom
                                                      September 12, 2011

         Methodology for Benchmarking MPLS Protection Mechanisms
                  draft-ietf-bmwg-protection-meth-08.txt

Abstract

   This document describes a methodology for benchmarking MPLS
   protection mechanisms for link and node protection, as defined in
   the Fast Reroute extensions to RSVP-TE (RFC 4090).  It provides
   test methodologies and testbed setups for measuring failover times
   while considering all dependencies that might impact faster
   recovery of real-time applications bound to MPLS-based traffic
   engineered tunnels.  The benchmarking terms used in this document
   are defined in the companion terminology document
   (draft-ietf-bmwg-protection-term).
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on March 15, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before
   November 10, 2008.  The person(s) controlling the copyright in
   some of this material may not have granted the IETF Trust the
   right to allow modifications of such material outside the IETF
   Standards Process.  Without obtaining an adequate license from the
   person(s) controlling the copyright in such materials, this
   document may not be modified outside the IETF Standards Process,
   and derivative works of it may not be created outside the IETF
   Standards Process, except to format it for publication as an RFC
   or to translate it into languages other than English.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Existing Definitions and Requirements
   4.  General Reference Topology
   5.  Test Considerations
       5.1.  Failover Events [TERM-ID]
       5.2.  Failure Detection [TERM-ID]
       5.3.  Use of Data Traffic for MPLS Protection Benchmarking
       5.4.  LSP and Route Scaling
       5.5.  Selection of IGP
       5.6.  Restoration and Reversion [TERM-ID]
       5.7.  Offered Load
       5.8.  Tester Capabilities
   6.  Reference Test Setup
       6.1.  Link Protection
             6.1.1.  Link Protection - 1 hop primary (from PLR) and
                     1 hop backup TE tunnels
             6.1.2.  Link Protection - 1 hop primary (from PLR) and
                     2 hop backup TE tunnels
             6.1.3.  Link Protection - 2+ hop (from PLR) primary and
                     1 hop backup TE tunnels
             6.1.4.  Link Protection - 2+ hop (from PLR) primary and
                     2 hop backup TE tunnels
       6.2.  Node Protection
             6.2.1.  Node Protection - 2 hop primary (from PLR) and
                     1 hop backup TE tunnels
             6.2.2.  Node Protection - 2 hop primary (from PLR) and
                     2 hop backup TE tunnels
             6.2.3.  Node Protection - 3+ hop primary (from PLR) and
                     1 hop backup TE tunnels
             6.2.4.  Node Protection - 3+ hop primary (from PLR) and
                     2 hop backup TE tunnels
   7.  Test Methodology
       7.1.  MPLS FRR Forwarding Performance
             7.1.1.  Headend PLR Forwarding Performance
             7.1.2.  Mid-Point PLR Forwarding Performance
             7.1.3.  Egress PLR Forwarding Performance
       7.2.  Headend PLR with Link Failure
       7.3.  Mid-Point PLR with Link Failure
       7.4.  Headend PLR with Node Failure
       7.5.  Mid-Point PLR with Node Failure
   8.  Reporting Format
   9.  Security Considerations
   10. IANA Considerations
   11. References
       11.1.  Informative References
       11.2.  Normative References
   Appendix A.  Acknowledgements
   Appendix B.  Fast Reroute Scalability Table
   Appendix C.  Abbreviations
   Authors' Addresses
1.  Introduction

   This document describes a methodology for benchmarking MPLS-based
   protection mechanisms.  The new terminology that this document
   introduces is defined in [TERM-ID].

   MPLS-based protection mechanisms provide fast recovery of
   real-time services from planned or unplanned link or node
   failures.  MPLS protection mechanisms are generally deployed in
   network infrastructures where MPLS is used to provision
   point-to-point traffic engineered tunnels (tunnels).  MPLS-based
   protection mechanisms reduce the service disruption period by
   minimizing recovery time from the most common failures.

   Network elements from different manufacturers respond differently
   to network failures, which impacts the network's ability to
   recover from failures and the speed of that recovery.  It is
   therefore imperative for service providers to have a common
   benchmark with which to understand the performance behaviors of
   network elements.

   Two factors impact service availability: the frequency of failures
   and the duration for which the failures persist.  Failures can be
   classified further into two types: correlated and uncorrelated.
   Correlated and uncorrelated failures may be planned or unplanned.

   Planned failures are predictable.  Network implementations should
   be able to handle both planned and unplanned failures and recover
   gracefully within a time frame that maintains service assurance.
   Hence, failover recovery time is one of the most important
   benchmarks that a service provider considers when choosing the
   building blocks for its network infrastructure.

   A correlated failure is the simultaneous occurrence of two or more
   failures.  A typical example is the failure of a logical resource
   (e.g., layer-2 links) due to a dependency on a common physical
   resource (e.g., a common conduit) that fails.  Within the context
   of MPLS protection mechanisms, failures that arise due to Shared
   Risk Link Groups (SRLGs) [MPLS-FRR-EXT] can be considered
   correlated failures.  Not all correlated failures are predictable
   in advance, for example, those caused by natural disasters.

   MPLS Fast Reroute (MPLS-FRR) allows for the possibility that the
   Label Switched Paths can be re-optimized in the minutes following
   Failover.  IP traffic would be re-routed according to the
   preferred path for the post-failure topology.  Thus, MPLS-FRR
   includes an additional step to the general model:

   (1)  Failover Event - Primary Path (Working Path) fails

   (2)  Failure Detection - Failover Event is detected

   (3)  a.  Failover - Working Path switched to Backup Path

        b.  Re-Optimization of Working Path (possible change from
            Backup Path)

   (4)  Restoration - Primary Path recovers from a Failover Event

   (5)  Reversion (optional) - Working Path returns to Primary Path

2.  Document Scope

   This document provides detailed test cases along with the
   different topologies and scenarios that should be considered to
   effectively benchmark MPLS protection mechanisms and failover
   times on the data plane.  Different Failover Events and scaling
   considerations are also provided in this document.

   All benchmarking test cases defined in this document apply to both
   facility backup and local protection enabled in detour mode.  The
   test cases cover all possible failure scenarios, and the
   associated procedures benchmark the performance of the Device
   Under Test (DUT) in recovering from failures.  Data plane traffic
   is used to benchmark failover times.

   Benchmarking of correlated failures is out of scope of this
   document.  Protection from Bidirectional Forwarding Detection
   (BFD) is outside the scope of this document.

   As described above, MPLS-FRR may include a re-optimization of the
   Working Path, with possible packet transfer impairments.
   Characterization of re-optimization is beyond the scope of this
   memo.

3.  Existing Definitions and Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in BCP 14,
   RFC 2119 [Br97].  RFC 2119 defines the use of these key words to
   help make the intent of standards track documents as clear as
   possible.  While this document uses these keywords, it is not a
   standards track document.

   The reader is assumed to be familiar with the commonly used MPLS
   terminology, some of which is defined in [MPLS-FRR-EXT].

   This document uses much of the terminology defined in [TERM-ID].
   It also uses existing terminology defined in other BMWG work.
   Examples include, but are not limited to:

      Throughput                [Ref. [Br91], section 3.17]
      Device Under Test (DUT)   [Ref. [Ma98], section 3.1.1]
      System Under Test (SUT)   [Ref. [Ma98], section 3.1.2]
      Out-of-order Packet       [Ref. [Po06], section 3.3.2]
      Duplicate Packet          [Ref. [Po06], section 3.3.3]

4.  General Reference Topology

   Figure 1 illustrates the basic reference testbed and is applicable
   to all the test cases defined in this document.  The Tester is
   comprised of a Traffic Generator (TG) and a Test Analyzer (TA).
   The Tester is directly connected to the DUT.  It sends and
   receives IP traffic to the tunnel ingress and performs signaling
   protocol emulation to simulate real network scenarios in a lab
   environment.  The Tester may also support MPLS-TE signaling so
   that it can act as the ingress node of the MPLS tunnel.

                 +---------------------------+
                 |        +------------------|-----------------+
                 |        |                  |                 |
     +--------+  +--------+    +--------+    +--------+    +--------+
  TG-|   R1   |--|   R2   |----|   R3   |    |   R4   |    |   R5   |-TA
     |        |--|        |----|        |----|        |----|        |
     +--------+  +--------+    +--------+    +--------+    +--------+
          |           |             |             |
          |           |             |             |
          |      +--------+         |             |
          +------|   R6   |---------+             |
                 |        |-----------------------+
                 +--------+

                     Fig. 1  Fast Reroute Topology

   The Tester MUST record the number of lost, duplicate, and
   reordered packets.  It should further record arrival and departure
   times so that Failover Time, Additive Latency, and Reversion Time
   can be measured.  The Tester may be a single device or a test
   system emulating all the different roles along a primary or backup
   path.

   The label stack is dependent on the following three entities:

   (1)  the type of protection (link vs. node)

   (2)  the number of remaining hops of the primary tunnel from the
        PLR

   (3)  the number of remaining hops of the backup tunnel from the
        PLR

   Due to this dependency, it is RECOMMENDED that the benchmarking of
   failover times be performed on all the topologies provided in
   Section 6.

5.  Test Considerations

   This section discusses the fundamentals of MPLS protection
   testing:

   (1)  the types of network events that cause failover

   (2)  indications for failover

   (3)  the use of data traffic

   (4)  traffic generation

   (5)  LSP scaling

   (6)  reversion of LSPs

   (7)  IGP selection

5.1.  Failover Events [TERM-ID]

   The failover to the backup tunnel is primarily triggered by either
   link or node failures observed downstream of the Point of Local
   Repair (PLR).  Some of these failure events are listed below.

   Link Failure Events
      - Interface Shutdown on PLR side with POS Alarm
      - Interface Shutdown on remote side with POS Alarm
      - Interface Shutdown on PLR side with RSVP hello enabled
      - Interface Shutdown on remote side with RSVP hello enabled
      - Interface Shutdown on PLR side with BFD
      - Interface Shutdown on remote side with BFD
      - Fiber Pull on the PLR side (both TX & RX, or just the TX)
      - Fiber Pull on the remote side (both TX & RX, or just the RX)
      - Online Insertion and Removal (OIR) on PLR side
      - OIR on remote side
      - Sub-interface failure (e.g., shutting down a VLAN)
      - Parent interface shutdown (an interface bearing multiple
        sub-interfaces)

   Node Failure Events
      - A system reload initiated either by a graceful shutdown or by
        a power failure
      - A system crash due to a software failure or an assert
5.2.  Failure Detection [TERM-ID]

   Link failure detection time depends on the link type and the
   failure detection protocols running.  For SONET/SDH, the alarm
   type (such as LOS, AIS, or RDI) can be used.  Other link types
   have layer-2 alarms, but they may not provide a short enough
   failure detection time.  Ethernet-based links do not have layer-2
   failure indicators and therefore rely on layer-3 signaling for
   failure detection.  However, for directly connected devices, the
   remote fault indication in the Ethernet auto-negotiation scheme
   could be considered a type of layer-2 link failure indicator.

   MPLS has different failure detection techniques, such as BFD or
   the use of RSVP hellos.  These methods can be used for the layer-3
   failure indicators required by Ethernet-based links, or for some
   other non-Ethernet-based links, to help improve failure detection
   time.

   The test procedures in this document can be used for local or
   remote failure scenarios, for comprehensive benchmarking and to
   evaluate failover performance independent of the failure detection
   techniques.

5.3.  Use of Data Traffic for MPLS Protection Benchmarking

   End customers currently use packet loss as a key metric for
   Failover Time [TERM-ID].  Failover Packet Loss [TERM-ID] is an
   externally observable event that has a direct impact on
   application performance.  MPLS protection is expected to minimize
   packet loss in the event of a failure.  For this reason, it is
   important to develop a standard router benchmarking methodology
   for measuring MPLS protection that uses packet loss as a metric.
   At a known rate of forwarding, packet loss can be measured and the
   failover time can be determined.  Measurement of control plane
   signaling to establish backup paths is not enough to verify
   failover.  Failover is best determined when packets are actually
   traversing the backup path.

   An additional benefit of using packet loss to calculate failover
   time is that it allows the use of a black-box test environment.
   Data traffic is offered at line rate to the DUT, an emulated
   network failure event is forced to occur, and packet loss is
   externally measured to calculate the convergence time.  This setup
   is independent of the DUT architecture.

   In addition, this methodology considers the packets in error and
   the duplicate packets that could be generated during the failover
   process.  The methodologies consider lost, out-of-order, and
   duplicate packets to be impaired packets that contribute to the
   Failover Time, as illustrated by the sketch below.
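   The loss-at-known-rate approach above can be illustrated with a
   short sketch.  The following Python fragment is a non-normative
   illustration; the counter names and the example numbers are
   assumptions for the example, not part of this methodology:

      def failover_time_ms(tx_packets, rx_packets, offered_load_pps,
                           out_of_order=0, duplicates=0):
          """Derive a failover time from black-box tester counters.

          At a constant offered load (pps), each impaired packet
          (lost, out-of-order, or duplicate) accounts for
          1/offered_load_pps seconds of disruption.
          """
          lost = tx_packets - rx_packets
          impaired = lost + out_of_order + duplicates
          return impaired / offered_load_pps * 1000.0

      # Example: at 1,000,000 pps offered load, 52,300 impaired
      # packets correspond to a 52.3 ms failover time.
      print(failover_time_ms(60_000_000, 59_947_700, 1_000_000))

   Note that this is simply the Packet-Loss Based Method (PLBM) of
   Section 8, extended to count out-of-order and duplicate packets as
   impaired.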
5.4.  LSP and Route Scaling

   Failover time performance may vary with the number of established
   primary and backup tunnel Label Switched Paths (LSPs) and
   installed routes.  However, the procedure outlined here should be
   used for any number of LSPs (L) and any number of routes protected
   by the PLR (R).  The values of L and R must be recorded.

5.5.  Selection of IGP

   The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
   proposed here.  See [IGP-METH] for IGP options to consider and
   report.

5.6.  Restoration and Reversion [TERM-ID]

   Fast Reroute provides a method to return or restore an original
   primary LSP upon recovery from the failure (Restoration) and to
   switch traffic from the Backup Path to the restored Primary Path
   (Reversion).  In MPLS-FRR, Reversion can be implemented as Global
   Reversion or Local Reversion.  It is important to include
   Restoration and Reversion as a step in each test case, to measure
   the amount of packet loss, out-of-order packets, or duplicate
   packets that is produced.

   Note: In addition to Restoration and Reversion, re-optimization
   can take place while the failure has not yet been recovered,
   depending on the user configuration and the re-optimization
   timers.

5.7.  Offered Load

   It is suggested that there be one or more traffic streams, as long
   as there is a steady and constant rate of flow for all the
   streams.  In order to monitor the DUT performance for recovery
   times, a set of route prefixes should be advertised before traffic
   is sent.  The traffic should be configured towards these routes.

   At least 16 flows should be used, and more if possible.
   Prefix-dependency behaviors are key in IP, and tests with
   route-specific flows spread across the routing table will reveal
   this dependency.  Generating traffic to all of the prefixes
   reachable by the protected tunnel in a round-robin fashion (where
   the traffic is destined to all the prefixes, but to one prefix at
   a time in a cyclic manner) is not recommended.  The reason is
   that, if there are many prefixes reachable through the LSP, the
   time interval between two successive packets destined to the same
   prefix may be significantly long and may be comparable to the
   failover time being measured, which does not aid in getting an
   accurate failover measurement.  The sketch below quantifies this
   effect.
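   The inter-packet interval concern is easy to quantify.  A minimal
   Python sketch (the traffic rate and prefix count below are
   illustrative assumptions, not recommended values):

      def per_prefix_interval_ms(offered_load_pps, num_prefixes):
          """Interval between two successive packets destined to the
          same prefix when one aggregate stream is cycled
          round-robin across all prefixes."""
          return num_prefixes / offered_load_pps * 1000.0

      # Cycling 10,000 prefixes at 100,000 pps means each prefix
      # sees a packet only every 100 ms -- the same order of
      # magnitude as the failover time being measured, so sub-100 ms
      # loss events can be missed entirely.
      print(per_prefix_interval_ms(100_000, 10_000))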
5.8.  Tester Capabilities

   It is RECOMMENDED that the Tester used to execute each test case
   have the following capabilities:

   1.  Ability to establish MPLS-TE tunnels and push/pop labels.

   2.  Ability to produce a Failover Event [TERM-ID].

   3.  Ability to insert a timestamp in each data packet's IP
       payload.

   4.  An internal time clock to control timestamping, time
       measurements, and time calculations.

   5.  Ability to disable or tune specific layer-2 and layer-3
       protocol functions on any interface(s).

   6.  Ability to react upon the receipt of a path error from the
       PLR.

   The Tester MAY be capable of making non-data-plane convergence
   observations and of using those observations for measurements.

6.  Reference Test Setup

   In addition to the general reference topology shown in Figure 1,
   this section provides detailed insight into the various proposed
   test setups that should be considered for comprehensively
   benchmarking the failover time in different roles along the
   primary tunnel.

   This section proposes a set of topologies that covers all the
   scenarios for local protection.  All of these topologies can be
   mapped to the reference topology shown in Figure 1.  The
   topologies provided in this section refer to the testbed required
   to benchmark failover time when the DUT is configured as a PLR in
   either the headend or mid-point role.  Provided with each topology
   below is the label stack at the PLR.  Penultimate Hop Popping
   (PHP) MAY be used and MUST be reported when used.

   Figures 2 through 9 use the following convention:

      a)  HE is Headend
      b)  TE is Tail-End
      c)  MID is Mid point
      d)  MP is Merge Point
      e)  PLR is Point of Local Repair
      f)  PRI is Primary Path
      g)  BKP denotes Backup Path and Nodes

6.1.  Link Protection

6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop backup
        TE tunnels

        +-------+    +--------+     +--------+
        |  R1   |    |   R2   | PRI |   R3   |
     TG-|  HE   |----|  MID   |-----|   TE   |-TA
        |       |    |  PLR   |-----|        |
        +-------+    +--------+ BKP +--------+

                        Figure 2.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           0                 0
   Layer3 VPN (PE-PE)         1                 1
   Layer3 VPN (PE-P)          2                 2
   Layer2 VC (PE-PE)          1                 1
   Layer2 VC (PE-P)           2                 2
   Mid-point LSPs             0                 0

6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop backup
        TE tunnels

        +-------+    +--------+     +--------+
        |  R1   |    |   R2   | PRI |   R3   |
     TG-|  HE   |----|  MID   |-----|   TE   |-TA
        |       |    |  PLR   |     |        |
        +-------+    +--------+     +--------+
                      BKP |              |
                          |  +--------+  |
                          |  |   R6   |  |
                          +--|  BKP   |--+
                             |  MID   |
                             +--------+

                        Figure 3.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           0                 1
   Layer3 VPN (PE-PE)         1                 2
   Layer3 VPN (PE-P)          2                 3
   Layer2 VC (PE-PE)          1                 2
   Layer2 VC (PE-P)           2                 3
   Mid-point LSPs             0                 1

6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop backup
        TE tunnels

        +--------+    +--------+     +--------+     +--------+
        |   R1   |    |   R2   | PRI |   R3   | PRI |   R4   |
     TG-|   HE   |----|  MID   |-----|  MID   |-----|   TE   |-TA
        |        |    |  PLR   |-----|        |     |        |
        +--------+    +--------+ BKP +--------+     +--------+

                        Figure 4.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           1                 1
   Layer3 VPN (PE-PE)         2                 2
   Layer3 VPN (PE-P)          3                 3
   Layer2 VC (PE-PE)          2                 2
   Layer2 VC (PE-P)           3                 3
   Mid-point LSPs             1                 1

6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop backup
        TE tunnels

        +--------+    +--------+ PRI +--------+ PRI +--------+
        |   R1   |    |   R2   |     |   R3   |     |   R4   |
     TG-|   HE   |----|  MID   |-----|  MID   |-----|   TE   |-TA
        |        |    |  PLR   |     |        |     |        |
        +--------+    +--------+     +--------+     +--------+
                       BKP |              |
                           |  +--------+  |
                           |  |   R6   |  |
                           +--|  BKP   |--+
                              |  MID   |
                              +--------+

                        Figure 5.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           1                 2
   Layer3 VPN (PE-PE)         2                 3
   Layer3 VPN (PE-P)          3                 4
   Layer2 VC (PE-PE)          2                 3
   Layer2 VC (PE-P)           3                 4
   Mid-point LSPs             1                 2

6.2.  Node Protection

6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop backup
        TE tunnels

        +--------+    +--------+ PRI +--------+ PRI +--------+
        |   R1   |    |   R2   |     |   R3   |     |   R4   |
     TG-|   HE   |----|  MID   |-----|  MID   |-----|   TE   |-TA
        |        |    |  PLR   |     |        |     |        |
        +--------+    +--------+     +--------+     +--------+
                       BKP |                             |
                           +-----------------------------+

                        Figure 6.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           1                 0
   Layer3 VPN (PE-PE)         2                 1
   Layer3 VPN (PE-P)          3                 2
   Layer2 VC (PE-PE)          2                 1
   Layer2 VC (PE-P)           3                 2
   Mid-point LSPs             1                 0

6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop backup
        TE tunnels

        +--------+    +--------+ PRI +--------+ PRI +--------+
        |   R1   |    |   R2   |     |   R3   |     |   R4   |
     TG-|   HE   |----|  MID   |-----|  MID   |-----|   TE   |-TA
        |        |    |  PLR   |     |        |     |        |
        +--------+    +--------+     +--------+     +--------+
                       BKP |                             |
                           |         +--------+          |
                           |         |   R6   |          |
                           +---------|  BKP   |----------+
                                     |  MID   |
                                     +--------+

                        Figure 7.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           1                 1
   Layer3 VPN (PE-PE)         2                 2
   Layer3 VPN (PE-P)          3                 3
   Layer2 VC (PE-PE)          2                 2
   Layer2 VC (PE-P)           3                 3
   Mid-point LSPs             1                 1
6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop backup
        TE tunnels

     +--------+  +--------+PRI+--------+PRI+--------+PRI+--------+
     |   R1   |  |   R2   |   |   R3   |   |   R4   |   |   R5   |
  TG-|   HE   |--|  MID   |---|  MID   |---|   MP   |---|   TE   |-TA
     |        |  |  PLR   |   |        |   |        |   |        |
     +--------+  +--------+   +--------+   +--------+   +--------+
                  BKP |                        |
                      +------------------------+

                        Figure 8.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           1                 1
   Layer3 VPN (PE-PE)         2                 2
   Layer3 VPN (PE-P)          3                 3
   Layer2 VC (PE-PE)          2                 2
   Layer2 VC (PE-P)           3                 3
   Mid-point LSPs             1                 1

6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2 hop backup
        TE tunnels

     +--------+  +--------+   +--------+   +--------+   +--------+
     |   R1   |  |   R2   |PRI|   R3   |PRI|   R4   |PRI|   R5   |
  TG-|   HE   |--|  MID   |---|  MID   |---|   MP   |---|   TE   |-TA
     |        |  |  PLR   |   |        |   |        |   |        |
     +--------+  +--------+   +--------+   +--------+   +--------+
                  BKP |                        |
                      |       +--------+       |
                      |       |   R6   |       |
                      +-------|  BKP   |-------+
                              |  MID   |
                              +--------+

                        Figure 9.

   Traffic              Num of labels     Num of labels
                        before failure    after failure

   IP TRAFFIC (P-P)           1                 2
   Layer3 VPN (PE-PE)         2                 3
   Layer3 VPN (PE-P)          3                 4
   Layer2 VC (PE-PE)          2                 3
   Layer2 VC (PE-P)           3                 4
   Mid-point LSPs             1                 2
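   The label counts in the tables of Figures 2 through 9 follow the
   dependency stated in Section 4.  The following non-normative
   Python sketch consolidates those tables; the dictionary layout and
   function name are conveniences invented for this illustration:

      # Label-stack depth at the PLR, per the Section 6 tables.
      # Base depth per traffic type (Figures 2 and 3, before failure).
      BASE = {"IP TRAFFIC (P-P)": 0, "Layer3 VPN (PE-PE)": 1,
              "Layer3 VPN (PE-P)": 2, "Layer2 VC (PE-PE)": 1,
              "Layer2 VC (PE-P)": 2, "Mid-point LSPs": 0}

      # Per topology figure: (extra TE label before failure when the
      # primary is 2+ hops from the PLR, change in depth at failover).
      TOPO = {2: (0, 0), 3: (0, +1), 4: (1, 0), 5: (1, +1),
              6: (1, -1), 7: (1, 0), 8: (1, 0), 9: (1, +1)}

      def label_depths(figure, traffic):
          """Return (labels before failure, labels after failure)."""
          before = BASE[traffic] + TOPO[figure][0]
          return before, before + TOPO[figure][1]

      # Example: Figure 5, Layer3 VPN (PE-PE) -> (2, 3), matching
      # the table in Section 6.1.4.
      print(label_depths(5, "Layer3 VPN (PE-PE)"))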
7.  Test Methodology

   The procedure described in this section can be applied to all
   eight base test cases and the associated topologies.  The backup
   and primary tunnels are configured to be alike in terms of
   bandwidth usage.  In order to benchmark failover with all possible
   label stack depths applicable as seen with current deployments, it
   is RECOMMENDED to perform all of the test cases provided in this
   section.  The forwarding performance test cases in Section 7.1
   MUST be performed prior to performing the failover test cases.

   The considerations of Section 4 of [RFC2544] are applicable when
   evaluating the results obtained using these methodologies as well.

7.1.  MPLS FRR Forwarding Performance

   Benchmarking Failover Time [TERM-ID] for MPLS protection first
   requires a baseline measurement of the forwarding performance of
   the test topology, including the DUT.  Forwarding performance is
   benchmarked by the Throughput as defined in [MPLS-FWD] and
   measured in packets per second (pps).  This section provides three
   test cases to benchmark forwarding performance, with the DUT
   configured as a Headend PLR, a Mid-Point PLR, and an Egress PLR.

7.1.1.  Headend PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as headend)
      over the primary LSP and the backup LSP.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select overlay technologies (e.g., IGP, VPN, or VC) with
          the DUT as Headend PLR.

      C.  The DUT will also have two interfaces connected to the
          traffic generator/analyzer.  (If the node downstream of the
          PLR is not a simulated node, then the ingress of the tunnel
          should have one link connected to the traffic generator,
          and the node downstream of the PLR or the egress of the
          tunnel should have a link connected to the traffic
          analyzer.)

   Procedure:

      1.   Establish the primary LSP on R2, as required by the
           topology selected.

      2.   Establish the backup LSP on R2, as required by the
           selected topology.

      3.   Verify that the primary and backup LSPs are up and that
           the primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams as described in Section 5.7.

      6.   Send MPLS traffic over the primary LSP at the Throughput
           supported by the DUT.

      7.   Record the Throughput over the primary LSP.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover, stop the offered load and
           measure the Throughput, Packet Loss, Out-of-Order Packets,
           and Duplicate Packets over the Backup LSP.

      11.  Adjust the offered load and repeat steps 6 through 10
           until the Throughput values for the primary and backup
           LSPs are equal (see the search sketch after Section
           7.1.3).

      12.  Record the Throughput.  This is the offered load that will
           be used for the Headend PLR failover test cases.

7.1.2.  Mid-Point PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as mid-point)
      over the primary LSP and the backup LSP.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select overlay technologies (e.g., IGP, VPN, or VC) with
          the DUT as Mid-Point PLR.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Procedure:

      1.   Establish the primary LSP on R1, as required by the
           topology selected.

      2.   Establish the backup LSP on R2, as required by the
           selected topology.

      3.   Verify that the primary and backup LSPs are up and that
           the primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams as described in Section 5.7.

      6.   Send MPLS traffic over the primary LSP at the Throughput
           supported by the DUT.

      7.   Record the Throughput over the primary LSP.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover, stop the offered load and
           measure the Throughput, Packet Loss, Out-of-Order Packets,
           and Duplicate Packets over the Backup LSP.

      11.  Adjust the offered load and repeat steps 6 through 10
           until the Throughput values for the primary and backup
           LSPs are equal.

      12.  Record the Throughput.  This is the offered load that will
           be used for the Mid-Point PLR failover test cases.

7.1.3.  Egress PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as egress) over
      the primary LSP and the backup LSP.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select overlay technologies (e.g., IGP, VPN, or VC) with
          the DUT as Egress PLR.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Procedure:

      1.   Establish the primary LSP on R1, as required by the
           topology selected.

      2.   Establish the backup LSP on R2, as required by the
           selected topology.

      3.   Verify that the primary and backup LSPs are up and that
           the primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams as described in Section 5.7.

      6.   Send MPLS traffic over the primary LSP at the Throughput
           supported by the DUT.

      7.   Record the Throughput over the primary LSP.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover, stop the offered load and
           measure the Throughput, Packet Loss, Out-of-Order Packets,
           and Duplicate Packets over the Backup LSP.

      11.  Adjust the offered load and repeat steps 6 through 10
           until the Throughput values for the primary and backup
           LSPs are equal.

      12.  Record the Throughput.  This is the offered load that will
           be used for the Egress PLR failover test cases.
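   Steps 6 through 11 of the three procedures above amount to a
   search for the offered load at which the primary and backup LSPs
   forward at the same rate.  The following non-normative Python
   sketch shows one way a tester could drive that search; the binary
   search strategy and the send_and_measure() hook are assumptions of
   this illustration, since the procedures only require iterating
   until the two Throughput values are equal:

      def equal_throughput_load(send_and_measure, low_pps, high_pps,
                                tolerance_pps=1):
          """Search for the offered load (pps) at which both the
          primary LSP and the backup LSP (after Failover) sustain
          the load without loss.

          send_and_measure(load) must execute steps 6 through 10 at
          the given offered load and return the tuple
          (primary_throughput_pps, backup_throughput_pps).
          """
          while high_pps - low_pps > tolerance_pps:
              load = (low_pps + high_pps) // 2
              primary, backup = send_and_measure(load)
              if min(primary, backup) >= load:
                  low_pps = load    # both LSPs sustain this rate
              else:
                  high_pps = load   # a path dropped packets; back off
          return low_pps            # highest rate both paths sustain

   The value returned is the Throughput recorded in step 12 and used
   as the offered load for the corresponding failover test cases.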
7.2.  Headend PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to the link failure
      events described in Section 5.1, experienced by the DUT, which
      is the Headend PLR.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select an overlay technology for the FRR test (e.g., IGP,
          VPN, or VC).

      C.  The DUT (Headend PLR) will also have two interfaces
          connected to the traffic generator/analyzer.  (If the node
          downstream of the PLR is not a simulated node, then the
          ingress of the tunnel should have one link connected to the
          traffic generator, and the node downstream of the PLR or
          the egress of the tunnel should have a link connected to
          the traffic analyzer.)

   Test Configuration:

      1.  Configure the number of primaries on R2 and the backups on
          R2, as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix B) from the tail-end.

   Procedure:

      Test case 7.1.1, "Headend PLR Forwarding Performance", MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1.   Establish the primary LSP on R2, as required by the
           topology selected.

      2.   Establish the backup LSP on R2, as required by the
           selected topology.

      3.   Verify that the primary and backup LSPs are up and that
           the primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams for the offered load as described
           in Section 5.7.

      6.   Provide the offered load from the tester at the Throughput
           [Br91] level obtained from test case 7.1.1.

      7.   Verify that traffic is switched over the primary LSP
           without packet loss.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover [TERM-ID], stop the offered load
           and measure the total Failover Packet Loss [TERM-ID].

      11.  Calculate the Failover Time [TERM-ID] benchmark using the
           selected Failover Time Calculation Method (TBLM, PLBM, or
           TBM) [TERM-ID].

      12.  Restart the offered load and restore the primary LSP to
           verify that Reversion [TERM-ID] occurs, and measure the
           Reversion Packet Loss [TERM-ID].

      13.  Calculate the Reversion Time [TERM-ID] benchmark using the
           selected Failover Time Calculation Method (TBLM, PLBM, or
           TBM) [TERM-ID].

      14.  Verify that the headend signals a new LSP and that
           protection is in place again.

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in Section 5.1.
7.3.  Mid-Point PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to the link failure
      events described in Section 5.1, experienced by the DUT, which
      is the Mid-Point PLR.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select an overlay technology for the FRR test as Mid-Point
          LSPs.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Test Configuration:

      1.  Configure the number of primaries on R1 and the backups on
          R2, as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix B) from the tail-end.

   Procedure:

      Test case 7.1.2, "Mid-Point PLR Forwarding Performance", MUST
      be completed first to obtain the Throughput to use as the
      offered load.

      1.  Establish the primary LSP on R1, as required by the
          topology selected.

      2.  Establish the backup LSP on R2, as required by the selected
          topology.

      3.  Perform steps 3 through 14 from Section 7.2, "Headend PLR
          with Link Failure".

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in Section 5.1.

7.4.  Headend PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to the node failure
      events described in Section 5.1, experienced by the DUT, which
      is the Headend PLR.

   Test Setup:

      A.  Select any one topology from Section 6.

      B.  Select an overlay technology for the FRR test (e.g., IGP,
          VPN, or VC).

      C.  The DUT will also have two interfaces connected to the
          traffic generator/analyzer.

   Test Configuration:

      1.  Configure the number of primaries on R2 and the backups on
          R2, as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix B) from the tail-end.

   Procedure:

      Test case 7.1.1, "Headend PLR Forwarding Performance", MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1.  Establish the primary LSP on R2, as required by the
          topology selected.

      2.  Establish the backup LSP on R2, as required by the selected
          topology.

      3.  Verify that the primary and backup LSPs are up and that the
          primary is protected.

      4.  Verify Fast Reroute protection.

      5.  Set up traffic streams for the offered load as described in
          Section 5.7.

      6.  Provide the offered load from the tester at the Throughput
          [Br91] level obtained from test case 7.1.1.

      7.  Verify that traffic is switched over the primary LSP
          without packet loss.

      8.  Trigger a node failure as described in Section 5.1.

      9.  Perform steps 9 through 14 in Section 7.2, "Headend PLR
          with Link Failure".

   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in Section 5.1.

7.5.  Mid-Point PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to the node failure
      events described in Section 5.1, experienced by the DUT, which
      is the Mid-Point PLR.

   Test Setup:

      A.  Select any one topology from Sections 6.1 to 6.2.

      B.  Select an overlay technology for the FRR test as Mid-Point
          LSPs.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Test Configuration:

      1.  Configure the number of primaries on R1 and the backups on
          R2, as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix B) from the tail-end.

   Procedure:

      Test case 7.1.2, "Mid-Point PLR Forwarding Performance", MUST
      be completed first to obtain the Throughput to use as the
      offered load.

      1.  Establish the primary LSP on R1, as required by the
          topology selected.

      2.  Establish the backup LSP on R2, as required by the selected
          topology.

      3.  Verify that the primary and backup LSPs are up and that the
          primary is protected.
      4.  Verify Fast Reroute protection.

      5.  Set up traffic streams for the offered load as described in
          Section 5.7.

      6.  Provide the offered load from the tester at the Throughput
          [Br91] level obtained from test case 7.1.2.

      7.  Verify that traffic is switched over the primary LSP
          without packet loss.

      8.  Trigger a node failure as described in Section 5.1.

      9.  Perform steps 9 through 14 in Section 7.2, "Headend PLR
          with Link Failure".

   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in Section 5.1.

8.  Reporting Format

   For each test, it is recommended that the results be reported in
   the following format.

      Parameter                          Units

      IGP used for the test              ISIS-TE / OSPF-TE

      Interface types                    GigE, POS, ATM, VLAN, etc.

      Packet sizes offered to the DUT    Bytes (at layer 3)

      Offered load                       Packets per second

      IGP routes advertised              Number of IGP routes

      Penultimate Hop Popping            Used / Not Used

      RSVP hello timers                  Milliseconds

      Number of protected tunnels        Number of tunnels

      Number of VPN routes installed     Number of VPN routes
      on the Headend

      Number of VC tunnels               Number of VC tunnels

      Number of mid-point tunnels        Number of tunnels

      Number of prefixes protected by    Number of LSPs
      primary

      Topology being used                Section number and figure
                                         reference

      Failover Event                     Event type

      Re-optimization                    Yes / No

   Benchmarks (to be recorded for each test case):

      Failover -
         Failover Time                      seconds
         Failover Packet Loss               packets
         Additive Backup Delay              seconds
         Out-of-Order Packets               packets
         Duplicate Packets                  packets
         Failover Time Calculation Method   method used

      Reversion -
         Reversion Time                     seconds
         Reversion Packet Loss              packets
         Additive Backup Delay              seconds
         Out-of-Order Packets               packets
         Duplicate Packets                  packets
         Failover Time Calculation Method   method used

   The Failover Time suggested above is calculated using one of the
   following three methods:

   1.  Packet-Loss Based Method (PLBM): (number of packets dropped /
       packets per second) * 1000 milliseconds.  This method could
       also be referred to as the Loss-Derived Method.

   2.  Time-Based Loss Method (TBLM): This method relies on the
       ability of the traffic generators to provide statistics that
       reveal the duration of the failure in milliseconds, based on
       when the packet loss occurred (the interval between non-zero
       packet loss and zero loss).

   3.  Timestamp Based Method (TBM): This method of failover
       calculation is based on the timestamp that gets transmitted as
       payload in the packets originated by the generator.  The
       Traffic Analyzer records the timestamp of the last packet
       received before the failover event and of the first packet
       received after the failover, and derives the failover time
       from the difference between these two timestamps.  Note: The
       payload could also contain sequence numbers for calculating
       out-of-order packets and duplicate packets.

   The Timestamp Based Method would be able to detect Reversion
   impairments beyond loss; thus, it is the RECOMMENDED Failover Time
   calculation method.
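   For concreteness, the three calculation methods can be sketched in
   Python as follows.  This is a non-normative illustration; the
   argument names and the example numbers are assumptions of the
   illustration, and the per-interval loss statistics and payload
   timestamps are assumed to be available from the tester:

      def plbm_ms(packets_lost, offered_load_pps):
          """Packet-Loss Based Method (PLBM)."""
          return packets_lost / offered_load_pps * 1000.0

      def tblm_ms(loss_interval_ms):
          """Time-Based Loss Method (TBLM): tester-reported duration
          (ms) of the interval between the onset of non-zero packet
          loss and the return to zero loss."""
          return loss_interval_ms

      def tbm_ms(last_ts_before_ms, first_ts_after_ms):
          """Timestamp Based Method (TBM): difference between the
          payload timestamp of the last packet received before the
          Failover Event and that of the first packet received
          after Failover."""
          return first_ts_after_ms - last_ts_before_ms

      # Example: 45,000 packets lost at an offered load of
      # 1,000,000 pps gives a 45.0 ms Failover Time under PLBM.
      print(plbm_ms(45_000, 1_000_000))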
9.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the
   constraints specified in the sections above.

   The benchmarking network topology will be an independent test
   setup and MUST NOT be connected to devices that may forward the
   test traffic into a production network or misroute traffic to the
   test management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

10.  IANA Considerations

   This document does not require any new allocations by IANA.

11.  References

11.1.  Informative References

   [MPLS-FRR-EXT]
              Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.

11.2.  Normative References

   [IGP-METH] Poretsky, S., Imhoff, B., and K. Michielsen,
              "Terminology for Benchmarking Link-State IGP Data Plane
              Route Convergence",
              draft-ietf-bmwg-igp-dataplane-conv-term-23 (work in
              progress), February 2011.

   [TERM-ID]  Papneja, R., Poretsky, S., Vapiwala, S., and J.
              Karthik, "Benchmarking Terminology for Protection
              Performance", draft-ietf-bmwg-protection-term-08 (work
              in progress), December 2009.

   [Br91]     Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [Br97]     Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [Ma98]     Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544,
              March 1999.

   [Po06]     Poretsky, S., Perser, J., Erramilli, S., and S.
              Khurana, "Terminology for Benchmarking Network-layer
              Traffic Control Mechanisms", RFC 4689, October 2006.

   [MPLS-FWD] Akhter, A., Asati, R., and C. Pignataro, "MPLS
              Forwarding Benchmarking Methodology for IP Flows",
              RFC 5695, November 2009.

Appendix A.  Acknowledgements

   We would like to thank Jean-Philippe Vasseur for his invaluable
   input to the document, and Curtis Villamizar for his contribution
   in suggesting text on the definition of, and the need for,
   benchmarking correlated failures.

   Additionally, we would like to thank Al Morton, Arun Gandhi, Amrit
   Hanspal, Karu Ratnam, Raveesh Janardan, Andrey Kiselev, and Mohan
   Nanduri for their formal reviews of this document.

Appendix B.  Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of Fast Reroute implementations.  It also recommends
   the typical numbers for IGP/VPNv4 prefixes, LSP tunnels, and VC
   entries.  Based on the features supported by the DUT, the
   appropriate scaling limits can be used for the testbed.

   B.1.  FRR IGP Table

      No. of Headend TE Tunnels     IGP Prefixes

      1                             100
      1                             500
      1                             1000
      1                             2000
      1                             5000
      2 (Load Balance)              100
      2 (Load Balance)              500
      2 (Load Balance)              1000
      2 (Load Balance)              2000
      2 (Load Balance)              5000
      100                           100
      500                           500
      1000                          1000
      2000                          2000
   B.2.  FRR VPN Table

      No. of Headend TE Tunnels     VPNv4 Prefixes

      1                             100
      1                             500
      1                             1000
      1                             2000
      1                             5000
      1                             10000
      1                             20000
      1                             Max
      2 (Load Balance)              100
      2 (Load Balance)              500
      2 (Load Balance)              1000
      2 (Load Balance)              2000
      2 (Load Balance)              5000
      2 (Load Balance)              10000
      2 (Load Balance)              20000
      2 (Load Balance)              Max

   B.3.  FRR Mid-Point LSP Table

      The number of mid-point TE LSPs could be configured at the
      recommended levels: 100, 500, 1000, 2000, or the maximum
      supported number.

   B.4.  FRR VC Table

      No. of Headend TE Tunnels     VC Entries

      1                             100
      1                             500
      1                             1000
      1                             2000
      1                             Max
      100                           100
      500                           500
      1000                          1000
      2000                          2000

Appendix C.  Abbreviations

   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   IP     - Internet Protocol
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multi Protocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   RSVP   - Resource reSerVation Protocol
   SRLG   - Shared Risk Link Group
   TA     - Traffic Analyzer
   TE     - Traffic Engineering
   TG     - Traffic Generator
   VC     - Virtual Circuit
   VPN    - Virtual Private Network

Authors' Addresses

   Rajiv Papneja
   Isocore
   12359 Sunrise Valley Dr. STE 100
   Reston, VA 20191
   USA

   Email: rpapneja@isocore.com

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA

   Email: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA

   Email: jkarthik@cisco.com

   Scott Poretsky
   Allot Communications
   USA

   Email: sporetsky@allot.com

   Shankar Rao
   Qwest Communications
   950 17th Street
   Suite 1900
   Denver, CO 80210
   USA

   Email: shankar.rao@qwest.com

   Jean-Louis Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France

   Email: jeanlouis.leroux@orange-ft.com