Network Working Group                                         R. Papneja
Internet-Draft                                       Huawei Technologies
Intended status: Informational                               S. Vapiwala
Expires: May 13, 2013                                         J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                    Allot Communications
                                                                  S. Rao
                                                    Qwest Communications
                                                             JL. Le Roux
                                                          France Telecom
                                                       November 14, 2012


     Methodology for Benchmarking MPLS-TE Fast Reroute Protection
                  draft-ietf-bmwg-protection-meth-13.txt

Abstract

   This draft describes the methodology for benchmarking MPLS Fast
   Reroute (FRR) protection mechanisms for link and node protection.
   This document provides test methodologies and testbed setup for
   measuring failover times of Fast Reroute techniques while
   considering factors (such as underlying links) that might impact
   recovery times for real-time applications bound to MPLS traffic
   engineered (MPLS-TE) tunnels.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 9, 2013.
Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Existing Definitions and Requirements
   4.  General Reference Topology
   5.  Test Considerations
     5.1.  Failover Events [RFC 6414]
     5.2.  Failure Detection [RFC 6414]
     5.3.  Use of Data Traffic for MPLS Protection Benchmarking
     5.4.  LSP and Route Scaling
     5.5.  Selection of IGP
     5.6.  Restoration and Reversion [RFC 6414]
     5.7.  Offered Load
     5.8.  Tester Capabilities
     5.9.  Failover Time Measurement Methods
   6.  Reference Test Setup
     6.1.  Link Protection
       6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop
               backup TE tunnels
       6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop
               backup TE tunnels
       6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop
               backup TE tunnels
     6.2.  Node Protection
       6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop
               backup TE tunnels
       6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2 hop
               backup TE tunnels
   7.  Test Methodology
     7.1.  MPLS FRR Forwarding Performance
       7.1.1.  Headend PLR Forwarding Performance
       7.1.2.  Mid-Point PLR Forwarding Performance
     7.2.  Headend PLR with Link Failure
     7.3.  Mid-Point PLR with Link Failure
     7.4.  Headend PLR with Node Failure
     7.5.  Mid-Point PLR with Node Failure
   8.  Reporting Format
   9.  Security Considerations
   10. IANA Considerations
   11. Acknowledgements
   12. References
     12.1.  Informative References
     12.2.  Normative References
   Appendix A.  Fast Reroute Scalability Table
   Appendix B.  Abbreviations
   Authors' Addresses

1.  Introduction

   This document describes the methodology for benchmarking MPLS Fast
   Reroute (FRR) protection mechanisms.  This document uses much of the
   terminology defined in [RFC 6414].

   Protection mechanisms provide recovery of client services from
   planned or unplanned link or node failures.  MPLS FRR protection
   mechanisms are generally deployed in a network infrastructure where
   MPLS is used to provision point-to-point traffic-engineered tunnels.
   MPLS FRR protection mechanisms aim to reduce the service disruption
   period by minimizing the recovery time from the most common
   failures.

   Network elements from different manufacturers respond differently to
   network failures, which affects the network's failure-recovery
   performance.  It is therefore imperative for service providers to
   have a common benchmark with which to understand the performance
   behaviors of network elements.

   There are two factors impacting service availability: the frequency
   of failures and the duration for which the failures persist.
   Failures can be classified further into two types: correlated and
   uncorrelated.  Correlated and uncorrelated failures may be planned
   or unplanned.

   Planned failures are predictable.  Network implementations should be
   able to handle both planned and unplanned failures and recover
   gracefully within a time frame that maintains service assurance.
   Hence, failover recovery time is one of the most important
   benchmarks that a service provider considers in choosing the
   building blocks for its network infrastructure.

   A correlated failure is the result of the occurrence of two or more
   failures.  A typical example is the failure of a logical resource
   (e.g., layer-2 links) due to a dependency on a common physical
   resource (e.g., a common conduit) that fails.  Within the context of
   MPLS protection mechanisms, failures that arise due to Shared Risk
   Link Groups (SRLG) [RFC 4090] can be considered correlated failures.

   MPLS FRR [RFC 4090] allows for the possibility that the Label
   Switched Paths can be re-optimized in the minutes following
   Failover.  IP traffic would be re-routed according to the preferred
   path for the post-failure topology.  Thus, MPLS FRR may include
   additional steps following the occurrence of the Failure Detection
   [RFC 6414] and Failover Event [RFC 6414]:

   (1)  Failover Event - Primary Path (Working Path) fails

   (2)  Failure Detection - Failover Event is detected

   (3)  a.  Failover - Working Path switched to Backup Path

        b.  Re-optimization of Working Path (possible change from
            Backup Path)

   (4)  Restoration [RFC 6414]

   (5)  Reversion [RFC 6414]

2.  Document Scope

   This document provides detailed test cases along with the different
   topologies and scenarios that should be considered to effectively
   benchmark MPLS FRR protection mechanisms and failover times on the
   data plane.  Different Failover Events and scaling considerations
   are also provided in this document.
   All benchmarking test cases defined in this document apply to
   Facility Backup [RFC 4090].  The test cases cover a set of
   interesting failure scenarios, and the associated procedures
   benchmark the ability of the Device Under Test (DUT) to recover from
   failures.  Data plane traffic is used to benchmark failover times.

   Benchmarking of correlated failures is out of scope of this
   document.  Detection using Bidirectional Forwarding Detection (BFD)
   is outside the scope of this document but is mentioned in discussion
   sections.

   Control plane performance is outside the scope of this benchmarking.

   As described above, MPLS FRR may include a re-optimization of the
   Working Path, with possible packet transfer impairments.
   Characterization of re-optimization is beyond the scope of this
   memo.

3.  Existing Definitions and Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14 [RFC 2119].
   Although [RFC 2119] defines these key words primarily for use in
   Standards Track documents, this Informational document uses several
   of them.

   The reader is assumed to be familiar with commonly used MPLS
   terminology, some of which is defined in [RFC 4090].

   This document uses much of the terminology defined in [RFC 6414].
   It also uses existing terminology defined in other BMWG work
   [RFC 1242], [RFC 2285], [RFC 4689].  Appendix B provides the
   abbreviations used in this document.

4.  General Reference Topology

   Figure 1 illustrates the basic reference testbed and is applicable
   to all the test cases defined in this document.  The Tester
   comprises a Traffic Generator (TG), a Test Analyzer (TA), and an
   Emulator.
   The Tester is connected to the test network and, depending upon the
   test case, the DUT could vary.  The Tester sends and receives IP
   traffic to the tunnel ingress and performs signaling protocol
   emulation to simulate real network scenarios in a lab environment.
   The Tester may also support MPLS-TE signaling to act as the ingress
   node to the MPLS tunnel.  The lines in the figures represent
   physical connections.

                          +-----------------------------+
                          |              +--------------|--------------+
                          |              |              |              |
                          |              |              |              |
       +--------+     +--------+     +--------+     +--------+     +--------+
   TG--|   R1   |-----|   R2   |-----|   R3   |     |   R4   |     |   R5   |
       |        |-----|        |-----|        |-----|        |-----|        |
       +--------+     +--------+     +--------+     +--------+     +--------+
           |              |              |              |              |
           |              |              |              |              |
           |          +--------+         |              |              TA
           +----------|   R6   |---------+              |
                      |        |------------------------+
                      +--------+

                      Fig. 1  Fast Reroute Topology

   The Tester MUST record the number of lost, duplicate, and
   out-of-order packets.  It should further record arrival and
   departure times so that Failover Time, Additive Latency, and
   Reversion Time can be measured.  The Tester may be a single device
   or a test system emulating all the different roles along a primary
   or backup path.

   The label stack depends on the following three entities:

   (1)  The type of protection (link vs. node)

   (2)  The number of remaining hops of the primary tunnel from the
        PLR [RFC 6414]

   (3)  The number of remaining hops of the backup tunnel from the PLR

   Due to this dependency, it is RECOMMENDED that the benchmarking of
   failover times be performed on all the topologies provided in
   Section 6.

5.  Test Considerations

   This section discusses the fundamentals of MPLS protection testing:

   (1)  The types of network events that cause failover (Section 5.1)

   (2)  Indications for failover (Section 5.2)

   (3)  The use of data traffic (Section 5.3)

   (4)  LSP scaling (Section 5.4)

   (5)  IGP selection (Section 5.5)

   (6)  Reversion of the LSP (Section 5.6)

   (7)  Traffic generation (Section 5.7)

5.1.  Failover Events [RFC 6414]

   The failover to the backup tunnel is primarily triggered by either
   link or node failures observed downstream of the Point of Local
   Repair (PLR).  The failure events are listed below.

   Link Failure Events
   - Interface Shutdown on PLR side with POS Alarm
   - Interface Shutdown on remote side with POS Alarm
   - Interface Shutdown on PLR side with RSVP hello enabled
   - Interface Shutdown on remote side with RSVP hello enabled
   - Interface Shutdown on PLR side with BFD
   - Interface Shutdown on remote side with BFD
   - Fiber Pull on the PLR side (both TX & RX, or just the TX)
   - Fiber Pull on the remote side (both TX & RX, or just the RX)
   - Online insertion and removal (OIR) on PLR side
   - OIR on remote side
   - Sub-interface failure on PLR side (e.g., shutting down of a VLAN)
   - Sub-interface failure on remote side
   - Parent interface shutdown on PLR side (an interface bearing
     multiple sub-interfaces)
   - Parent interface shutdown on remote side

   Node Failure Events
   - A system reload initiated either by a graceful shutdown or by a
     power failure
   - A system crash due to a software failure or an assert

5.2.  Failure Detection [RFC 6414]

   Link failure detection time depends on the link type and the failure
   detection protocols running.  For SONET/SDH, the alarm type (such as
   LOS, AIS, or RDI) can be used.  Other link types have layer-2
   alarms, but they may not provide a short enough failure detection
   time.
   Ethernet-based links enabled with MPLS/IP do not have layer-2
   failure indicators and therefore rely on layer-3 signaling for
   failure detection.  However, for directly connected devices, the
   remote fault indication in the Ethernet auto-negotiation scheme
   could be considered a type of layer-2 link failure indicator.

   MPLS has different failure detection techniques, such as BFD or the
   use of RSVP hellos.  These methods can be used for the layer-3
   failure indicators required by Ethernet-based links, or for some
   other non-Ethernet-based links, to help improve failure detection
   time.  However, these fast failure detection mechanisms are out of
   scope.

   The test procedures in this document can be used for local or remote
   failure scenarios for comprehensive benchmarking and to evaluate
   failover performance independent of the failure detection
   techniques.

5.3.  Use of Data Traffic for MPLS Protection Benchmarking

   Currently, end customers use packet loss as a key metric for
   Failover Time [RFC 6414].  Failover Packet Loss [RFC 6414] is an
   externally observable event and has a direct impact on application
   performance.  MPLS protection is expected to minimize packet loss in
   the event of a failure.  For this reason, it is important to develop
   a standard router benchmarking methodology for measuring MPLS
   protection that uses packet loss as a metric.  At a known rate of
   forwarding, packet loss can be measured and the failover time can be
   determined.  Measurement of control plane signaling to establish
   backup paths is not enough to verify failover.  Failover is best
   determined when packets are actually traversing the backup path.

   An additional benefit of using packet loss for the calculation of
   failover time is that it allows the use of a black-box test
   environment.
   Data traffic is offered at line rate to the device under test (DUT),
   an emulated network failure event is forced to occur, and packet
   loss is externally measured to calculate the convergence time.  This
   setup is independent of the DUT architecture.

   In addition, this methodology considers the packets in error and the
   duplicate packets [RFC 4689] that could have been generated during
   the failover process.  The methodologies consider lost, out-of-order
   [RFC 4689], and duplicate packets to be impaired packets that
   contribute to the Failover Time.

5.4.  LSP and Route Scaling

   Failover time performance may vary with the number of established
   primary and backup tunnel Label Switched Paths (LSPs) and installed
   routes.  However, the procedure outlined here should be used for any
   number of LSPs (L) and any number of routes protected by the PLR
   (R).  The values of L and R must be recorded.

5.5.  Selection of IGP

   The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
   proposed here.  See [RFC 6412] for IGP options to consider and
   report.

5.6.  Restoration and Reversion [RFC 6414]

   Path restoration provides a method to restore an alternate primary
   LSP upon failure and to switch traffic from the Backup Path to the
   restored Primary Path (Reversion).  In MPLS-FRR, Reversion can be
   implemented as Global Reversion or Local Reversion.  It is important
   to include Restoration and Reversion as a step in each test case, to
   measure the amount of packet loss, out-of-order packets, or
   duplicate packets that is produced.

   Note: In addition to Restoration and Reversion, re-optimization can
   take place while the failure has not yet been recovered; this
   depends on user configuration and re-optimization timers.

5.7.  Offered Load

   It is suggested that there be three or more traffic streams, as long
   as there is a steady and constant rate of flow for all the streams.
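   The stream layout suggested above (three or more steady, constant-
   rate flows running concurrently) can be sketched as follows.  This
   is an illustration only: the `build_streams` helper, the prefixes
   (taken from the RFC 5737 documentation ranges), and the 300 kpps
   load are assumptions, not part of the methodology.

```python
def build_streams(total_pps, prefixes):
    # Divide a total offered load evenly across concurrent,
    # constant-rate streams, one per advertised prefix.  All streams
    # run simultaneously rather than one prefix at a time.
    if len(prefixes) < 3:
        raise ValueError("use three or more traffic streams")
    per_stream = total_pps // len(prefixes)
    return [{"dst_prefix": p, "rate_pps": per_stream} for p in prefixes]

# Hypothetical example: 300 kpps spread over three documentation prefixes.
streams = build_streams(300_000,
                        ["192.0.2.0/24", "198.51.100.0/24",
                         "203.0.113.0/24"])
```

   In a real Tester, each entry would be programmed as one generator
   stream toward a route advertised before traffic is sent.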
   In order to monitor the DUT performance for recovery times, a set of
   route prefixes should be advertised before traffic is sent.  The
   traffic should be configured towards these routes.

   Prefix-dependency behaviors are key in IP, and tests with
   route-specific flows spread across the routing table will reveal
   this dependency.  Generating traffic to all of the prefixes
   reachable by the protected tunnel in a round-robin fashion (i.e.,
   traffic destined to all the prefixes, but to one prefix at a time,
   in a cyclic manner) is not recommended: the time taken to cycle
   through all the prefixes may exceed the failover time.  This would
   reduce the granularity of the measured results, and the results
   observed may not be accurate.

5.8.  Tester Capabilities

   It is RECOMMENDED that the Tester used to execute each test case
   have the following capabilities:

   1.  Ability to establish MPLS-TE tunnels and push/pop labels.

   2.  Ability to produce a Failover Event [RFC 6414].

   3.  Ability to insert a timestamp in each data packet's IP payload.

   4.  An internal time clock to control timestamping, time
       measurements, and time calculations.

   5.  Ability to disable or tune specific layer-2 and layer-3 protocol
       functions on any interface(s).

   6.  Ability to react upon receipt of a path error from the PLR.

   The Tester MAY be capable of making non-data-plane convergence
   observations and using those observations for measurements.

5.9.  Failover Time Measurement Methods

   Failover Time is calculated using one of the following three
   methods:

   1.  Packet-Loss Based Method (PLBM): (Number of packets dropped /
       packets per second) * 1000 milliseconds.  This method could also
       be referred to as the Loss-Derived Method.

   2.  Time-Based Loss Method (TBLM): This method relies on the ability
       of the traffic generators to provide statistics which reveal the
       duration of failure in milliseconds based on when the packet
       loss occurred (the interval between non-zero packet loss and
       zero loss).

   3.  Timestamp Based Method (TBM): This method of failover
       calculation is based on the timestamp that gets transmitted as
       payload in the packets originated by the generator.  The Traffic
       Analyzer records the timestamp of the last packet received
       before the failover event and the first packet after the
       failover, and derives the failover time from the difference
       between these two timestamps.  Note: The payload could also
       contain sequence numbers for calculation of out-of-order and
       duplicate packets.

   The Timestamp Based Method is able to detect Reversion impairments
   beyond loss; thus, it is the RECOMMENDED Failover Time measurement
   method.

6.  Reference Test Setup

   In addition to the general reference topology shown in Figure 1,
   this section provides detailed insight into the various proposed
   test setups that should be considered for comprehensively
   benchmarking the failover time in different roles along the primary
   tunnel.

   This section proposes a set of topologies that covers all the
   scenarios for local protection.  All of these topologies can be
   mapped to the reference topology shown in Figure 1.  The topologies
   provided in this section refer to the testbed required to benchmark
   failover time when the DUT is configured as a PLR in either a
   headend or mid-point role.  Provided with each topology below is the
   label stack at the PLR.  Penultimate Hop Popping (PHP) MAY be used
   and must be reported when used.
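   The three Failover Time calculation methods of Section 5.9 can be
   sketched as follows.  This is a minimal illustration over
   hypothetical Tester records (per-interval loss counts, and
   per-packet sequence numbers plus payload timestamps); the function
   names and record formats are assumptions, not defined by this
   document.

```python
def plbm(packets_dropped, offered_pps):
    # 1. Packet-Loss Based Method (Loss-Derived):
    #    (number of packets dropped / packets per second) * 1000 ms.
    return packets_dropped / offered_pps * 1000.0

def tblm(interval_ms, losses_per_interval):
    # 2. Time-Based Loss Method: total duration of the generator's
    #    sampling intervals that report non-zero packet loss.
    return interval_ms * sum(1 for loss in losses_per_interval if loss > 0)

def tbm(received):
    # 3. Timestamp Based Method: difference between the payload
    #    timestamp of the last packet received before the loss gap and
    #    that of the first packet received after it.  'received' is a
    #    list of (sequence_number, tx_timestamp_ms) tuples seen by the
    #    analyzer; the sequence numbers would also support
    #    out-of-order and duplicate packet accounting.
    failover_ms = 0.0
    for (s0, t0), (s1, t1) in zip(received, received[1:]):
        if s1 != s0 + 1:                  # sequence gap: packets lost here
            failover_ms = max(failover_ms, t1 - t0)
    return failover_ms
```

   For example, 5000 packets dropped at an offered load of 100,000 pps
   yields a loss-derived Failover Time of 50 ms.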
   Figures 2 through 9 use the following conventions and are subsets of
   Figure 1:

   a)  HE is Headend
   b)  TE is Tail-End
   c)  MID is Mid point
   d)  MP is Merge Point
   e)  PLR is Point of Local Repair
   f)  PRI is Primary Path
   g)  BKP denotes Backup Path and Nodes
   h)  UR is Upstream Router

6.1.  Link Protection

6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop backup TE
        tunnels

   +--------+    +--------+    +--------+
   |   R1   |    |   R2   |PRI |   R3   |
   | UR/HE  |----| HE/MID |----| MP/TE  |
   |        |    |  PLR   |----|        |
   +--------+    +--------+BKP +--------+

                  Figure 2.

   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           0                0
   Layer3 VPN (PE-PE)         1                1
   Layer3 VPN (PE-P)          2                2
   Layer2 VC (PE-PE)          1                1
   Layer2 VC (PE-P)           2                2
   Mid-point LSPs             0                0

   Note the following:
   a) For the P-P case, R2 and R3 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R3 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, and R3 act as the HE,
      mid-point/PLR, and TE, respectively, as shown in the figure
      above.

6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop backup TE
        tunnels

   +--------+    +--------+    +--------+
   |   R1   |    |   R2   |PRI |   R3   |
   | UR/HE  |----| HE/MID |----| MP/TE  |
   |        |    |  PLR   |    |        |
   +--------+    +--------+    +--------+
                     |BKP          |
                     |  +--------+ |
                     |  |   R6   | |
                     +--|  BKP   |-+
                        |  MID   |
                        +--------+

                  Figure 3.
   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           0                1
   Layer3 VPN (PE-PE)         1                2
   Layer3 VPN (PE-P)          2                3
   Layer2 VC (PE-PE)          1                2
   Layer2 VC (PE-P)           2                3
   Mid-point LSPs             0                1

   Note the following:
   a) For the P-P case, R2 and R3 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R3 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, and R3 act as the HE,
      mid-point/PLR, and TE, respectively, as shown in the figure
      above.

6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop backup TE
        tunnels

   +--------+    +--------+    +--------+    +--------+
   |   R1   |    |   R2   |PRI |   R3   |PRI |   R4   |
   | UR/HE  |----| HE/MID |----| MP/MID |----|   TE   |
   |        |    |  PLR   |----|        |    |        |
   +--------+    +--------+BKP +--------+    +--------+

                  Figure 4.

   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                1
   Layer3 VPN (PE-PE)         2                2
   Layer3 VPN (PE-P)          3                3
   Layer2 VC (PE-PE)          2                2
   Layer2 VC (PE-P)           3                3
   Mid-point LSPs             1                1

   Note the following:
   a) For the P-P case, R2, R3, and R4 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R4 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, R3, and R4 act in the roles shown
      in the figure above (HE, mid-point/PLR, MP/mid-point, and TE,
      respectively).

6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop backup TE
        tunnels

   +--------+    +--------+    +--------+    +--------+
   |   R1   |    |   R2   |PRI |   R3   |PRI |   R4   |
   | UR/HE  |----| HE/MID |----| MP/MID |----|   TE   |
   |        |    |  PLR   |    |        |    |        |
   +--------+    +--------+    +--------+    +--------+
                     |BKP          |
                     |  +--------+ |
                     |  |   R6   | |
                     +--|  BKP   |-+
                        |  MID   |
                        +--------+

                  Figure 5.
   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                2
   Layer3 VPN (PE-PE)         2                3
   Layer3 VPN (PE-P)          3                4
   Layer2 VC (PE-PE)          2                3
   Layer2 VC (PE-P)           3                4
   Mid-point LSPs             1                2

   Note the following:
   a) For the P-P case, R2, R3, and R4 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R4 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, R3, and R4 act in the roles shown
      in the figure above (HE, mid-point/PLR, MP/mid-point, and TE,
      respectively).

6.2.  Node Protection

6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop backup TE
        tunnels

   +--------+    +--------+    +--------+    +--------+
   |   R1   |    |   R2   |PRI |   R3   |PRI |   R4   |
   | UR/HE  |----| HE/MID |----|  MID   |----| MP/TE  |
   |        |    |  PLR   |    |        |    |        |
   +--------+    +--------+    +--------+    +--------+
                     |BKP                        |
                     +---------------------------+

                  Figure 6.

   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                0
   Layer3 VPN (PE-PE)         2                1
   Layer3 VPN (PE-P)          3                2
   Layer2 VC (PE-PE)          2                1
   Layer2 VC (PE-P)           3                2
   Mid-point LSPs             1                0

   Note the following:
   a) For the P-P case, R2, R3, and R4 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R4 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, R3, and R4 act in the roles shown
      in the figure above (HE, mid-point/PLR, mid-point, and TE,
      respectively).

6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop backup TE
        tunnels

   +--------+    +--------+    +--------+    +--------+
   |   R1   |    |   R2   |PRI |   R3   |PRI |   R4   |
   | UR/HE  |----| HE/MID |----|  MID   |----| MP/TE  |
   |        |    |  PLR   |    |        |    |        |
   +--------+    +--------+    +--------+    +--------+
                     |BKP                        |
                     |       +--------+          |
                     |       |   R6   |          |
                     +-------|  BKP   |----------+
                             |  MID   |
                             +--------+

                  Figure 7.

   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                1
   Layer3 VPN (PE-PE)         2                2
   Layer3 VPN (PE-P)          3                3
   Layer2 VC (PE-PE)          2                2
   Layer2 VC (PE-P)           3                3
   Mid-point LSPs             1                1

   Note the following:
   a) For the P-P case, R2, R3, and R4 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R4 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, R3, and R4 act in the roles shown
      in the figure above (HE, mid-point/PLR, mid-point, and TE,
      respectively).

6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop backup TE
        tunnels

   +--------+   +--------+   +--------+   +--------+   +--------+
   |   R1   |   |   R2   |PRI|   R3   |PRI|   R4   |PRI|   R5   |
   | UR/HE  |---| HE/MID |---|  MID   |---|   MP   |---|   TE   |
   |        |   |  PLR   |   |        |   |        |   |        |
   +--------+   +--------+   +--------+   +--------+   +--------+
                    |BKP                      |
                    +-------------------------+

                  Figure 8.
   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                1
   Layer3 VPN (PE-PE)         2                2
   Layer3 VPN (PE-P)          3                3
   Layer2 VC (PE-PE)          2                2
   Layer2 VC (PE-P)           3                3
   Mid-point LSPs             1                1

   Note the following:
   a) For the P-P case, R2, R3, R4, and R5 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R5 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, R3, R4, and R5 act in the roles
      shown in the figure above (HE, mid-point/PLR, mid-point, MP, and
      TE, respectively).

6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2 hop backup TE
        tunnels

   +--------+   +--------+   +--------+   +--------+   +--------+
   |   R1   |   |   R2   |PRI|   R3   |PRI|   R4   |PRI|   R5   |
   | UR/HE  |---| HE/MID |---|  MID   |---|   MP   |---|   TE   |
   |        |   |  PLR   |   |        |   |        |   |        |
   +--------+   +--------+   +--------+   +--------+   +--------+
                    |BKP                      |
                    |      +--------+         |
                    |      |   R6   |         |
                    +------|  BKP   |---------+
                           |  MID   |
                           +--------+

                  Figure 9.

   Traffic              Num of labels    Num of labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                2
   Layer3 VPN (PE-PE)         2                3
   Layer3 VPN (PE-P)          3                4
   Layer2 VC (PE-PE)          2                3
   Layer2 VC (PE-P)           3                4
   Mid-point LSPs             1                2

   Note the following:
   a) For the P-P case, R2, R3, R4, and R5 act as P routers.
   b) For the PE-PE case, R2 acts as the PE and R5 acts as the remote
      PE.
   c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
      and R5 acts as the remote PE router (refer to Figure 1 for the
      complete setup).
   d) For the mid-point case, R1, R2, R3, R4, and R5 act in the roles
      shown in the figure above (HE, mid-point/PLR, mid-point, MP, and
      TE, respectively).

7.  Test Methodology

   The procedure described in this section can be applied to all eight
   base test cases and the associated topologies.  The backup and the
   primary tunnels are configured to be alike in terms of bandwidth
   usage.
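   As a cross-check of the label-stack tables in Section 6, the pattern
   the eight tables follow can be captured in a few lines.  This is an
   illustrative sketch, assuming PHP as in the figures above; the
   function and its parameters are hypothetical helpers, not part of
   the methodology.

```python
def labels(primary_hops, backup_hops, protection, service_offset=0):
    # primary_hops:   remaining hops of the primary tunnel from the PLR
    # backup_hops:    hops of the backup tunnel from the PLR
    # protection:     "link" (MP is the next hop) or "node" (MP is the
    #                 next-next hop)
    # service_offset: extra service labels (0 for P-P traffic and
    #                 mid-point LSPs, 1 for PE-PE services, 2 for PE-P)
    before = (1 if primary_hops > 1 else 0) + service_offset
    hops_to_mp = 1 if protection == "link" else 2
    after = ((1 if primary_hops - hops_to_mp >= 1 else 0)  # label beyond MP
             + (1 if backup_hops > 1 else 0)               # backup label
             + service_offset)
    return before, after
```

   For instance, the 2-hop-primary/1-hop-backup node protection case
   (Figure 6) gives (1, 0) for P-P traffic, matching its table.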
   In order to benchmark failover with all possible label stack depths
   applicable as seen in current deployments, it is RECOMMENDED to
   perform all of the test cases provided in this section.  The
   forwarding performance test cases in section 7.1 MUST be performed
   prior to performing the failover test cases.

   The considerations of Section 4 of [RFC 2544] are applicable when
   evaluating the results obtained using these methodologies as well.

7.1. MPLS FRR Forwarding Performance

   Benchmarking Failover Time [RFC 6414] for MPLS protection first
   requires a baseline measurement of the forwarding performance of
   the test topology, including the DUT.  Forwarding performance is
   benchmarked by the Throughput as defined in [RFC 5695] and measured
   in packets per second (pps).  This section provides two test cases
   to benchmark forwarding performance: one with the DUT configured as
   a Headend PLR and one with the DUT configured as a Mid-Point PLR.

7.1.1. Headend PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as headend) over
      the primary LSP and the backup LSP.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
         the DUT as Headend PLR.

      C. The DUT will also have 2 interfaces connected to the traffic
         generator/analyzer.  (If the node downstream of the PLR is
         not a simulated node, then the ingress of the tunnel should
         have one link connected to the traffic generator, and the
         node downstream of the PLR or the egress of the tunnel should
         have a link connected to the traffic analyzer.)

   Procedure:

      1. Establish the primary LSP on R2 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5.
         Setup traffic streams as described in section 5.7.

      6. Send MPLS traffic over the primary LSP at the Throughput
         supported by the DUT (section 6 of [RFC 2544]).

      7. Record the Throughput over the primary LSP.

      8. Trigger a link failure as described in section 5.1.

      9. Verify that the offered load gets mapped to the backup tunnel
         and measure the Additive Backup Delay [RFC 6414].

      10. 30 seconds after Failover, stop the offered load and measure
          the Throughput, Packet Loss, Out-of-Order Packets, and
          Duplicate Packets over the backup LSP.

      11. Adjust the offered load and repeat steps 6 through 10 until
          the Throughput values for the primary and backup LSPs are
          equal.

      12. Record the final Throughput, which corresponds to the
          offered load that will be used for the Headend PLR failover
          test cases.

7.1.2. Mid-Point PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as mid-point)
      over the primary LSP and the backup LSP.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. The DUT will also have 2 interfaces connected to the traffic
         generator.

   Procedure:

      1. Establish the primary LSP on R1 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5. Setup traffic streams as described in section 5.7.

      6. Send MPLS traffic over the primary LSP at the Throughput
         supported by the DUT (section 6 of [RFC 2544]).

      7. Record the Throughput over the primary LSP.

      8. Trigger a link failure as described in section 5.1.

      9. Verify that the offered load gets mapped to the backup tunnel
         and measure the Additive Backup Delay [RFC 6414].

      10.
          30 seconds after Failover, stop the offered load and measure
          the Throughput, Packet Loss, Out-of-Order Packets, and
          Duplicate Packets over the backup LSP.

      11. Adjust the offered load and repeat steps 6 through 10 until
          the Throughput values for the primary and backup LSPs are
          equal.

      12. Record the final Throughput, which corresponds to the
          offered load that will be used for the Mid-Point PLR
          failover test cases.

7.2. Headend PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to the link failure
      events described in section 5.1, as experienced by the DUT,
      which is the Headend PLR.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
         the DUT as Headend PLR.

      C. The DUT will also have 2 interfaces connected to the traffic
         generator/analyzer.  (If the node downstream of the PLR is
         not a simulated node, then the ingress of the tunnel should
         have one link connected to the traffic generator, and the
         node downstream of the PLR or the egress of the tunnel should
         have a link connected to the traffic analyzer.)

   Test Configuration:

      1. Configure the number of primaries on R2 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.1.  Headend PLR Forwarding Performance" MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1. Establish the primary LSP on R2 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5.
         Setup traffic streams for the offered load as described in
         section 5.7.

      6. Provide the offered load from the tester at the Throughput
         [RFC 1242] level obtained from test case 7.1.1.

      7. Verify that traffic is switched over the primary LSP without
         packet loss.

      8. Trigger a link failure as described in section 5.1.

      9. Verify that the offered load gets mapped to the backup tunnel
         and measure the Additive Backup Delay.

      10. 30 seconds after Failover [RFC 6414], stop the offered load
          and measure the total Failover Packet Loss [RFC 6414].

      11. Calculate the Failover Time [RFC 6414] benchmark using the
          selected Failover Time Calculation Method (TBLM, PLBM, or
          TBM) [RFC 6414].

      12. Restart the offered load and restore the primary LSP to
          verify that Reversion [RFC 6414] occurs, and measure the
          Reversion Packet Loss [RFC 6414].

      13. Calculate the Reversion Time [RFC 6414] benchmark using the
          selected Failover Time Calculation Method (TBLM, PLBM, or
          TBM) [RFC 6414].

      14. Verify that the Headend signals a new LSP and that
          protection is in place again.

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in section 5.1.

7.3. Mid-Point PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to the link failure
      events described in section 5.1, as experienced by the DUT,
      which is the Mid-Point PLR.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. The DUT will also have 2 interfaces connected to the traffic
         generator.

   Test Configuration:

      1. Configure the number of primaries on R1 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.2.
      Mid-Point PLR Forwarding Performance" MUST be completed first to
      obtain the Throughput to use as the offered load.

      1. Establish the primary LSP on R1 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Perform steps 3 through 14 from section 7.2, "Headend PLR
         with Link Failure".

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in section 5.1.

7.4. Headend PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to the node failure
      events described in section 5.1, as experienced by the DUT,
      which is the Headend PLR.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
         the DUT as Headend PLR.

      C. The DUT will also have 2 interfaces connected to the traffic
         generator/analyzer.

   Test Configuration:

      1. Configure the number of primaries on R2 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.1.  Headend PLR Forwarding Performance" MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1. Establish the primary LSP on R2 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5. Setup traffic streams for the offered load as described in
         section 5.7.

      6. Provide the offered load from the tester at the Throughput
         [RFC 1242] level obtained from test case 7.1.1.

      7.
         Verify that traffic is switched over the primary LSP without
         packet loss.

      8. Trigger a node failure as described in section 5.1.

      9. Perform steps 9 through 14 in section 7.2, "Headend PLR with
         Link Failure".

   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in section 5.1.

7.5. Mid-Point PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to the node failure
      events described in section 5.1, as experienced by the DUT,
      which is the Mid-Point PLR.

   Test Setup:

      A. Select any one topology from sections 6.1 to 6.2.

      B. The DUT will also have 2 interfaces connected to the traffic
         generator.

   Test Configuration:

      1. Configure the number of primaries on R1 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.2.  Mid-Point PLR Forwarding Performance" MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1. Establish the primary LSP on R1 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5. Setup traffic streams for the offered load as described in
         section 5.7.

      6. Provide the offered load from the tester at the Throughput
         [RFC 1242] level obtained from test case 7.1.2.

      7. Verify that traffic is switched over the primary LSP without
         packet loss.

      8. Trigger a node failure as described in section 5.1.

      9. Perform steps 9 through 14 in section 7.2, "Headend PLR with
         Link Failure".
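   The Failover Time and Reversion Time benchmarks referenced in the
   procedures above (steps 11 and 13 of section 7.2) may be computed
   with any of the calculation methods of [RFC 6414].  As an informal
   illustration only, and not as part of this methodology, the Packet-
   Loss-Based Method (PLBM) can be sketched as follows; the function
   name and structure here are this sketch's assumptions, not anything
   defined by [RFC 6414]:

```python
def plbm_failover_time_ms(lost_packets: int, offered_load_pps: float) -> float:
    """Sketch of the Packet-Loss-Based Method (PLBM): the outage time
    is estimated as the number of packets lost during Failover divided
    by the offered load rate in packets per second.  The result is
    returned in milliseconds."""
    if offered_load_pps <= 0:
        raise ValueError("offered load must be a positive rate in pps")
    return (lost_packets / offered_load_pps) * 1000.0

# Example: 2500 packets lost at an offered load of 50,000 pps
# corresponds to an estimated 50 ms failover time.
print(plbm_failover_time_ms(2500, 50000))  # 50.0
```

   The same calculation applies to Reversion Packet Loss when deriving
   the Reversion Time benchmark.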
   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in section 5.1.

8. Reporting Format

   For each test, it is RECOMMENDED that the results be reported in
   the following format.

   Parameter                          Units

   IGP used for the test              ISIS-TE / OSPF-TE

   Interface types                    GigE, POS, ATM, VLAN, etc.

   Packet sizes offered to the DUT    Bytes (at layer 3)

   Offered Load (Throughput)          packets per second

   IGP routes advertised              Number of IGP routes

   Penultimate Hop Popping            Used / Not Used

   RSVP hello timers                  Milliseconds

   Number of Protected tunnels        Number of tunnels

   Number of VPN routes installed     Number of VPN routes
   on the Headend

   Number of VC tunnels               Number of VC tunnels

   Number of mid-point tunnels        Number of tunnels

   Number of Prefixes protected by    Number of LSPs
   Primary

   Topology being used                Section number and figure
                                      reference

   Failover Event                     Event type

   Re-optimization                    Yes / No

   Benchmarks (to be recorded for each test case):

   Failover -
      Failover Time                   seconds
      Failover Packet Loss            packets
      Additive Backup Delay           seconds
      Out-of-Order Packets            packets
      Duplicate Packets               packets
      Failover Time Calculation       Method used
      Method

   Reversion -
      Reversion Time                  seconds
      Reversion Packet Loss           packets
      Additive Backup Delay           seconds
      Out-of-Order Packets            packets
      Duplicate Packets               packets
      Failover Time Calculation       Method used
      Method

9. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the
   constraints specified in the sections above.
   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

10. IANA Considerations

   This draft does not require any new allocations by IANA.

11. Acknowledgements

   We would like to thank Jean Philip Vasseur for his invaluable input
   to the document, Curtis Villamizar for suggesting text on the
   definition of and the need for benchmarking correlated failures,
   and Bhavani Parise for his textual input and review.  Additionally,
   we would like to thank Al Morton, Arun Gandhi, Amrit Hanspal, Karu
   Ratnam, Raveesh Janardan, Andrey Kiselev, and Mohan Nanduri for
   their formal reviews of this document.

12. References

12.1. Informative References

   [RFC 2285]  Mandeville, R., "Benchmarking Terminology for LAN
               Switching Devices", RFC 2285, February 1998.

   [RFC 4689]  Poretsky, S., Perser, J., Erramilli, S., and S.
               Khurana, "Terminology for Benchmarking Network-layer
               Traffic Control Mechanisms", RFC 4689, October 2006.

12.2. Normative References

   [RFC 1242]  Bradner, S., "Benchmarking terminology for network
               interconnection devices", RFC 1242, July 1991.

   [RFC 2119]  Bradner, S., "Key words for use in RFCs to Indicate
               Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC 4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
               Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
               May 2005.
   [RFC 5695]  Akhter, A., Asati, R., and C. Pignataro, "MPLS
               Forwarding Benchmarking Methodology for IP Flows",
               RFC 5695, November 2009.

   [RFC 6414]  Poretsky, S., Papneja, R., Karthik, J., and S.
               Vapiwala, "Benchmarking Terminology for Protection
               Performance", RFC 6414, November 2011.

   [RFC 2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
               for Network Interconnect Devices", RFC 2544,
               March 1999.

   [RFC 6412]  Poretsky, S., Imhoff, B., and K. Michielsen,
               "Terminology for Benchmarking Link-State IGP Data-Plane
               Route Convergence", RFC 6412, November 2011.

Appendix A. Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of Fast Reroute implementations.  It also recommends
   typical numbers for IGP/VPNv4 prefixes, LSP tunnels, and VC
   entries.  Based on the features supported by the device under test
   (DUT), the appropriate scaling limits can be used for the test bed.

A1. FRR IGP Table

   No. of Headend TE Tunnels      IGP Prefixes

   1                              100
   1                              500
   1                              1000
   1                              2000
   1                              5000
   2 (Load Balance)               100
   2 (Load Balance)               500
   2 (Load Balance)               1000
   2 (Load Balance)               2000
   2 (Load Balance)               5000
   100                            100
   500                            500
   1000                           1000
   2000                           2000

A2. FRR VPN Table

   No. of Headend TE Tunnels      VPNv4 Prefixes

   1                              100
   1                              500
   1                              1000
   1                              2000
   1                              5000
   1                              10000
   1                              20000
   1                              Max
   2 (Load Balance)               100
   2 (Load Balance)               500
   2 (Load Balance)               1000
   2 (Load Balance)               2000
   2 (Load Balance)               5000
   2 (Load Balance)               10000
   2 (Load Balance)               20000
   2 (Load Balance)               Max

A3. FRR Mid-Point LSP Table

   The number of Mid-point TE LSPs could be configured at the
   recommended levels - 100, 500, 1000, 2000, or the maximum supported
   number.

A4. FRR VC Table

   No.
   of Headend TE Tunnels          VC entries

   1                              100
   1                              500
   1                              1000
   1                              2000
   1                              Max
   100                            100
   500                            500
   1000                           1000
   2000                           2000

Appendix B. Abbreviations

   AIS    - Alarm Indication Signal
   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   IP     - Internet Protocol
   LOS    - Loss of Signal
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multiprotocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   RSVP   - Resource reSerVation Protocol
   SRLG   - Shared Risk Link Group
   TA     - Traffic Analyzer
   TE     - Traffic Engineering
   TG     - Traffic Generator
   VC     - Virtual Circuit
   VPN    - Virtual Private Network

Authors' Addresses

   Rajiv Papneja
   Huawei Technologies
   2330 Central Expressway
   Santa Clara, CA  95050
   USA

   Email: rajiv.papneja@huawei.com

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA  01719
   USA

   Email: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA  01719
   USA

   Email: jkarthik@cisco.com

   Scott Poretsky
   Allot Communications
   USA

   Email: sporetsky@allot.com

   Shankar Rao
   Qwest Communications
   950 17th Street
   Suite 1900
   Denver, CO  80210
   USA

   Email: shankar.rao@du.edu

   JL. Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France

   Email: jeanlouis.leroux@orange.com