2 Network Working Group R. Papneja 3 Internet-Draft Huawei Technologies 4 Intended status: Informational S. Vapiwala 5 Expires: May 28, 2013 J. Karthik 6 Cisco Systems 7 S. Poretsky 8 Allot Communications 9 S. Rao 10 Qwest Communications 11 JL. Le Roux 12 France Telecom 13 November 29, 2012 15 Methodology for Benchmarking MPLS-TE Fast Reroute Protection 16 draft-ietf-bmwg-protection-meth-14.txt 18 Abstract 20 This draft describes the methodology for benchmarking MPLS Fast 21 Reroute (FRR) protection mechanisms for link and node protection. 22 This document provides test methodologies and testbed setup for 23 measuring failover times of Fast Reroute techniques while considering 24 factors (such as underlying links) that might impact 25 recovery times for real-time applications bound to MPLS traffic 26 engineered (MPLS-TE) tunnels. 28 Status of this Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current/. 38 Internet-Drafts are draft documents valid for a maximum of six months 39 and may be updated, replaced, or obsoleted by other documents at any 40 time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on May 9, 2013.
45 Copyright Notice 47 Copyright (c) 2012 IETF Trust and the persons identified as the 48 document authors. All rights reserved. 50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with respect 55 to this document. Code Components extracted from this document must 56 include Simplified BSD License text as described in Section 4.e of 57 the Trust Legal Provisions and are provided without warranty as 58 described in the Simplified BSD License. 60 This document may contain material from IETF Documents or IETF 61 Contributions published or made publicly available before November 62 10, 2008. The person(s) controlling the copyright in some of this 63 material may not have granted the IETF Trust the right to allow 64 modifications of such material outside the IETF Standards Process. 65 Without obtaining an adequate license from the person(s) controlling 66 the copyright in such materials, this document may not be modified 67 outside the IETF Standards Process, and derivative works of it may 68 not be created outside the IETF Standards Process, except to format 69 it for publication as an RFC or to translate it into languages other 70 than English. 72 Table of Contents 74 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 75 2. Document Scope . . . . . . . . . . . . . . . . . . . . . . . . 6 76 3. Existing Definitions and Requirements . . . . . . . . . . . . 6 77 4. General Reference Topology . . . . . . . . . . . . . . . . . . 7 78 5. Test Considerations . . . . . . . . . . . . . . . . . . . . . 8 79 5.1. Failover Events [RFC 6414] . . . . . . . . . . . . . . . . 8 80 5.2. Failure Detection [RFC 6414] . . . . . . . . . . . . . . . 9 81 5.3. Use of Data Traffic for MPLS Protection benchmarking . . . 10 82 5.4. 
LSP and Route Scaling . . . . . . . . . . . . . . . . . . 10 83 5.5. Selection of IGP . . . . . . . . . . . . . . . . . . . . . 10 84 5.6. Restoration and Reversion [RFC 6414] . . . . . . . . . . . 10 85 5.7. Offered Load . . . . . . . . . . . . . . . . . . . . . . . 11 86 5.8. Tester Capabilities . . . . . . . . . . . . . . . . . . . 11 87 5.9. Failover Time Measurement Methods . . . . . . . . . . . . 12 88 6. Reference Test Setup . . . . . . . . . . . . . . . . . . . . . 12 89 6.1. Link Protection . . . . . . . . . . . . . . . . . . . . . 13 90 6.1.1. Link Protection - 1 hop primary (from PLR) and 1 91 hop backup TE tunnels . . . . . . . . . . . . . . . . 13 92 6.1.2. Link Protection - 1 hop primary (from PLR) and 2 93 hop backup TE tunnels . . . . . . . . . . . . . . . . 14 94 6.1.3. Link Protection - 2+ hop (from PLR) primary and 1 95 hop backup TE tunnels . . . . . . . . . . . . . . . . 14 96 6.1.4. Link Protection - 2+ hop (from PLR) primary and 2 97 hop backup TE tunnels . . . . . . . . . . . . . . . . 15 98 6.2. Node Protection . . . . . . . . . . . . . . . . . . . . . 16 99 6.2.1. Node Protection - 2 hop primary (from PLR) and 1 100 hop backup TE tunnels . . . . . . . . . . . . . . . . 16 101 6.2.2. Node Protection - 2 hop primary (from PLR) and 2 102 hop backup TE tunnels . . . . . . . . . . . . . . . . 17 103 6.2.3. Node Protection - 3+ hop primary (from PLR) and 1 104 hop backup TE tunnels . . . . . . . . . . . . . . . . 18 105 6.2.4. Node Protection - 3+ hop primary (from PLR) and 2 106 hop backup TE tunnels . . . . . . . . . . . . . . . . 19 107 7. Test Methodology . . . . . . . . . . . . . . . . . . . . . . . 20 108 7.1. MPLS FRR Forwarding Performance . . . . . . . . . . . . . 20 109 7.1.1. Headend PLR Forwarding Performance . . . . . . . . . . 20 110 7.1.2. Mid-Point PLR Forwarding Performance . . . . . . . . . 21 111 7.2. Headend PLR with Link Failure . . . . . . . . . . . . . . 23 112 7.3. Mid-Point PLR with Link Failure . . . . . . . . . . . . . 
24 113 7.4. Headend PLR with Node Failure . . . . . . . . . . . . . . 26 114 7.5. Mid-Point PLR with Node Failure . . . . . . . . . . . . . 27 115 8. Reporting Format . . . . . . . . . . . . . . . . . . . . . . . 28 116 9. Security Considerations . . . . . . . . . . . . . . . . . . . 30 117 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 30 118 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 30 119 12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 30 120 12.1. Informative References . . . . . . . . . . . . . . . . . . 30 121 12.2. Normative References . . . . . . . . . . . . . . . . . . . 30 122 Appendix A. Fast Reroute Scalability Table . . . . . . . . . . . 30 123 Appendix B. Abbreviations . . . . . . . . . . . . . . . . . . . . 33 124 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 34 126 1. Introduction 128 This document describes the methodology for benchmarking MPLS Fast 129 Reroute (FRR) protection mechanisms. This document uses much of the 130 terminology defined in [RFC 6414]. 132 Protection mechanisms provide recovery of client services from 133 planned or unplanned link or node failures. MPLS FRR protection 134 mechanisms are generally deployed in a network infrastructure where 135 MPLS is used for provisioning of point-to-point traffic engineered 136 tunnels. MPLS FRR protection mechanisms aim to reduce the 137 service disruption period by minimizing the recovery time from the most 138 common failures. 140 Network elements from different manufacturers respond differently to 141 network failures, which impacts the network's failure-recovery 142 ability and performance. It therefore becomes imperative for service 143 providers to have a common benchmark to understand the performance 144 behaviors of network elements. 146 There are two factors impacting service availability: the frequency of 147 failures and the duration for which the failures persist.
Failures can 148 be classified further into two types: correlated and uncorrelated. 149 Correlated and uncorrelated failures may be planned or unplanned. 151 Planned failures are generally predictable. Network implementations 152 should be able to handle both planned and unplanned failures and 153 recover gracefully within a time frame that maintains service assurance. 154 Hence, failover recovery time is one of the most important benchmarks 155 that a service provider considers when choosing the building blocks 156 for their network infrastructure. 158 A correlated failure is the result of the occurrence of two or more 159 failures. A typical example is the failure of a logical resource (e.g., 160 layer-2 links) due to a dependency on a common physical resource 161 (e.g., a common conduit) that fails. Within the context of MPLS 162 protection mechanisms, failures that arise due to Shared Risk Link 163 Groups (SRLG) [RFC 4202] can be considered correlated failures. 165 MPLS FRR [RFC 4090] allows for the possibility that the Label 166 Switched Paths can be re-optimized in the minutes following Failover. 167 IP traffic would be re-routed according to the preferred path for the 168 post-failure topology. Thus, MPLS-FRR recovery may include the 169 following steps, beginning with the Failover Event [RFC 6414] and its 170 Failure Detection [RFC 6414]: 172 (1) Failover Event - Primary Path (Working Path) fails 174 (2) Failure Detection - Failover Event is detected 176 (3) 178 a. Failover - Working Path switched to Backup Path 180 b. Re-Optimization of Working Path (possible change from 181 Backup Path) 183 (4) Restoration [RFC 6414] 185 (5) Reversion [RFC 6414]
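The recovery sequence above can be sketched as a simple consistency check over tester-recorded event times. This is a hypothetical illustration, not part of the methodology; the event names follow the list in the text, and the timestamps and helper functions are invented for the example.

```python
# A minimal sketch of the recovery sequence above, assuming the tester
# records one timestamp (seconds since test start) per event.
# All values below are hypothetical.

RECOVERY_SEQUENCE = (
    "failover_event",     # (1) Primary Path (Working Path) fails
    "failure_detection",  # (2) Failover Event is detected
    "failover",           # (3a) Working Path switched to Backup Path
    "re_optimization",    # (3b) possible re-optimization of Working Path
    "restoration",        # (4) alternate primary LSP restored
    "reversion",          # (5) traffic switched back to the Primary Path
)

def check_order(events):
    """Verify the recorded timestamps follow the expected sequence."""
    times = [events[name] for name in RECOVERY_SEQUENCE]
    return all(a <= b for a, b in zip(times, times[1:]))

def failover_time(events):
    """Interval from the Failover Event until traffic is on the backup."""
    return events["failover"] - events["failover_event"]

t = {"failover_event": 10.000, "failure_detection": 10.020,
     "failover": 10.045, "re_optimization": 70.0,
     "restoration": 300.0, "reversion": 305.0}
print(check_order(t), round(failover_time(t), 3))  # True 0.045
```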
Different Failover Events and scaling considerations are 193 also provided in this document. 195 All benchmarking test cases defined in this document apply to 196 Facility backup [RFC 4090]. The test cases cover a set of interesting 197 failure scenarios, and the associated procedures benchmark the 198 performance of the Device Under Test (DUT) in recovering from failures. 199 Data plane traffic is used to benchmark failover times. Testing 200 scenarios related to MPLS-TE protection mechanisms applied 201 to the MPLS Transport Profile, and to IP fast reroute applied to MPLS 202 networks, were not considered and are out of the scope of this document. 203 However, the test setups considered for MPLS-based Layer 3 and 204 Layer 2 services assume LDP over MPLS RSVP-TE configurations. 206 Benchmarking of correlated failures is out of the scope of this document. 207 Detection using Bidirectional Forwarding Detection (BFD) is outside 208 the scope of this document, but is mentioned in discussion sections. 210 The performance of the control plane is outside the scope of this 211 benchmarking. 213 As described above, MPLS-FRR may include a re-optimization of the 214 Working Path, with possible packet transfer impairments. 215 Characterization of re-optimization is beyond the scope of this memo. 217 3. Existing Definitions and Requirements 219 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 220 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 221 document are to be interpreted as described in BCP 14, [RFC 2119]. 222 While [RFC 2119] defines the use of these key words primarily for 223 Standards Track documents, this Informational document also 224 makes use of these keywords. 226 The reader is assumed to be familiar with the commonly used MPLS 227 terminology, some of which is defined in [RFC 4090]. 229 This document uses much of the terminology defined in [RFC 6414].
230 This document also uses existing terminology defined in other BMWG 231 work [RFC 1242], [RFC 2285], [RFC 4689]. Appendix B provides the 232 abbreviations used in this document. 234 4. General Reference Topology 236 Figure 1 illustrates the basic reference testbed and is applicable to 237 all the test cases defined in this document. The Tester comprises 238 a Traffic Generator (TG), a Test Analyzer (TA), and an Emulator. The 239 Tester is connected to the test network and, depending upon the test 240 case, the DUT could vary. The Tester sends and receives IP traffic 241 to the tunnel ingress and performs signaling protocol emulation to 242 simulate real network scenarios in a lab environment. The Tester may 243 also support MPLS-TE signaling to act as the ingress node to the MPLS 244 tunnel. The lines in the figures represent physical connections. 246 +---------------------------+ 247 | +------------|---------------+ 248 | | | | 249 | | | | 250 +--------+ +--------+ +--------+ +--------+ +--------+ 251 TG--| R1 |-----| R2 |----| R3 | | R4 | | R5 | 252 | |-----| |----| |----| |---| | 253 +--------+ +--------+ +--------+ +--------+ +--------+ 254 | | | | | 255 | | | | | 256 | +--------+ | | TA 257 +---------| R6 |---------+ | 258 | |----------------------+ 259 +--------+ 261 Fig. 1 Fast Reroute Topology 263 The Tester MUST record the number of lost, duplicate, and out-of-order 264 packets. It should further record arrival and departure times so 265 that Failover Time, Additive Latency, and Reversion Time can be 266 measured. The Tester may be a single device or a test system 267 emulating all the different roles along a primary or backup path.
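The lost/duplicate/out-of-order accounting the Tester must perform can be sketched as below. This is a minimal illustration, assuming each test packet carries a monotonically increasing sequence number in its payload (a common tester technique; the function and its inputs are hypothetical).

```python
# A minimal sketch of per-stream packet accounting, assuming each
# transmitted packet carries a unique, increasing sequence number.

def account(tx_count, rx_seqs):
    """Return (lost, duplicate, out_of_order) counts for one stream."""
    seen = set()
    duplicates = 0
    out_of_order = 0
    highest = -1
    for seq in rx_seqs:
        if seq in seen:
            duplicates += 1          # received more than once
            continue
        seen.add(seq)
        if seq < highest:
            out_of_order += 1        # arrived after a later sequence number
        highest = max(highest, seq)
    lost = tx_count - len(seen)      # transmitted but never received
    return lost, duplicates, out_of_order

# 6 packets sent; #3 never arrives, #4 is duplicated, #2 arrives late
print(account(6, [0, 1, 4, 4, 2, 5]))  # (1, 1, 1)
```

Arrival and departure timestamps would be recorded alongside these counts so that Failover Time, Additive Latency, and Reversion Time can be derived.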
269 The label stack is dependent on the following three entities: 271 (1) Type of protection (Link vs. Node) 273 (2) # of remaining hops of the primary tunnel from the PLR [RFC 274 6414] 276 (3) # of remaining hops of the backup tunnel from the PLR 278 Due to this dependency, it is RECOMMENDED that the benchmarking of 279 failover times be performed on all the topologies provided in section 280 6. 282 5. Test Considerations 284 This section discusses the fundamentals of MPLS Protection testing: 286 (1) The types of network events that cause failover (section 5.1) 288 (2) Indications for failover (section 5.2) 290 (3) The use of data traffic (section 5.3) 292 (4) LSP Scaling (section 5.4) 294 (5) IGP Selection (section 5.5) 296 (6) Reversion of LSP (section 5.6) 298 (7) Traffic generation (section 5.7) 300 5.1. Failover Events [RFC 6414] 302 The failover to the backup tunnel is primarily triggered by either 303 link or node failures observed downstream of the Point of Local 304 Repair (PLR). The failure events are listed below. 306 Link Failure Events 307 - Interface Shutdown on PLR side with physical/link Alarm 308 - Interface Shutdown on remote side with physical/link Alarm 309 - Interface Shutdown on PLR side with RSVP hello enabled 310 - Interface Shutdown on remote side with RSVP hello enabled 311 - Interface Shutdown on PLR side with BFD 312 - Interface Shutdown on remote side with BFD 313 - Fiber Pull on the PLR side (Both TX & RX or just the TX) 314 - Fiber Pull on the remote side (Both TX & RX or just the RX) 315 - Online insertion and removal (OIR) on PLR side 316 - OIR on remote side 317 - Sub-interface failure on PLR side (e.g. shutting down of a VLAN) 318 - Sub-interface failure on remote side 319 - Parent interface shutdown on PLR side (an interface bearing multiple 320 sub-interfaces) 321 - Parent interface shutdown on remote side 323 Node Failure Events 325 - A system reload initiated either by a graceful shutdown 326 or by a power failure.
327 - A system crash due to a software failure or an assert. 329 5.2. Failure Detection [RFC 6414] 331 Link failure detection time depends on the link type and the failure 332 detection protocols running. For SONET/SDH, the alarm type (such as 333 LOS, AIS, or RDI) can be used. Other link types have layer-two 334 alarms, but they may not provide a short enough failure detection 335 time. Ethernet-based links enabled with MPLS/IP do not have layer-2 336 failure indicators and therefore rely on layer-3 signaling for 337 failure detection. However, for directly connected devices, remote 338 fault indication in the Ethernet auto-negotiation scheme could be 339 considered a type of layer-2 link failure indicator. 341 MPLS has failure detection techniques such as BFD or the use 342 of RSVP hellos. These methods can be used for the layer-3 failure 343 indicators required by Ethernet-based links, or for some other 344 non-Ethernet-based links, to help improve failure detection time. 345 However, these fast failure detection mechanisms are out of scope. 347 The test procedures in this document can be used for local or remote 348 failure scenarios for comprehensive benchmarking and to 349 evaluate failover performance independent of the failure detection 350 techniques. 352 5.3. Use of Data Traffic for MPLS Protection benchmarking 354 Currently, end customers use packet loss as a key metric for Failover 355 Time [RFC 6414]. Failover Packet Loss [RFC 6414] is an externally 356 observable event and has a direct impact on application performance. 357 MPLS protection is expected to minimize the packet loss in the event 358 of a failure. For this reason, it is important to develop a standard 359 router benchmarking methodology for measuring MPLS protection that 360 uses packet loss as a metric. At a known rate of forwarding, packet 361 loss can be measured and the failover time can be determined.
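The loss-derived calculation stated here (packet loss divided by a known, constant forwarding rate) can be sketched as follows; the input values are hypothetical examples.

```python
# Loss-Derived failover time: a minimal sketch, assuming the tester
# reports total packets dropped and the offered load is a known,
# constant rate in packets per second (all values hypothetical).

def failover_time_ms(packets_dropped, offered_pps):
    """Failover Time (ms) = packets dropped / offered rate * 1000."""
    if offered_pps <= 0:
        raise ValueError("offered load must be a positive pps rate")
    return packets_dropped / offered_pps * 1000.0

# Example: 5,000 packets lost at an offered load of 100,000 pps
print(failover_time_ms(5000, 100_000))  # 50.0
```

Because only externally observed packet counts are needed, this calculation works in a black-box setup, independent of the DUT architecture.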
362 Measurement of control plane signaling to establish backup paths is 363 not enough to verify failover. Failover is best determined when 364 packets are actually traversing the backup path. 366 An additional benefit of using packet loss for the calculation of 367 failover time is that it allows the use of a black-box test environment. 368 Data traffic is offered at line-rate to the device under test (DUT), 369 an emulated network failure event is forced to occur, and packet loss 370 is externally measured to calculate the convergence time. This setup 371 is independent of the DUT architecture. 373 In addition, this methodology considers the packets in error and 374 duplicate packets [RFC 4689] that could have been generated during 375 the failover process. The methodologies consider lost, out-of-order 376 [RFC 4689], and duplicate packets to be impaired packets that 377 contribute to the Failover Time. 379 5.4. LSP and Route Scaling 381 Failover time performance may vary with the number of established 382 primary and backup tunnel label switched paths (LSPs) and installed 383 routes. However, the procedure outlined here should be used for any 384 number of LSPs (L) and any number of routes protected by the PLR (R). 385 The values of L and R must be recorded. 387 5.5. Selection of IGP 389 The underlying IGP could be ISIS-TE or OSPF-TE for the methodology 390 proposed here. See [RFC 6412] for IGP options to consider and 391 report. 393 5.6. Restoration and Reversion [RFC 6414] 395 Path restoration provides a method to restore an alternate primary 396 LSP upon failure and to switch traffic from the Backup Path to the 397 restored Primary Path (Reversion). In MPLS-FRR, Reversion can be 398 implemented as Global Reversion or Local Reversion. It is important 399 to include Restoration and Reversion as a step in each test case to 400 measure the amount of packet loss, out-of-order packets, or duplicate 401 packets that are produced.
403 Note: In addition to restoration and reversion, re-optimization can 404 take place before the failure is recovered; this depends on 405 the user configuration and re-optimization timers. 407 5.7. Offered Load 409 It is suggested that there be three or more traffic streams, as long 410 as there is a steady and constant rate of flow for all the streams. 411 In order to monitor the DUT performance for recovery times, a set of 412 route prefixes should be advertised before traffic is sent. The 413 traffic should be configured towards these routes. 415 Prefix-dependency behaviors are key in IP, and tests with route-specific 416 flows spread across the routing table will reveal this dependency. 417 Generating traffic to all of the prefixes reachable by the protected 418 tunnel in a Round-Robin fashion (where the traffic is 419 destined to all the prefixes, but to one prefix at a time in a cyclic 420 manner) is not recommended, as the time to cycle through all the 421 prefixes may be higher than the failover time. This phenomenon 422 would reduce the granularity of the measured results, and the results 423 observed 424 may not be accurate. 426 5.8. Tester Capabilities 428 It is RECOMMENDED that the Tester used to execute each test case have 429 the following capabilities: 431 1. Ability to establish MPLS-TE tunnels and push/pop labels. 433 2. Ability to produce a Failover Event [RFC 6414]. 435 3. Ability to insert a timestamp in each data packet's IP 436 payload. 438 4. An internal time clock to control timestamping, time 439 measurements, and time calculations. 441 5. Ability to disable or tune specific Layer-2 and Layer-3 442 protocol functions on any interface(s). 444 6. Ability to react upon receipt of a path error from the PLR. 446 The Tester MAY be capable of making non-data-plane convergence 447 observations and of using those observations for measurements. 449 5.9.
Failover Time Measurement Methods 451 Failover Time is calculated using one of the following three methods: 453 1. Packet-Loss Based Method (PLBM): (Number of packets dropped/ 454 packets per second * 1000) milliseconds. This method could also 455 be referred to as the Loss-Derived method. 457 2. Time-Based Loss Method (TBLM): This method relies on the ability 458 of the traffic generators to provide statistics that reveal the 459 duration of failure in milliseconds based on when the packet loss 460 occurred (the interval between non-zero packet loss and zero loss). 462 3. Timestamp Based Method (TBM): This method of failover calculation 463 is based on the timestamp that gets transmitted as payload in the 464 packets originated by the generator. The Traffic Analyzer 465 records the timestamp of the last packet received before the 466 failover event and of the first packet after the failover, and 467 derives the time from the difference between these two 468 timestamps. Note: The payload could also contain sequence 469 numbers for calculating out-of-order and duplicate 470 packets. 472 The Timestamp Based Method would be able to detect Reversion 473 impairments beyond loss; thus, it is the RECOMMENDED Failover 474 Time measurement method. 476 6. Reference Test Setup 478 In addition to the general reference topology shown in figure 1, this 479 section provides detailed insight into various proposed test setups 480 that should be considered for comprehensively benchmarking the 481 failover time with the DUT in different roles along the primary tunnel. 483 This section proposes a set of topologies that covers all the 484 scenarios for local protection. All of these topologies can be 485 mapped to the reference topology shown in Figure 1. Topologies 486 provided in this section refer to the testbed required to benchmark 487 failover time when the DUT is configured as a PLR in either a Headend 488 or midpoint role. Provided with each topology below is the label 489 stack at the PLR.
Penultimate Hop Popping (PHP) MAY be used and must 490 be reported when used. 492 Figures 2 thru 9 use the following convention and are subset of 493 figure 1: 495 a) HE is Headend 496 b) TE is Tail-End 497 c) MID is Mid point 498 d) MP is Merge Point 499 e) PLR is Point of Local Repair 500 f) PRI is Primary Path 501 g) BKP denotes Backup Path and Nodes 502 h) UR is Upstream Router 504 6.1. Link Protection 506 6.1.1. Link Protection - 1 hop primary (from PLR) and 1 hop backup TE 507 tunnels 509 +-------+ +--------+ +--------+ 510 | R1 | | R2 | PRI| R3 | 511 | UR/HE |--| HE/MID |----| MP/TE | 512 | | | PLR |----| | 513 +-------+ +--------+ BKP+--------+ 515 Figure 2. 517 Traffic Num of Labels Num of labels 518 before failure after failure 519 IP TRAFFIC (P-P) 0 0 520 Layer3 VPN (PE-PE) 1 1 521 Layer3 VPN (PE-P) 2 2 522 Layer2 VC (PE-PE) 1 1 523 Layer2 VC (PE-P) 2 2 524 Mid-point LSPs 0 0 526 Note: Please note the following: 528 a) For P-P case, R2 and R3 acts as P routers 529 b) For PE-PE case,R2 acts as PE and R3 acts as a remote PE 530 c) For PE-P case,R2 acts as a PE router, R3 acts as a P router and R5 acts as remote 531 PE router (Please refer to figure 1 for complete setup) 532 d) For Mid-point case, R1, R2 and R3 act as shown in above figure HE, Midpoint/PLR and 533 TE respectively 535 6.1.2. Link Protection - 1 hop primary (from PLR) and 2 hop backup TE 536 tunnels 538 +-------+ +--------+ +--------+ 539 | R1 | | R2 | | R3 | 540 | UR/HE | | HE/MID |PRI | MP/TE | 541 | |----| PLR |----| | 542 +-------+ +--------+ +--------+ 543 |BKP | 544 | +--------+ | 545 | | R6 | | 546 |----| BKP |----| 547 | MID | 548 +--------+ 550 Figure 3. 
552 Traffic Num of Labels Num of labels 553 before failure after failure 554 IP TRAFFIC (P-P) 0 1 555 Layer3 VPN (PE-PE) 1 2 556 Layer3 VPN (PE-P) 2 3 557 Layer2 VC (PE-PE) 1 2 558 Layer2 VC (PE-P) 2 3 559 Mid-point LSPs 0 1 561 Note: Please note the following: 563 a) For P-P case, R2 and R3 acts as P routers 564 b) For PE-PE case,R2 acts as PE and R3 acts as a remote PE 565 c) For PE-P case,R2 acts as a PE router, R3 acts as a P router and R5 acts as remote 566 PE router (Please refer to figure 1 for complete setup) 567 d) For Mid-point case, R1, R2 and R3 act as shown in above figure HE, Midpoint/PLR 568 and TE respectively 570 6.1.3. Link Protection - 2+ hop (from PLR) primary and 1 hop backup TE 571 tunnels 572 +--------+ +--------+ +--------+ +--------+ 573 | R1 | | R2 |PRI | R3 |PRI | R4 | 574 | UR/HE |----| HE/MID |----| MP/MID |------| TE | 575 | | | PLR |----| | | | 576 +--------+ +--------+ BKP+--------+ +--------+ 578 Figure 4. 580 Traffic Num of Labels Num of labels 581 before failure after failure 582 IP TRAFFIC (P-P) 1 1 583 Layer3 VPN (PE-PE) 2 2 584 Layer3 VPN (PE-P) 3 3 585 Layer2 VC (PE-PE) 2 2 586 Layer2 VC (PE-P) 3 3 587 Mid-point LSPs 1 1 589 Note: Please note the following: 591 a) For P-P case, R2, R3 and R4 acts as P routers 592 b) For PE-PE case,R2 acts as PE and R4 acts as a remote PE 593 c) For PE-P case,R2 acts as a PE router, R3 acts as a P router and R5 acts as remote 594 PE router (Please refer to figure 1 for complete setup) 595 d) For Mid-point case, R1, R2, R3 and R4 act as shown in above figure HE, Midpoint/PLR 596 and TE respectively 598 6.1.4. Link Protection - 2+ hop (from PLR) primary and 2 hop backup TE 599 tunnels 600 +--------+ +--------+PRI +--------+ PRI +--------+ 601 | R1 | | R2 | | R3 | | R4 | 602 | UR/HE |----| HE/MID |----| MP/MID|------| TE | 603 | | | PLR | | | | | 604 +--------+ +--------+ +--------+ +--------+ 605 BKP| | 606 | +--------+ | 607 | | R6 | | 608 +---| BKP |- 609 | MID | 610 +--------+ 612 Figure 5. 
614 Traffic Num of Labels Num of labels 615 before failure after failure 616 IP TRAFFIC (P-P) 1 2 617 Layer3 VPN (PE-PE) 2 3 618 Layer3 VPN (PE-P) 3 4 619 Layer2 VC (PE-PE) 2 3 620 Layer2 VC (PE-P) 3 4 621 Mid-point LSPs 1 2 623 Note: Please note the following: 625 a) For P-P case, R2, R3 and R4 acts as P routers 626 b) For PE-PE case,R2 acts as PE and R4 acts as a remote PE 627 c) For PE-P case,R2 acts as a PE router, R3 acts as a P router and R5 acts as remote 628 PE router (Please refer to figure 1 for complete setup) 629 d) For Mid-point case, R1, R2, R3 and R4 act as shown in above figure HE, Midpoint/PLR 630 and TE respectively 632 6.2. Node Protection 634 6.2.1. Node Protection - 2 hop primary (from PLR) and 1 hop backup TE 635 tunnels 636 +--------+ +--------+ +--------+ +--------+ 637 | R1 | | R2 |PRI | R3 | PRI | R4 | 638 | UR/HE |----| HE/MID |----| MID |------| MP/TE | 639 | | | PLR | | | | | 640 +--------+ +--------+ +--------+ +--------+ 641 |BKP | 642 ----------------------------- 644 Figure 6. 646 Traffic Num of Labels Num of labels 647 before failure after failure 648 IP TRAFFIC (P-P) 1 0 649 Layer3 VPN (PE-PE) 2 1 650 Layer3 VPN (PE-P) 3 2 651 Layer2 VC (PE-PE) 2 1 652 Layer2 VC (PE-P) 3 2 653 Mid-point LSPs 1 0 655 Note: Please note the following: 657 a) For P-P case, R2, R3 and R4 acts as P routers 658 b) For PE-PE case,R2 acts as PE and R4 acts as a remote PE 659 c) For PE-P case,R2 acts as a PE router, R4 acts as a P router and R5 acts as remote 660 PE router (Please refer to figure 1 for complete setup) 661 d) For Mid-point case, R1, R2, R3 and R4 act as shown in above figure HE, Midpoint/PLR 662 and TE respectively 664 6.2.2.
Node Protection - 2 hop primary (from PLR) and 2 hop backup TE 665 tunnels 667 +--------+ +--------+ +--------+ +--------+ 668 | R1 | | R2 | | R3 | | R4 | 669 | UR/HE | | HE/MID |PRI | MID |PRI | MP/TE | 670 | |----| PLR |----| |----| | 671 +--------+ +--------+ +--------+ +--------+ 672 | | 673 BKP| +--------+ | 674 | | R6 | | 675 ---------| BKP |--------- 676 | MID | 677 +--------+ 679 Figure 7. 681 Traffic Num of Labels Num of labels 682 before failure after failure 683 IP TRAFFIC (P-P) 1 1 684 Layer3 VPN (PE-PE) 2 2 685 Layer3 VPN (PE-P) 3 3 686 Layer2 VC (PE-PE) 2 2 687 Layer2 VC (PE-P) 3 3 688 Mid-point LSPs 1 1 690 Note: Please note the following: 692 a) For P-P case, R2, R3 and R4 acts as P routers 693 b) For PE-PE case,R2 acts as PE and R4 acts as a remote PE 694 c) For PE-P case,R2 acts as a PE router, R4 acts as a P router and R5 acts as remote 695 PE router (Please refer to figure 1 for complete setup) 696 d) For Mid-point case, R1, R2, R3 and R4 act as shown in above figure HE, Midpoint/PLR 697 and TE respectively 699 6.2.3. Node Protection - 3+ hop primary (from PLR) and 1 hop backup TE 700 tunnels 702 +--------+ +--------+PRI+--------+PRI+--------+PRI+--------+ 703 | R1 | | R2 | | R3 | | R4 | | R5 | 704 | UR/HE |--| HE/MID |---| MID |---| MP |---| TE | 705 | | | PLR | | | | | | | 706 +--------+ +--------+ +--------+ +--------+ +--------+ 707 BKP| | 708 -------------------------- 710 Figure 8. 
712 Traffic Num of Labels Num of labels 713 before failure after failure 714 IP TRAFFIC (P-P) 1 1 715 Layer3 VPN (PE-PE) 2 2 716 Layer3 VPN (PE-P) 3 3 717 Layer2 VC (PE-PE) 2 2 718 Layer2 VC (PE-P) 3 3 719 Mid-point LSPs 1 1 721 Note: Please note the following: 723 a) For P-P case, R2, R3, R4 and R5 acts as P routers 724 b) For PE-PE case,R2 acts as PE and R5 acts as a remote PE 725 c) For PE-P case,R2 acts as a PE router, R4 acts as a P router and R5 acts as remote 726 PE router (Please refer to figure 1 for complete setup) 727 d) For Mid-point case, R1, R2, R3, R4 and R5 act as shown in above figure HE, 728 Midpoint/PLR and TE respectively 730 6.2.4. Node Protection - 3+ hop primary (from PLR) and 2 hop backup TE 731 tunnels 733 +--------+ +--------+ +--------+ +--------+ +--------+ 734 | R1 | | R2 | | R3 | | R4 | | R5 | 735 | UR/HE | | HE/MID |PRI| MID |PRI| MP |PRI| TE | 736 | |-- | PLR |---| |---| |---| | 737 +--------+ +--------+ +--------+ +--------+ +--------+ 738 BKP| | 739 | +--------+ | 740 | | R6 | | 741 ---------| BKP |------- 742 | MID | 743 +--------+ 745 Figure 9. 747 Traffic Num of Labels Num of labels 748 before failure after failure 749 IP TRAFFIC (P-P) 1 2 750 Layer3 VPN (PE-PE) 2 3 751 Layer3 VPN (PE-P) 3 4 752 Layer2 VC (PE-PE) 2 3 753 Layer2 VC (PE-P) 3 4 754 Mid-point LSPs 1 2 756 Note: Please note the following: 758 a) For P-P case, R2, R3, R4 and R5 acts as P routers 759 b) For PE-PE case,R2 acts as PE and R5 acts as a remote PE 760 c) For PE-P case,R2 acts as a PE router, R4 acts as a P router and R5 acts as remote 761 PE router (Please refer to figure 1 for complete setup) 762 d) For Mid-point case, R1, R2, R3, R4 and R5 act as shown in above figure HE, 763 Midpoint/PLR and TE respectively 765 7. Test Methodology 767 The procedure described in this section can be applied to all the 8 768 base test cases and the associated topologies. The backup as well as 769 the primary tunnels are configured to be alike in terms of bandwidth 770 usage. 
In order to benchmark failover with all label stack 771 depths applicable to current deployments, it is RECOMMENDED 772 to perform all of the test cases provided in this section. The 773 forwarding performance test cases in section 7.1 MUST be performed 774 prior to performing the failover test cases. 776 The considerations of Section 4 of [RFC 2544] are applicable when 777 evaluating the results obtained using these methodologies as well. 779 7.1. MPLS FRR Forwarding Performance 781 Benchmarking Failover Time [RFC 6414] for MPLS protection first 782 requires baseline measurement of the forwarding performance of the 783 test topology including the DUT. Forwarding performance is 784 benchmarked by the Throughput as defined in [RFC 5695] and measured 785 in units of packets per second (pps). This section provides two test cases to benchmark 786 forwarding performance. These are with the DUT configured as a 787 Headend PLR and as a Mid-Point PLR. 789 7.1.1. Headend PLR Forwarding Performance 791 Objective: 793 To benchmark the maximum rate (pps) on the PLR (as headend) over 794 primary LSP and backup LSP. 796 Test Setup: 798 A. Select any one topology out of the 8 from section 6. 800 B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with 801 the DUT as Headend PLR. 803 C. The DUT will also have 2 interfaces connected to the traffic 804 generator/analyzer. (If the node downstream of the PLR is not 805 a simulated node, then the Ingress of the tunnel should have 806 one link connected to the traffic generator, and the node 807 downstream of the PLR or the egress of the tunnel should have 808 a link connected to the traffic analyzer). 810 Procedure: 812 1. Establish the primary LSP on R2 required by the topology 813 selected. 815 2. Establish the backup LSP on R2 required by the selected 816 topology. 818 3. Verify primary and backup LSPs are up and that primary is 819 protected. 821 4. Verify Fast Reroute protection is enabled and ready. 823 5.
Set up traffic streams as described in section 5.7. 825 6. Send MPLS traffic over the primary LSP at the Throughput 826 supported by the DUT (section 6, RFC 2544). 828 7. Record the Throughput over the primary LSP. 830 8. Trigger a link failure as described in section 5.1. 832 9. Verify that the offered load gets mapped to the backup tunnel 833 and measure the Additive Backup Delay (RFC 6414). 835 10. 30 seconds after Failover, stop the offered load and measure 836 the Throughput, Packet Loss, Out-of-Order Packets, and 837 Duplicate Packets over the Backup LSP. 839 11. Adjust the offered load and repeat steps 6 through 10 until 840 the Throughput values for the primary and backup LSPs are 841 equal. 843 12. Record the final Throughput, which corresponds to the offered 844 load that will be used for the Headend PLR failover test 845 cases. 847 7.1.2. Mid-Point PLR Forwarding Performance 849 Objective: 851 To benchmark the maximum rate (pps) on the PLR (as mid-point) over 852 primary LSP and backup LSP. 854 Test Setup: 856 A. Select any one topology out of the 8 from section 6. 858 B. The DUT will also have 2 interfaces connected to the traffic 859 generator. 861 Procedure: 863 1. Establish the primary LSP on R1 required by the topology 864 selected. 866 2. Establish the backup LSP on R2 required by the selected 867 topology. 869 3. Verify primary and backup LSPs are up and that primary is 870 protected. 872 4. Verify Fast Reroute protection is enabled and ready. 874 5. Set up traffic streams as described in section 5.7. 876 6. Send MPLS traffic over the primary LSP at the Throughput 877 supported by the DUT (section 6, RFC 2544). 879 7. Record the Throughput over the primary LSP. 881 8. Trigger a link failure as described in section 5.1. 883 9. Verify that the offered load gets mapped to the backup tunnel 884 and measure the Additive Backup Delay (RFC 6414). 886 10.
30 seconds after Failover, stop the offered load and measure 887 the Throughput, Packet Loss, Out-of-Order Packets, and 888 Duplicate Packets over the Backup LSP. 890 11. Adjust the offered load and repeat steps 6 through 10 until 891 the Throughput values for the primary and backup LSPs are 892 equal. 894 12. Record the final Throughput, which corresponds to the offered 895 load that will be used for the Mid-Point PLR failover test 896 cases. 898 7.2. Headend PLR with Link Failure 900 Objective: 902 To benchmark the MPLS failover time due to link failure events 903 described in section 5.1 experienced by the DUT, which is the 904 Headend PLR. 906 Test Setup: 908 A. Select any one topology out of the 8 from section 6. 910 B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with 911 the DUT as Headend PLR. 913 C. The DUT will also have 2 interfaces connected to the traffic 914 generator/analyzer. (If the node downstream of the PLR is not 915 a simulated node, then the Ingress of the tunnel should have 916 one link connected to the traffic generator, and the node 917 downstream of the PLR or the egress of the tunnel should have 918 a link connected to the traffic analyzer). 920 Test Configuration: 922 1. Configure the number of primaries on R2 and the backups on R2 923 as required by the topology selected. 925 2. Configure the test setup to support Reversion. 927 3. Advertise prefixes (as per the FRR Scalability Table described in 928 Appendix A) by the tail end. 930 Procedure: 932 Test Case "7.1.1. Headend PLR Forwarding Performance" MUST be 933 completed first to obtain the Throughput to use as the offered 934 load. 936 1. Establish the primary LSP on R2 required by the topology 937 selected. 939 2. Establish the backup LSP on R2 required by the selected 940 topology. 942 3. Verify primary and backup LSPs are up and that primary is 943 protected. 945 4. Verify Fast Reroute protection is enabled and ready. 947 5.
Set up traffic streams for the offered load as described in 948 section 5.7. 950 6. Provide the offered load from the tester at the Throughput 951 [RFC 1242] level obtained from test case 7.1.1. 953 7. Verify that traffic is switched over the Primary LSP without packet 954 loss. 956 8. Trigger a link failure as described in section 5.1. 958 9. Verify that the offered load gets mapped to the backup tunnel 959 and measure the Additive Backup Delay. 961 10. 30 seconds after Failover [RFC 6414], stop the offered load 962 and measure the total Failover Packet Loss [RFC 6414]. 964 11. Calculate the Failover Time [RFC 6414] benchmark using the 965 selected Failover Time Calculation Method (TBLM, PLBM, or 966 TBM) [RFC 6414]. 968 12. Restart the offered load and restore the primary LSP to 969 verify Reversion [RFC 6414] occurs and measure the Reversion 970 Packet Loss [RFC 6414]. 972 13. Calculate the Reversion Time [RFC 6414] benchmark using the 973 selected Failover Time Calculation Method (TBLM, PLBM, or 974 TBM) [RFC 6414]. 976 14. Verify that the Headend signals a new LSP and that protection is in 977 place again. 979 It is RECOMMENDED that this procedure be repeated for each of the 980 link failure triggers defined in section 5.1. 982 7.3. Mid-Point PLR with Link Failure 984 Objective: 986 To benchmark the MPLS failover time due to link failure events 987 described in section 5.1 experienced by the DUT, which is the Mid- 988 Point PLR. 990 Test Setup: 992 A. Select any one topology out of the 8 from section 6. 994 B. The DUT will also have 2 interfaces connected to the traffic 995 generator. 997 Test Configuration: 999 1. Configure the number of primaries on R1 and the backups on R2 1000 as required by the topology selected. 1002 2. Configure the test setup to support Reversion. 1004 3. Advertise prefixes (as per the FRR Scalability Table described in 1005 Appendix A) by the tail end. 1007 Procedure: 1009 Test Case "7.1.2.
Mid-Point PLR Forwarding Performance" MUST be 1010 completed first to obtain the Throughput to use as the offered 1011 load. 1013 1. Establish the primary LSP on R1 required by the topology 1014 selected. 1016 2. Establish the backup LSP on R2 required by the selected 1017 topology. 1019 3. Perform steps 3 through 14 from section 7.2 Headend PLR with 1020 Link Failure. 1022 It is RECOMMENDED that this procedure be repeated for each of the 1023 link failure triggers defined in section 5.1. 1025 7.4. Headend PLR with Node Failure 1027 Objective: 1029 To benchmark the MPLS failover time due to Node failure events 1030 described in section 5.1 experienced by the DUT, which is the 1031 Headend PLR. 1033 Test Setup: 1035 A. Select any one topology out of the 8 from section 6. 1037 B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with 1038 the DUT as Headend PLR. 1040 C. The DUT will also have 2 interfaces connected to the traffic 1041 generator/analyzer. 1043 Test Configuration: 1045 1. Configure the number of primaries on R2 and the backups on R2 1046 as required by the topology selected. 1048 2. Configure the test setup to support Reversion. 1050 3. Advertise prefixes (as per the FRR Scalability Table described in 1051 Appendix A) by the tail end. 1053 Procedure: 1055 Test Case "7.1.1. Headend PLR Forwarding Performance" MUST be 1056 completed first to obtain the Throughput to use as the offered 1057 load. 1059 1. Establish the primary LSP on R2 required by the topology 1060 selected. 1062 2. Establish the backup LSP on R2 required by the selected 1063 topology. 1065 3. Verify primary and backup LSPs are up and that primary is 1066 protected. 1068 4. Verify Fast Reroute protection is enabled and ready. 1070 5. Set up traffic streams for the offered load as described in 1071 section 5.7. 1073 6. Provide the offered load from the tester at the Throughput 1074 [RFC 1242] level obtained from test case 7.1.1. 1076 7.
Verify that traffic is switched over the Primary LSP without packet 1077 loss. 1079 8. Trigger a node failure as described in section 5.1. 1081 9. Perform steps 9 through 14 in 7.2 Headend PLR with Link 1082 Failure. 1084 It is RECOMMENDED that this procedure be repeated for each of the 1085 node failure triggers defined in section 5.1. 1087 7.5. Mid-Point PLR with Node Failure 1089 Objective: 1091 To benchmark the MPLS failover time due to Node failure events 1092 described in section 5.1 experienced by the DUT, which is the Mid- 1093 Point PLR. 1095 Test Setup: 1097 A. Select any one topology from sections 6.1 and 6.2. 1099 B. The DUT will also have 2 interfaces connected to the traffic 1100 generator. 1102 Test Configuration: 1104 1. Configure the number of primaries on R1 and the backups on R2 1105 as required by the topology selected. 1107 2. Configure the test setup to support Reversion. 1109 3. Advertise prefixes (as per the FRR Scalability Table described in 1110 Appendix A) by the tail end. 1112 Procedure: 1114 Test Case "7.1.2. Mid-Point PLR Forwarding Performance" MUST be 1115 completed first to obtain the Throughput to use as the offered 1116 load. 1118 1. Establish the primary LSP on R1 required by the topology 1119 selected. 1121 2. Establish the backup LSP on R2 required by the selected 1122 topology. 1124 3. Verify primary and backup LSPs are up and that primary is 1125 protected. 1127 4. Verify Fast Reroute protection is enabled and ready. 1129 5. Set up traffic streams for the offered load as described in 1130 section 5.7. 1132 6. Provide the offered load from the tester at the Throughput 1133 [RFC 1242] level obtained from test case 7.1.2. 1135 7. Verify that traffic is switched over the Primary LSP without packet 1136 loss. 1138 8. Trigger a node failure as described in section 5.1. 1140 9. Perform steps 9 through 14 in 7.2 Headend PLR with Link 1141 Failure.
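Where the Packet-Loss-Based Method (PLBM) of [RFC 6414] is the selected Failover Time Calculation Method, the calculation invoked by the procedures above reduces to dividing the measured Failover Packet Loss by the offered load in pps. The following is a minimal illustrative sketch, not part of this methodology; the function name and values are hypothetical:

```python
# Sketch of the Packet-Loss-Based Method (PLBM) of computing the
# Failover Time benchmark [RFC 6414]: packets lost during failover
# divided by the offered load in packets per second.
# Names and example values are illustrative only.

def failover_time_plbm(failover_packet_loss, offered_load_pps):
    """Return the Failover Time benchmark in seconds."""
    if offered_load_pps <= 0:
        raise ValueError("offered load must be a positive pps rate")
    return failover_packet_loss / offered_load_pps

# Example: 25,000 packets lost at an offered load of 500,000 pps
# corresponds to a 50 ms failover time.
print(failover_time_plbm(25000, 500000))  # 0.05
```

The Reversion Time benchmark can be computed the same way from the measured Reversion Packet Loss, under the same assumption that packet loss is a faithful proxy for outage duration at a constant offered load.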
1143 It is RECOMMENDED that this procedure be repeated for each of the 1144 node failure triggers defined in section 5.1. 1146 8. Reporting Format 1148 For each test, it is RECOMMENDED that the results be reported in the 1149 following format. 1151 Parameter Units 1153 IGP used for the test ISIS-TE/OSPF-TE 1154 Interface types GigE, POS, ATM, VLAN, etc. 1156 Packet Sizes offered to the DUT Bytes (at layer 3) 1158 Offered Load (Throughput) packets per second 1160 IGP routes advertised Number of IGP routes 1162 Penultimate Hop Popping Used/Not Used 1164 RSVP hello timers Milliseconds 1166 Number of Protected tunnels Number of tunnels 1168 Number of VPN routes installed Number of VPN routes 1169 on the Headend 1171 Number of VC tunnels Number of VC tunnels 1173 Number of mid-point tunnels Number of tunnels 1175 Number of Prefixes protected by Number of LSPs 1176 Primary 1178 Topology being used Section number, and 1179 figure reference 1181 Failover Event Event type 1183 Re-optimization Yes/No 1185 Benchmarks (to be recorded for each test case): 1187 Failover- 1188 Failover Time seconds 1189 Failover Packet Loss packets 1190 Additive Backup Delay seconds 1191 Out-of-Order Packets packets 1192 Duplicate Packets packets 1193 Failover Time Calculation Method Method Used 1195 Reversion- 1196 Reversion Time seconds 1197 Reversion Packet Loss packets 1198 Additive Backup Delay seconds 1199 Out-of-Order Packets packets 1200 Duplicate Packets packets 1201 Failover Time Calculation Method Method Used 1203 9. Security Considerations 1205 Benchmarking activities as described in this memo are limited to 1206 technology characterization using controlled stimuli in a laboratory 1207 environment, with dedicated address space and the constraints 1208 specified in the sections above.
1210 The benchmarking network topology will be an independent test setup 1211 and MUST NOT be connected to devices that may forward the test 1212 traffic into a production network, or misroute traffic to the test 1213 management network. 1215 Further, benchmarking is performed on a "black-box" basis, relying 1216 solely on measurements observable external to the DUT/SUT. 1218 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 1219 benchmarking purposes. Any implications for network security arising 1220 from the DUT/SUT SHOULD be identical in the lab and in production 1221 networks. 1223 10. IANA Considerations 1225 This draft does not require any new allocations by IANA. 1227 11. Acknowledgements 1229 We would like to thank Jean Philip Vasseur for his invaluable input 1230 to the document, Curtis Villamizar for his contribution in suggesting 1231 text on the definition of and the need for benchmarking correlated 1232 failures, and Bhavani Parise for his textual input and review. 1233 Additionally, we would like to thank Al Morton, Arun Gandhi, Amrit 1234 Hanspal, Karu Ratnam, Raveesh Janardan, Andrey Kiselev, and Mohan 1235 Nanduri for their formal reviews of this document. 1237 12. References 1239 12.1. Informative References 1241 [RFC 2285] Mandeville, R., "Benchmarking Terminology for LAN 1242 Switching Devices", RFC 2285, February 1998. 1244 [RFC 4689] Poretsky, S., Perser, J., Erramilli, S., and S. Khurana, 1245 "Terminology for Benchmarking Network-layer Traffic 1246 Control Mechanisms", RFC 4689, October 2006. 1248 [RFC 4202] Kompella, K. and Y. Rekhter, "Routing Extensions in Support 1249 of Generalized Multi-Protocol Label Switching (GMPLS)", 1250 RFC 4202, October 2005. 1252 12.2. Normative References 1254 [RFC 1242] Bradner, S., "Benchmarking terminology for network 1255 interconnection devices", RFC 1242, July 1991. 1257 [RFC 2119] Bradner, S., "Key words for use in RFCs to Indicate 1258 Requirement Levels", BCP 14, RFC 2119, March 1997.
1260 [RFC 4090] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute 1261 Extensions to RSVP-TE for LSP Tunnels", RFC 4090, 1262 May 2005. 1264 [RFC 5695] Akhter, A., Asati, R., and C. Pignataro, "MPLS Forwarding 1265 Benchmarking Methodology for IP Flows", RFC 5695, 1266 November 2009. 1268 [RFC 6414] Poretsky, S., Papneja, R., Karthik, J., and S. Vapiwala, 1269 "Benchmarking Terminology for Protection Performance", 1270 RFC 6414, November 2011. 1272 [RFC 2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 1273 Network Interconnect Devices", RFC 2544, March 1999. 1275 [RFC 6412] Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology 1276 for Benchmarking Link-State IGP Data-Plane Route 1277 Convergence", RFC 6412, November 2011. 1279 Appendix A. Fast Reroute Scalability Table 1281 This section provides the recommended numbers for evaluating the 1282 scalability of fast reroute implementations. It also recommends the 1283 typical numbers for IGP/VPNv4 Prefixes, LSP Tunnels and VC entries. 1284 Based on the features supported by the device under test (DUT), 1285 appropriate scaling limits can be used for the test bed. 1287 A1. FRR IGP Table 1289 No. of Headend TE Tunnels IGP Prefixes 1291 1 100 1293 1 500 1295 1 1000 1297 1 2000 1299 1 5000 1301 2 (Load Balance) 100 1303 2 (Load Balance) 500 1305 2 (Load Balance) 1000 1307 2 (Load Balance) 2000 1309 2 (Load Balance) 5000 1311 100 100 1313 500 500 1315 1000 1000 1317 2000 2000 1319 A2. FRR VPN Table 1321 No. of Headend TE Tunnels VPNv4 Prefixes 1323 1 100 1325 1 500 1327 1 1000 1329 1 2000 1331 1 5000 1333 1 10000 1335 1 20000 1337 1 Max 1339 2 (Load Balance) 100 1341 2 (Load Balance) 500 1343 2 (Load Balance) 1000 1345 2 (Load Balance) 2000 1347 2 (Load Balance) 5000 1349 2 (Load Balance) 10000 1351 2 (Load Balance) 20000 1353 2 (Load Balance) Max 1355 A3. FRR Mid-Point LSP Table 1357 The number of mid-point TE LSPs can be configured at the recommended 1358 levels: 100, 500, 1000, 2000, or the maximum supported number.
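For an automated test harness, the scaling combinations in the FRR IGP table (A1) above can be enumerated programmatically. The following sketch is illustrative only and not part of this methodology; the case-tuple layout is an assumption of the example:

```python
# Illustrative sketch (not part of this methodology): enumerate the
# 14 rows of the FRR IGP scalability table (A1) as
# (headend_tunnels, igp_prefixes, load_balance) tuples for a harness.

PREFIX_LEVELS = [100, 500, 1000, 2000, 5000]

cases = []
# Single headend TE tunnel against each prefix level.
cases += [(1, p, False) for p in PREFIX_LEVELS]
# Two load-balanced tunnels against each prefix level.
cases += [(2, p, True) for p in PREFIX_LEVELS]
# Matched tunnel/prefix counts.
cases += [(n, n, False) for n in [100, 500, 1000, 2000]]

for tunnels, prefixes, lb in cases:
    label = "load-balance" if lb else ""
    print(f"{tunnels:5d} tunnels, {prefixes:5d} IGP prefixes {label}")
```

The A2 (VPNv4) and VC tables can be expanded the same way by substituting their prefix levels and the DUT's supported maximum.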
1360 A4. FRR VC Table 1361 No. of Headend TE Tunnels VC entries 1363 1 100 1364 1 500 1365 1 1000 1366 1 2000 1367 1 Max 1368 100 100 1369 500 500 1370 1000 1000 1371 2000 2000 1373 Appendix B. Abbreviations 1375 AIS - Alarm Indication Signal 1376 BFD - Bidirectional Forwarding Detection 1377 BGP - Border Gateway Protocol 1378 CE - Customer Edge 1379 DUT - Device Under Test 1380 FRR - Fast Reroute 1381 IGP - Interior Gateway Protocol 1382 IP - Internet Protocol 1383 LOS - Loss of Signal 1384 LSP - Label Switched Path 1385 MP - Merge Point 1386 MPLS - Multiprotocol Label Switching 1387 N-Nhop - Next-Next Hop 1388 Nhop - Next Hop 1389 OIR - Online Insertion and Removal 1390 P - Provider 1391 PE - Provider Edge 1392 PHP - Penultimate Hop Popping 1393 PLR - Point of Local Repair 1394 RSVP - Resource reSerVation Protocol 1395 SRLG - Shared Risk Link Group 1396 TA - Traffic Analyzer 1397 TE - Traffic Engineering 1398 TG - Traffic Generator 1399 VC - Virtual Circuit 1400 VPN - Virtual Private Network 1402 Authors' Addresses 1404 Rajiv Papneja 1405 Huawei Technologies 1406 2330 Central Expressway 1407 Santa Clara, CA 95050 1408 USA 1410 Email: rajiv.papneja@huawei.com 1412 Samir Vapiwala 1413 Cisco Systems 1414 300 Beaver Brook Road 1415 Boxborough, MA 01719 1416 USA 1418 Email: svapiwal@cisco.com 1420 Jay Karthik 1421 Cisco Systems 1422 300 Beaver Brook Road 1423 Boxborough, MA 01719 1424 USA 1426 Email: jkarthik@cisco.com 1428 Scott Poretsky 1429 Allot Communications 1430 USA 1432 Email: sporetsky@allot.com 1434 Shankar Rao 1435 Qwest Communications 1436 950 17th Street 1437 Suite 1900 1438 Denver, CO 80210 1439 USA 1441 Email: shankar.rao@du.edu 1442 JL. Le Roux 1443 France Telecom 1444 2 av Pierre Marzin 1445 22300 Lannion 1446 France 1448 Email: jeanlouis.leroux@orange.com