Network Working Group                                         R. Papneja
Internet-Draft                                       Huawei Technologies
Intended status: Standards Track                             S. Vapiwala
Expires: April 1, 2013                                        J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                    Allot Communications
                                                                  S. Rao
                                                    Qwest Communications
                                                             JL. Le Roux
                                                          France Telecom
                                                      September 28, 2012

      Methodology for Benchmarking MPLS-TE Fast Reroute Protection
                  draft-ietf-bmwg-protection-meth-11.txt

Abstract

   This document describes the methodology for benchmarking MPLS
   protection mechanisms for link and node protection, as defined in
   RFC 4090.  This document provides test methodologies and testbed
   setup for measuring failover times of Fast Reroute techniques while
   considering all other factors (such as underlying links) that might
   impact recovery times for real-time applications bound to MPLS
   traffic-engineered (MPLS-TE) tunnels.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 1, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Existing Definitions and Requirements
   4.  General Reference Topology
   5.  Test Considerations
     5.1.  Failover Events [RFC 6414]
     5.2.  Failure Detection [RFC 6414]
     5.3.  Use of Data Traffic for MPLS Protection Benchmarking
     5.4.  LSP and Route Scaling
     5.5.  Selection of IGP
     5.6.  Restoration and Reversion [RFC 6414]
     5.7.  Offered Load
     5.8.  Tester Capabilities
     5.9.  Failover Time Measurement Methods
   6.  Reference Test Setup
     6.1.  Link Protection
       6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop
               backup TE tunnels
       6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop
               backup TE tunnels
       6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop
               backup TE tunnels
     6.2.  Node Protection
       6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop
               backup TE tunnels
       6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2 hop
               backup TE tunnels
   7.  Test Methodology
     7.1.  MPLS FRR Forwarding Performance
       7.1.1.  Headend PLR Forwarding Performance
       7.1.2.  Mid-Point PLR Forwarding Performance
     7.2.  Headend PLR with Link Failure
     7.3.  Mid-Point PLR with Link Failure
     7.4.  Headend PLR with Node Failure
     7.5.  Mid-Point PLR with Node Failure
   8.  Reporting Format
   9.  Security Considerations
   10. IANA Considerations
   11. Acknowledgements
   12. References
     12.1.  Informative References
     12.2.  Normative References
   Appendix A.  Fast Reroute Scalability Table
   Appendix B.  Abbreviations
   Authors' Addresses

1.  Introduction

   This document describes the methodology for benchmarking MPLS Fast
   Reroute (FRR) protection mechanisms.  This document uses much of the
   terminology defined in [RFC 6414].  For any conflicting content,
   this document supersedes [RFC 6414].

   Protection mechanisms provide recovery of client services from
   planned or unplanned link or node failures.  MPLS FRR protection
   mechanisms are generally deployed in a network infrastructure where
   MPLS is used for provisioning of point-to-point traffic-engineered
   tunnels (tunnel).  MPLS FRR protection mechanisms aim to reduce the
   service disruption period by minimizing recovery time from the most
   common failures.

   Network elements from different manufacturers behave differently to
   network failures, which impacts the network's ability and
   performance for failure recovery.  It therefore becomes imperative
   for service providers to have a common benchmark to understand the
   performance behaviors of network elements.

   There are two factors impacting service availability: the frequency
   of failures and the duration for which the failures persist.
   Failures can be classified further into two types: correlated and
   uncorrelated.  Correlated and uncorrelated failures may be planned
   or unplanned.

   Planned failures are predictable.
   Network implementations should be able to handle both planned and
   unplanned failures and recover gracefully within a time frame that
   maintains service assurance.  Hence, failover recovery time is one
   of the most important benchmarks that a service provider considers
   in choosing the building blocks for its network infrastructure.

   A correlated failure is the simultaneous occurrence of two or more
   failures.  A typical example is the failure of a logical resource
   (e.g., layer-2 links) due to a dependency on a common physical
   resource (e.g., a common conduit) that fails.  Within the context of
   MPLS protection mechanisms, failures that arise due to Shared Risk
   Link Groups (SRLG) [RFC 4090] can be considered correlated failures.

   MPLS FRR [RFC 4090] allows for the possibility that the Label
   Switched Paths can be re-optimized in the minutes following
   Failover.  IP traffic would be re-routed according to the preferred
   path for the post-failure topology.  Thus, MPLS-FRR may include
   additional steps following the occurrence of the Failure Detection
   [RFC 6414] and Failover Event [RFC 6414]:

   (1)  Failover Event - Primary Path (Working Path) fails

   (2)  Failure Detection - Failover Event is detected

   (3)
        a.  Failover - Working Path switched to Backup Path

        b.  Re-optimization of Working Path (possible change from
            Backup Path)

   (4)  Restoration [RFC 6414]

   (5)  Reversion [RFC 6414]

2.  Document Scope

   This document provides detailed test cases along with different
   topologies and scenarios that should be considered to effectively
   benchmark MPLS FRR protection mechanisms and failover times on the
   data plane.  Different Failover Events and scaling considerations
   are also provided in this document.

   All benchmarking test cases defined in this document apply to
   Facility backup [RFC 4090].
   The test cases cover all possible failure scenarios, and the
   associated procedures benchmark the ability of the Device Under
   Test (DUT) to recover from failures.  Data plane traffic is used to
   benchmark failover times.

   Benchmarking of correlated failures is out of scope of this
   document.  Detection using Bidirectional Forwarding Detection (BFD)
   is outside the scope of this document but is mentioned in
   discussion sections.

   The performance of the control plane is outside the scope of this
   benchmarking.

   As described above, MPLS-FRR may include a re-optimization of the
   Working Path, with possible packet transfer impairments.
   Characterization of re-optimization is beyond the scope of this
   memo.

3.  Existing Definitions and Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in BCP 14, RFC
   2119 [Br97].  RFC 2119 defines the use of these key words to help
   make the intent of standards track documents as clear as possible.
   While this document uses these keywords, this document is not a
   standards track document.

   The reader is assumed to be familiar with the commonly used MPLS
   terminology, some of which is defined in [MPLS-FRR-EXT].

   This document uses much of the terminology defined in [RFC 6414].
   This document also uses existing terminology defined in other BMWG
   work [Br91], [Ma98], [Po06].

4.  General Reference Topology

   Figure 1 illustrates the basic reference testbed and is applicable
   to all the test cases defined in this document.  The Tester is
   comprised of a Traffic Generator (TG), a Test Analyzer (TA), and an
   Emulator.  A Tester is connected to the test network and, depending
   upon the test case, the DUT could vary.
   The Tester sends and receives IP traffic to the tunnel ingress and
   performs signaling protocol emulation to simulate real network
   scenarios in a lab environment.  The Tester may also support
   MPLS-TE signaling to act as the ingress node to the MPLS tunnel.

                      +---------------------------+
                      |             +-------------|-------------+
                      |             |             |             |
                      |             |             |             |
    +--------+    +--------+    +--------+    +--------+    +--------+
TG--|   R1   |----|   R2   |----|   R3   |    |   R4   |    |   R5   |
    |        |----|        |----|        |----|        |----|        |
    +--------+    +--------+    +--------+    +--------+    +--------+
        |             |             |             |             |
        |             |             |             |             |
        |         +--------+        |             |             TA
        +---------|   R6   |--------+             |
                  |        |----------------------+
                  +--------+

                     Fig. 1  Fast Reroute Topology

   The Tester MUST record the number of lost, duplicate, and reordered
   packets.  It should further record arrival and departure times so
   that Failover Time, Additive Latency, and Reversion Time can be
   measured.  The Tester may be a single device or a test system
   emulating all the different roles along a primary or backup path.

   The label stack depends on the following three entities:

   (1)  the type of protection (link vs. node)

   (2)  the number of remaining hops of the primary tunnel from the
        PLR [RFC 6414]

   (3)  the number of remaining hops of the backup tunnel from the PLR

   Due to this dependency, it is RECOMMENDED that the benchmarking of
   failover times be performed on all the topologies provided in
   Section 6.

5.  Test Considerations

   This section discusses the fundamentals of MPLS protection testing:

   (1)  the types of network events that cause failover

   (2)  indications for failover

   (3)  the use of data traffic

   (4)  traffic generation

   (5)  LSP scaling

   (6)  reversion of LSP

   (7)  IGP selection

5.1.  Failover Events [RFC 6414]

   The failover to the backup tunnel is primarily triggered by either
   link or node failures observed downstream of the Point of Local
   Repair (PLR).
   Some of these failure events are listed below.

   Link Failure Events
   - Interface shutdown on PLR side with POS alarm
   - Interface shutdown on remote side with POS alarm
   - Interface shutdown on PLR side with RSVP hello enabled
   - Interface shutdown on remote side with RSVP hello enabled
   - Interface shutdown on PLR side with BFD
   - Interface shutdown on remote side with BFD
   - Fiber pull on the PLR side (both TX & RX or just the TX)
   - Fiber pull on the remote side (both TX & RX or just the RX)
   - Online insertion and removal (OIR) on PLR side
   - OIR on remote side
   - Sub-interface failure on PLR side (e.g., shutting down a VLAN)
   - Sub-interface failure on remote side
   - Parent interface shutdown on PLR side (an interface bearing
     multiple sub-interfaces)
   - Parent interface shutdown on remote side

   Node Failure Events

   - A system reload initiated either by a graceful shutdown or by a
     power failure
   - A system crash due to a software failure or an assert

5.2.  Failure Detection [RFC 6414]

   Link failure detection time depends on the link type and the
   failure detection protocols running.  For SONET/SDH, the alarm type
   (such as LOS, AIS, or RDI) can be used.  Other link types have
   layer-2 alarms, but they may not provide a short enough failure
   detection time.  Ethernet-based links do not have layer-2 failure
   indicators and therefore rely on layer-3 signaling for failure
   detection.  However, for directly connected devices, remote fault
   indication in the Ethernet auto-negotiation scheme could be
   considered a type of layer-2 link failure indicator.

   MPLS has different failure detection techniques, such as BFD or the
   use of RSVP hellos.  These methods can be used for the layer-3
   failure indicators required by Ethernet-based links, or for some
   other non-Ethernet-based links, to help improve failure detection
   time.
   However, these fast failure detection mechanisms are out of scope.

   The test procedures in this document can be used for local failure
   or remote failure scenarios for comprehensive benchmarking and to
   evaluate failover performance independent of the failure detection
   techniques.

5.3.  Use of Data Traffic for MPLS Protection Benchmarking

   Currently, end customers use packet loss as a key metric for
   Failover Time [RFC 6414].  Failover Packet Loss [RFC 6414] is an
   externally observable event and has a direct impact on application
   performance.  MPLS protection is expected to minimize packet loss
   in the event of a failure.  For this reason, it is important to
   develop a standard router benchmarking methodology for measuring
   MPLS protection that uses packet loss as a metric.  At a known rate
   of forwarding, packet loss can be measured and the failover time
   can be determined.  Measurement of control plane signaling to
   establish backup paths is not enough to verify failover.  Failover
   is best determined when packets are actually traversing the backup
   path.

   An additional benefit of using packet loss for the calculation of
   failover time is that it allows the use of a black-box test
   environment.  Data traffic is offered at line rate to the device
   under test (DUT), an emulated network failure event is forced to
   occur, and packet loss is externally measured to calculate the
   convergence time.  This setup is independent of the DUT
   architecture.

   In addition, this methodology considers the packets in error and
   duplicate packets [Po06] that could have been generated during the
   failover process.  The methodologies consider lost, out-of-order
   [Po06], and duplicate packets to be impaired packets that
   contribute to the Failover Time.

5.4.  LSP and Route Scaling

   Failover time performance may vary with the number of established
   primary and backup tunnel label switched paths (LSPs) and installed
   routes.  However, the procedure outlined here should be used for
   any number of LSPs (L) and any number of routes protected by the
   PLR (R).  The values of L and R must be recorded.

5.5.  Selection of IGP

   The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
   proposed here.  See [RFC 6412] for IGP options to consider and
   report.

5.6.  Restoration and Reversion [RFC 6414]

   Path restoration provides a method to restore an alternate primary
   LSP upon failure and to switch traffic from the Backup Path to the
   restored Primary Path (Reversion).  In MPLS-FRR, Reversion can be
   implemented as Global Reversion or Local Reversion.  It is
   important to include Restoration and Reversion as a step in each
   test case to measure the amount of packet loss, out-of-order
   packets, or duplicate packets that is produced.

   Note: In addition to restoration and reversion, re-optimization can
   take place while the failure has not yet been recovered, depending
   on the user configuration and re-optimization timers.

5.7.  Offered Load

   It is suggested that there be three or more traffic streams, as
   long as there is a steady and constant rate of flow for all the
   streams.  In order to monitor the DUT performance for recovery
   times, a set of route prefixes should be advertised before traffic
   is sent.  The traffic should be configured towards these routes.

   At least 16 flows should be used, and more if possible.  Prefix-
   dependency behaviors are key in IP, and tests with route-specific
   flows spread across the routing table will reveal this dependency.
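The flow-count guidance above can be sanity-checked with a short calculation.  The sketch below is a hypothetical illustration (the helper name and the example rates and counts are assumptions, not recommendations) of how thinly spreading a fixed offered load across destinations stretches the gap between successive packets of any one flow:

```python
# Hypothetical helper: the gap between consecutive packets of one flow
# when a fixed aggregate load is spread evenly across num_flows
# destinations.  Example numbers below are assumed, not normative.

def per_flow_gap_ms(aggregate_pps: float, num_flows: int) -> float:
    """Milliseconds between consecutive packets of a single flow."""
    return num_flows * 1000.0 / aggregate_pps

# 100,000 pps aggregate offered load split across 16 flows:
print(per_flow_gap_ms(100_000, 16))       # 0.16 ms between packets
# The same load round-robined over 100,000 prefixes, one at a time:
print(per_flow_gap_ms(100_000, 100_000))  # 1000.0 ms between packets
```

A per-flow gap of 1000 ms would be comparable to, or larger than, the failover time being measured, so loss observed per flow could no longer resolve it; a modest number of steady flows keeps the gap far below the expected failover time.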
   Generating traffic to all of the prefixes reachable by the
   protected tunnel (probably in a round-robin fashion, where the
   traffic is destined to all the prefixes but one prefix at a time in
   a cyclic manner) is not recommended.  If there are many prefixes
   reachable through the LSP, the time interval between two packets
   destined to one prefix may be significantly high and may be
   comparable with the failover time being measured, which does not
   aid in getting an accurate failover measurement.

5.8.  Tester Capabilities

   It is RECOMMENDED that the Tester used to execute each test case
   have the following capabilities:

   1.  Ability to establish MPLS-TE tunnels and push/pop labels.

   2.  Ability to produce a Failover Event [RFC 6414].

   3.  Ability to insert a timestamp in each data packet's IP payload.

   4.  An internal time clock to control timestamping, time
       measurements, and time calculations.

   5.  Ability to disable or tune specific layer-2 and layer-3
       protocol functions on any interface(s).

   6.  Ability to react upon the receipt of a path error from the PLR.

   The Tester MAY be capable of making non-data-plane convergence
   observations and of using those observations for measurements.

5.9.  Failover Time Measurement Methods

   Failover Time is calculated using one of the following three
   methods:

   1.  Packet-Loss-Based Method (PLBM): (Number of packets dropped /
       packets per second * 1000) milliseconds.  This method could
       also be referred to as the Loss-Derived Method.

   2.  Time-Based Loss Method (TBLM): This method relies on the
       ability of the traffic generators to provide statistics which
       reveal the duration of failure in milliseconds based on when
       the packet loss occurred (interval between non-zero packet loss
       and zero loss).

   3.  Timestamp-Based Method (TBM): This method of failover
       calculation is based on the timestamp that gets transmitted as
       payload in the packets originated by the generator.  The
       Traffic Analyzer records the timestamp of the last packet
       received before the failover event and the first packet after
       the failover and derives the time based on the difference
       between these two timestamps.  Note: The payload could also
       contain sequence numbers for out-of-order and duplicate packet
       calculation.

   The Timestamp-Based Method would be able to detect Reversion
   impairments beyond loss; thus, it is the RECOMMENDED Failover Time
   method.

6.  Reference Test Setup

   In addition to the general reference topology shown in Figure 1,
   this section provides detailed insight into various proposed test
   setups that should be considered for comprehensively benchmarking
   the failover time in different roles along the primary tunnel.

   This section proposes a set of topologies that covers all the
   scenarios for local protection.  All of these topologies can be
   mapped to the reference topology shown in Figure 1.  Topologies
   provided in this section refer to the testbed required to benchmark
   failover time when the DUT is configured as a PLR in either the
   Headend or mid-point role.  Provided with each topology below is
   the label stack at the PLR.  Penultimate Hop Popping (PHP) MAY be
   used and MUST be reported when used.

   Figures 2 through 9 use the following conventions and are subsets
   of Figure 1:

   a)  HE is Headend
   b)  TE is Tail-End
   c)  MID is Mid-point
   d)  MP is Merge Point
   e)  PLR is Point of Local Repair
   f)  PRI is Primary Path
   g)  BKP denotes Backup Path and Nodes
   h)  UR is Upstream Router

6.1.  Link Protection

6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop backup TE
        tunnels

   +-------+    +--------+     +--------+
   |  R1   |    |  R2    | PRI |  R3    |
   | UR/HE |----| HE/MID |-----| MP/TE  |
   |       |    |  PLR   |-----|        |
   +-------+    +--------+ BKP +--------+

                 Figure 2.

   Traffic              Num of Labels    Num of Labels
                        before failure   after failure
   IP TRAFFIC (P-P)           0                0
   Layer3 VPN (PE-PE)         1                1
   Layer3 VPN (PE-P)          2                2
   Layer2 VC (PE-PE)          1                1
   Layer2 VC (PE-P)           2                2
   Mid-point LSPs             0                0

   Please note the following:

   a)  For the P-P case, R2 and R3 act as P routers.
   b)  For the PE-PE case, R2 acts as the PE and R3 acts as the remote
       PE.
   c)  For the PE-P case, R2 acts as a PE router, R3 acts as a P
       router, and R5 acts as the remote PE router (please refer to
       Figure 1 for the complete setup).
   d)  For the Mid-point case, R1, R2, and R3 act as the HE,
       Mid-point/PLR, and TE, respectively, as shown in the figure
       above.

6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop backup TE
        tunnels

   +-------+    +--------+    +--------+
   |  R1   |    |  R2    |    |  R3    |
   | UR/HE |    | HE/MID |PRI | MP/TE  |
   |       |----|  PLR   |----|        |
   +-------+    +--------+    +--------+
                   |BKP           |
                   |  +--------+  |
                   |  |  R6    |  |
                   +--|  BKP   |--+
                      |  MID   |
                      +--------+

                 Figure 3.

   Traffic              Num of Labels    Num of Labels
                        before failure   after failure
   IP TRAFFIC (P-P)           0                1
   Layer3 VPN (PE-PE)         1                2
   Layer3 VPN (PE-P)          2                3
   Layer2 VC (PE-PE)          1                2
   Layer2 VC (PE-P)           2                3
   Mid-point LSPs             0                1

   Please note the following:

   a)  For the P-P case, R2 and R3 act as P routers.
   b)  For the PE-PE case, R2 acts as the PE and R3 acts as the remote
       PE.
   c)  For the PE-P case, R2 acts as a PE router, R3 acts as a P
       router, and R5 acts as the remote PE router (please refer to
       Figure 1 for the complete setup).
   d)  For the Mid-point case, R1, R2, and R3 act as the HE,
       Mid-point/PLR, and TE, respectively, as shown in the figure
       above.

6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop backup
        TE tunnels

   +--------+    +--------+    +--------+      +--------+
   |  R1    |    |  R2    |PRI |  R3    |PRI   |  R4    |
   | UR/HE  |----| HE/MID |----| MP/MID |------|  TE    |
   |        |    |  PLR   |----|        |      |        |
   +--------+    +--------+ BKP+--------+      +--------+

                 Figure 4.

   Traffic              Num of Labels    Num of Labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                1
   Layer3 VPN (PE-PE)         2                2
   Layer3 VPN (PE-P)          3                3
   Layer2 VC (PE-PE)          2                2
   Layer2 VC (PE-P)           3                3
   Mid-point LSPs             1                1

   Please note the following:

   a)  For the P-P case, R2, R3, and R4 act as P routers.
   b)  For the PE-PE case, R2 acts as the PE and R4 acts as the remote
       PE.
   c)  For the PE-P case, R2 acts as a PE router, R3 acts as a P
       router, and R5 acts as the remote PE router (please refer to
       Figure 1 for the complete setup).
   d)  For the Mid-point case, R1, R2, R3, and R4 act as shown in the
       figure above, with R1 as HE, R2 as Mid-point/PLR, and R4 as TE.

6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop backup
        TE tunnels

   +--------+    +--------+PRI +--------+ PRI  +--------+
   |  R1    |    |  R2    |    |  R3    |      |  R4    |
   | UR/HE  |----| HE/MID |----| MP/MID |------|  TE    |
   |        |    |  PLR   |    |        |      |        |
   +--------+    +--------+    +--------+      +--------+
                 BKP|              |
                    | +--------+   |
                    | |  R6    |   |
                    +-|  BKP   |---+
                      |  MID   |
                      +--------+

                 Figure 5.
   Traffic              Num of Labels    Num of Labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                2
   Layer3 VPN (PE-PE)         2                3
   Layer3 VPN (PE-P)          3                4
   Layer2 VC (PE-PE)          2                3
   Layer2 VC (PE-P)           3                4
   Mid-point LSPs             1                2

   Please note the following:

   a)  For the P-P case, R2, R3, and R4 act as P routers.
   b)  For the PE-PE case, R2 acts as the PE and R4 acts as the remote
       PE.
   c)  For the PE-P case, R2 acts as a PE router, R3 acts as a P
       router, and R5 acts as the remote PE router (please refer to
       Figure 1 for the complete setup).
   d)  For the Mid-point case, R1, R2, R3, and R4 act as shown in the
       figure above, with R1 as HE, R2 as Mid-point/PLR, and R4 as TE.

6.2.  Node Protection

6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop backup TE
        tunnels

   +--------+    +--------+    +--------+      +--------+
   |  R1    |    |  R2    |PRI |  R3    | PRI  |  R4    |
   | UR/HE  |----| HE/MID |----|  MID   |------| MP/TE  |
   |        |    |  PLR   |    |        |      |        |
   +--------+    +--------+    +--------+      +--------+
                    |BKP                          |
                    +-----------------------------+

                 Figure 6.

   Traffic              Num of Labels    Num of Labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                0
   Layer3 VPN (PE-PE)         2                1
   Layer3 VPN (PE-P)          3                2
   Layer2 VC (PE-PE)          2                1
   Layer2 VC (PE-P)           3                2
   Mid-point LSPs             1                0

   Please note the following:

   a)  For the P-P case, R2, R3, and R4 act as P routers.
   b)  For the PE-PE case, R2 acts as the PE and R4 acts as the remote
       PE.
   c)  For the PE-P case, R2 acts as a PE router, R4 acts as a P
       router, and R5 acts as the remote PE router (please refer to
       Figure 1 for the complete setup).
   d)  For the Mid-point case, R1, R2, R3, and R4 act as shown in the
       figure above, with R1 as HE, R2 as Mid-point/PLR, and R4 as TE.

6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop backup TE
        tunnels

   +--------+    +--------+    +--------+    +--------+
   |  R1    |    |  R2    |    |  R3    |    |  R4    |
   | UR/HE  |    | HE/MID |PRI |  MID   |PRI | MP/TE  |
   |        |----|  PLR   |----|        |----|        |
   +--------+    +--------+    +--------+    +--------+
                    |                           |
                 BKP|     +--------+            |
                    |     |  R6    |            |
                    +-----|  BKP   |------------+
                          |  MID   |
                          +--------+

                 Figure 7.

   Traffic              Num of Labels    Num of Labels
                        before failure   after failure
   IP TRAFFIC (P-P)           1                1
   Layer3 VPN (PE-PE)         2                2
   Layer3 VPN (PE-P)          3                3
   Layer2 VC (PE-PE)          2                2
   Layer2 VC (PE-P)           3                3
   Mid-point LSPs             1                1

   Please note the following:

   a)  For the P-P case, R2, R3, and R4 act as P routers.
   b)  For the PE-PE case, R2 acts as the PE and R4 acts as the remote
       PE.
   c)  For the PE-P case, R2 acts as a PE router, R4 acts as a P
       router, and R5 acts as the remote PE router (please refer to
       Figure 1 for the complete setup).
   d)  For the Mid-point case, R1, R2, R3, and R4 act as shown in the
       figure above, with R1 as HE, R2 as Mid-point/PLR, and R4 as TE.

6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop backup
        TE tunnels

   +--------+  +--------+PRI+--------+PRI+--------+PRI+--------+
   |  R1    |  |  R2    |   |  R3    |   |  R4    |   |  R5    |
   | UR/HE  |--| HE/MID |---|  MID   |---|  MP    |---|  TE    |
   |        |  |  PLR   |   |        |   |        |   |        |
   +--------+  +--------+   +--------+   +--------+   +--------+
                  BKP|                       |
                     +-----------------------+

                 Figure 8.
   Traffic              Num of labels   Num of labels
                        before failure  after failure
   IP TRAFFIC (P-P)     1               1
   Layer3 VPN (PE-PE)   2               2
   Layer3 VPN (PE-P)    3               3
   Layer2 VC (PE-PE)    2               2
   Layer2 VC (PE-P)     3               3
   Mid-point LSPs       1               1

   Note: Please note the following:

   a) For the P-P case, R2, R3, R4 and R5 act as P routers.
   b) For the PE-PE case, R2 acts as a PE and R5 acts as a remote PE.
   c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router
      and R5 acts as a remote PE router (please refer to Figure 1 for
      the complete setup).
   d) For the Mid-point case, R1, R2, R3, R4 and R5 act as HE,
      Midpoint/PLR, Midpoint, MP and TE respectively, as shown in the
      figure above.

6.2.4. Node Protection - 3+ hop primary (from PLR) and 2 hop backup TE
       tunnels

   +--------+  +--------+   +--------+   +--------+   +--------+
   | R1     |  | R2     |   | R3     |   | R4     |   | R5     |
   | UR/HE  |  | HE/MID |PRI| MID    |PRI| MP     |PRI| TE     |
   |        |--| PLR    |---|        |---|        |---|        |
   +--------+  +--------+   +--------+   +--------+   +--------+
                 BKP|                        |
                    |       +--------+       |
                    |       | R6     |       |
                    --------| BKP    |-------
                            | MID    |
                            +--------+

                            Figure 9.

   Traffic              Num of labels   Num of labels
                        before failure  after failure
   IP TRAFFIC (P-P)     1               2
   Layer3 VPN (PE-PE)   2               3
   Layer3 VPN (PE-P)    3               4
   Layer2 VC (PE-PE)    2               3
   Layer2 VC (PE-P)     3               4
   Mid-point LSPs       1               2

   Note: Please note the following:

   a) For the P-P case, R2, R3, R4 and R5 act as P routers.
   b) For the PE-PE case, R2 acts as a PE and R5 acts as a remote PE.
   c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router
      and R5 acts as a remote PE router (please refer to Figure 1 for
      the complete setup).
   d) For the Mid-point case, R1, R2, R3, R4 and R5 act as HE,
      Midpoint/PLR, Midpoint, MP and TE respectively, as shown in the
      figure above.

7. Test Methodology

   The procedure described in this section can be applied to all 8
   base test cases and the associated topologies.  The backup as well
   as the primary tunnels are configured to be alike in terms of
   bandwidth usage.
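   The label-depth tables of Section 6 can be captured in a small
   lookup structure.  The sketch below (Python; the names, keys, and
   helper function are illustrative only and not part of this
   methodology) encodes two of the node-protection cases and computes
   the change in label stack depth caused by failover:

```python
# Illustrative encoding of two of the Section 6 label-depth tables.
# Topology keys and the helper name are hypothetical, not defined by
# this document.  Values are (labels before failure, labels after).
LABEL_DEPTH = {
    # Section 6.2.1: 2-hop primary (from PLR), 1-hop backup (Figure 6)
    "node-2hop-primary-1hop-backup": {
        "IP TRAFFIC (P-P)":   (1, 0),
        "Layer3 VPN (PE-PE)": (2, 1),
        "Layer3 VPN (PE-P)":  (3, 2),
        "Layer2 VC (PE-PE)":  (2, 1),
        "Layer2 VC (PE-P)":   (3, 2),
        "Mid-point LSPs":     (1, 0),
    },
    # Section 6.2.4: 3+ hop primary (from PLR), 2-hop backup (Figure 9)
    "node-3hop-primary-2hop-backup": {
        "IP TRAFFIC (P-P)":   (1, 2),
        "Layer3 VPN (PE-PE)": (2, 3),
        "Layer3 VPN (PE-P)":  (3, 4),
        "Layer2 VC (PE-PE)":  (2, 3),
        "Layer2 VC (PE-P)":   (3, 4),
        "Mid-point LSPs":     (1, 2),
    },
}

def depth_change(topology: str, traffic: str) -> int:
    """Change in label stack depth caused by failover (after - before)."""
    before, after = LABEL_DEPTH[topology][traffic]
    return after - before
```

   Consistent with the tables, the 1-hop backup case removes one label
   from the stack for every traffic type, while the 2-hop backup case
   adds one.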
   In order to benchmark failover with all label stack depths
   applicable to current deployments, it is RECOMMENDED to perform
   all of the test cases provided in this section.  The forwarding
   performance test cases in section 7.1 MUST be performed prior to
   performing the failover test cases.

   The considerations of Section 4 of [RFC 2544] are applicable when
   evaluating the results obtained using these methodologies as well.

7.1. MPLS FRR Forwarding Performance

   Benchmarking Failover Time [RFC 6414] for MPLS protection first
   requires a baseline measurement of the forwarding performance of
   the test topology, including the DUT.  Forwarding performance is
   benchmarked by the Throughput as defined in [MPLS-FWD] and measured
   in units of packets per second (pps).  This section provides two
   test cases to benchmark forwarding performance: one with the DUT
   configured as a Headend PLR and one with the DUT configured as a
   Mid-Point PLR.

7.1.1. Headend PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as headend) over
      the primary LSP and the backup LSP.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
         the DUT as Headend PLR.

      C. The DUT will also have 2 interfaces connected to the traffic
         generator/analyzer.  (If the node downstream of the PLR is
         not a simulated node, then the ingress of the tunnel should
         have one link connected to the traffic generator, and the
         node downstream of the PLR or the egress of the tunnel should
         have a link connected to the traffic analyzer.)

   Procedure:

      1. Establish the primary LSP on R2 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5.
      Setup the traffic streams as described in section 5.7.

      6. Send MPLS traffic over the primary LSP at the Throughput
         supported by the DUT (section 6, RFC 2544).

      7. Record the Throughput over the primary LSP.

      8. Trigger a link failure as described in section 5.1.

      9. Verify that the offered load gets mapped to the backup tunnel
         and measure the Additive Backup Delay (RFC 6414).

      10. 30 seconds after Failover, stop the offered load and measure
          the Throughput, Packet Loss, Out-of-Order Packets, and
          Duplicate Packets over the backup LSP.

      11. Adjust the offered load and repeat steps 6 through 10 until
          the Throughput values for the primary and backup LSPs are
          equal.

      12. Record the final Throughput, which corresponds to the
          offered load that will be used for the Headend PLR failover
          test cases.

7.1.2. Mid-Point PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as mid-point)
      over the primary LSP and the backup LSP.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. The DUT will also have 2 interfaces connected to the traffic
         generator.

   Procedure:

      1. Establish the primary LSP on R1 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5. Setup the traffic streams as described in section 5.7.

      6. Send MPLS traffic over the primary LSP at the Throughput
         supported by the DUT (section 6, RFC 2544).

      7. Record the Throughput over the primary LSP.

      8. Trigger a link failure as described in section 5.1.

      9. Verify that the offered load gets mapped to the backup tunnel
         and measure the Additive Backup Delay (RFC 6414).

      10.
      30 seconds after Failover, stop the offered load and measure
          the Throughput, Packet Loss, Out-of-Order Packets, and
          Duplicate Packets over the backup LSP.

      11. Adjust the offered load and repeat steps 6 through 10 until
          the Throughput values for the primary and backup LSPs are
          equal.

      12. Record the final Throughput, which corresponds to the
          offered load that will be used for the Mid-Point PLR
          failover test cases.

7.2. Headend PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to the link failure
      events described in section 5.1, as experienced by the DUT,
      which is the Headend PLR.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
         the DUT as Headend PLR.

      C. The DUT will also have 2 interfaces connected to the traffic
         generator/analyzer.  (If the node downstream of the PLR is
         not a simulated node, then the ingress of the tunnel should
         have one link connected to the traffic generator, and the
         node downstream of the PLR or the egress of the tunnel should
         have a link connected to the traffic analyzer.)

   Test Configuration:

      1. Configure the number of primaries on R2 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.1. Headend PLR Forwarding Performance" MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1. Establish the primary LSP on R2 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5.
      Setup the traffic streams for the offered load as described in
         section 5.7.

      6. Provide the offered load from the tester at the Throughput
         [Br91] level obtained from test case 7.1.1.

      7. Verify that traffic is switched over the primary LSP without
         packet loss.

      8. Trigger a link failure as described in section 5.1.

      9. Verify that the offered load gets mapped to the backup tunnel
         and measure the Additive Backup Delay.

      10. 30 seconds after Failover [RFC 6414], stop the offered load
          and measure the total Failover Packet Loss [RFC 6414].

      11. Calculate the Failover Time [RFC 6414] benchmark using the
          selected Failover Time Calculation Method (TBLM, PLBM, or
          TBM) [RFC 6414].

      12. Restart the offered load and restore the primary LSP to
          verify that Reversion [RFC 6414] occurs, and measure the
          Reversion Packet Loss [RFC 6414].

      13. Calculate the Reversion Time [RFC 6414] benchmark using the
          selected Failover Time Calculation Method (TBLM, PLBM, or
          TBM) [RFC 6414].

      14. Verify that the Headend signals a new LSP and that
          protection is in place again.

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in section 5.1.

7.3. Mid-Point PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to the link failure
      events described in section 5.1, as experienced by the DUT,
      which is the Mid-Point PLR.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. The DUT will also have 2 interfaces connected to the traffic
         generator.

   Test Configuration:

      1. Configure the number of primaries on R1 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.2.
      Mid-Point PLR Forwarding Performance" MUST be completed first to
      obtain the Throughput to use as the offered load.

      1. Establish the primary LSP on R1 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Perform steps 3 through 14 from section 7.2, Headend PLR with
         Link Failure.

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in section 5.1.

7.4. Headend PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to the node failure
      events described in section 5.1, as experienced by the DUT,
      which is the Headend PLR.

   Test Setup:

      A. Select any one topology out of the 8 from section 6.

      B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
         the DUT as Headend PLR.

      C. The DUT will also have 2 interfaces connected to the traffic
         generator/analyzer.

   Test Configuration:

      1. Configure the number of primaries on R2 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.1. Headend PLR Forwarding Performance" MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1. Establish the primary LSP on R2 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5. Setup the traffic streams for the offered load as described
         in section 5.7.

      6. Provide the offered load from the tester at the Throughput
         [Br91] level obtained from test case 7.1.1.

      7.
      Verify that traffic is switched over the primary LSP without
         packet loss.

      8. Trigger a node failure as described in section 5.1.

      9. Perform steps 9 through 14 in section 7.2, Headend PLR with
         Link Failure.

   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in section 5.1.

7.5. Mid-Point PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to the node failure
      events described in section 5.1, as experienced by the DUT,
      which is the Mid-Point PLR.

   Test Setup:

      A. Select any one topology from sections 6.1 to 6.2.

      B. The DUT will also have 2 interfaces connected to the traffic
         generator.

   Test Configuration:

      1. Configure the number of primaries on R1 and the backups on R2
         as required by the selected topology.

      2. Configure the test setup to support Reversion.

      3. Advertise prefixes (as per the FRR Scalability Table
         described in Appendix A) from the tail end.

   Procedure:

      Test Case "7.1.2. Mid-Point PLR Forwarding Performance" MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1. Establish the primary LSP on R1 as required by the selected
         topology.

      2. Establish the backup LSP on R2 as required by the selected
         topology.

      3. Verify that the primary and backup LSPs are up and that the
         primary is protected.

      4. Verify that Fast Reroute protection is enabled and ready.

      5. Setup the traffic streams for the offered load as described
         in section 5.7.

      6. Provide the offered load from the tester at the Throughput
         [Br91] level obtained from test case 7.1.2.

      7. Verify that traffic is switched over the primary LSP without
         packet loss.

      8. Trigger a node failure as described in section 5.1.

      9. Perform steps 9 through 14 in section 7.2, Headend PLR with
         Link Failure.
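   The Packet-Loss-Based Method (PLBM) of RFC 6414, one of the three
   Failover Time Calculation Methods referenced in the procedures
   above, derives the failover time from the packets lost during
   failover and the offered load rate.  A minimal sketch (the function
   name and units are our own, not defined by the methodology):

```python
def plbm_failover_time(failover_packet_loss: int,
                       offered_load_pps: float) -> float:
    """Packet-Loss-Based Method (PLBM), per RFC 6414: the number of
    packets lost during failover divided by the offered load rate in
    packets per second approximates the failover time in seconds."""
    if offered_load_pps <= 0:
        raise ValueError("offered load must be a positive rate in pps")
    return failover_packet_loss / offered_load_pps

# Example: 5000 packets lost at an offered load of 100,000 pps
# corresponds to a failover time of 0.05 seconds (50 ms).
```

   This method assumes a constant offered load during the failover
   interval, which is why the procedures require the tester to send
   at the Throughput level obtained in section 7.1.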
   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in section 5.1.

8. Reporting Format

   For each test, it is RECOMMENDED that the results be reported in
   the following format.

   Parameter                          Units

   IGP used for the test              ISIS-TE / OSPF-TE

   Interface types                    GigE, POS, ATM, VLAN, etc.

   Packet sizes offered to the DUT    Bytes (at Layer 3)

   Offered Load (Throughput)          Packets per second

   IGP routes advertised              Number of IGP routes

   Penultimate Hop Popping            Used / Not Used

   RSVP hello timers                  Milliseconds

   Number of Protected tunnels        Number of tunnels

   Number of VPN routes installed     Number of VPN routes
   on the Headend

   Number of VC tunnels               Number of VC tunnels

   Number of mid-point tunnels        Number of tunnels

   Number of Prefixes protected by    Number of LSPs
   Primary

   Topology being used                Section number and
                                      figure reference

   Failover Event                     Event type

   Re-optimization                    Yes / No

   Benchmarks (to be recorded for each test case):

   Failover -
      Failover Time                   Seconds
      Failover Packet Loss            Packets
      Additive Backup Delay           Seconds
      Out-of-Order Packets            Packets
      Duplicate Packets               Packets
      Failover Time Calculation       Method used
      Method

   Reversion -
      Reversion Time                  Seconds
      Reversion Packet Loss           Packets
      Additive Backup Delay           Seconds
      Out-of-Order Packets            Packets
      Duplicate Packets               Packets
      Failover Time Calculation       Method used
      Method

9. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the
   constraints specified in the sections above.
   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

10. IANA Considerations

   This draft does not require any new allocations by IANA.

11. Acknowledgements

   We would like to thank Jean Philip Vasseur for his invaluable input
   to the document, Curtis Villamizar for his contribution in
   suggesting text on the definition of and need for benchmarking
   correlated failures, and Bhavani Parise for his textual input and
   review.  Additionally, we would like to thank Al Morton, Arun
   Gandhi, Amrit Hanspal, Karu Ratnam, Raveesh Janardan, Andrey
   Kiselev, and Mohan Nanduri for their formal reviews of this
   document.

12. References

12.1. Informative References

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC4689]  Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
              "Terminology for Benchmarking Network-layer Traffic
              Control Mechanisms", RFC 4689, October 2006.

12.2. Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.
   [RFC5695]  Akhter, A., Asati, R., and C. Pignataro, "MPLS
              Forwarding Benchmarking Methodology for IP Flows",
              RFC 5695, November 2009.

Appendix A. Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of Fast Reroute implementations.  It also recommends
   the typical numbers for IGP/VPNv4 prefixes, LSP tunnels and VC
   entries.  Based on the features supported by the device under test
   (DUT), appropriate scaling limits can be used for the test bed.

   A1. FRR IGP Table

   No. of Headend TE Tunnels   IGP Prefixes

   1                           100
   1                           500
   1                           1000
   1                           2000
   1                           5000
   2 (Load Balance)            100
   2 (Load Balance)            500
   2 (Load Balance)            1000
   2 (Load Balance)            2000
   2 (Load Balance)            5000
   100                         100
   500                         500
   1000                        1000
   2000                        2000

   A2. FRR VPN Table

   No. of Headend TE Tunnels   VPNv4 Prefixes

   1                           100
   1                           500
   1                           1000
   1                           2000
   1                           5000
   1                           10000
   1                           20000
   1                           Max
   2 (Load Balance)            100
   2 (Load Balance)            500
   2 (Load Balance)            1000
   2 (Load Balance)            2000
   2 (Load Balance)            5000
   2 (Load Balance)            10000
   2 (Load Balance)            20000
   2 (Load Balance)            Max

   A3. FRR Mid-Point LSP Table

   The number of mid-point TE LSPs could be configured at the
   recommended levels: 100, 500, 1000, 2000, or the maximum supported
   number.

   A4. FRR VC Table

   No. of Headend TE Tunnels   VC Entries

   1                           100
   1                           500
   1                           1000
   1                           2000
   1                           Max
   100                         100
   500                         500
   1000                        1000
   2000                        2000

Appendix B.
   Abbreviations

   AIS    - Alarm Indication Signal
   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   IP     - Internet Protocol
   LOS    - Loss of Signal
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multi-Protocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   RSVP   - Resource reSerVation Protocol
   SRLG   - Shared Risk Link Group
   TA     - Traffic Analyzer
   TE     - Traffic Engineering
   TG     - Traffic Generator
   VC     - Virtual Circuit
   VPN    - Virtual Private Network

Authors' Addresses

   Rajiv Papneja
   Huawei Technologies
   2330 Central Expressway
   Santa Clara, CA 95050
   USA

   Email: rajiv.papneja@huawei.com

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA

   Email: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA

   Email: jkarthik@cisco.com

   Scott Poretsky
   Allot Communications
   USA

   Email: sporetsky@allot.com

   Shankar Rao
   950 17th Street
   Suite 1900
   Denver, CO 80210
   USA

   Email: shankar.rao@du.edu

   JL. Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France

   Email: jeanlouis.leroux@orange.com