Network Working Group                                         R. Papneja
Internet-Draft                                       Huawei Technologies
Intended status: Informational                               S. Vapiwala
Expires: April 28, 2012                                       J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                    Allot Communications
                                                                  S. Rao
                                                    Qwest Communications
                                                            J.L. Le Roux
                                                          France Telecom
                                                        October 26, 2011

        Methodology for Benchmarking MPLS Protection Mechanisms
                 draft-ietf-bmwg-protection-meth-09.txt

Abstract

   This document describes the methodology for benchmarking MPLS
   protection mechanisms for link and node protection, as defined in
   RFC 4090.  It provides test methodologies and testbed setups for
   measuring failover times while considering all dependencies that
   might impact faster recovery of real-time applications bound to
   MPLS-based traffic-engineered tunnels.  The benchmarking terms used
   in this document are defined in RFC 6414.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 28, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Existing Definitions and Requirements
   4.  General Reference Topology
   5.  Test Considerations
     5.1.  Failover Events [RFC6414]
     5.2.  Failure Detection [RFC6414]
     5.3.  Use of Data Traffic for MPLS Protection Benchmarking
     5.4.  LSP and Route Scaling
     5.5.  Selection of IGP
     5.6.  Restoration and Reversion [RFC6414]
     5.7.  Offered Load
     5.8.  Tester Capabilities
   6.  Reference Test Setup
     6.1.  Link Protection
       6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop
               backup TE tunnels
       6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop
               backup TE tunnels
       6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop
               backup TE tunnels
     6.2.  Node Protection
       6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop
               backup TE tunnels
       6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop
               backup TE tunnels
       6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2 hop
               backup TE tunnels
   7.  Test Methodology
     7.1.  MPLS FRR Forwarding Performance
       7.1.1.  Headend PLR Forwarding Performance
       7.1.2.  Mid-Point PLR Forwarding Performance
       7.1.3.  Egress PLR Forwarding Performance
     7.2.  Headend PLR with Link Failure
     7.3.  Mid-Point PLR with Link Failure
     7.4.  Headend PLR with Node Failure
     7.5.  Mid-Point PLR with Node Failure
   8.  Reporting Format
   9.  Security Considerations
   10. IANA Considerations
   11. Acknowledgements
   12. References
     12.1. Informative References
     12.2. Normative References
   Appendix A.  Fast Reroute Scalability Table
   Appendix B.  Abbreviations
   Authors' Addresses

1.  Introduction

   This document describes the methodology for benchmarking MPLS-based
   protection mechanisms.  The new terminology that this document
   introduces is defined in [RFC6414].

   MPLS-based protection mechanisms provide fast recovery of real-time
   services from planned or unplanned link or node failures.  MPLS
   protection mechanisms are generally deployed in a network
   infrastructure where MPLS is used to provision point-to-point
   traffic-engineered tunnels (tunnels).  MPLS-based protection
   mechanisms reduce the service disruption period by minimizing
   recovery time from the most common failures.

   Network elements from different manufacturers respond differently to
   network failures, which impacts the network's ability to recover
   from failures and the speed of that recovery.  It is therefore
   imperative for service providers to have a common benchmark with
   which to understand the performance behaviors of network elements.

   There are two factors impacting service availability: the frequency
   of failures and the duration for which the failures persist.
   Failures can be further classified into two types: correlated and
   uncorrelated.  Correlated and uncorrelated failures may be planned
   or unplanned.

   Planned failures are predictable.  Network implementations should be
   able to handle both planned and unplanned failures and recover
   gracefully within a time frame that maintains service assurance.
   Hence, failover recovery time is one of the most important
   benchmarks that a service provider considers when choosing the
   building blocks for its network infrastructure.

   A correlated failure is the simultaneous occurrence of two or more
   failures.  A typical example is the failure of a logical resource
   (e.g., layer-2 links) due to a dependency on a common physical
   resource (e.g., a common conduit) that fails.  Within the context of
   MPLS protection mechanisms, failures that arise due to Shared Risk
   Link Groups (SRLGs) [MPLS-FRR-EXT] can be considered correlated
   failures.  Not all correlated failures are predictable in advance,
   for example, those caused by natural disasters.

   MPLS Fast Reroute (MPLS-FRR) allows for the possibility that the
   Label Switched Paths can be re-optimized in the minutes following
   Failover.  IP traffic would be re-routed according to the preferred
   path for the post-failure topology.  Thus, MPLS-FRR adds an
   additional step to the general model:

   (1)  Failover Event - Primary Path (Working Path) fails

   (2)  Failure Detection - Failover Event is detected

   (3)
        a.  Failover - Working Path switched to Backup Path

        b.  Re-Optimization of Working Path (possible change from
            Backup Path)

   (4)  Restoration - Primary Path recovers from a Failover Event

   (5)  Reversion (optional) - Working Path returns to Primary Path

2.  Document Scope

   This document provides detailed test cases along with different
   topologies and scenarios that should be considered to effectively
   benchmark MPLS protection mechanisms and failover times on the data
   plane.  Different Failover Events and scaling considerations are
   also provided in this document.

   All benchmarking test cases defined in this document apply to both
   facility backup and local protection enabled in detour mode.  The
   test cases cover all possible failure scenarios, and the associated
   procedures benchmark the ability of the Device Under Test (DUT) to
   recover from failures.  Data plane traffic is used to benchmark
   failover times.

   Benchmarking of correlated failures is out of scope of this
   document.  Protection using Bidirectional Forwarding Detection (BFD)
   is outside the scope of this document.

   As described above, MPLS-FRR may include a re-optimization of the
   Working Path, with possible packet transfer impairments.
   Characterization of re-optimization is beyond the scope of this
   memo.

3.  Existing Definitions and Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [Br97].  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.  While
   this document uses these keywords, this document is not a standards
   track document.

   The reader is assumed to be familiar with the commonly used MPLS
   terminology, some of which is defined in [MPLS-FRR-EXT].

   This document uses much of the terminology defined in [RFC6414].
   This document also uses existing terminology defined in other BMWG
   work.  Examples include, but are not limited to:

      Throughput                 [Ref.[Br91], Section 3.17]
      Device Under Test (DUT)    [Ref.[Ma98], Section 3.1.1]
      System Under Test (SUT)    [Ref.[Ma98], Section 3.1.2]
      Out-of-order Packet        [Ref.[Po06], Section 3.3.2]
      Duplicate Packet           [Ref.[Po06], Section 3.3.3]

4.  General Reference Topology

   Figure 1 illustrates the basic reference testbed and is applicable
   to all the test cases defined in this document.  The Tester consists
   of a Traffic Generator (TG) and a Traffic Analyzer (TA).  A Tester
   is directly connected to the DUT.  The Tester sends and receives IP
   traffic to the tunnel ingress and performs signaling protocol
   emulation to simulate real network scenarios in a lab environment.
   The Tester may also support MPLS-TE signaling to act as the ingress
   node to the MPLS tunnel.

                    +-----------------------+
                    |           +-----------|-----------+
                    |           |           |           |
    +--------+  +--------+  +--------+  +--------+  +--------+
    |   R1   |--|   R2   |--|   R3   |  |   R4   |  |   R5   |
 TG-|        |--|        |--|        |--|        |--|        |-TA
    +--------+  +--------+  +--------+  +--------+  +--------+
                    |           |           |
                    |       +--------+      |
                    +-------|   R6   |------+
                            |        |
                            +--------+

                  Fig. 1  Fast Reroute Topology

   The Tester MUST record the number of lost, duplicate, and reordered
   packets.  It should further record arrival and departure times so
   that Failover Time, Additive Latency, and Reversion Time can be
   measured.  The Tester may be a single device or a test system
   emulating all the different roles along a primary or backup path.

   The label stack depends on the following three factors:

   (1)  Type of protection (link vs. node)

   (2)  Number of remaining hops of the primary tunnel from the PLR

   (3)  Number of remaining hops of the backup tunnel from the PLR

   Due to this dependency, it is RECOMMENDED that the benchmarking of
   failover times be performed on all the topologies provided in
   Section 6.
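   The per-packet bookkeeping that these requirements imply can be
   illustrated with a short sketch.  The following Python fragment is a
   minimal illustration only, not a definitive implementation; it
   assumes a hypothetical capture loop that delivers the sequence
   number carried in each received packet.

      from dataclasses import dataclass, field

      @dataclass
      class StreamStats:
          """Per-stream counters the Tester is required to keep."""
          sent: int = 0
          received: int = 0
          duplicates: int = 0
          out_of_order: int = 0
          seen: set = field(default_factory=set)
          highest_seq: int = -1

          def on_receive(self, seq: int) -> None:
              self.received += 1
              if seq in self.seen:
                  self.duplicates += 1    # duplicate packet
              elif seq < self.highest_seq:
                  self.out_of_order += 1  # reordered packet
              self.seen.add(seq)
              self.highest_seq = max(self.highest_seq, seq)

          @property
          def lost(self) -> int:
              return self.sent - len(self.seen)

   Arrival and departure timestamps would be kept alongside these
   counters to derive Failover Time, Additive Latency, and Reversion
   Time.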
5.  Test Considerations

   This section discusses the fundamentals of MPLS protection testing:

   (1)  The types of network events that cause failover

   (2)  Indications for failover

   (3)  The use of data traffic

   (4)  Traffic generation

   (5)  LSP scaling

   (6)  Reversion of LSP

   (7)  IGP selection

5.1.  Failover Events [RFC6414]

   The failover to the backup tunnel is primarily triggered by either
   link or node failures observed downstream of the Point of Local
   Repair (PLR).  Some of these failure events are listed below.

   Link Failure Events
      - Interface Shutdown on PLR side with POS Alarm
      - Interface Shutdown on remote side with POS Alarm
      - Interface Shutdown on PLR side with RSVP hello enabled
      - Interface Shutdown on remote side with RSVP hello enabled
      - Interface Shutdown on PLR side with BFD
      - Interface Shutdown on remote side with BFD
      - Fiber Pull on the PLR side (both TX & RX or just the TX)
      - Fiber Pull on the remote side (both TX & RX or just the RX)
      - Online Insertion and Removal (OIR) on PLR side
      - OIR on remote side
      - Sub-interface failure (e.g., shutting down of a VLAN)
      - Parent interface shutdown (an interface bearing multiple
        sub-interfaces)

   Node Failure Events
      - A system reload initiated either by a graceful shutdown or by
        a power failure
      - A system crash due to a software failure or an assert
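   For automation, a Failover Event such as "Interface Shutdown on PLR
   side" can be scripted.  The sketch below is one possible approach,
   assuming the open-source netmiko library and a Cisco-style CLI; the
   management address (from the documentation range), credentials, and
   interface name are placeholders for the testbed in use.

      import time
      from netmiko import ConnectHandler

      # Placeholder management details for the PLR (R2 in Figure 1).
      plr = ConnectHandler(device_type="cisco_ios", host="192.0.2.1",
                           username="lab", password="lab")

      event_time = time.time()  # note when the trigger was applied
      plr.send_config_set(["interface GigabitEthernet0/1", "shutdown"])
      plr.disconnect()

   Recording event_time allows the induced failure to be correlated
   with the packet loss observed by the Traffic Analyzer.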
5.2.  Failure Detection [RFC6414]

   Link failure detection time depends on the link type and the failure
   detection protocols running.  For SONET/SDH, the alarm type (such as
   LOS, AIS, or RDI) can be used.  Other link types have layer-2
   alarms, but they may not provide a short enough failure detection
   time.  Ethernet-based links do not have layer-2 failure indicators
   and therefore rely on layer-3 signaling for failure detection.
   However, for directly connected devices, remote fault indication in
   the Ethernet auto-negotiation scheme could be considered a type of
   layer-2 link failure indicator.

   MPLS has different failure detection techniques, such as BFD or the
   use of RSVP hellos.  These methods can be used for the layer-3
   failure indicators required by Ethernet-based links, or for some
   other non-Ethernet-based links, to help improve failure detection
   time.

   The test procedures in this document can be used for local or remote
   failure scenarios for comprehensive benchmarking and to evaluate
   failover performance independent of the failure detection
   techniques.

5.3.  Use of Data Traffic for MPLS Protection Benchmarking

   Currently, end customers use packet loss as a key metric for
   Failover Time [RFC6414].  Failover Packet Loss [RFC6414] is an
   externally observable event and has a direct impact on application
   performance.  MPLS protection is expected to minimize packet loss in
   the event of a failure.  For this reason, it is important to develop
   a standard router benchmarking methodology for measuring MPLS
   protection that uses packet loss as a metric.  At a known rate of
   forwarding, packet loss can be measured and the failover time can be
   determined.  Measurement of control plane signaling to establish
   backup paths is not enough to verify failover.  Failover is best
   determined when packets are actually traversing the backup path.

   An additional benefit of using packet loss for the calculation of
   failover time is that it allows the use of a black-box test
   environment.  Data traffic is offered at line rate to the device
   under test (DUT), an emulated network failure event is forced to
   occur, and packet loss is externally measured to calculate the
   failover time.  This setup is independent of the DUT architecture.

   In addition, this methodology considers the packets in error and the
   duplicate packets that could have been generated during the failover
   process.  The methodologies consider lost, out-of-order, and
   duplicate packets to be impaired packets that contribute to the
   Failover Time.

5.4.  LSP and Route Scaling

   Failover time performance may vary with the number of established
   primary and backup tunnel Label Switched Paths (LSPs) and installed
   routes.  However, the procedure outlined here should be used for any
   number of LSPs (L) and any number of routes protected by the PLR
   (R).  The values of L and R must be recorded.

5.5.  Selection of IGP

   The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
   proposed here.  See [IGP-METH] for IGP options to consider and
   report.

5.6.  Restoration and Reversion [RFC6414]

   Fast Reroute provides a method to return or restore an original
   primary LSP upon recovery from the failure (Restoration) and to
   switch traffic from the Backup Path to the restored Primary Path
   (Reversion).  In MPLS-FRR, Reversion can be implemented as Global
   Reversion or Local Reversion.  It is important to include
   Restoration and Reversion as a step in each test case to measure the
   amount of packet loss, out-of-order packets, or duplicate packets
   that is produced.

   Note: In addition to Restoration and Reversion, re-optimization can
   take place while the failure has not yet been recovered; this
   depends on the user configuration and the re-optimization timers.

5.7.  Offered Load

   It is suggested that there be one or more traffic streams, as long
   as there is a steady and constant rate of flow for all the streams.
   In order to monitor the DUT performance for recovery times, a set of
   route prefixes should be advertised before traffic is sent.  The
   traffic should be configured towards these routes.

   At least 16 flows should be used, and more if possible.  Prefix-
   dependency behaviors are key in IP, and tests with route-specific
   flows spread across the routing table will reveal this dependency.
   Generating traffic to all of the prefixes reachable by the protected
   tunnel in a round-robin fashion, where the traffic is destined to
   all the prefixes but to one prefix at a time in a cyclic manner, is
   not recommended.  The reason is that if there are many prefixes
   reachable through the LSP, the time interval between two packets
   destined to one prefix may become significant and comparable with
   the failover time being measured, which prevents an accurate
   failover measurement.
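   This concern can be quantified with a short calculation; the numbers
   below are assumed values for illustration only:

      # With R prefixes served one at a time in a cycle at an
      # aggregate rate of P packets per second, a given prefix
      # sees one packet every R/P seconds.
      R = 5000                   # prefixes reachable via the LSP
      P = 100000                 # aggregate offered load (pps)
      gap_ms = R / P * 1000.0    # 50.0 ms between packets per prefix

   A 50 ms per-prefix gap is on the order of the failover times being
   measured, so the loss experienced by any one prefix could be missed
   entirely.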
5.8.  Tester Capabilities

   It is RECOMMENDED that the Tester used to execute each test case
   have the following capabilities:

   1.  Ability to establish MPLS-TE tunnels and push/pop labels.

   2.  Ability to produce a Failover Event [RFC6414].

   3.  Ability to insert a timestamp in each data packet's IP payload.

   4.  An internal time clock to control timestamping, time
       measurements, and time calculations.

   5.  Ability to disable or tune specific layer-2 and layer-3 protocol
       functions on any interface(s).

   6.  Ability to react upon receipt of a Path Error message from the
       PLR.

   The Tester MAY be capable of making non-data-plane convergence
   observations and of using those observations for measurements.
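   Capability 3 can be illustrated with the scapy packet library.  The
   sketch below is an example only; the destination address, UDP port,
   and payload layout (transmit timestamp plus sequence number, as
   suggested in the note on the Timestamp Based Method in Section 8)
   are assumptions, not requirements of this methodology.

      import struct
      import time
      from scapy.all import IP, UDP, Raw, send

      def probe(seq, dst="198.51.100.1"):
          # 8-byte transmit timestamp + 8-byte sequence number payload
          payload = struct.pack("!dQ", time.time(), seq)
          return IP(dst=dst) / UDP(dport=7) / Raw(load=payload)

      send(probe(1))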
6.  Reference Test Setup

   In addition to the general reference topology shown in Figure 1,
   this section provides detailed insight into the various proposed
   test setups that should be considered for comprehensively
   benchmarking the failover time in different roles along the primary
   tunnel.

   This section proposes a set of topologies that covers all the
   scenarios for local protection.  All of these topologies can be
   mapped to the reference topology shown in Figure 1.  Topologies
   provided in this section refer to the testbed required to benchmark
   failover time when the DUT is configured as a PLR in either the
   headend or mid-point role.  Provided with each topology below is the
   label stack at the PLR.  Penultimate Hop Popping (PHP) MAY be used
   and must be reported when used.

   Figures 2 through 9 use the following convention:

      a)  HE is Headend
      b)  TE is Tail-End
      c)  MID is Mid point
      d)  MP is Merge Point
      e)  PLR is Point of Local Repair
      f)  PRI is Primary Path
      g)  BKP denotes Backup Path and Nodes

6.1.  Link Protection

6.1.1.  Link Protection - 1 hop primary (from PLR) and 1 hop backup TE
        tunnels

    +-------+    +--------+    +--------+
    |  R1   |    |  R2    | PRI|  R3    |
 TG-|  HE   |----|  MID   |----|  TE    |-TA
    |       |    |  PLR   |----|        |
    +-------+    +--------+ BKP+--------+

                           Figure 2.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          0                0
   Layer3 VPN (PE-PE)        1                1
   Layer3 VPN (PE-P)         2                2
   Layer2 VC (PE-PE)         1                1
   Layer2 VC (PE-P)          2                2
   Mid-point LSPs            0                0

6.1.2.  Link Protection - 1 hop primary (from PLR) and 2 hop backup TE
        tunnels

    +-------+    +--------+    +--------+
    |  R1   |    |  R2    |PRI |  R3    |
 TG-|  HE   |    |  MID   |    |  TE    |-TA
    |       |----|  PLR   |----|        |
    +-------+    +--------+    +--------+
                     |BKP          |
                     |  +--------+ |
                     |  |  R6    | |
                     +--|  BKP   |-+
                        |  MID   |
                        +--------+

                           Figure 3.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          0                1
   Layer3 VPN (PE-PE)        1                2
   Layer3 VPN (PE-P)         2                3
   Layer2 VC (PE-PE)         1                2
   Layer2 VC (PE-P)          2                3
   Mid-point LSPs            0                1

6.1.3.  Link Protection - 2+ hop (from PLR) primary and 1 hop backup TE
        tunnels

    +--------+    +--------+    +--------+      +--------+
    |  R1    |    |  R2    |PRI |  R3    |PRI   |  R4    |
 TG-|  HE    |----|  MID   |----|  MID   |------|  TE    |-TA
    |        |    |  PLR   |----|        |      |        |
    +--------+    +--------+ BKP+--------+      +--------+

                           Figure 4.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          1                1
   Layer3 VPN (PE-PE)        2                2
   Layer3 VPN (PE-P)         3                3
   Layer2 VC (PE-PE)         2                2
   Layer2 VC (PE-P)          3                3
   Mid-point LSPs            1                1

6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2 hop backup TE
        tunnels

    +--------+    +--------+PRI +--------+ PRI  +--------+
    |  R1    |    |  R2    |    |  R3    |      |  R4    |
 TG-|  HE    |----|  MID   |----|  MID   |------|  TE    |-TA
    |        |    |  PLR   |    |        |      |        |
    +--------+    +--------+    +--------+      +--------+
                     BKP|           |
                        |  +--------+
                        |  |  R6    |
                        +--|  BKP   |
                           |  MID   |
                           +--------+

                           Figure 5.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          1                2
   Layer3 VPN (PE-PE)        2                3
   Layer3 VPN (PE-P)         3                4
   Layer2 VC (PE-PE)         2                3
   Layer2 VC (PE-P)          3                4
   Mid-point LSPs            1                2

6.2.  Node Protection

6.2.1.  Node Protection - 2 hop primary (from PLR) and 1 hop backup TE
        tunnels

    +--------+    +--------+    +--------+      +--------+
    |  R1    |    |  R2    |PRI |  R3    | PRI  |  R4    |
 TG-|  HE    |----|  MID   |----|  MID   |------|  TE    |-TA
    |        |    |  PLR   |    |        |      |        |
    +--------+    +--------+    +--------+      +--------+
                      |BKP                          |
                      +-----------------------------+

                           Figure 6.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          1                0
   Layer3 VPN (PE-PE)        2                1
   Layer3 VPN (PE-P)         3                2
   Layer2 VC (PE-PE)         2                1
   Layer2 VC (PE-P)          3                2
   Mid-point LSPs            1                0

6.2.2.  Node Protection - 2 hop primary (from PLR) and 2 hop backup TE
        tunnels

    +--------+    +--------+    +--------+    +--------+
    |  R1    |    |  R2    |    |  R3    |    |  R4    |
 TG-|  HE    |    |  MID   |PRI |  MID   |PRI |  TE    |-TA
    |        |----|  PLR   |----|        |----|        |
    +--------+    +--------+    +--------+    +--------+
                      |                           |
                   BKP|        +--------+         |
                      |        |  R6    |         |
                      +--------|  BKP   |---------+
                               |  MID   |
                               +--------+

                           Figure 7.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          1                1
   Layer3 VPN (PE-PE)        2                2
   Layer3 VPN (PE-P)         3                3
   Layer2 VC (PE-PE)         2                2
   Layer2 VC (PE-P)          3                3
   Mid-point LSPs            1                1

6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1 hop backup TE
        tunnels

    +--------+  +--------+PRI+--------+PRI+--------+PRI+--------+
    |  R1    |  |  R2    |   |  R3    |   |  R4    |   |  R5    |
 TG-|  HE    |--|  MID   |---|  MID   |---|  MP    |---|  TE    |-TA
    |        |  |  PLR   |   |        |   |        |   |        |
    +--------+  +--------+   +--------+   +--------+   +--------+
                    BKP|                      |
                       +----------------------+

                           Figure 8.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          1                1
   Layer3 VPN (PE-PE)        2                2
   Layer3 VPN (PE-P)         3                3
   Layer2 VC (PE-PE)         2                2
   Layer2 VC (PE-P)          3                3
   Mid-point LSPs            1                1

6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2 hop backup TE
        tunnels

    +--------+  +--------+   +--------+   +--------+   +--------+
    |  R1    |  |  R2    |   |  R3    |   |  R4    |   |  R5    |
 TG-|  HE    |  |  MID   |PRI|  MID   |PRI|  MP    |PRI|  TE    |-TA
    |        |--|  PLR   |---|        |---|        |---|        |
    +--------+  +--------+   +--------+   +--------+   +--------+
                    BKP|                      |
                       |      +--------+      |
                       |      |  R6    |      |
                       +------|  BKP   |------+
                              |  MID   |
                              +--------+

                           Figure 9.

   Traffic             Num of labels    Num of labels
                       before failure   after failure
   IP TRAFFIC (P-P)          1                2
   Layer3 VPN (PE-PE)        2                3
   Layer3 VPN (PE-P)         3                4
   Layer2 VC (PE-PE)         2                3
   Layer2 VC (PE-P)          3                4
   Mid-point LSPs            1                2
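   The transport label counts in the tables of this section follow a
   simple pattern.  The function below is a consistency check derived
   from those tables; it covers only the transport labels at the PLR
   (overlay labels such as a VPN or VC label are added on top) and
   assumes PHP is in use, as in the tables above.

      def transport_labels(primary_hops_from_plr: int,
                           backup_hops: int,
                           mp_is_tail: bool,
                           failed: bool) -> int:
          """Transport label count at the PLR, per Section 6 tables.

          primary_hops_from_plr: remaining primary hops from the PLR
          backup_hops:           backup-tunnel hops from the PLR
          mp_is_tail:            True if the Merge Point is the tail
          failed:                False = before, True = after failover
          """
          if not failed:
              # With PHP, a 1-hop primary from the PLR has no label.
              return 0 if primary_hops_from_plr == 1 else 1
          # After failover: one label for the primary remainder unless
          # the MP is the tail, plus a backup label unless PHP removes
          # it (1-hop backup).
          return ((0 if mp_is_tail else 1)
                  + (0 if backup_hops == 1 else 1))

   For example, for the topology of Section 6.1.4 (2+ hop primary,
   2 hop backup, Merge Point not the tail-end), the function returns 1
   before the failure and 2 after, matching the "Mid-point LSPs" row of
   the table under Figure 5.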
7.  Test Methodology

   The procedure described in this section can be applied to all eight
   base test cases and the associated topologies.  The backup and
   primary tunnels are configured to be alike in terms of bandwidth
   usage.  In order to benchmark failover with all possible label stack
   depths applicable as seen with current deployments, it is
   RECOMMENDED to perform all of the test cases provided in this
   section.  The forwarding performance test cases in Section 7.1 MUST
   be performed prior to performing the failover test cases.

   The considerations of Section 4 of [RFC2544] are applicable when
   evaluating the results obtained using these methodologies as well.

7.1.  MPLS FRR Forwarding Performance

   Benchmarking Failover Time [RFC6414] for MPLS protection first
   requires a baseline measurement of the forwarding performance of the
   test topology, including the DUT.  Forwarding performance is
   benchmarked by the Throughput as defined in [MPLS-FWD] and measured
   in packets per second (pps).  This section provides three test cases
   to benchmark forwarding performance: with the DUT configured as a
   Headend PLR, as a Mid-Point PLR, and as an Egress PLR.

7.1.1.  Headend PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as headend) over
      the primary LSP and the backup LSP.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select overlay technologies (e.g., IGP, VPN, or VC) with the
          DUT as Headend PLR.

      C.  The DUT will also have two interfaces connected to the
          traffic generator/analyzer.  (If the node downstream of the
          PLR is not a simulated node, then the ingress of the tunnel
          should have one link connected to the traffic generator, and
          the node downstream of the PLR or the egress of the tunnel
          should have a link connected to the traffic analyzer.)

   Procedure:

      1.   Establish the primary LSP on R2 as required by the topology
           selected.

      2.   Establish the backup LSP on R2 as required by the selected
           topology.

      3.   Verify that the primary and backup LSPs are up and that the
           primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams as described in Section 5.7.

      6.   Send MPLS traffic over the primary LSP at the Throughput
           supported by the DUT.

      7.   Record the Throughput over the primary LSP.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover, stop the offered load and measure
           the Throughput, Packet Loss, Out-of-Order Packets, and
           Duplicate Packets over the backup LSP.

      11.  Adjust the offered load and repeat steps 6 through 10 until
           the Throughput values for the primary and backup LSPs are
           equal.

      12.  Record the Throughput.  This is the offered load that will
           be used for the Headend PLR failover test cases.
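   Steps 6 through 11 amount to an iterative search for a common
   Throughput.  One way to automate the loop is a binary search, as
   sketched below; run_trial() is an assumed harness function that
   executes steps 6 through 10 at a given rate and returns the
   Throughput measured over the primary and backup LSPs.  The same loop
   applies to the mid-point and egress cases that follow.

      def common_throughput(run_trial, lo_pps, hi_pps, tol_pps=100):
          """Search for the highest rate at which the primary and
          backup LSP Throughput values are equal (steps 6-11)."""
          while hi_pps - lo_pps > tol_pps:
              rate = (lo_pps + hi_pps) // 2
              primary_tput, backup_tput = run_trial(rate)
              if primary_tput == backup_tput == rate:
                  lo_pps = rate   # both paths sustained this load
              else:
                  hi_pps = rate   # back off and retry
          return lo_pps           # offered load for failover tests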
7.1.2.  Mid-Point PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as mid-point)
      over the primary LSP and the backup LSP.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select overlay technologies (e.g., IGP, VPN, or VC) with the
          DUT as Mid-Point PLR.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Procedure:

      1.   Establish the primary LSP on R1 as required by the topology
           selected.

      2.   Establish the backup LSP on R2 as required by the selected
           topology.

      3.   Verify that the primary and backup LSPs are up and that the
           primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams as described in Section 5.7.

      6.   Send MPLS traffic over the primary LSP at the Throughput
           supported by the DUT.

      7.   Record the Throughput over the primary LSP.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover, stop the offered load and measure
           the Throughput, Packet Loss, Out-of-Order Packets, and
           Duplicate Packets over the backup LSP.

      11.  Adjust the offered load and repeat steps 6 through 10 until
           the Throughput values for the primary and backup LSPs are
           equal.

      12.  Record the Throughput.  This is the offered load that will
           be used for the Mid-Point PLR failover test cases.

7.1.3.  Egress PLR Forwarding Performance

   Objective:

      To benchmark the maximum rate (pps) on the PLR (as egress) over
      the primary LSP and the backup LSP.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select overlay technologies (e.g., IGP, VPN, or VC) with the
          DUT as Egress PLR.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Procedure:

      1.   Establish the primary LSP on R1 as required by the topology
           selected.

      2.   Establish the backup LSP on R2 as required by the selected
           topology.

      3.   Verify that the primary and backup LSPs are up and that the
           primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams as described in Section 5.7.

      6.   Send MPLS traffic over the primary LSP at the Throughput
           supported by the DUT.

      7.   Record the Throughput over the primary LSP.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover, stop the offered load and measure
           the Throughput, Packet Loss, Out-of-Order Packets, and
           Duplicate Packets over the backup LSP.

      11.  Adjust the offered load and repeat steps 6 through 10 until
           the Throughput values for the primary and backup LSPs are
           equal.

      12.  Record the Throughput.  This is the offered load that will
           be used for the Egress PLR failover test cases.
7.2.  Headend PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to link failure events
      described in Section 5.1, experienced by the DUT, which is the
      Headend PLR.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select an overlay technology for the FRR test (e.g., IGP,
          VPN, or VC).

      C.  The DUT will also have two interfaces connected to the
          traffic generator/analyzer.  (If the node downstream of the
          PLR is not a simulated node, then the ingress of the tunnel
          should have one link connected to the traffic generator, and
          the node downstream of the PLR or the egress of the tunnel
          should have a link connected to the traffic analyzer.)

   Test Configuration:

      1.  Configure the number of primaries on R2 and the backups on R2
          as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix A) from the tail-end.

   Procedure:

      Test case 7.1.1 (Headend PLR Forwarding Performance) MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1.   Establish the primary LSP on R2 as required by the topology
           selected.

      2.   Establish the backup LSP on R2 as required by the selected
           topology.

      3.   Verify that the primary and backup LSPs are up and that the
           primary is protected.

      4.   Verify that Fast Reroute protection is enabled and ready.

      5.   Set up traffic streams for the offered load as described in
           Section 5.7.

      6.   Provide the offered load from the tester at the Throughput
           [Br91] level obtained from test case 7.1.1.

      7.   Verify that traffic is switched over the primary LSP without
           packet loss.

      8.   Trigger a link failure as described in Section 5.1.

      9.   Verify that the offered load gets mapped to the backup
           tunnel and measure the Additive Backup Delay.

      10.  30 seconds after Failover [RFC6414], stop the offered load
           and measure the total Failover Packet Loss [RFC6414].

      11.  Calculate the Failover Time [RFC6414] benchmark using the
           selected Failover Time Calculation Method (TBLM, PLBM, or
           TBM) [RFC6414].

      12.  Restart the offered load and restore the primary LSP to
           verify that Reversion [RFC6414] occurs, and measure the
           Reversion Packet Loss [RFC6414].

      13.  Calculate the Reversion Time [RFC6414] benchmark using the
           selected Failover Time Calculation Method (TBLM, PLBM, or
           TBM) [RFC6414].

      14.  Verify that the headend signals a new LSP and that
           protection is in place again.

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in Section 5.1.
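   Since the procedure is repeated per failure trigger, a driver loop
   is a natural automation.  The sketch below assumes a hypothetical
   harness function, run_headend_link_failure_case(), that executes
   steps 1 through 14 for one trigger and returns the benchmarks of
   Section 8; the trigger identifiers simply mirror the list in
   Section 5.1.

      LINK_FAILURE_TRIGGERS = [
          "if-shut-plr-pos-alarm", "if-shut-remote-pos-alarm",
          "if-shut-plr-rsvp-hello", "if-shut-remote-rsvp-hello",
          "if-shut-plr-bfd", "if-shut-remote-bfd",
          "fiber-pull-plr", "fiber-pull-remote",
          "oir-plr", "oir-remote",
          "sub-interface-failure", "parent-interface-shutdown",
      ]

      def run_headend_link_failure_case(trigger):
          """Assumed harness: executes steps 1-14 of Section 7.2 for
          one trigger and returns the Section 8 benchmarks."""
          ...  # integration with the Tester goes here

      results = {t: run_headend_link_failure_case(t)
                 for t in LINK_FAILURE_TRIGGERS}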
7.3.  Mid-Point PLR with Link Failure

   Objective:

      To benchmark the MPLS failover time due to link failure events
      described in Section 5.1, experienced by the DUT, which is the
      Mid-Point PLR.

   Test Setup:

      A.  Select any one topology out of the eight from Section 6.

      B.  Select an overlay technology for the FRR test as Mid-Point
          LSPs.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Test Configuration:

      1.  Configure the number of primaries on R1 and the backups on R2
          as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix A) from the tail-end.

   Procedure:

      Test case 7.1.2 (Mid-Point PLR Forwarding Performance) MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1.  Establish the primary LSP on R1 as required by the topology
          selected.

      2.  Establish the backup LSP on R2 as required by the selected
          topology.

      3.  Perform steps 3 through 14 from Section 7.2 (Headend PLR with
          Link Failure).

   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in Section 5.1.

7.4.  Headend PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to node failure events
      described in Section 5.1, experienced by the DUT, which is the
      Headend PLR.

   Test Setup:

      A.  Select any one topology from Section 6.

      B.  Select an overlay technology for the FRR test (e.g., IGP,
          VPN, or VC).

      C.  The DUT will also have two interfaces connected to the
          traffic generator/analyzer.

   Test Configuration:

      1.  Configure the number of primaries on R2 and the backups on R2
          as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix A) from the tail-end.

   Procedure:

      Test case 7.1.1 (Headend PLR Forwarding Performance) MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1.  Establish the primary LSP on R2 as required by the topology
          selected.

      2.  Establish the backup LSP on R2 as required by the selected
          topology.

      3.  Verify that the primary and backup LSPs are up and that the
          primary is protected.

      4.  Verify that Fast Reroute protection is enabled and ready.

      5.  Set up traffic streams for the offered load as described in
          Section 5.7.

      6.  Provide the offered load from the tester at the Throughput
          [Br91] level obtained from test case 7.1.1.

      7.  Verify that traffic is switched over the primary LSP without
          packet loss.

      8.  Trigger a node failure as described in Section 5.1.

      9.  Perform steps 9 through 14 from Section 7.2 (Headend PLR with
          Link Failure).

   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in Section 5.1.

7.5.  Mid-Point PLR with Node Failure

   Objective:

      To benchmark the MPLS failover time due to node failure events
      described in Section 5.1, experienced by the DUT, which is the
      Mid-Point PLR.

   Test Setup:

      A.  Select any one topology from Section 6.

      B.  Select an overlay technology for the FRR test as Mid-Point
          LSPs.

      C.  The DUT will also have two interfaces connected to the
          traffic generator.

   Test Configuration:

      1.  Configure the number of primaries on R1 and the backups on R2
          as required by the topology selected.

      2.  Configure the test setup to support Reversion.

      3.  Advertise prefixes (as per the FRR Scalability Table
          described in Appendix A) from the tail-end.

   Procedure:

      Test case 7.1.2 (Mid-Point PLR Forwarding Performance) MUST be
      completed first to obtain the Throughput to use as the offered
      load.

      1.  Establish the primary LSP on R1 as required by the topology
          selected.

      2.  Establish the backup LSP on R2 as required by the selected
          topology.

      3.  Verify that the primary and backup LSPs are up and that the
          primary is protected.

      4.  Verify that Fast Reroute protection is enabled and ready.

      5.  Set up traffic streams for the offered load as described in
          Section 5.7.

      6.  Provide the offered load from the tester at the Throughput
          [Br91] level obtained from test case 7.1.2.

      7.  Verify that traffic is switched over the primary LSP without
          packet loss.

      8.  Trigger a node failure as described in Section 5.1.

      9.  Perform steps 9 through 14 from Section 7.2 (Headend PLR with
          Link Failure).

   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in Section 5.1.
8.  Reporting Format

   For each test, it is recommended that the results be reported in the
   following format.

   Parameter                              Units

   IGP used for the test                  ISIS-TE / OSPF-TE

   Interface types                        GigE, POS, ATM, VLAN, etc.

   Packet sizes offered to the DUT        Bytes (at layer 3)

   Offered load                           Packets per second

   IGP routes advertised                  Number of IGP routes

   Penultimate Hop Popping                Used / Not Used

   RSVP hello timers                      Milliseconds

   Number of protected tunnels            Number of tunnels

   Number of VPN routes installed         Number of VPN routes
   on the headend

   Number of VC tunnels                   Number of VC tunnels

   Number of mid-point tunnels            Number of tunnels

   Number of prefixes protected by        Number of prefixes
   the primary tunnel

   Topology being used                    Section number and figure
                                          reference

   Failover Event                         Event type

   Re-optimization                        Yes / No

   Benchmarks (to be recorded for each test case):

   Failover -
      Failover Time                       seconds
      Failover Packet Loss                packets
      Additive Backup Delay               seconds
      Out-of-Order Packets                packets
      Duplicate Packets                   packets
      Failover Time Calculation Method    method used

   Reversion -
      Reversion Time                      seconds
      Reversion Packet Loss               packets
      Additive Backup Delay               seconds
      Out-of-Order Packets                packets
      Duplicate Packets                   packets
      Failover Time Calculation Method    method used

   The Failover Time suggested above is calculated using one of the
   following three methods:

   1.  Packet-Loss Based Method (PLBM): (Number of packets dropped /
       packets per second) * 1000 milliseconds.  This method could also
       be referred to as the Loss-Derived method.

   2.  Time-Based Loss Method (TBLM): This method relies on the ability
       of the traffic generators to provide statistics that reveal the
       duration of the failure in milliseconds, based on when the
       packet loss occurred (the interval between non-zero packet loss
       and zero loss).

   3.  Timestamp Based Method (TBM): This method of failover
       calculation is based on the timestamp that gets transmitted as
       payload in the packets originated by the generator.  The Traffic
       Analyzer records the timestamp of the last packet received
       before the failover event and of the first packet received after
       the failover, and derives the time from the difference between
       these two timestamps.  Note: The payload could also contain
       sequence numbers for out-of-order and duplicate packet
       calculations.

   The Timestamp Based Method would be able to detect Reversion
   impairments beyond loss; thus, it is the RECOMMENDED Failover Time
   calculation method.
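   The first and third methods reduce to one-line computations.  The
   functions below are direct transcriptions of the definitions above
   (a sketch; the input bookkeeping is left to the Tester):

      def plbm_ms(packets_lost, offered_load_pps):
          """Packet-Loss Based Method: lost packets divided by the
          offered load, expressed in milliseconds."""
          return packets_lost / offered_load_pps * 1000.0

      def tbm_ms(ts_last_before, ts_first_after):
          """Timestamp Based Method: difference between the payload
          timestamps (in seconds) of the last packet received before
          the Failover Event and the first packet received after it,
          expressed in milliseconds."""
          return (ts_first_after - ts_last_before) * 1000.0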
9.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

10.  IANA Considerations

   This document does not require any new allocations by IANA.

11.  Acknowledgements

   We would like to thank Jean-Philippe Vasseur for his invaluable
   input to the document, Curtis Villamizar for his contribution in
   suggesting text on the definition of and need for benchmarking
   correlated failures, and Bhavani Parise for his textual input and
   review.  Additionally, we would like to thank Al Morton, Arun
   Gandhi, Amrit Hanspal, Karu Ratnam, Raveesh Janardan, Andrey
   Kiselev, and Mohan Nanduri for their formal reviews of this
   document.

12.  References

12.1.  Informative References

   [IGP-METH] Poretsky, S., Imhoff, B., and K. Michielsen,
              "Benchmarking Methodology for Link-State IGP Data Plane
              Route Convergence", draft-ietf-bmwg-igp-dataplane-conv-
              meth (work in progress), February 2011.

   [Br91]     Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, July 1991.

   [Ma98]     Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [MPLS-FRR-EXT]
              Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.

   [Po06]     Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
              "Terminology for Benchmarking Network-layer Traffic
              Control Mechanisms", RFC 4689, October 2006.

   [MPLS-FWD] Akhter, A., Asati, R., and C. Pignataro, "MPLS Forwarding
              Benchmarking Methodology for IP Flows", RFC 5695,
              November 2009.

   [RFC6414]  Papneja, R., Poretsky, S., Vapiwala, S., and J. Karthik,
              "Benchmarking Terminology for Protection Performance",
              RFC 6414, October 2011.

12.2.  Normative References

   [Br97]     Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

Appendix A.  Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of Fast Reroute implementations.  It also recommends the
   typical numbers for IGP/VPNv4 prefixes, LSP tunnels, and VC entries.
   Based on the features supported by the device under test (DUT),
   appropriate scaling limits can be used for the test bed.

   A1.  FRR IGP Table

   No. of Headend TE Tunnels    IGP Prefixes

            1                        100
            1                        500
            1                       1000
            1                       2000
            1                       5000
     2 (Load Balance)                100
     2 (Load Balance)                500
     2 (Load Balance)               1000
     2 (Load Balance)               2000
     2 (Load Balance)               5000
          100                        100
          500                        500
         1000                       1000
         2000                       2000
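   For test automation, the combinations in the A1 table can be
   enumerated directly; the sketch below simply mirrors the table (an
   illustration only; the "Max" rows of the other tables would be
   filled in per DUT):

      # (headend TE tunnels, IGP prefixes) pairs from table A1;
      # a tunnel count of 2 implies load balancing across the two.
      FRR_IGP_MATRIX = (
          [(1, p) for p in (100, 500, 1000, 2000, 5000)]
          + [(2, p) for p in (100, 500, 1000, 2000, 5000)]
          + [(n, n) for n in (100, 500, 1000, 2000)]
      )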
   A2.  FRR VPN Table

   No. of Headend TE Tunnels    VPNv4 Prefixes

            1                        100
            1                        500
            1                       1000
            1                       2000
            1                       5000
            1                      10000
            1                      20000
            1                        Max
     2 (Load Balance)                100
     2 (Load Balance)                500
     2 (Load Balance)               1000
     2 (Load Balance)               2000
     2 (Load Balance)               5000
     2 (Load Balance)              10000
     2 (Load Balance)              20000
     2 (Load Balance)                Max

   A3.  FRR Mid-Point LSP Table

   The number of mid-point TE LSPs could be configured at the
   recommended levels: 100, 500, 1000, 2000, or the maximum supported
   number.

   A4.  FRR VC Table

   No. of Headend TE Tunnels    VC Entries

            1                        100
            1                        500
            1                       1000
            1                       2000
            1                        Max
          100                        100
          500                        500
         1000                       1000
         2000                       2000

Appendix B.  Abbreviations

   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   IP     - Internet Protocol
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multi Protocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   RSVP   - Resource reSerVation Protocol
   SRLG   - Shared Risk Link Group
   TA     - Traffic Analyzer
   TE     - Traffic Engineering
   TG     - Traffic Generator
   VC     - Virtual Circuit
   VPN    - Virtual Private Network

Authors' Addresses

   Rajiv Papneja
   Huawei Technologies
   2330 Central Expressway
   Santa Clara, CA  95050
   USA

   Email: rajiv.papneja@huawei.com

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA  01719
   USA

   Email: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA  01719
   USA

   Email: jkarthik@cisco.com

   Scott Poretsky
   Allot Communications
   USA

   Email: sporetsky@allot.com

   Shankar Rao
   Qwest Communications
   5005 E. Dartmouth Ave.
   Denver, CO  80222
   USA

   Email: srao@du.edu

   Jean-Louis Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France

   Email: jeanlouis.leroux@francetelecom.com