Network Working Group                                         R. Papneja
Internet Draft                                                   Isocore
Expires: March 2007                                          S. Vapiwala
                                                              J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                              Reef Point
                                                                  S. Rao
                                                    Qwest Communications
                                                      Jean-Louis Le Roux
                                                          France Telecom
                                                            October 2006

         Methodology for Benchmarking MPLS Protection Mechanisms

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   This document may only be posted in an Internet-Draft.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Abstract

   This draft provides a methodology for benchmarking MPLS protection
   mechanisms, in particular the failover time of local protection
   (MPLS Fast Reroute as defined in RFC 4090).  The failover to a
   backup tunnel can happen at the headend of the primary tunnel or at
   a midpoint, and the backup can offer link or node protection.  It
   is therefore vital to benchmark the failover time for all of these
   cases and combinations.  The failover time can also differ greatly
   with the design and implementation and with factors such as the
   number of prefixes carried by the tunnel, the routing protocols
   that installed those prefixes (IGP, BGP, ...), the number of
   primary tunnels affected by the event that caused the failover, the
   number of primary tunnels protected by the backup, the type of
   failure, and the physical media type on which the failover occurs.
   This document describes the benchmarking criteria and the
   benchmarking topologies required for measuring the failover time of
   local protection.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC-WORDS].

Table of Contents

   1. Introduction
   2. Existing definitions
   3. Test Considerations
      3.1. Failover Events
      3.2. Failure Detection [TERM-ID]
      3.3. Use of Data Traffic for MPLS Protection Benchmarking
      3.4. LSP and Route Scaling
      3.5. Selection of IGP
      3.6. Reversion [TERM-ID]
      3.7. Traffic generation
      3.8. Motivation for topologies
   4. Test Setup
      4.1. Link Protection with 1 hop primary (from PLR) and 1 hop
           backup TE tunnels
      4.2. Link Protection with 1 hop primary (from PLR) and 2 hop
           backup TE tunnels
      4.3. Link Protection with 2+ hop (from PLR) primary and 1 hop
           backup TE tunnels
      4.4. Link Protection with 2+ hop (from PLR) primary and 2 hop
           backup TE tunnels
      4.5. Node Protection with 2 hop primary (from PLR) and 1 hop
           backup TE tunnels
      4.6. Node Protection with 2 hop primary (from PLR) and 2 hop
           backup TE tunnels
      4.7. Node Protection with 3+ hop primary (from PLR) and 1 hop
           backup TE tunnels
      4.8. Node Protection with 3+ hop primary (from PLR) and 2 hop
           backup TE tunnels
      4.9. Baseline MPLS Forwarding Performance Test Topology
   5. Test Methodology
      5.1. Headend as PLR with link failure
      5.2. Mid-Point as PLR with link failure
      5.3. Headend as PLR with Node failure
      5.4. Mid-Point as PLR with Node failure
      5.5. Baseline MPLS Forwarding Performance Test Cases
           5.5.1. DUT Throughput as Ingress
           5.5.2. DUT Latency as Ingress
           5.5.3. DUT Throughput as Egress
           5.5.4. DUT Latency as Egress
           5.5.5. DUT Throughput as Mid-Point
           5.5.6. DUT Latency as Mid-Point
   6. Reporting Format
   7. Security Considerations
   8. Acknowledgements
   9. References
      9.1. Normative References
      9.2. Informative References
   10. Authors' Addresses
   Appendix A: Fast Reroute Scalability Table
   Appendix B: Abbreviations

1. Introduction

   A link or node failure can occur at the headend or at a midpoint
   node of a given primary tunnel.  The time it takes to fail over to
   the backup tunnel is a key measurement, since it directly affects
   the traffic carried over the tunnel.  The failover time depends on
   a variety of factors, such as the type of physical media, the FRR
   method used (detour versus facility backup), the number of primary
   tunnels, and the number of prefixes carried over the tunnel.
   Service providers therefore need a methodology to measure the
   failover time under all possible conditions.

   The following sections describe the topologies and scenarios that
   should be considered to effectively benchmark the failover time.
   The failure triggers, procedures, scaling considerations, and
   reporting format of the results are discussed as well.

   In order to benchmark failover time, data plane traffic is used as
   described in [IGP-METH], since traffic loss measured in a black-box
   test is a widely accepted way to measure convergence.

   An important point to note when benchmarking failover time is
   that, depending on whether PHP occurs (that is, whether or not
   implicit null is advertised by the tail-end) and on the number of
   hops of the primary and backup tunnels, the packets switched onto
   the backup tunnel may carry zero, one, or more labels.

   All the benchmarking cases mentioned in this document apply to
   facility backup as well as to local protection enabled in detour
   mode.  The test cases and procedures described here should
   completely benchmark the failover time of a device under test in
   all possible scenarios and configurations.
   The scenarios defined in this document are in addition to those
   considered in [FRR-METH].  All the cases listed in this document
   can be verified in a single topology similar to the one shown in
   Figure 1.

          ---------------------------
         |  ------------|---------------
         | |            |              |
         | |            |              |
       --------   --------   --------   --------   --------
   TG-|  R1    |-|  R2    |-|  R3    | |  R4    | |  R5    |-TA
      |        |-|        |-|        |-|        |-|        |
       --------   --------   --------   --------   --------
         | |           |              |
         | |           |              |
         |  --------   |              |
         --|  R6    |---              |
           |        |------------------
            --------

   Figure 1: Fast Reroute Topology

   In Figure 1, TG and TA are a Traffic Generator and a Traffic
   Analyzer, respectively.  The tester is set up outside the DUT; it
   sends and receives IP traffic along the working path and runs
   protocol emulations simulating real-world peering scenarios.  The
   tester MUST record the number of lost packets, the duplicate
   packet count, the reordered packet count, departure times, and
   arrival times so that the metrics of Failover Time, Additive
   Latency, and Reversion Time can be measured.  The tester may be a
   single device or a test system.

   Two or more failures are considered correlated if those failures
   occur more or less simultaneously.  Correlated failures are often
   expected where two or more logical resources, such as layer-2
   links, rely on a common physical resource, such as common
   transport.  TDM and WDM provide multiplexing at layer 2 and
   layer 1 and are often the cause of correlated failures.  Where
   such correlations are known, such as knowing that two logical
   links share a common fiber segment, the expectation of a common
   failure can be compensated for by specifying Shared Risk Link
   Groups [MPLS-FRR-EXT].  Not all correlated failures are
   anticipated in advance of their occurrence.  Failures due to
   natural disasters or to certain man-made disasters or mistakes are
   the most notable causes.  Failures of this type occur many times a
   year, and generally a quite spectacular failure occurs every few
   years.

   There are two factors impacting service availability.  One is the
   frequency of failure.  The other is the duration of failure.  FRR
   improves availability by minimizing the duration of the most
   common failures.  Unexpected correlated failures are less common.
   Some routers recover much more quickly than others, so
   benchmarking this type of failure may also be useful.
   Benchmarking of unexpected correlated failures should include
   measurement of restoration with and without the availability of IP
   fallback.  The use of a BGP-free core may be growing, making the
   latter case an important test case.  This document focuses on FRR
   failover benchmarking with MPLS TE.  Benchmarking of unexpected
   correlated failures is out of scope but may be covered by a later
   document.

2. Existing definitions

   For the sake of clarity and continuity, this document adopts the
   template for definitions set out in Section 2 of RFC 1242.
   Definitions are indexed and grouped together in sections for ease
   of reference.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119.
   The reader is assumed to be familiar with the commonly used MPLS
   terminology, some of which is defined in [MPLS-RSVP],
   [MPLS-RSVP-TE], and [MPLS-FRR-EXT].

3. Test Considerations

   This section discusses the fundamentals of MPLS protection
   testing:

   - The types of network events that cause failover
   - Indications of failover
   - The use of data traffic
   - Traffic generation
   - LSP scaling
   - Reversion of LSPs
   - IGP selection

3.1. Failover Events

   Triggers for failover to a backup tunnel are link and node
   failures seen downstream of the PLR, as follows.

   Link failure events

   - Shut down the interface on the PLR side with POS alarm
   - Shut down the interface on the remote side with POS alarm
   - Shut down the interface on the PLR side with RSVP hello
   - Shut down the interface on the remote side with RSVP hello
   - Shut down the interface on the PLR side with BFD
   - Shut down the interface on the remote side with BFD
   - Fiber pull on the PLR side (both TX & RX, or just the TX)
   - Fiber pull on the remote side (both TX & RX, or just the RX)
   - OIR on the PLR side
   - OIR on the remote side
   - Sub-interface failure (shutting down of a VLAN)
   - Shut down a parent interface bearing multiple sub-interfaces

   Node failure events

   A Reload is a graceful shutdown or a power failure.  A Crash is a
   software failure or an assert.

   - Reload the protected node, with RSVP Hello enabled
   - Crash the protected node, with RSVP Hello enabled
   - Reload the protected node, with BFD enabled
   - Crash the protected node, with BFD enabled

3.2. Failure Detection [TERM-ID]

   Local failures can be detected via SONET/SDH failure on a link to
   a directly connected LSR.  The failure indication may vary with
   the type of alarm - LOS, AIS, or RDI.  Ethernet technologies such
   as Gigabit Ethernet rely upon layer 3 failure indication
   mechanisms, since there is no layer 2 failure indication
   mechanism.

   Different MPLS protection mechanisms and different implementations
   use different failure indications, such as RSVP hellos, BFD, etc.
   The failure detection time may not always be negligible, and it
   can impact the overall failover time.

   The test procedures in this document can be used against a local
   failure as well as against a remote failure, both for completeness
   of benchmarking and to evaluate failover performance independent
   of the implemented signaling indication mechanism.

3.3. Use of Data Traffic for MPLS Protection Benchmarking

   Customers of service providers use packet loss as the metric for
   failover time.  Packet loss is an externally observable event that
   has a direct impact on customers' application performance.  An
   MPLS protection mechanism is expected to minimize packet loss in
   the event of a failure.  For this reason it is important to
   develop a standard router benchmarking methodology for measuring
   MPLS protection that uses packet loss as a metric.  At a known
   forwarding rate, packet loss can be measured and used to calculate
   the failover time.  Measurement of the control plane signaling
   that establishes backup paths is not enough to verify failover.
   Failover is best determined when packets are actually traversing
   the backup path.
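   The loss-to-time conversion used throughout this document is the
   formula given in Section 6, Reporting Format.  As a minimal
   illustration (a sketch with hypothetical names, assuming a
   constant-rate offered load):

      def failover_time_ms(lost_packets: int, offered_rate_pps: float) -> float:
          """Section 6 formula: (number of packets dropped / offered
          rate in packets per second) * 1000, in milliseconds."""
          return lost_packets / offered_rate_pps * 1000.0

      # Example: 12,500 packets lost at an offered load of 250,000 pps
      # corresponds to a failover time of 50 ms.
      print(failover_time_ms(12500, 250000))  # -> 50.0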
   An additional benefit of using packet loss for the calculation of
   failover time is that it enables black-box tests to be designed.
   Data traffic can be offered at line rate to the device under test
   (DUT), an emulated network event as described above can be forced
   to occur, and packet loss can be externally measured to calculate
   the convergence time.  Knowledge of the DUT architecture is not
   required.  There is no need to rely on an understanding of the
   implementation details of the DUT to get the required test
   results.

   In addition, this methodology considers the errored packets and
   duplicate packets that may be generated during the failover
   process.  In extreme cases, where measurement of errored and
   duplicate packets is difficult, these packets can be counted as
   lost packets.

3.4. LSP and Route Scaling

   Failover time performance may vary with the number of established
   primary and backup LSPs and the number of routes learned.
   However, the procedure outlined here may be used for any number of
   LSPs, L, and any number of routes protected by the PLR, R.  L and
   R must be recorded.

3.5. Selection of IGP

   The underlying IGP can be either ISIS-TE or OSPF-TE for the
   methodology proposed here.

3.6. Reversion [TERM-ID]

   Fast Reroute provides a method to return or restore a backup path
   to the original primary LSP upon recovery from the failure.  This
   is referred to as Reversion, which can be implemented as Global
   Reversion or Local Reversion.  In all test cases listed here,
   Reversion should not produce any packet loss or any out-of-order
   or duplicate packets.  Each of the test cases in this methodology
   document provides a step to verify that there is no packet loss.

3.7. Traffic generation

   It is suggested that there be one or more traffic streams, as long
   as there is a steady and constant rate of flow for all the
   streams.  In order to monitor the DUT performance for recovery
   times, a set of route prefixes should be advertised before traffic
   is sent.  The traffic should be configured towards these routes.

   A typical example would be configuring the traffic generator to
   send traffic to the first, middle, and last of the advertised
   routes, where first, middle, and last are the numerically
   smallest, median, and largest of the advertised prefixes,
   respectively (see the sketch at the end of Section 3).  Generating
   traffic to all of the prefixes reachable through the protected
   tunnel in a round-robin fashion, where the traffic is destined to
   all the prefixes but to one prefix at a time in a cyclic manner,
   is not recommended.  If there are many prefixes reachable through
   the LSP, the round-robin interval between two packets destined to
   the same prefix may be large enough to be comparable with the
   failover time being measured, which prevents an accurate failover
   measurement.

3.8. Motivation for topologies

   Given that the label stack depends on the following three factors,
   it is recommended that the benchmarking of failover time be
   performed on all eight topologies listed in Section 4:

   - Type of protection (link vs. node)

   - Number of remaining hops of the primary tunnel from the PLR

   - Number of remaining hops of the backup tunnel from the PLR
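   Returning to the traffic generation of Section 3.7, the prefix
   selection can be sketched as follows (a minimal illustration with
   hypothetical names; it assumes the advertised prefixes are
   available to the tester as a list):

      import ipaddress

      def pick_stream_destinations(prefixes):
          """Return the numerically smallest, median, and largest of
          the advertised prefixes, to be used as destinations of the
          constant-rate traffic streams (Section 3.7)."""
          ordered = sorted(ipaddress.ip_network(p) for p in prefixes)
          return ordered[0], ordered[len(ordered) // 2], ordered[-1]

      # Example with three advertised routes (values illustrative):
      first, middle, last = pick_stream_destinations(
          ["10.0.9.0/24", "10.0.0.0/24", "10.0.5.0/24"])
      # -> 10.0.0.0/24, 10.0.5.0/24, 10.0.9.0/24; one steady stream each

   One constant-rate stream per selected prefix keeps the per-prefix
   inter-packet gap small relative to the failover time being
   measured, which round-robin generation over many prefixes does
   not.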
4. Test Setup

   This section proposes a set of topologies to be used for
   benchmarking the failover time; together they cover the scenarios
   for local protection.  All eight of these topologies (Figures 2
   through 9) can be mapped to the master FRR topology shown in
   Figure 1.  The topologies in Sections 4.1 to 4.8 are the network
   topologies required to benchmark failover time when the DUT is
   configured as a PLR in either the headend or the midpoint role.
   The label counts listed below are all with respect to the PLR.

   The label stacks shown below each figure in Sections 4.1 to 4.9
   assume that PHP is enabled.

   In the following network topologies:

   HE is Head-End, TE is Tail-End, MID is Mid point, MP is Merge
   Point, PLR is Point of Local Repair, PRI is Primary, and BKP
   denotes Backup Node.

4.1. Link Protection with 1 hop primary (from PLR) and 1 hop backup
     TE tunnels

       -------    --------  PRI   --------
      |  R1   |  |  R2    |      |  R3    |
   TG-|  HE   |--|  MID   |------|  TE    |-TA
      |       |  |  PLR   |------|        |
       -------    --------  BKP   --------

   Figure 2: Setup for Section 4.1

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            0                 0
   Layer3 VPN (PE-PE)          1                 1
   Layer3 VPN (PE-P)           2                 2
   Layer2 VC (PE-PE)           1                 1
   Layer2 VC (PE-P)            2                 2
   Mid-point LSPs              0                 0

4.2. Link Protection with 1 hop primary (from PLR) and 2 hop backup
     TE tunnels

       -------    --------        --------
      |  R1   |  |  R2    | PRI  |  R3    |
   TG-|  HE   |  |  MID   |      |  TE    |-TA
      |       |--|  PLR   |------|        |
       -------    --------        --------
                 BKP |                |
                     |   --------     |
                     |  |  R6    |    |
                     ---|  BKP   |-----
                        |  MID   |
                         --------

   Figure 3: Setup for Section 4.2

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            0                 1
   Layer3 VPN (PE-PE)          1                 2
   Layer3 VPN (PE-P)           2                 3
   Layer2 VC (PE-PE)           1                 2
   Layer2 VC (PE-P)            2                 3
   Mid-point LSPs              0                 1

4.3. Link Protection with 2+ hop (from PLR) primary and 1 hop backup
     TE tunnels

       --------    --------  PRI   --------  PRI   --------
      |  R1    |  |  R2    |      |  R3    |      |  R4    |
   TG-|  HE    |--|  MID   |------|  MID   |------|  TE    |-TA
      |        |  |  PLR   |------|        |      |        |
       --------    --------  BKP   --------        --------

   Figure 4: Setup for Section 4.3

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            1                 1
   Layer3 VPN (PE-PE)          2                 2
   Layer3 VPN (PE-P)           3                 3
   Layer2 VC (PE-PE)           2                 2
   Layer2 VC (PE-P)            3                 3
   Mid-point LSPs              1                 1

4.4. Link Protection with 2+ hop (from PLR) primary and 2 hop backup
     TE tunnels

       --------    --------  PRI   --------  PRI   --------
      |  R1    |  |  R2    |      |  R3    |      |  R4    |
   TG-|  HE    |--|  MID   |------|  MID   |------|  TE    |-TA
      |        |  |  PLR   |      |        |      |        |
       --------    --------        --------        --------
                  BKP |                |
                      |   --------     |
                      |  |  R6    |    |
                      ---|  BKP   |-----
                         |  MID   |
                          --------

   Figure 5: Setup for Section 4.4

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            1                 2
   Layer3 VPN (PE-PE)          2                 3
   Layer3 VPN (PE-P)           3                 4
   Layer2 VC (PE-PE)           2                 3
   Layer2 VC (PE-P)            3                 4
   Mid-point LSPs              1                 2
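   With PHP enabled, the label counts in the tables of Sections 4.1
   through 4.8 follow a simple pattern.  The sketch below encodes
   that pattern as an illustration of the tables above and below
   (names hypothetical; it is derived from the tables, not an
   independent specification):

      def labels_with_php(service_labels, primary_hops, backup_hops,
                          node_protection):
          """Label-stack depth seen at the PLR, assuming PHP.
          service_labels: labels added by the service (0 for IP P-P
          and mid-point LSPs, 1 for PE-PE VPN/VC, 2 for PE-P VPN/VC).
          primary_hops / backup_hops: remaining hops from the PLR.
          With PHP, a tunnel label is present only while two or more
          label-switched hops remain."""
          before = service_labels + (1 if primary_hops >= 2 else 0)
          # After failover: the backup label survives PHP only on a
          # 2+ hop backup; the primary label is still needed unless
          # the Merge Point is the tail-end.
          mp_is_tail = primary_hops <= (2 if node_protection else 1)
          after = (service_labels + (1 if backup_hops >= 2 else 0)
                   + (0 if mp_is_tail else 1))
          return before, after

      # Section 4.4 (link protection, 2 hop primary, 2 hop backup),
      # Layer3 VPN PE-PE traffic:
      print(labels_with_php(1, 2, 2, node_protection=False))  # (2, 3)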
4.5. Node Protection with 2 hop primary (from PLR) and 1 hop backup
     TE tunnels

       --------    --------  PRI   --------  PRI   --------
      |  R1    |  |  R2    |      |  R3    |      |  R4    |
   TG-|  HE    |--|  MID   |------|  MID   |------|  TE    |-TA
      |        |  |  PLR   |      |        |      |        |
       --------    --------        --------        --------
                  BKP |                               |
                      ---------------------------------

   Figure 6: Setup for Section 4.5

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            1                 0
   Layer3 VPN (PE-PE)          2                 1
   Layer3 VPN (PE-P)           3                 2
   Layer2 VC (PE-PE)           2                 1
   Layer2 VC (PE-P)            3                 2
   Mid-point LSPs              1                 0

4.6. Node Protection with 2 hop primary (from PLR) and 2 hop backup
     TE tunnels

       --------    --------        --------        --------
      |  R1    |  |  R2    | PRI  |  R3    | PRI  |  R4    |
   TG-|  HE    |  |  MID   |      |  MID   |      |  TE    |-TA
      |        |--|  PLR   |------|        |------|        |
       --------    --------        --------        --------
                  BKP |                               |
                      |            --------           |
                      |           |  R6    |          |
                      ------------|  BKP   |-----------
                                  |  MID   |
                                   --------

   Figure 7: Setup for Section 4.6

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            1                 1
   Layer3 VPN (PE-PE)          2                 2
   Layer3 VPN (PE-P)           3                 3
   Layer2 VC (PE-PE)           2                 2
   Layer2 VC (PE-P)            3                 3
   Mid-point LSPs              1                 1

4.7. Node Protection with 3+ hop primary (from PLR) and 1 hop backup
     TE tunnels

       --------   --------  PRI  --------  PRI  --------  PRI  --------
      |  R1    | |  R2    |     |  R3    |     |  R4    |     |  R5    |
   TG-|  HE    |-|  MID   |-----|  MID   |-----|  MP    |-----|  TE    |-TA
      |        | |  PLR   |     |        |     |        |     |        |
       --------   --------       --------       --------       --------
                 BKP |                             |
                     -------------------------------

   Figure 8: Setup for Section 4.7

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            1                 1
   Layer3 VPN (PE-PE)          2                 2
   Layer3 VPN (PE-P)           3                 3
   Layer2 VC (PE-PE)           2                 2
   Layer2 VC (PE-P)            3                 3
   Mid-point LSPs              1                 1

4.8. Node Protection with 3+ hop primary (from PLR) and 2 hop backup
     TE tunnels

       --------   --------       --------       --------       --------
      |  R1    | |  R2    | PRI |  R3    | PRI |  R4    | PRI |  R5    |
   TG-|  HE    |-|  MID   |-----|  MID   |-----|  MP    |-----|  TE    |-TA
      |        | |  PLR   |     |        |     |        |     |        |
       --------   --------       --------       --------       --------
                 BKP |                             |
                     |          --------           |
                     |         |  R6    |          |
                     ----------|  BKP   |-----------
                               |  MID   |
                                --------

   Figure 9: Setup for Section 4.8

   Traffic              No. of labels     No. of labels
                        before failure    after failure
   IP TRAFFIC (P-P)            1                 2
   Layer3 VPN (PE-PE)          2                 3
   Layer3 VPN (PE-P)           3                 4
   Layer2 VC (PE-PE)           2                 3
   Layer2 VC (PE-P)            3                 4
   Mid-point LSPs              1                 2

4.9. Baseline MPLS Forwarding Performance Test Topology

       -------    --------    --------
      |  R1   |  |  R2    |  |  R3    |
      |  HE   |--|  MID   |--|  TE    |
      |       |  |        |  |        |
       -------    --------    --------

   Figure 10: Baseline Forwarding Performance

5. Test Methodology

   The procedure described in this section can be applied to all
   eight base test cases and the associated topologies.  The backup
   tunnel and the primary tunnel are configured to be alike in terms
   of bandwidth usage.  In order to benchmark failover with all label
   stack depths applicable to current deployments, it is suggested
   that the methodology include all the scenarios listed here.
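   The four failover test cases that follow share the same fourteen-
   step skeleton.  A condensed orchestration sketch of that common
   procedure is given below, assuming a hypothetical tester and DUT
   API (all names illustrative):

      def run_failover_case(tester, dut, topology, trigger):
          """Condensed form of the procedures in Sections 5.1-5.4."""
          dut.establish_primary_lsp(topology)            # steps 1-2
          dut.establish_backup_lsp(topology)
          assert dut.primary_is_protected()              # steps 3-4
          tester.setup_streams()                         # step 5 (3.7)
          tester.send_at_max_forwarding_rate()           # steps 6-7
          trigger()                                      # step 8 (3.1)
          assert dut.prefixes_mapped_to_backup()         # step 9
          lost = tester.stop_and_count_lost_packets()    # step 10
          # Step 11: failover time per the Section 6 formula.
          failover_ms = lost / tester.offered_rate_pps * 1000.0
          tester.send_at_max_forwarding_rate()           # step 12
          dut.restore_failed_resource()                  # step 13
          assert dut.primary_is_protected()              # step 14
          return failover_ms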
5.1. Headend as PLR with link failure

   Objective

   To benchmark the MPLS failover time due to the link failure events
   described in Section 3.1, as experienced by the DUT, which is the
   point of local repair (PLR).

   Test Setup

   - Select any one of the eight topologies from Section 4.
   - Select an overlay technology for the FRR test, e.g., IGP, VPN,
     or VC.
   - The DUT will also have two interfaces connected to the traffic
     generator/analyzer.  (If the node downstream of the PLR is not a
     simulated node, then the ingress of the tunnel should have one
     link connected to the traffic generator, and the node downstream
     of the PLR or the egress of the tunnel should have a link
     connected to the traffic analyzer.)

   Test Configuration

   1. Configure the number of primaries on R2 and the backups on R2
      as required by the topology selected.
   2. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) from the tail end.

   Procedure

   1. Establish the primary LSPs on R2 required by the topology
      selected.
   2. Establish the backup LSPs on R2 required by the selected
      topology.
   3. Verify that the primary and backup LSPs are up and that the
      primary is protected.
   4. Verify that Fast Reroute protection is enabled and ready.
   5. Set up traffic streams as described in Section 3.7.
   6. Send IP traffic at the maximum forwarding rate to the DUT.
   7. Verify that the traffic is switched over the primary LSP.
   8. Trigger any choice of link failure as described in Section 3.1.
   9. Verify that the primary tunnel and prefixes get mapped to the
      backup tunnels.
   10. Stop the traffic stream and measure the traffic loss.
   11. Failover time is calculated as defined in Section 6, Reporting
       Format.
   12. Start the traffic stream again to verify reversion when the
       protected interface comes up.  Traffic loss should be 0 due to
       make-before-break or reversion.
   13. Enable the protected interface that was down (the node, in the
       case of NNHOP protection).
   14. Verify that the head-end signals a new LSP and that protection
       is in place again.

5.2. Mid-Point as PLR with link failure

   Objective

   To benchmark the MPLS failover time due to the link failure events
   described in Section 3.1, as experienced by the device under test,
   which is the point of local repair (PLR).

   Test Setup

   - Select any one of the eight topologies from Section 4.
   - Select the overlay technology for the FRR test as mid-point
     LSPs.
   - The DUT will also have two interfaces connected to the traffic
     generator.

   Test Configuration

   1. Configure the number of primaries on R1 and the backups on R2
      as required by the topology selected.
   2. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) from the tail end.

   Procedure

   1. Establish the primary LSPs on R1 required by the topology
      selected.
   2. Establish the backup LSPs on R2 required by the selected
      topology.
   3. Verify that the primary and backup LSPs are up and that the
      primary is protected.
   4. Verify Fast Reroute protection.
   5. Set up traffic streams as described in Section 3.7.
   6. Send IP traffic at the maximum forwarding rate to the DUT.
   7. Verify that the traffic is switched over the primary LSP.
   8. Trigger any choice of link failure as described in Section 3.1.
   9. Verify that the primary tunnel and prefixes get mapped to the
      backup tunnels.
   10. Stop the traffic stream and measure the traffic loss.
   11. Failover time is calculated as defined in Section 6, Reporting
       Format.
   12. Start the traffic stream again to verify reversion when the
       protected interface comes up.  Traffic loss should be 0 due to
       make-before-break or reversion.
   13. Enable the protected interface that was down (the node, in the
       case of NNHOP protection).
   14. Verify that the head-end signals a new LSP and that protection
       is in place again.

5.3. Headend as PLR with Node failure

   Objective

   To benchmark the MPLS failover time due to the node failure events
   described in Section 3.1, as experienced by the device under test,
   which is the point of local repair (PLR).

   Test Setup

   - Select any one topology from Sections 4.5 to 4.8.
   - Select an overlay technology for the FRR test, e.g., IGP, VPN,
     or VC.
   - The DUT will also have two interfaces connected to the traffic
     generator.

   Test Configuration

   1. Configure the number of primaries on R2 and the backups on R2
      as required by the topology selected.
   2. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) from the tail end.

   Procedure

   1. Establish the primary LSPs on R2 required by the topology
      selected.
   2. Establish the backup LSPs on R2 required by the selected
      topology.
   3. Verify that the primary and backup LSPs are up and that the
      primary is protected.
   4. Verify Fast Reroute protection.
   5. Set up traffic streams as described in Section 3.7.
   6. Send IP traffic at the maximum forwarding rate to the DUT.
   7. Verify that the traffic is switched over the primary LSP.
   8. Trigger any choice of node failure as described in Section 3.1.
   9. Verify that the primary tunnel and prefixes get mapped to the
      backup tunnels.
   10. Stop the traffic stream and measure the traffic loss.
   11. Failover time is calculated as defined in Section 6, Reporting
       Format.
   12. Start the traffic stream again to verify reversion when the
       protected node comes up.  Traffic loss should be 0 due to
       make-before-break or reversion.
   13. Boot the protected node that was down.
   14. Verify that the head-end signals a new LSP and that protection
       is in place again.

5.4. Mid-Point as PLR with Node failure

   Objective

   To benchmark the MPLS failover time due to the node failure events
   described in Section 3.1, as experienced by the device under test,
   which is the point of local repair (PLR).

   Test Setup

   - Select any one topology from Sections 4.5 to 4.8.
   - Select the overlay technology for the FRR test as mid-point
     LSPs.
   - The DUT will also have two interfaces connected to the traffic
     generator.

   Test Configuration

   1. Configure the number of primaries on R1 and the backups on R2
      as required by the topology selected.
   2. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) from the tail end.

   Procedure

   1. Establish the primary LSPs on R1 required by the topology
      selected.
   2. Establish the backup LSPs on R2 required by the selected
      topology.
   3. Verify that the primary and backup LSPs are up and that the
      primary is protected.
   4. Verify Fast Reroute protection.
   5. Set up traffic streams as described in Section 3.7.
   6. Send IP traffic at the maximum forwarding rate to the DUT.
   7. Verify that the traffic is switched over the primary LSP.
   8. Trigger any choice of node failure as described in Section 3.1.
   9. Verify that the primary tunnel and prefixes get mapped to the
      backup tunnels.
   10. Stop the traffic stream and measure the traffic loss.
   11. Failover time is calculated as defined in Section 6, Reporting
       Format.
   12. Start the traffic stream again to verify reversion when the
       protected node comes up.  Traffic loss should be 0 due to
       make-before-break or reversion.
   13. Boot the protected node that was down.
   14. Verify that the head-end signals a new LSP and that protection
       is in place again.

5.5. Baseline MPLS Forwarding Performance Test Cases

   For the following forwarding performance benchmarking cases, the
   egress must not advertise an implicit-null label; that is, PHP
   should not occur.

5.5.1. DUT Throughput as Ingress

   Objective

   To baseline the MPLS Throughput of the DUT acting as an Ingress.

   Procedure

   1. Configure the DUT as R1, the Ingress, and the Tester as R2/R3,
      the Midpoint and Egress, as shown in Figure 10.
   2. Execute the Throughput benchmarking test, as specified in
      [RFC-BENCH], Section 26.1.

   Expected Results:

   The DUT will push a single label onto the IP packet and forward it
   to the Tester as an MPLS packet.

5.5.2. DUT Latency as Ingress

   Objective

   To baseline the MPLS Latency of the DUT acting as an Ingress.

   Procedure

   1. Configure the DUT as R1, the Ingress, and the Tester as R2/R3,
      the Midpoint and Egress, as shown in Figure 10.
   2. Execute the Latency benchmarking test, as specified in
      [RFC-BENCH], Section 26.2.

   Expected Results:

   The DUT will push a single label onto the IP packet and forward it
   to the Tester as an MPLS packet.

5.5.3. DUT Throughput as Egress

   Objective

   To baseline the MPLS Throughput of the DUT acting as an Egress.

   Procedure

   1. Configure the DUT as R3, the Egress, and the Tester as R1/R2,
      the Ingress and Midpoint, as shown in Figure 10.
   2. Execute the Throughput benchmarking test, as specified in
      [RFC-BENCH], Section 26.1, using MPLS-labeled IP packets for
      the offered load.

   Expected Results:

   The DUT will pop the single label from the MPLS packet and forward
   it to the Tester as an IP packet.

5.5.4. DUT Latency as Egress

   Objective

   To baseline the MPLS Latency of the DUT acting as an Egress.

   Procedure

   1. Configure the DUT as R3, the Egress, and the Tester as R1/R2,
      the Ingress and Midpoint, as shown in Figure 10.
   2. Execute the Latency benchmarking test, as specified in
      [RFC-BENCH], Section 26.2, using MPLS-labeled IP packets for
      the offered load.

   Expected Results:

   The DUT will pop the single label from the MPLS packet and forward
   it to the Tester as an IP packet.

5.5.5. DUT Throughput as Mid-Point

   Objective

   To baseline the MPLS Throughput of the DUT acting as a Mid-Point.

   Procedure

   1. Configure the DUT as R2, the Mid-Point, and the Tester as
      R1/R3, the Ingress and Egress, as shown in Figure 10.
   2. Execute the Throughput benchmarking test, as specified in
      [RFC-BENCH], Section 26.1, using MPLS-labeled IP packets for
      the offered load.

   Expected Results:

   The DUT will receive the MPLS-labeled packet, swap the single MPLS
   label, and forward it to the Tester as an MPLS-labeled packet.

5.5.6. DUT Latency as Mid-Point

   Objective

   To baseline the MPLS Latency of the DUT acting as a Mid-Point.

   Procedure
   1. Configure the DUT as R2, the Mid-Point, and the Tester as
      R1/R3, the Ingress and Egress, as shown in Figure 10.
   2. Execute the Latency benchmarking test, as specified in
      [RFC-BENCH], Section 26.2, using MPLS-labeled IP packets for
      the offered load.

   Expected Results:

   The DUT will receive the MPLS-labeled packet, swap the single MPLS
   label, and forward it to the Tester as an MPLS-labeled packet.

6. Reporting Format

   For each test, it is recommended that the results be reported in
   the following format.

   Parameter                                Units

   IGP used for the test                    ISIS-TE / OSPF-TE
   Interface types                          GigE, POS, ATM, VLAN, etc.
   Packet sizes offered to the DUT          bytes
   IGP routes advertised                    number of IGP routes
   RSVP hello timers configured (if any)    milliseconds
   Number of FRR tunnels configured         number of tunnels
   Number of VPN routes in head-end         number of VPN routes
   Number of VC tunnels                     number of VC tunnels
   Number of BGP routes                     number of BGP routes
   Number of mid-point tunnels              number of tunnels
   Number of prefixes protected by primary  number of prefixes
   Number of LSPs being protected           number of LSPs
   Topology being used                      section number
   Failure event                            event type

   Benchmarks

   Minimum failover time                    milliseconds
   Mean failover time                       milliseconds
   Maximum failover time                    milliseconds
   Minimum reversion time                   milliseconds
   Mean reversion time                      milliseconds
   Maximum reversion time                   milliseconds

   The failover time suggested above is calculated using the
   following formula:

      Failover time (milliseconds) =
         (number of packets dropped / offered rate in packets per
          second) * 1000

   Note: If the primary is configured to be dynamic, and if the
   primary is to reroute, make-before-break should occur from the
   backup that is in use to a new alternate primary.  If any packet
   loss is seen, it should be added to the failover time.

7. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

8. Acknowledgements

   We would like to thank Jean Philip Vasseur for his invaluable
   input to the document, and Curtis Villamizar for his contribution
   in suggesting text on the definition of, and the need for
   benchmarking, correlated failures.

   Additionally, we would like to thank Arun Gandhi, Amrit Hanspal,
   and Karu Ratnam for their input to the document.

9. References

9.1. Normative References

   [MPLS-RSVP]     Braden, R., Ed., et al., "Resource ReSerVation
                   Protocol (RSVP) -- Version 1 Functional
                   Specification", RFC 2205, September 1997.

   [MPLS-RSVP-TE]  Awduche, D., et al., "RSVP-TE: Extensions to RSVP
                   for LSP Tunnels", RFC 3209, December 2001.

   [MPLS-FRR-EXT]  Pan, P., Atlas, A., and G. Swallow, "Fast Reroute
                   Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
                   May 2005.

   [MPLS-ARCH]     Rosen, E., Viswanathan, A., and R. Callon,
                   "Multiprotocol Label Switching Architecture",
                   RFC 3031, January 2001.

   [RFC-BENCH]     Bradner, S. and J. McQuaid, "Benchmarking
                   Methodology for Network Interconnect Devices",
                   RFC 2544, March 1999.

9.2. Informative References

   [MPLS-LDP]      Andersson, L., Doolan, P., Feldman, N., Fredette,
                   A., and B. Thomas, "LDP Specification", RFC 3036,
                   January 2001.
   [RFC-WORDS]     Bradner, S., "Key words for use in RFCs to
                   Indicate Requirement Levels", BCP 14, RFC 2119,
                   March 1997.

   [RFC-IANA]      Narten, T. and H. Alvestrand, "Guidelines for
                   Writing an IANA Considerations Section in RFCs",
                   RFC 2434, October 1998.

   [TERM-ID]       Poretsky, S. and R. Papneja, "Benchmarking
                   Terminology for Protection Performance",
                   draft-poretsky-protection-term-00.txt, work in
                   progress.

   [IGP-METH]      Poretsky, S. and B. Imhoff, "Benchmarking
                   Methodology for IGP Data Plane Route Convergence",
                   draft-ietf-bmwg-igp-dataplane-conv-meth-11.txt,
                   work in progress.

10. Authors' Addresses

   Rajiv Papneja
   Isocore
   12359 Sunrise Valley Drive, STE 100
   Reston, VA 20190
   USA
   Phone: +1 703 860 9273
   Email: rpapneja@isocore.com

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Phone: +1 978 936 1484
   Email: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Phone: +1 978 936 0533
   Email: jkarthik@cisco.com

   Scott Poretsky
   Reef Point Systems
   8 New England Executive Park
   Burlington, MA 01803
   USA
   Phone: +1 781 395 5090
   Email: sporetsky@reefpoint.com

   Shankar Rao
   Qwest Communications
   950 17th Street, Suite 1900
   Denver, CO 80210
   USA
   Phone: +1 303 437 6643
   Email: shankar.rao@qwest.com

   Jean-Louis Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France
   Phone: 00 33 2 96 05 30 20
   Email: jeanlouis.leroux@orange-ft.com

Full Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology
   described in this document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.  Information on the procedures with respect to rights
   in RFC documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.
   The IETF invites any interested party to bring to its attention
   any copyrights, patents or patent applications, or other
   proprietary rights that may cover technology that may be required
   to implement this standard.  Please address the information to the
   IETF at ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.

Appendix A: Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of Fast Reroute implementations.  It also recommends
   the typical numbers for IGP/VPNv4 prefixes, LSP tunnels, and VC
   entries.  Based on the features supported by the device under
   test, the appropriate scaling limits can be used for the test bed.

   A 1. FRR IGP Table

   No. of Headend        IGP Prefixes
   TE LSPs

   1                     100
   1                     500
   1                     1000
   1                     2000
   1                     5000
   2 (load balance)      100
   2 (load balance)      500
   2 (load balance)      1000
   2 (load balance)      2000
   2 (load balance)      5000
   100                   100
   500                   500
   1000                  1000
   2000                  2000

   A 2. FRR VPN Table

   No. of Headend        VPNv4 Prefixes
   TE LSPs

   1                     100
   1                     500
   1                     1000
   1                     2000
   1                     5000
   1                     10000
   1                     20000
   1                     Max
   2 (load balance)      100
   2 (load balance)      500
   2 (load balance)      1000
   2 (load balance)      2000
   2 (load balance)      5000
   2 (load balance)      10000
   2 (load balance)      20000
   2 (load balance)      Max

   A 3. FRR Mid-Point LSP Table

   The number of mid-point TE LSPs can be configured at the following
   recommended levels:

   100
   500
   1000
   2000
   Max supported number

   A 4. FRR VC Table

   No. of Headend        VC entries
   TE LSPs

   1                     100
   1                     500
   1                     1000
   1                     2000
   1                     Max
   100                   100
   500                   500
   1000                  1000
   2000                  2000

Appendix B: Abbreviations

   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   IP     - Internet Protocol
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multi Protocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   RSVP   - Resource reSerVation Protocol
   SRLG   - Shared Risk Link Group
   TA     - Traffic Analyzer
   TE     - Traffic Engineering
   TG     - Traffic Generator
   VC     - Virtual Circuit
   VPN    - Virtual Private Network