idnits 2.17.1

draft-ietf-bmwg-igp-dataplane-conv-meth-17.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

  ** The document seems to lack a License Notice according IETF Trust
     Provisions of 28 Dec 2009, Section 6.b.i or Provisions of 12 Sep 2009
     Section 6.b -- however, there's a paragraph with a matching beginning.
     Boilerplate error?

     (You're using the IETF Trust Provisions' Section 6.b License Notice from
     12 Feb 2009 rather than one of the newer Notices.  See
     https://trustee.ietf.org/license-info/.)

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does not
     match the current year

  == Line 373 has weird spacing: '... Pacing  seco...'

  -- The document seems to lack a disclaimer for pre-RFC5378 work, but may
     have content which was first submitted before 10 November 2008.  If you
     have contacted all the original authors and they are all willing to grant
     the BCP78 rights to the IETF Trust, then this is fine, and you can ignore
     this comment.  If not, you may need to add the pre-RFC5378 disclaimer.
     (See the Legal Provisions document at
     https://trustee.ietf.org/license-info for more information.)

  -- The document date (March 08, 2009) is 5521 days in the past.  Is this
     intentional?
  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Outdated reference: A later version (-23) exists of
     draft-ietf-bmwg-igp-dataplane-conv-term-17

     Summary: 1 error (**), 0 flaws (~~), 3 warnings (==), 2 comments (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------

1     Network Working Group                                      S. Poretsky
2     Internet Draft                                       Allot Communications
3     Expires: September 08, 2009
4     Intended Status: Informational                            Brent Imhoff
5                                                            Juniper Networks

7                                March 08, 2009

9                          Benchmarking Methodology for
10                   Link-State IGP Data Plane Route Convergence

14    Status of this Memo

15       This Internet-Draft is submitted to IETF in full conformance with the
16       provisions of BCP 78 and BCP 79.

18       Internet-Drafts are working documents of the Internet Engineering
19       Task Force (IETF), its areas, and its working groups.  Note that
20       other groups may also distribute working documents as Internet-
21       Drafts.

23       Internet-Drafts are draft documents valid for a maximum of six months
24       and may be updated, replaced, or obsoleted by other documents at any
25       time.  It is inappropriate to use Internet-Drafts as reference
26       material or to cite them other than as "work in progress."

28       The list of current Internet-Drafts can be accessed at
29       http://www.ietf.org/ietf/1id-abstracts.txt.
30       The list of Internet-Draft Shadow Directories can be accessed at
31       http://www.ietf.org/shadow.html.

33       This Internet-Draft will expire on September 8, 2009.

35    Copyright Notice
36       Copyright (c) 2009 IETF Trust and the persons identified as the
37       document authors.  All rights reserved.

39       This document is subject to BCP 78 and the IETF Trust's Legal
40       Provisions Relating to IETF Documents in effect on the date of
41       publication of this document (http://trustee.ietf.org/license-info).
42       Please review these documents carefully, as they describe your rights
43       and restrictions with respect to this document.

45    ABSTRACT
46       This document describes the methodology for benchmarking Interior
47       Gateway Protocol (IGP) Route Convergence.  The methodology is to
48       be used for benchmarking IGP convergence time through externally
49       observable (black box) data plane measurements.  The methodology
50       can be applied to any link-state IGP, such as ISIS and OSPF.

52                   Link-State IGP Data Plane Route Convergence

54    Table of Contents
55       1. Introduction and Scope......................................2
56       2. Existing Definitions........................................2
57       3. Test Setup..................................................3
58       3.1 Test Topologies............................................3
59       3.2 Test Considerations........................................5
60       3.3 Reporting Format...........................................8
61       4. Test Cases..................................................9
62       4.1 Convergence Due to Local Interface Failure.................9
63       4.2 Convergence Due to Remote Interface Failure................10
64       4.3 Convergence Due to Local Administrative Shutdown...........11
65       4.4 Convergence Due to Layer 2 Session Loss....................11
66       4.5 Convergence Due to Loss of IGP Adjacency...................12
67       4.6 Convergence Due to Route Withdrawal........................13
68       4.7 Convergence Due to Cost Change.............................14
69       4.8 Convergence Due to ECMP Member Interface Failure...........15
70       4.9 Convergence Due to ECMP Member Remote Interface Failure....16
71       4.10 Convergence Due to Parallel Link Interface Failure........16
72       5. IANA Considerations.........................................17
73       6. Security Considerations.....................................17
74       7. Acknowledgements............................................17
75       8. References..................................................18
76       9.
         Author's Address............................................18

78    1. Introduction and Scope
79       This document describes the methodology for benchmarking Interior
80       Gateway Protocol (IGP) Route Convergence.  The motivation and
81       applicability for this benchmarking is described in [Po09a].
82       The terminology to be used for this benchmarking is described
83       in [Po09t].  Service Providers use IGP Convergence time as a key
84       metric of router design and architecture.  Customers of Service
85       Providers observe convergence time by packet loss, so IGP Route
86       Convergence is considered a Direct Measure of Quality (DMOQ).  The
87       test cases in this document are black-box tests that emulate the
88       network events that cause route convergence, as described in
89       [Po09a].  The black-box test designs benchmark the data plane and
90       account for all of the factors contributing to convergence time,
91       as discussed in [Po09a].  Convergence times are measured at the
92       Tester on the data plane by observing packet loss through the DUT.
93       The methodology (and terminology) for benchmarking route
94       convergence can be applied to any link-state IGP such as ISIS
95       [Ca90] and OSPF [Mo98] and others.  These methodologies apply to
96       IPv4 and IPv6 traffic and IGPs.

98    2. Existing Definitions
99       The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
100      "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
101      document are to be interpreted as described in BCP 14, RFC 2119
102      [Br97].  RFC 2119 defines the use of these key words to help make the
105      intent of standards track documents as clear as possible.  While this
106      document uses these keywords, this document is not a standards track
107      document.

109      This document adopts the definition format in Section 2 of RFC 1242
110      [Br91].  This document uses much of the terminology defined in
111      [Po09t].  This document uses existing terminology defined in other
112      BMWG work.
         Examples include, but are not limited to:

114         Throughput                [Ref.[Br91], section 3.17]
115         Device Under Test (DUT)   [Ref.[Ma98], section 3.1.1]
116         System Under Test (SUT)   [Ref.[Ma98], section 3.1.2]
117         Out-of-order Packet       [Ref.[Po06], section 3.3.2]
118         Duplicate Packet          [Ref.[Po06], section 3.3.3]
119         Packet Loss               [Ref.[Po09t], Section 3.5]

121   3. Test Setup

123   3.1 Test Topologies

125      Convergence times are measured at the Tester on the data plane
126      by observing packet loss through the DUT.  Figure 1 shows the test
127      topology to measure IGP Route Convergence due to local Convergence
128      Events such as Link Failure, Layer 2 Session Failure, IGP
129      Adjacency Failure, Route Withdrawal, and route cost change.  These
130      test cases discussed in section 4 provide route convergence times
131      that include the Event Detection time, SPF Processing time, and
132      FIB Update time.

134      Figure 2 shows the test topology to measure IGP Route Convergence
135      time due to remote changes in the network topology.  These times
136      are measured by observing packet loss in the data plane at the
137      Tester.  In this topology the three routers are considered a System
138      Under Test (SUT).  A Remote Interface [Po09t] failure on router R2
139      MUST result in convergence of traffic to router R3.  NOTE: All
140      routers in the SUT must be the same model and identically
141      configured.

143      ---------        Ingress Interface        ---------
144      |       |<--------------------------------|       |
145      |       |                                 |       |
146      |       |   Preferred Egress Interface    |       |
147      |  DUT  |-------------------------------->| Tester|
148      |       |                                 |       |
149      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
150      |       |   Next-Best Egress Interface    |       |
151      ---------                                 ---------

153      Figure 1.
                Test Topology 1: IGP Convergence Test Topology
154                              for Local Changes

158              -----                        ---------
159              |   |      Preferred        |       |
160      -----   |R2 |---------------------->|       |
161      |   |-->|   |   Egress Interface    |       |
162      |   |   -----                       |       |
163      |R1 |                               |Tester |
164      |   |   -----                       |       |
165      |   |-->|   |      Next-Best        |       |
166      -----   |R3 |~~~~~~~~~~~~~~~~~~~~~~>|       |
167        ^     |   |   Egress Interface    |       |
168        |     -----                       ---------
169        |                                     |
170        |--------------------------------------
171                   Ingress Interface

173      Figure 2. Test Topology 2: IGP Convergence Test Topology
174                for Convergence Due to Remote Changes

176      ---------        Ingress Interface        ---------
177      |       |<--------------------------------|       |
178      |       |                                 |       |
179      |       |     ECMP Set Interface 1        |       |
180      |  DUT  |-------------------------------->| Tester|
181      |       |               .                 |       |
182      |       |               .                 |       |
183      |       |               .                 |       |
184      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
185      |       |     ECMP Set Interface N        |       |
186      ---------                                 ---------

188      Figure 3. Test Topology 3: IGP Convergence Test Topology
189                for ECMP Convergence

191      Figure 3 shows the test topology to measure IGP Route Convergence
192      time with members of an Equal Cost Multipath (ECMP) Set.  These
193      times are measured by observing packet loss in the data plane at
194      the Tester.  In this topology, the DUT is configured with each
195      Egress interface as a member of an ECMP set and the Tester emulates
196      multiple next-hop routers (emulates one router for each member).

198      Figure 4 shows the test topology to measure IGP Route Convergence
199      time with members of a Parallel Link.  These times are measured by
200      observing packet loss in the data plane at the Tester.  In this
201      topology, the DUT is configured with each Egress interface as a
202      member of a Parallel Link and the Tester emulates the single
203      next-hop router.
207      ---------        Ingress Interface        ---------
208      |       |<--------------------------------|       |
209      |       |                                 |       |
210      |       |   Parallel Link Interface 1     |       |
211      |  DUT  |-------------------------------->| Tester|
212      |       |               .                 |       |
213      |       |               .                 |       |
214      |       |               .                 |       |
215      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
216      |       |   Parallel Link Interface N     |       |
217      ---------                                 ---------

219      Figure 4. Test Topology 4: IGP Convergence Test Topology
220                for Parallel Link Convergence

222   3.2 Test Considerations

223   3.2.1 IGP Selection
224      The test cases described in section 4 MAY be used for link-state
225      IGPs, such as ISIS or OSPF.  The Route Convergence test methodology
226      is identical.  The IGP adjacencies are established on the Preferred
227      Egress Interface and Next-Best Egress Interface.

229   3.2.2 Routing Protocol Configuration
230      The obtained results for IGP Route Convergence may vary if
231      other routing protocols are enabled and routes learned via those
232      protocols are installed.  IGP convergence times MUST be benchmarked
233      without routes installed from other protocols.

235      When performing test cases, advertise a single IGP topology from
236      Tester to DUT on the Preferred Egress Interface [Po09t] and
237      Next-Best Egress Interface [Po09t] using the test setup shown in
238      Figure 1.  These two interfaces on the DUT must peer with
239      different emulated neighbor routers for their IGP adjacencies.
240      The IGP topology learned on both interfaces MUST be the same
241      topology with the same nodes and routes.

243   3.2.3 IGP Route Scaling
244      The number of IGP routes will impact the measured IGP Route
245      Convergence.  To obtain results similar to those that would be
246      observed in an operational network, it is RECOMMENDED that the
247      number of installed routes and nodes closely approximates that
248      of the network (e.g. thousands of routes with tens of nodes).
249      The number of areas (for OSPF) and levels (for ISIS) can impact
250      the benchmark results.
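   [Illustrative aside, not part of the methodology: the route-scaling
   recommendation above can be met by having the Tester advertise a
   generated prefix set.  The sketch below shows one way to enumerate
   such a set; the function name and the 10.0.0.0/8 addressing plan are
   hypothetical examples, not requirements of this document.]

```python
import ipaddress

def emulated_route_set(num_routes, base="10.0.0.0/8", new_prefix=24):
    """Build a list of `num_routes` prefixes for the Tester to
    advertise, approximating an operational network's route scale
    (thousands of routes, per Section 3.2.3).  The addressing plan
    here is illustrative only."""
    subnets = ipaddress.ip_network(base).subnets(new_prefix=new_prefix)
    return [next(subnets) for _ in range(num_routes)]

# e.g. thousands of routes, as Section 3.2.3 recommends:
routes = emulated_route_set(5000)
```

   The same prefix list would then be advertised identically on the
   Preferred and Next-Best Egress Interfaces, per Section 3.2.2.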
254   3.2.4 Timers
255      There are some timers that will impact the measured IGP Convergence
256      time.  Benchmarking metrics may be measured at any fixed values for
257      these timers.  It is RECOMMENDED that the following timers be
258      configured to the minimum values listed:

260         Timer                             Recommended Value
261         -----                             -----------------
262         Link Failure Indication Delay     < 10 milliseconds
263         IGP Hello Timer                   1 second
264         IGP Dead-Interval                 3 seconds
265         LSA Generation Delay              0
266         LSA Flood Packet Pacing           0
267         LSA Retransmission Packet Pacing  0
268         SPF Delay                         0

270   3.2.5 Interface Types
271      All test cases in this methodology document may be executed with any
272      interface type.  All interfaces MUST be the same media and Throughput
273      [Br91][Br99] for each test case.  The type of media may dictate which
274      test cases may be executed.  This is because each interface type has
275      a unique mechanism for detecting link failures, and the speed at which
276      that mechanism operates will influence the measured results.  Media
277      and protocols MUST be configured for minimum failure detection delay
278      to minimize the contribution to the measured Convergence time.  For
279      example, configure SONET with the minimum carrier-loss-delay.  All
280      interfaces SHOULD be configured as point-to-point.

282   3.2.6 Packet Sampling Interval
283      The Packet Sampling Interval [Po09t] value is the fastest measurable
284      convergence time.  The RECOMMENDED value for the Packet Sampling
285      Interval to be set on the Tester is 10 milliseconds.  The Packet
286      Sampling Interval MUST be reported.

288   3.2.7 Offered Load
289      The offered load MUST be the Throughput of the device as defined in
290      [Br91] and benchmarked in [Br99] at a fixed packet size.  At least
291      one packet per route in the FIB for all routes in the FIB MUST be
292      offered to the DUT within the Packet Sampling Interval.  Packet
293      size is measured in bytes and includes the IP header and payload.
294      The packet size is selectable and MUST be recorded.  The Throughput
295      MUST be measured at the Preferred Egress Interface and the
296      Next-Best Egress Interface.  The duration of offered load MUST be
297      greater than the convergence time.

299      The destination addresses for the offered load MUST be distributed
300      such that all routes are matched and each route is offered an equal
301      share of the total Offered Load.  This requirement for the Offered
302      Load to be distributed to match all destinations in the route table
303      creates separate flows that are offered to the DUT.  The capability
304      of the Tester to measure packet loss for each individual flow
307      (identified by the destination address matching a route entry) and
308      the scale for the number of individual flows for which it can
309      measure packet loss should be considered when benchmarking
310      Route-Specific Convergence [Po09t].

312   3.2.8 Selection of Convergence Time Benchmark Metrics and Methods

314      The methodologies in the section 4 test cases MAY be applied to
315      benchmark Full Convergence Time, First Route Convergence Time,
316      Reversion Convergence Time, and Route-Specific Convergence Time
317      [Po09t].  The First Route Convergence Time benchmark metric MAY
318      be measured while measuring any of these convergence benchmarks.
319      The benchmarking metrics may be obtained using either the
320      Loss-Derived Convergence Method or Rate-Derived Convergence
321      Method.  It is RECOMMENDED that the Rate-Derived Convergence
322      Method be used when benchmarking convergence times.  The
323      Loss-Derived Convergence Method is not the preferred method to
324      measure convergence benchmarks because it can produce a result
325      that is faster than the actual convergence time.  When the
326      Packet Sampling Interval is too large, the Rate-Derived
327      Convergence Method may produce a larger than actual convergence
328      time.
         In such cases the Loss-Derived Convergence Method may
329      produce a more accurate result.

331   3.2.9 Tester Capabilities
332      It is RECOMMENDED that the Tester used to execute each test case
333      have the following capabilities:
334         1. Ability to establish IGP adjacencies and advertise a single
335            IGP topology to one or more peers.
336         2. Ability to produce Convergence Event Triggers [Po09t].
337         3. Ability to insert a timestamp in each data packet's IP
338            payload.
339         4. An internal time clock to control timestamping, time
340            measurements, and time calculations.
341         5. Ability to distinguish traffic load received on the
342            Preferred and Next-Best Interfaces [Po09t].
343         6. Ability to disable or tune specific Layer-2 and Layer-3
344            protocol functions on any interface(s).

346      It is not required that the Tester be capable of making non-data
347      plane convergence observations, nor that it use those observations
348      for measurements.

352   3.3 Reporting Format
353      For each test case, it is recommended that the reporting table below
354      be completed, and all time values SHOULD be reported with resolution
355      as specified in [Po09t].
357      Parameter                                    Units
358      ---------                                    -----
359      Test Case                                    test case number
360      Test Topology                                (1, 2, 3, or 4)
361      IGP                                          (ISIS, OSPF, other)
362      Interface Type                               (GigE, POS, ATM, other)
363      Packet Size offered to DUT                   bytes
364      IGP Routes advertised to DUT                 number of IGP routes
365      Nodes in emulated network                    number of nodes
366      Packet Sampling Interval on Tester           milliseconds
367      IGP Timer Values configured on DUT:
368         Interface Failure Indication Delay        seconds
369         IGP Hello Timer                           seconds
370         IGP Dead-Interval                         seconds
371         LSA Generation Delay                      seconds
372         LSA Flood Packet Pacing                   seconds
373         LSA Retransmission Packet Pacing          seconds
374         SPF Delay                                 seconds
375      Forwarding Metrics
376         Total Packets Offered to DUT              number of Packets
377         Total Packets Routed by DUT               number of Packets
378         Convergence Packet Loss                   number of Packets
379         Out-of-Order Packets                      number of Packets
380         Duplicate Packets                         number of Packets
381      Convergence Benchmarks
382         Full Convergence
383            First Route Convergence Time           seconds
384            Full Convergence Time (Rate-Derived)   seconds
385            Full Convergence Time (Loss-Derived)   seconds
386         Route-Specific Convergence
387            Number of Routes Measured              number of flows
388            Route-Specific Convergence Time[n]     array of seconds
389            Minimum R-S Convergence Time           seconds
390            Maximum R-S Convergence Time           seconds
391            Median R-S Convergence Time            seconds
392            Average R-S Convergence Time           seconds
393         Reversion
394            Reversion Convergence Time             seconds
395            First Route Convergence Time           seconds
396            Route-Specific Convergence
397               Number of Routes Measured           number of flows
398               Route-Specific Convergence Time[n]  array of seconds
399               Minimum R-S Convergence Time        seconds
400               Maximum R-S Convergence Time        seconds
401               Median R-S Convergence Time         seconds
402               Average R-S Convergence Time        seconds

405   4. Test Cases
406      It is RECOMMENDED that all applicable test cases be performed for
407      best characterization of the DUT.
         The test cases follow a generic
408      procedure tailored to the specific DUT configuration and Convergence
409      Event [Po09t].  This generic procedure is as follows:

411         1. Establish DUT configuration and install routes.
412         2. Send offered load with traffic traversing Preferred Egress
413            Interface [Po09t].
414         3. Introduce Convergence Event to force traffic to Next-Best
415            Egress Interface [Po09t].
416         4. Measure First Route Convergence Time.
417         5. Measure Full Convergence Time and, optionally, the
418            Route-Specific Convergence Times.
419         6. Wait the Sustained Convergence Validation Time to ensure there
420            is no residual packet loss.
421         7. Recover from Convergence Event.
422         8. Measure Reversion Convergence Time, and optionally the First
423            Route Convergence Time and Route-Specific Convergence Times.

425   4.1 Convergence Due to Local Interface Failure
426      Objective
427      To obtain the IGP Route Convergence due to a local link failure event
428      at the DUT's Local Interface.

430      Procedure
431      1. Advertise matching IGP routes and topology from Tester to DUT on
432         the Preferred Egress Interface [Po09t] and Next-Best Egress
433         Interface [Po09t] using the topology shown in Figure 1.  Set the
434         cost of the routes so that the Preferred Egress Interface is the
435         preferred next-hop.
436      2. Send offered load at measured Throughput with fixed packet
437         size to destinations matching all IGP routes from Tester to
438         DUT on Ingress Interface [Po09t].
439      3. Verify traffic is routed over Preferred Egress Interface.
440      4. Remove link on DUT's Preferred Egress Interface.  This is the
441         Convergence Event Trigger [Po09t] that produces the Convergence
442         Event Instant [Po09t].
443      5. Measure First Route Convergence Time [Po09t] as DUT detects the
444         link down event and begins to converge IGP routes and traffic
445         over the Next-Best Egress Interface.
446      6.
         Measure Full Convergence Time [Po09t] as DUT detects the
447         link down event and converges all IGP routes and traffic over
448         the Next-Best Egress Interface.  Optionally, Route-Specific
449         Convergence Times [Po09t] MAY be measured.
450      7. Stop offered load.  Wait 30 seconds for queues to drain.
451         Restart offered load.
452      8. Restore link on DUT's Preferred Egress Interface.
453      9. Measure Reversion Convergence Time [Po09t], and optionally
454         measure First Route Convergence Time and Route-Specific
455         Convergence Times, as DUT detects the link up event and
456         converges all IGP routes and traffic back to the Preferred
457         Egress Interface.

461      Results
462      The measured IGP Convergence time is influenced by the Local
463      link failure indication, SPF delay, SPF Hold time, SPF Execution
464      Time, Tree Build Time, and Hardware Update Time [Po09a].

466   4.2 Convergence Due to Remote Interface Failure

468      Objective
469      To obtain the IGP Route Convergence due to a Remote Interface
470      Failure event.

472      Procedure
473      1. Advertise matching IGP routes and topology from Tester to
474         SUT on Preferred Egress Interface [Po09t] and Next-Best Egress
475         Interface [Po09t] using the topology shown in Figure 2.
476         Set the cost of the routes so that the Preferred Egress
477         Interface is the preferred next-hop.
478      2. Send offered load at measured Throughput with fixed packet
479         size to destinations matching all IGP routes from Tester to
480         SUT on Ingress Interface [Po09t].
481      3. Verify traffic is routed over Preferred Egress Interface.
482      4. Remove link on Tester's Neighbor Interface [Po09t] connected to
483         SUT's Preferred Egress Interface.  This is the Convergence Event
484         Trigger [Po09t] that produces the Convergence Event Instant
485         [Po09t].
486      5. Measure First Route Convergence Time [Po09t] as SUT detects the
487         link down event and begins to converge IGP routes and traffic
488         over the Next-Best Egress Interface.
489      6.
         Measure Full Convergence Time [Po09t] as SUT detects
490         the link down event and converges all IGP routes and traffic
491         over the Next-Best Egress Interface.  Optionally, Route-Specific
492         Convergence Times [Po09t] MAY be measured.
493      7. Stop offered load.  Wait 30 seconds for queues to drain.
494         Restart offered load.
495      8. Restore link on Tester's Neighbor Interface connected to
496         DUT's Preferred Egress Interface.
497      9. Measure Reversion Convergence Time [Po09t], and optionally
498         measure First Route Convergence Time [Po09t] and Route-Specific
499         Convergence Times [Po09t], as DUT detects the link up event and
500         converges all IGP routes and traffic back to the Preferred Egress
501         Interface.

503      Results
504      The measured IGP Convergence time is influenced by the link failure
505      indication, LSA/LSP Flood Packet Pacing, LSA/LSP Retransmission
506      Packet Pacing, LSA/LSP Generation time, SPF delay, SPF Hold time,
507      SPF Execution Time, Tree Build Time, and Hardware Update Time
508      [Po09a].  This test case may produce Stale Forwarding [Po09t] due to
509      microloops, which may increase the measured convergence times.

513   4.3 Convergence Due to Local Administrative Shutdown
514      Objective
515      To obtain the IGP Route Convergence due to an administrative shutdown
516      at the DUT's Local Interface.

518      Procedure
519      1. Advertise matching IGP routes and topology from Tester to DUT on
520         Preferred Egress Interface [Po09t] and Next-Best Egress Interface
521         [Po09t] using the topology shown in Figure 1.  Set the cost of
522         the routes so that the Preferred Egress Interface is the
523         preferred next-hop.
524      2. Send offered load at measured Throughput with fixed packet
525         size to destinations matching all IGP routes from Tester to
526         DUT on Ingress Interface [Po09t].
527      3. Verify traffic is routed over Preferred Egress Interface.
528      4. Perform administrative shutdown on the DUT's Preferred Egress
529         Interface.
         This is the Convergence Event Trigger [Po09t] that
530         produces the Convergence Event Instant [Po09t].
531      5. Measure First Route Convergence Time [Po09t] as DUT detects the
532         link down event and begins to converge IGP routes and traffic
533         over the Next-Best Egress Interface.
534      6. Measure Full Convergence Time [Po09t] as DUT converges
535         all IGP routes and traffic over the Next-Best Egress Interface.
536         Optionally, Route-Specific Convergence Times [Po09t] MAY be
537         measured.
538      7. Stop offered load.  Wait 30 seconds for queues to drain.
539         Restart offered load.
540      8. Restore Preferred Egress Interface by administratively enabling
541         the interface.
542      9. Measure Reversion Convergence Time [Po09t], and optionally
543         measure First Route Convergence Time [Po09t] and Route-Specific
544         Convergence Times [Po09t], as DUT detects the link up event and
545         converges all IGP routes and traffic back to the Preferred
546         Egress Interface.

548      Results
549      The measured IGP Convergence time is influenced by SPF delay,
550      SPF Hold time, SPF Execution Time, Tree Build Time, and Hardware
551      Update Time [Po09a].

553   4.4 Convergence Due to Layer 2 Session Loss
554      Objective
555      To obtain the IGP Route Convergence due to a local Layer 2 loss.

557      Procedure
558      1. Advertise matching IGP routes and topology from Tester to DUT on
559         Preferred Egress Interface [Po09t] and Next-Best Egress Interface
560         [Po09t] using the topology shown in Figure 1.  Set the cost of
561         the routes so that the Preferred Egress
562         Interface is the preferred next-hop.
563      2. Send offered load at measured Throughput with fixed packet
564         size to destinations matching all IGP routes from Tester to
565         DUT on Ingress Interface [Po09t].

569      3. Verify traffic is routed over Preferred Egress Interface.
570      4. Tester removes Layer 2 session from DUT's Preferred Egress
571         Interface [Po09t].
         It is RECOMMENDED that this be achieved with
572         messaging, but the method MAY vary with the Layer 2 protocol.
573         This is the Convergence Event Trigger [Po09t] that produces the
574         Convergence Event Instant [Po09t].
575      5. Measure First Route Convergence Time [Po09t] as DUT detects the
576         Layer 2 session down event and begins to converge IGP routes and
577         traffic over the Next-Best Egress Interface.
578      6. Measure Full Convergence Time [Po09t] as DUT detects the
579         Layer 2 session down event and converges all IGP routes and
580         traffic over the Next-Best Egress Interface.  Optionally,
581         Route-Specific Convergence Times [Po09t] MAY be measured.
582      7. Stop offered load.  Wait 30 seconds for queues to drain.
583         Restart offered load.
584      8. Restore Layer 2 session on DUT's Preferred Egress Interface.
585      9. Measure Reversion Convergence Time [Po09t], and optionally
586         measure First Route Convergence Time [Po09t] and Route-Specific
587         Convergence Times [Po09t], as DUT detects the session up event
588         and converges all IGP routes and traffic over the Preferred Egress
589         Interface.

591      Results
592      The measured IGP Convergence time is influenced by the Layer 2
593      failure indication, SPF delay, SPF Hold time, SPF Execution
594      Time, Tree Build Time, and Hardware Update Time [Po09a].

596   4.5 Convergence Due to Loss of IGP Adjacency
597      Objective
598      To obtain the IGP Route Convergence due to loss of the IGP
599      Adjacency.

601      Procedure
602      1. Advertise matching IGP routes and topology from Tester to DUT on
603         Preferred Egress Interface [Po09t] and Next-Best Egress Interface
604         [Po09t] using the topology shown in Figure 1.  Set the cost of
605         the routes so that the Preferred Egress Interface is the
606         preferred next-hop.
607      2. Send offered load at measured Throughput with fixed packet
608         size to destinations matching all IGP routes from Tester to
609         DUT on Ingress Interface [Po09t].
610      3. Verify traffic is routed over Preferred Egress Interface.
611      4.
         Remove IGP adjacency from Tester's Neighbor Interface [Po09t]
612         connected to Preferred Egress Interface.  The Layer 2 session
613         MUST be maintained.  This is the Convergence Event Trigger
614         [Po09t] that produces the Convergence Event Instant [Po09t].
615      5. Measure First Route Convergence Time [Po09t] as DUT detects the
616         loss of IGP adjacency and begins to converge IGP routes and
617         traffic over the Next-Best Egress Interface.
618      6. Measure Full Convergence Time [Po09t] as DUT detects the
619         IGP session failure event and converges all IGP routes and
620         traffic over the Next-Best Egress Interface.  Optionally,
621         Route-Specific Convergence Times [Po09t] MAY be measured.

625      7. Stop offered load.  Wait 30 seconds for queues to drain.
626         Restart offered load.
627      8. Restore IGP session on DUT's Preferred Egress Interface.
628      9. Measure Reversion Convergence Time [Po09t], and optionally
629         measure First Route Convergence Time [Po09t] and Route-Specific
630         Convergence Times [Po09t], as DUT detects the session recovery
631         event and converges all IGP routes and traffic over the
632         Preferred Egress Interface.

634      Results
635      The measured IGP Convergence time is influenced by the IGP Hello
636      Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF
637      Execution Time, Tree Build Time, and Hardware Update Time [Po09a].

639   4.6 Convergence Due to Route Withdrawal

641      Objective
642      To obtain the IGP Route Convergence due to Route Withdrawal.

644      Procedure
645      1. Advertise a single IGP topology from Tester to DUT on Preferred
646         Egress Interface [Po09t] and Next-Best Egress Interface [Po09t]
647         using the test setup shown in Figure 1.  These two interfaces
648         on the DUT must peer with different emulated neighbor routers
649         for their IGP adjacency.  The IGP topology learned on both
650         interfaces MUST be the same topology with the same nodes and
651         routes.
       It is RECOMMENDED that the IGP routes be IGP external routes for
       which the Tester would be emulating a preferred and a next-best
       Autonomous System Border Router (ASBR). Set the cost of the
       routes so that the Preferred Egress Interface is the preferred
       next-hop.
    2. Send offered load at measured Throughput with fixed packet size
       to destinations matching all IGP routes from Tester to DUT on
       Ingress Interface [Po09t].
    3. Verify traffic is routed over Preferred Egress Interface.
    4. The Tester, emulating the neighbor node, withdraws one or more
       IGP leaf routes from the DUT's Preferred Egress Interface. The
       withdrawal update message MUST be a single unfragmented packet.
       This is the Convergence Event Trigger [Po09t] that produces the
       Convergence Event Instant [Po09t]. The Tester MAY record the
       time it sends the withdrawal message(s).
    5. Measure First Route Convergence Time [Po09t] as DUT detects the
       route withdrawal event and begins to converge IGP routes and
       traffic over the Next-Best Egress Interface.
    6. Measure Full Convergence Time [Po09t] as DUT withdraws routes
       and converges all IGP routes and traffic over the Next-Best
       Egress Interface. Optionally, Route-Specific Convergence Times
       [Po09t] MAY be measured.
    7. Stop offered load. Wait 30 seconds for queues to drain.
       Restart offered load.
    8. Re-advertise the withdrawn IGP leaf routes to DUT's Preferred
       Egress Interface.
    9. Measure Reversion Convergence Time [Po09t], and optionally
       measure First Route Convergence Time [Po09t] and Route-Specific
       Convergence Times [Po09t], as DUT converges all IGP routes and
       traffic over the Preferred Egress Interface.

   Results
       The measured IGP Convergence time is the SPF Processing and FIB
       Update time as influenced by the SPF or route calculation delay,
       Hold time, Execution Time, and Hardware Update Time [Po09a].
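   The convergence times measured in these procedures are typically
   derived from data-plane packet loss observed at a known, constant
   offered load, as defined in the companion terminology document
   [Po09t]. As a rough illustrative sketch only (the function names and
   sample figures below are hypothetical, not part of this
   methodology), the loss-derived calculation divides the packets lost
   during the Convergence Event by the offered load rate:

```python
# Illustrative sketch of a loss-derived convergence-time calculation.
# Assumption: the Tester offers a constant load (packets per second)
# and reports transmitted and received packet counts for the test run.
# All names and numbers here are hypothetical examples.

def packets_lost(tx_count: int, rx_count: int) -> int:
    """Packets lost is offered (transmitted) minus received."""
    return tx_count - rx_count

def convergence_time(lost: int, offered_rate_pps: float) -> float:
    """Estimate convergence time in seconds from total packet loss
    at a constant offered load rate."""
    if offered_rate_pps <= 0:
        raise ValueError("offered rate must be positive")
    return lost / offered_rate_pps

# Example: at 100,000 pps offered load, 250,000 packets lost during
# the Convergence Event correspond to an estimated 2.5 s of
# convergence time.
lost = packets_lost(tx_count=6_000_000, rx_count=5_750_000)
print(convergence_time(lost, offered_rate_pps=100_000))  # 2.5
```

   Note that this sketch captures only total loss; Route-Specific
   Convergence Times require per-destination loss measurement, which
   this simplification does not model.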
4.7 Convergence Due to Cost Change

   Objective
       To obtain the IGP Route Convergence due to route cost change.

   Procedure
    1. Advertise a single IGP topology from Tester to DUT on Preferred
       Egress Interface [Po09t] and Next-Best Egress Interface [Po09t]
       using the test setup shown in Figure 1. These two interfaces on
       the DUT must peer with different emulated neighbor routers for
       their IGP adjacency. The IGP topology learned on both
       interfaces MUST be the same topology with the same nodes and
       routes. It is RECOMMENDED that the IGP routes be IGP external
       routes for which the Tester would be emulating a preferred and
       a next-best Autonomous System Border Router (ASBR). Set the
       cost of the routes so that the Preferred Egress Interface is
       the preferred next-hop.
    2. Send offered load at measured Throughput with fixed packet size
       to destinations matching all IGP routes from Tester to DUT on
       Ingress Interface [Po09t].
    3. Verify traffic is routed over Preferred Egress Interface.
    4. The Tester, emulating the neighbor node, increases the cost for
       all IGP routes at DUT's Preferred Egress Interface so that the
       Next-Best Egress Interface has lower cost and becomes the
       preferred path. The update message advertising the higher cost
       MUST be a single unfragmented packet. This is the Convergence
       Event Trigger [Po09t] that produces the Convergence Event
       Instant [Po09t]. The Tester MAY record the time it sends the
       message advertising the higher cost on the Preferred Egress
       Interface.
    5. Measure First Route Convergence Time [Po09t] as DUT detects the
       cost change event and begins to converge IGP routes and traffic
       over the Next-Best Egress Interface.
    6. Measure Full Convergence Time [Po09t] as DUT detects the
       cost change event and converges all IGP routes and traffic
       over the Next-Best Egress Interface.
       Optionally, Route-Specific Convergence Times [Po09t] MAY be
       measured.
    7. Stop offered load. Wait 30 seconds for queues to drain.
       Restart offered load.
    8. Re-advertise IGP routes to DUT's Preferred Egress Interface
       with the original lower cost metric.
    9. Measure Reversion Convergence Time [Po09t], and optionally
       measure First Route Convergence Time [Po09t] and Route-Specific
       Convergence Times [Po09t], as DUT converges all IGP routes and
       traffic over the Preferred Egress Interface.

   Results
       It is possible that no measured packet loss will be observed
       for this test case.

4.8 Convergence Due to ECMP Member Interface Failure

   Objective
       To obtain the IGP Route Convergence due to a local link failure
       event of an ECMP Member.

   Procedure
    1. Configure ECMP Set as shown in Figure 3.
    2. Advertise matching IGP routes and topology from Tester to DUT
       on each ECMP member.
    3. Send offered load at measured Throughput with fixed packet size
       to destinations matching all IGP routes from Tester to DUT on
       Ingress Interface [Po09t].
    4. Verify traffic is routed over all members of ECMP Set.
    5. Remove link on Tester's Neighbor Interface [Po09t] connected to
       one of the DUT's ECMP member interfaces. This is the
       Convergence Event Trigger [Po09t] that produces the Convergence
       Event Instant [Po09t].
    6. Measure First Route Convergence Time [Po09t] as DUT detects the
       link down event and begins to converge IGP routes and traffic
       over the other ECMP members.
    7. Measure Full Convergence Time [Po09t] as DUT detects the link
       down event and converges all IGP routes and traffic over the
       other ECMP members. At the same time measure Out-of-Order
       Packets [Po06] and Duplicate Packets [Po06]. Optionally,
       Route-Specific Convergence Times [Po09t] MAY be measured.
    8. Stop offered load. Wait 30 seconds for queues to drain.
       Restart offered load.
    9. Restore link on Tester's Neighbor Interface connected to DUT's
       ECMP member interface.
   10. Measure Reversion Convergence Time [Po09t], and optionally
       measure First Route Convergence Time [Po09t] and Route-Specific
       Convergence Times [Po09t], as DUT detects the link up event and
       converges IGP routes and some distribution of traffic over the
       restored ECMP member.

   Results
       The measured IGP Convergence time is influenced by Local link
       failure indication, Tree Build Time, and Hardware Update Time
       [Po09a].

4.9 Convergence Due to ECMP Member Remote Interface Failure

   Objective
       To obtain the IGP Route Convergence due to a remote interface
       failure event for an ECMP Member.

   Procedure
    1. Configure ECMP Set as shown in Figure 2, in which the links
       from R1 to R2 and R1 to R3 are members of an ECMP Set.
    2. Advertise matching IGP routes and topology from Tester to SUT
       to balance traffic to each ECMP member.
    3. Send offered load at measured Throughput with fixed packet size
       to destinations matching all IGP routes from Tester to SUT on
       Ingress Interface [Po09t].
    4. Verify traffic is routed over all members of ECMP Set.
    5. Remove link on Tester's Neighbor Interface to R2 or R3. This is
       the Convergence Event Trigger [Po09t] that produces the
       Convergence Event Instant [Po09t].
    6. Measure First Route Convergence Time [Po09t] as SUT detects the
       link down event and begins to converge IGP routes and traffic
       over the other ECMP members.
    7. Measure Full Convergence Time [Po09t] as SUT detects the link
       down event and converges all IGP routes and traffic over the
       other ECMP members. At the same time measure Out-of-Order
       Packets [Po06] and Duplicate Packets [Po06]. Optionally,
       Route-Specific Convergence Times [Po09t] MAY be measured.
    8. Stop offered load.
       Wait 30 seconds for queues to drain. Restart offered load.
    9. Restore link on Tester's Neighbor Interface to R2 or R3.
   10. Measure Reversion Convergence Time [Po09t], and optionally
       measure First Route Convergence Time [Po09t] and Route-Specific
       Convergence Times [Po09t], as SUT detects the link up event and
       converges IGP routes and some distribution of traffic over the
       restored ECMP member.

   Results
       The measured IGP Convergence time is influenced by Local link
       failure indication, Tree Build Time, and Hardware Update Time
       [Po09a].

4.10 Convergence Due to Parallel Link Interface Failure

   Objective
       To obtain the IGP Route Convergence due to a local link failure
       event for a Member of a Parallel Link. The links can be used
       for data Load Balancing.

   Procedure
    1. Configure Parallel Link as shown in Figure 4.
    2. Advertise matching IGP routes and topology from Tester to DUT
       on each Parallel Link member.
    3. Send offered load at measured Throughput with fixed packet size
       to destinations matching all IGP routes from Tester to DUT on
       Ingress Interface [Po09t].
    4. Verify traffic is routed over all members of Parallel Link.
    5. Remove link on Tester's Neighbor Interface [Po09t] connected to
       one of the DUT's Parallel Link member interfaces. This is the
       Convergence Event Trigger [Po09t] that produces the Convergence
       Event Instant [Po09t].
    6. Measure First Route Convergence Time [Po09t] as DUT detects the
       link down event and begins to converge IGP routes and traffic
       over the other Parallel Link members.
    7. Measure Full Convergence Time [Po09t] as DUT detects the link
       down event and converges all IGP routes and traffic over the
       other Parallel Link members. At the same time measure
       Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
       Optionally, Route-Specific Convergence Times [Po09t] MAY be
       measured.
    8. Stop offered load. Wait 30 seconds for queues to drain.
       Restart offered load.
    9. Restore link on Tester's Neighbor Interface connected to DUT's
       Parallel Link member interface.
   10. Measure Reversion Convergence Time [Po09t], and optionally
       measure First Route Convergence Time [Po09t] and Route-Specific
       Convergence Times [Po09t], as DUT detects the link up event and
       converges IGP routes and some distribution of traffic over the
       restored Parallel Link member.

   Results
       The measured IGP Convergence time is influenced by the Local
       link failure indication, Tree Build Time, and Hardware Update
       Time [Po09a].

5. IANA Considerations

   This document requires no IANA considerations.

6. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to production networks.
   This document attempts to formalize a common set of methodologies
   for benchmarking IGP convergence performance in a lab environment.

7. Acknowledgements

   Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David
   Ward, Kris Michielsen, Peter De Vriendt, and the BMWG for their
   contributions to this work.

8. References

8.1 Normative References

   [Br91]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, March 1991.

   [Br97]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", RFC 2119, March 1997.

   [Br99]  Bradner, S. and McQuaid, J., "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544, March 1999.
   [Ca90]  Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and
           Dual Environments", RFC 1195, December 1990.

   [Ma98]  Mandeville, R., "Benchmarking Terminology for LAN Switching
           Devices", RFC 2285, February 1998.

   [Mo98]  Moy, J., "OSPF Version 2", RFC 2328, April 1998.

   [Po06]  Poretsky, S., et al., "Terminology for Benchmarking
           Network-layer Traffic Control Mechanisms", RFC 4689,
           November 2006.

   [Po09a] Poretsky, S., "Considerations for Benchmarking Link-State
           IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-app-17, work in
           progress, March 2009.

   [Po09t] Poretsky, S. and Imhoff, B., "Benchmarking Terminology for
           Link-State IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-term-17, work in
           progress, March 2009.

8.2 Informative References

   None

9. Authors' Addresses

   Scott Poretsky
   Allot Communications
   67 South Bedford Street, Suite 400
   Burlington, MA 01803
   USA
   Phone: +1 508 309 2179
   Email: sporetsky@allot.com

   Brent Imhoff
   Juniper Networks
   1194 North Mathilda Ave
   Sunnyvale, CA 94089
   USA
   Phone: +1 314 378 2571
   Email: bimhoff@planetspork.com