Network Working Group                                        S. Poretsky
Internet Draft                                        NextPoint Networks
Expires: August 2008
Intended Status: Informational                              Brent Imhoff
                                                        Juniper Networks

                                                       February 25, 2008

                       Benchmarking Methodology for
               Link-State IGP Data Plane Route Convergence

Intellectual Property Rights (IPR) statement:
By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware
have been or will be disclosed, and any of which he or she becomes
aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as
Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Copyright Notice
Copyright (C) The IETF Trust (2008).
41 ABSTRACT 42 This document describes the methodology for benchmarking Interior 43 Gateway Protocol (IGP) Route Convergence. The methodology is to 44 be used for benchmarking IGP convergence time through externally 45 observable (black box) data plane measurements. The methodology 46 can be applied to any link-state IGP, such as ISIS and OSPF. 48 Link-State IGP Data Plane Route Convergence 50 Table of Contents 51 1. Introduction ...............................................2 52 2. Existing definitions .......................................2 53 3. Test Setup..................................................3 54 3.1 Test Topologies............................................3 55 3.2 Test Considerations........................................5 56 3.3 Reporting Format...........................................7 57 4. Test Cases..................................................8 58 4.1 Convergence Due to Local Interface Failure.................8 59 4.2 Convergence Due to Remote Interface Failure................9 60 4.3 Convergence Due to Local Administrative Shutdown...........10 61 4.4 Convergence Due to Layer 2 Session Loss....................10 62 4.5 Convergence Due to Loss of IGP Adjacency...................11 63 4.6 Convergence Due to Route Withdrawal........................12 64 4.7 Convergence Due to Cost Change.............................13 65 4.8 Convergence Due to ECMP Member Interface Failure...........13 66 4.9 Convergence Due to ECMP Member Remote Interface Failure....14 67 4.10 Convergence Due to Parallel Link Interface Failure........15 68 5. IANA Considerations.........................................16 69 6. Security Considerations.....................................16 70 7. Acknowledgements............................................16 71 8. References..................................................16 72 9. Author's Address............................................17 74 1. 
Introduction 75 This document describes the methodology for benchmarking Interior 76 Gateway Protocol (IGP) Route Convergence. The applicability of this 77 testing is described in [Po07a] and the new terminology that it 78 introduces is defined in [Po07t]. Service Providers use IGP 79 Convergence time as a key metric of router design and architecture. 80 Customers of Service Providers observe convergence time by packet 81 loss, so IGP Route Convergence is considered a Direct Measure of 82 Quality (DMOQ). The test cases in this document are black-box tests 83 that emulate the network events that cause route convergence, as 84 described in [Po07a]. The black-box test designs benchmark the data 85 plane and account for all of the factors contributing to convergence 86 time, as discussed in [Po07a]. The methodology (and terminology) for 87 benchmarking route convergence can be applied to any link-state IGP 88 such as ISIS [Ca90] and OSPF [Mo98] and others. These methodologies 89 apply to IPv4 and IPv6 traffic and IGPs. 91 2. Existing definitions 92 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 93 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 94 document are to be interpreted as described in BCP 14, RFC 2119 95 [Br97]. RFC 2119 defines the use of these key words to help make the 96 intent of standards track documents as clear as possible. While this 97 document uses these keywords, this document is not a standards track 98 document. 100 This document uses much of the terminology defined in [Po07t]. 101 This document uses existing terminology defined in other BMWG 102 work. 
Examples include, but are not limited to: 104 Link-State IGP Data Plane Route Convergence 106 Throughput [Ref.[Br91], section 3.17] 107 Device Under Test (DUT) [Ref.[Ma98], section 3.1.1] 108 System Under Test (SUT) [Ref.[Ma98], section 3.1.2] 109 Out-of-order Packet [Ref.[Po06], section 3.3.2] 110 Duplicate Packet [Ref.[Po06], section 3.3.3] 111 Packet Loss [Ref.[Po07t], Section 3.5] 113 This document adopts the definition format in Section 2 of RFC 1242 114 [Br91]. 116 3. Test Setup 118 3.1 Test Topologies 120 Figure 1 shows the test topology to measure IGP Route Convergence 121 due to local Convergence Events such as Link Failure, Layer 2 122 Session Failure, IGP Adjacency Failure, Route Withdrawal, and route 123 cost change. These test cases discussed in section 4 provide route 124 convergence times that account for the Event Detection time, SPF 125 Processing time, and FIB Update time. These times are measured 126 by observing packet loss in the data plane at the Tester. 128 Figure 2 shows the test topology to measure IGP Route Convergence 129 time due to remote changes in the network topology. These times 130 are measured by observing packet loss in the data plane at the 131 Tester. In this topology the three routers are considered a System 132 Under Test (SUT). A Remote Interface [Po07t] failure on router R2 133 MUST result in convergence of traffic to router R3. NOTE: All 134 routers in the SUT must be the same model and identically 135 configured. 137 Figure 3 shows the test topology to measure IGP Route Convergence 138 time with members of an Equal Cost Multipath (ECMP) Set. These 139 times are measured by observing packet loss in the data plane at 140 the Tester. In this topology, the DUT is configured with each 141 Egress interface as a member of an ECMP set and the Tester emulates 142 multiple next-hop routers (emulates one router for each member). 
      ---------        Ingress Interface        ---------
      |       |<--------------------------------|       |
      |       |                                 |       |
      |       |   Preferred Egress Interface    |       |
      |  DUT  |-------------------------------->| Tester|
      |       |                                 |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       |   Next-Best Egress Interface    |       |
      ---------                                 ---------

          Figure 1. Test Topology 1: IGP Convergence Test Topology
                    for Local Changes

                  -----                       ---------
                  |   |  Preferred            |       |
         -----    |R2 |---------------------->|       |
         |   |--->|   |  Egress Interface     |       |
         |   |    -----                       |       |
         |R1 |                                | Tester|
         |   |    -----                       |       |
         |   |--->|   |  Next-Best            |       |
         -----    |R3 |~~~~~~~~~~~~~~~~~~~~~~>|       |
           ^      |   |  Egress Interface     |       |
           |      -----                       ---------
           |                                      |
           |--------------------------------------
                      Ingress Interface

          Figure 2. Test Topology 2: IGP Convergence Test Topology
                    for Convergence Due to Remote Changes

      ---------        Ingress Interface        ---------
      |       |<--------------------------------|       |
      |       |                                 |       |
      |       |     ECMP Set Interface 1        |       |
      |  DUT  |-------------------------------->| Tester|
      |       |                .                |       |
      |       |                .                |       |
      |       |                .                |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       |     ECMP Set Interface N        |       |
      ---------                                 ---------

          Figure 3. Test Topology 3: IGP Convergence Test Topology
                    for ECMP Convergence

      ---------        Ingress Interface        ---------
      |       |<--------------------------------|       |
      |       |                                 |       |
      |       |    Parallel Link Interface 1    |       |
      |  DUT  |-------------------------------->| Tester|
      |       |                .                |       |
      |       |                .                |       |
      |       |                .                |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       |    Parallel Link Interface N    |       |
      ---------                                 ---------

          Figure 4. Test Topology 4: IGP Convergence Test Topology
                    for Parallel Link Convergence

Figure 4 shows the test topology to measure IGP Route Convergence time with members of a Parallel Link.
These times are measured by observing packet loss in the data plane at the Tester. In this topology, the DUT is configured with each Egress interface as a member of a Parallel Link and the Tester emulates the single next-hop router.

3.2 Test Considerations

3.2.1 IGP Selection
The test cases described in section 4 MAY be used for link-state IGPs, such as ISIS or OSPF. The Route Convergence test methodology is identical. The IGP adjacencies are established on the Preferred Egress Interface and Next-Best Egress Interface.

3.2.2 Routing Protocol Configuration
The obtained results for IGP Route Convergence may vary if other routing protocols are enabled and routes learned via those protocols are installed. IGP convergence times MUST be benchmarked without routes installed from other protocols.

3.2.3 IGP Route Scaling
The number of IGP routes will impact the measured IGP Route Convergence. To obtain results similar to those that would be observed in an operational network, it is RECOMMENDED that the number of installed routes and nodes closely approximates that of the network (e.g., thousands of routes with tens of nodes). The number of areas (for OSPF) and levels (for ISIS) can impact the benchmark results.

3.2.4 Timers
There are some timers that will impact the measured IGP Convergence time. Benchmarking metrics may be measured at any fixed values for these timers. It is RECOMMENDED that the following timers be configured to the minimum values listed:

   Timer                              Recommended Value
   -----                              -----------------
   Link Failure Indication Delay      <10 milliseconds
   IGP Hello Timer                    1 second
   IGP Dead-Interval                  3 seconds
   LSA Generation Delay               0
   LSA Flood Packet Pacing            0
   LSA Retransmission Packet Pacing   0
   SPF Delay                          0

3.2.5 Interface Types
All test cases in this methodology document may be executed with any interface type.
All interfaces MUST be of the same media and Throughput [Br91][Br99] for each test case. The type of media may dictate which test cases may be executed. This is because each interface type has a unique mechanism for detecting link failures, and the speed at which that mechanism operates will influence the measured results. Media and protocols MUST be configured for minimum failure detection delay to minimize the contribution to the measured Convergence time. For example, configure SONET with the minimum carrier-loss-delay. All interfaces SHOULD be configured as point-to-point.

3.2.6 Packet Sampling Interval
The Packet Sampling Interval [Po07t] value is the fastest measurable Rate-Derived Convergence Time [Po07t]. The RECOMMENDED value for the Packet Sampling Interval is 10 milliseconds. Rate-Derived Convergence Time is the preferred benchmark for IGP Route Convergence. This benchmark must always be reported when the Packet Sampling Interval is set <= 10 milliseconds on the test equipment. If the test equipment does not permit the Packet Sampling Interval to be set as low as 10 milliseconds, then both the Rate-Derived Convergence Time and Loss-Derived Convergence Time [Po07t] MUST be reported.

3.2.7 Offered Load
The offered load MUST be the Throughput of the device as defined in [Br91] and benchmarked in [Br99] at a fixed packet size. At least one packet per route, for all routes in the FIB, MUST be offered to the DUT within the Packet Sampling Interval. Packet size is measured in bytes and includes the IP header and payload. The packet size is selectable and MUST be recorded. The Forwarding Rate [Ma98] MUST be measured at the Preferred Egress Interface and the Next-Best Egress Interface. The duration of offered load MUST be greater than the convergence time.
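As a non-normative illustration of the sampling and offered-load arithmetic above, the sketch below (hypothetical helper names, assuming a constant offered load and one forwarding-rate sample per Packet Sampling Interval) checks the one-packet-per-route-per-interval requirement and approximates a Rate-Derived-style convergence time:

```python
# Illustrative sketch only; not part of the methodology, and the
# helper names are hypothetical.

def min_offered_load_pps(num_routes: int, sampling_interval_ms: int) -> int:
    # Minimum offered load (packets/s) so that each of num_routes
    # flows receives at least one packet within every sampling interval.
    return num_routes * 1000 // sampling_interval_ms

def rate_derived_convergence_ms(samples, offered_rate_pps, interval_ms):
    # samples: measured forwarding rate (packets/s), one reading per
    # Packet Sampling Interval.  Convergence time is approximated as
    # the span of intervals in which the forwarding rate is below the
    # offered rate.
    impaired = [i for i, r in enumerate(samples) if r < offered_rate_pps]
    if not impaired:
        return 0
    return (impaired[-1] - impaired[0] + 1) * interval_ms

# 10,000 routes sampled every 10 ms require an offered load of at
# least 1,000,000 packets/s.
print(min_offered_load_pps(10_000, 10))             # 1000000

# The forwarding rate dips below the offered load for three 10-ms
# sampling intervals.
rates = [1e6, 1e6, 4e5, 0.0, 7e5, 1e6, 1e6]
print(rate_derived_convergence_ms(rates, 1e6, 10))  # 30
```

This also shows why the 10-millisecond Packet Sampling Interval bounds the resolution of the Rate-Derived benchmark: impairment shorter than one interval is indistinguishable from a single impaired sample.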
The destination addresses for the offered load MUST be distributed such that all routes are matched and each route is offered an equal share of the total Offered Load. This requirement for the Offered Load to be distributed to match all destinations in the route table creates separate flows that are offered to the DUT. The capability of the Tester to measure packet loss for each individual flow (identified by the destination address matching a route entry), and the scale for the number of individual flows for which it can measure packet loss, should be considered when benchmarking Route-Specific Convergence [Po07t].

3.2.8 Selection of Convergence Time Benchmark Metrics
The methodologies in the section 4 test cases MAY be applied to benchmark Full Convergence and Route-Specific Convergence with the benchmarking metrics First Route Convergence Time, Loss-Derived Convergence Time, Rate-Derived Convergence Time, Reversion Convergence Time, and Route-Specific Convergence Times [Po07t]. When benchmarking Full Convergence, the Rate-Derived Convergence Time benchmarking metric SHOULD be measured. When benchmarking Route-Specific Convergence, the Route-Specific Convergence Time benchmarking metric SHOULD be measured. The First Route Convergence Time benchmarking metric MAY be measured when benchmarking either Full Convergence or Route-Specific Convergence.

3.3 Reporting Format
For each test case, it is recommended that the reporting table below be completed, and all time values SHOULD be reported with the resolution specified in [Po07t].
   Parameter                               Units
   ---------                               -----
   Test Case                               test case number
   Test Topology                           (1, 2, 3, or 4)
   IGP                                     (ISIS, OSPF, other)
   Interface Type                          (GigE, POS, ATM, other)
   Packet Size offered to DUT              bytes
   IGP Routes advertised to DUT            number of IGP routes
   Nodes in emulated network               number of nodes
   Packet Sampling Interval on Tester      milliseconds
   IGP Timer Values configured on DUT:
     Interface Failure Indication Delay    seconds
     IGP Hello Timer                       seconds
     IGP Dead-Interval                     seconds
     LSA Generation Delay                  seconds
     LSA Flood Packet Pacing               seconds
     LSA Retransmission Packet Pacing      seconds
     SPF Delay                             seconds
   Forwarding Metrics
     Total Packets Offered to DUT          number of Packets
     Total Packets Routed by DUT           number of Packets
     Convergence Packet Loss               number of Packets
     Out-of-Order Packets                  number of Packets
     Duplicate Packets                     number of Packets
   Convergence Benchmarks
     Full Convergence
       First Route Convergence Time        seconds
       Rate-Derived Convergence Time       seconds
       Loss-Derived Convergence Time       seconds
     Route-Specific Convergence
       Number of Routes Measured           number of flows
       Route-Specific Convergence Time[n]  array of seconds
       Minimum R-S Convergence Time        seconds
       Maximum R-S Convergence Time        seconds
       Median R-S Convergence Time         seconds
       Average R-S Convergence Time        seconds
     Reversion
       Reversion Convergence Time          seconds
       First Route Convergence Time        seconds
       Route-Specific Convergence
         Number of Routes Measured         number of flows
         Route-Specific Convergence Time[n]  array of seconds
         Minimum R-S Convergence Time      seconds
         Maximum R-S Convergence Time      seconds
         Median R-S Convergence Time       seconds
         Average R-S Convergence Time      seconds

4. Test Cases

It is RECOMMENDED that all applicable test cases be executed for best characterization of the DUT.
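As a non-normative aid for the test cases that follow, this sketch (hypothetical function names) shows how the Forwarding Metrics recorded in the Section 3.3 reporting format combine into the Loss-Derived Convergence Time benchmark, assuming a constant offered load rate:

```python
# Illustrative sketch only; not part of the methodology.

def convergence_packet_loss(offered: int, routed: int) -> int:
    # "Total Packets Offered to DUT" minus "Total Packets Routed by
    # DUT" from the reporting format gives Convergence Packet Loss.
    return offered - routed

def loss_derived_convergence_s(loss_packets: int, offered_rate_pps: int) -> float:
    # Each lost packet represents one offered-load transmission slot
    # during which the DUT had not yet converged, so dividing the
    # total loss by the constant offered rate yields a time.
    return loss_packets / offered_rate_pps

# Example: 60 s of offered load at 1,000,000 packets/s, of which
# 300,000 packets were lost during convergence.
loss = convergence_packet_loss(offered=60_000_000, routed=59_700_000)
print(loss)                                          # 300000
print(loss_derived_convergence_s(loss, 1_000_000))   # 0.3
```

Note the assumption of a constant offered rate: if the Tester paces traffic unevenly, loss no longer maps linearly to time and the Rate-Derived benchmark is the safer measure.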
The test cases follow a generic procedure tailored to the specific DUT configuration and Convergence Event [Po07t]. This generic procedure is as follows:

   1. Establish DUT configuration and install routes.
   2. Send offered load with traffic traversing Preferred Egress Interface [Po07t].
   3. Introduce Convergence Event to force traffic to Next-Best Egress Interface [Po07t].
   4. Measure First Route Convergence Time.
   5. Measure Loss-Derived Convergence Time, Rate-Derived Convergence Time, and optionally the Route-Specific Convergence Times.
   6. Wait the Sustained Convergence Validation Time to ensure there is no residual packet loss.
   7. Recover from Convergence Event.
   8. Measure Reversion Convergence Time, and optionally the First Route Convergence Time and Route-Specific Convergence Times.

4.1 Convergence Due to Local Interface Failure

Objective
To obtain the IGP Route Convergence due to a local link failure event at the DUT's Local Interface.

Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred Egress Interface [Po07t] and Next-Best Egress Interface [Po07t] using the topology shown in Figure 1. Set the cost of the routes so that the Preferred Egress Interface is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size to destinations matching all IGP routes from Tester to DUT on Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove link on DUT's Preferred Egress Interface.
   5. Measure First Route Convergence Time [Po07t] as DUT detects the link down event and begins to converge IGP routes and traffic over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the link down event and converges all IGP routes and traffic over the Next-Best Egress Interface.
Optionally, Route-Specific 412 Convergence Times [Po07t] MAY be measured. 413 7. Stop offered load. Wait 30 seconds for queues to drain. 414 Restart offered load. 415 8. Restore link on DUT's Preferred Egress Interface. 416 9. Measure Reversion Convergence Time [Po07t], and optionally 417 measure First Route Convergence Time [Po07t] and Route-Specific 418 Convergence Times [Po07t], as DUT detects the link up event and 419 converges all IGP routes and traffic back to the Preferred 420 Egress Interface. 422 Link-State IGP Data Plane Route Convergence 424 Results 425 The measured IGP Convergence time is influenced by the Local 426 link failure indication, SPF delay, SPF Hold time, SPF Execution 427 Time, Tree Build Time, and Hardware Update Time [Po07a]. 429 4.2 Convergence Due to Remote Interface Failure 431 Objective 432 To obtain the IGP Route Convergence due to a Remote Interface 433 Failure event. 435 Procedure 436 1. Advertise matching IGP routes from Tester to SUT on 437 Preferred Egress Interface [Po07t] and Next-Best Egress 438 Interface [Po07t] using the topology shown in Figure 2. 439 Set the cost of the routes so that the Preferred Egress 440 Interface is the preferred next-hop. 441 2. Send offered load at measured Throughput with fixed packet 442 size to destinations matching all IGP routes from Tester to 443 SUT on Ingress Interface [Po07t]. 444 3. Verify traffic is routed over Preferred Egress Interface. 445 4. Remove link on Tester's Neighbor Interface [Po07t] connected to 446 SUT's Preferred Egress Interface. 447 5. Measure First Route Convergence Time [Po07t] as SUT detects the 448 link down event and begins to converge IGP routes and traffic 449 over the Next-Best Egress Interface. 450 6. Measure Rate-Derived Convergence Time [Po07t] as SUT detects 451 the link down event and converges all IGP routes and traffic 452 over the Next-Best Egress Interface. Optionally, Route-Specific 453 Convergence Times [Po07t] MAY be measured. 454 7. 
Stop offered load. Wait 30 seconds for queues to drain. Restart offered load.
   8. Restore link on Tester's Neighbor Interface connected to SUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally measure First Route Convergence Time [Po07t] and Route-Specific Convergence Times [Po07t], as SUT detects the link up event and converges all IGP routes and traffic back to the Preferred Egress Interface.

Results
The measured IGP Convergence time is influenced by the link failure indication, LSA/LSP Flood Packet Pacing, LSA/LSP Retransmission Packet Pacing, LSA/LSP Generation time, SPF delay, SPF Hold time, SPF Execution Time, Tree Build Time, and Hardware Update Time [Po07a]. This test case may produce Stale Forwarding [Po07t] due to microloops, which may increase the measured convergence times.

4.3 Convergence Due to Local Administrative Shutdown

Objective
To obtain the IGP Route Convergence due to an administrative shutdown at the DUT's Local Interface.

Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred Egress Interface [Po07t] and Next-Best Egress Interface [Po07t] using the topology shown in Figure 1. Set the cost of the routes so that the Preferred Egress Interface is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size to destinations matching all IGP routes from Tester to DUT on Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Perform an administrative shutdown on the DUT's Preferred Egress Interface.
   5. Measure First Route Convergence Time [Po07t] as DUT detects the link down event and begins to converge IGP routes and traffic over the Next-Best Egress Interface.
   6.
Measure Rate-Derived Convergence Time [Po07t] as DUT converges all IGP routes and traffic over the Next-Best Egress Interface. Optionally, Route-Specific Convergence Times [Po07t] MAY be measured.
   7. Stop offered load. Wait 30 seconds for queues to drain. Restart offered load.
   8. Restore Preferred Egress Interface by administratively enabling the interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally measure First Route Convergence Time [Po07t] and Route-Specific Convergence Times [Po07t], as DUT detects the link up event and converges all IGP routes and traffic back to the Preferred Egress Interface.

Results
The measured IGP Convergence time is influenced by SPF delay, SPF Hold time, SPF Execution Time, Tree Build Time, and Hardware Update Time [Po07a].

4.4 Convergence Due to Layer 2 Session Loss

Objective
To obtain the IGP Route Convergence due to a Local Layer 2 session loss.

Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred Egress Interface [Po07t] and Next-Best Egress Interface [Po07t] using the topology shown in Figure 1. Set the cost of the routes so that the Preferred Egress Interface is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size to destinations matching all IGP routes from Tester to DUT on Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Tester removes Layer 2 session from DUT's Preferred Egress Interface [Po07t]. It is RECOMMENDED that this be achieved with messaging, but the method MAY vary with the Layer 2 protocol.
   5. Measure First Route Convergence Time [Po07t] as DUT detects the Layer 2 session down event and begins to converge IGP routes and traffic over the Next-Best Egress Interface.
   6.
Measure Rate-Derived Convergence Time [Po07t] as DUT detects the 538 Layer 2 session down event and converges all IGP routes and 539 traffic over the Next-Best Egress Interface. Optionally, 540 Route-Specific Convergence Times [Po07t] MAY be measured. 541 7. Stop offered load. Wait 30 seconds for queues to drain. 542 Restart offered load. 543 8. Restore Layer 2 session on DUT's Preferred Egress Interface. 544 9. Measure Reversion Convergence Time [Po07t], and optionally 545 measure First Route Convergence Time [Po07t] and Route-Specific 546 Convergence Times [Po07t], as DUT detects the session up event 547 and converges all IGP routes and traffic over the Preferred Egress 548 Interface. 550 Results 551 The measured IGP Convergence time is influenced by the Layer 2 552 failure indication, SPF delay, SPF Hold time, SPF Execution 553 Time, Tree Build Time, and Hardware Update Time [Po07a]. 555 4.5 Convergence Due to Loss of IGP Adjacency 556 Objective 557 To obtain the IGP Route Convergence due to loss of the IGP 558 Adjacency. 560 Procedure 561 1. Advertise matching IGP routes from Tester to DUT on 562 Preferred Egress Interface [Po07t] and Next-Best Egress Interface 563 [Po07t] using the topology shown in Figure 1. Set the cost of 564 the routes so that the Preferred Egress Interface is the 565 preferred next-hop. 566 2. Send offered load at measured Throughput with fixed packet 567 size to destinations matching all IGP routes from Tester to 568 DUT on Ingress Interface [Po07t]. 569 3. Verify traffic is routed over Preferred Egress Interface. 570 4. Remove IGP adjacency from Tester's Neighbor Interface [Po07t] 571 connected to Preferred Egress Interface. The Layer 2 session 572 MUST be maintained. 573 5. Measure First Route Convergence Time [Po07t] as DUT detects the 574 loss of IGP adjacency and begins to converge IGP routes and 575 traffic over the Next-Best Egress Interface. 576 6. 
Measure Rate-Derived Convergence Time [Po07t] as DUT detects the 577 IGP session failure event and converges all IGP routes and 578 traffic over the Next-Best Egress Interface. Optionally, 579 Route-Specific Convergence Times [Po07t] MAY be measured. 580 7. Stop offered load. Wait 30 seconds for queues to drain. 581 Restart offered load. 582 8. Restore IGP session on DUT's Preferred Egress Interface. 584 Link-State IGP Data Plane Route Convergence 586 9. Measure Reversion Convergence Time [Po07t], and optionally 587 measure First Route Convergence Time [Po07t] and Route-Specific 588 Convergence Times [Po07t], as DUT detects the session recovery 589 event and converges all IGP routes and traffic over the 590 Preferred Egress Interface. 592 Results 593 The measured IGP Convergence time is influenced by the IGP Hello 594 Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF 595 Execution Time, Tree Build Time, and Hardware Update Time [Po07a]. 597 4.6 Convergence Due to Route Withdrawal 599 Objective 600 To obtain the IGP Route Convergence due to Route Withdrawal. 602 Procedure 603 1. Advertise matching IGP routes from Tester to DUT on Preferred 604 Egress Interface [Po07t] and Next-Best Egress Interface [Po07t] 605 using the topology shown in Figure 1. Set the cost of the routes 606 so that the Preferred Egress Interface is the preferred next-hop. 607 It is RECOMMENDED that the IGP routes be IGP external routes 608 for which the Tester would be emulating a preferred and a 609 next-best Autonomous System Border Router (ASBR). 610 2. Send offered load at measured Throughput with fixed packet 611 size to destinations matching all IGP routes from Tester to 612 DUT on Ingress Interface [Po07t]. 613 3. Verify traffic is routed over Preferred Egress Interface. 614 4. Tester withdraws all IGP routes from DUT's Local Interface 615 on Preferred Egress Interface. The Tester records the time it 616 sends the withdrawal message(s). 
This MAY be achieved with 617 inclusion of a timestamp in the traffic payload. 618 5. Measure First Route Convergence Time [Po07t] as DUT detects the 619 route withdrawal event and begins to converge IGP routes and 620 traffic over the Next-Best Egress Interface. This is measured 621 from the time that the Tester sent the withdrawal message(s). 622 6. Measure Rate-Derived Convergence Time [Po07t] as DUT withdraws 623 routes and converges all IGP routes and traffic over the 624 Next-Best Egress Interface. Optionally, Route-Specific 625 Convergence Times [Po07t] MAY be measured. 626 7. Stop offered load. Wait 30 seconds for queues to drain. 627 Restart offered load. 628 8. Re-advertise IGP routes to DUT's Preferred Egress Interface. 629 9. Measure Reversion Convergence Time [Po07t], and optionally 630 measure First Route Convergence Time [Po07t] and Route-Specific 631 Convergence Times [Po07t], as DUT converges all IGP routes and 632 traffic over the Preferred Egress Interface. 634 Results 635 The measured IGP Convergence time is the SPF Processing and FIB 636 Update time as influenced by the SPF or route calculation delay, 637 Hold time, Execution Time, and Hardware Update Time [Po07a]. 639 Link-State IGP Data Plane Route Convergence 641 4.7 Convergence Due to Cost Change 642 Objective 643 To obtain the IGP Route Convergence due to route cost change. 645 Procedure 646 1. Advertise matching IGP routes from Tester to DUT on Preferred 647 Egress Interface [Po07t] and Next-Best Egress Interface [Po07t] 648 using the topology shown in Figure 1. Set the cost of the routes 649 so that the Preferred Egress Interface is the preferred next-hop. 650 2. Send offered load at measured Throughput with fixed packet 651 size to destinations matching all IGP routes from Tester to 652 DUT on Ingress Interface [Po07t]. 653 3. Verify traffic is routed over Preferred Egress Interface. 654 4. 
Tester increases cost for all IGP routes at DUT's Preferred 655 Egress Interface so that the Next-Best Egress Interface 656 has lower cost and becomes the preferred path. 657 5. Measure First Route Convergence Time [Po07t] as DUT detects the 658 cost change event and begins to converge IGP routes and traffic 659 over the Next-Best Egress Interface. 660 6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the 661 cost change event and converges all IGP routes and traffic 662 over the Next-Best Egress Interface. Optionally, Route-Specific 663 Convergence Times [Po07t] MAY be measured. 664 7. Stop offered load. Wait 30 seconds for queues to drain. 665 Restart offered load. 666 8. Re-advertise IGP routes to DUT's Preferred Egress Interface 667 with the original lower cost metric. 668 9. Measure Reversion Convergence Time [Po07t], and optionally 669 measure First Route Convergence Time [Po07t] and Route-Specific 670 Convergence Times [Po07t], as DUT converges all IGP routes and 671 traffic over the Preferred Egress Interface. 673 Results 674 There should be no measured packet loss for this case. 676 4.8 Convergence Due to ECMP Member Interface Failure 678 Objective 679 To obtain the IGP Route Convergence due to a local link failure event 680 of an ECMP Member. 682 Procedure 683 1. Configure ECMP Set as shown in Figure 3. 684 2. Advertise matching IGP routes from Tester to DUT on each ECMP 685 member. 686 3. Send offered load at measured Throughput with fixed packet size to 687 destinations matching all IGP routes from Tester to DUT on Ingress 688 Interface [Po07t]. 689 4. Verify traffic is routed over all members of ECMP Set. 690 5. Remove link on Tester's Neighbor Interface [Po07t] connected to 691 one of the DUT's ECMP member interfaces. 695 6. Measure First Route Convergence Time [Po07t] as DUT detects the 696 link down event and begins to converge IGP routes and traffic 697 over the other ECMP members. 698 7.
Measure Rate-Derived Convergence Time [Po07t] as DUT detects 699 the link down event and converges all IGP routes and traffic 700 over the other ECMP members. At the same time measure 701 Out-of-Order Packets [Po06] and Duplicate Packets [Po06]. 702 Optionally, Route-Specific Convergence Times [Po07t] MAY be 703 measured. 704 8. Stop offered load. Wait 30 seconds for queues to drain. 705 Restart offered load. 706 9. Restore link on Tester's Neighbor Interface connected to 707 DUT's ECMP member interface. 708 10. Measure Reversion Convergence Time [Po07t], and optionally 709 measure First Route Convergence Time [Po07t] and Route-Specific 710 Convergence Times [Po07t], as DUT detects the link up event and 711 converges IGP routes and some distribution of traffic over the 712 restored ECMP member. 714 Results 715 The measured IGP Convergence time is influenced by Local link 716 failure indication, Tree Build Time, and Hardware Update Time 717 [Po07a]. 719 4.9 Convergence Due to ECMP Member Remote Interface Failure 721 Objective 722 To obtain the IGP Route Convergence due to a remote interface 723 failure event for an ECMP Member. 725 Procedure 726 1. Configure ECMP Set as shown in Figure 2 in which the links 727 from R1 to R2 and R1 to R3 are members of an ECMP Set. 728 2. Advertise matching IGP routes from Tester to SUT to balance 729 traffic to each ECMP member. 730 3. Send offered load at measured Throughput with fixed packet 731 size to destinations matching all IGP routes from Tester to 732 SUT on Ingress Interface [Po07t]. 733 4. Verify traffic is routed over all members of ECMP Set. 734 5. Remove link on Tester's Neighbor Interface to R2 or R3. 735 6. Measure First Route Convergence Time [Po07t] as SUT detects 736 the link down event and begins to converge IGP routes and 737 traffic over the other ECMP members. 738 7. 
Measure Rate-Derived Convergence Time [Po07t] as SUT detects 739 the link down event and converges all IGP routes and traffic 740 over the other ECMP members. At the same time measure 741 Out-of-Order Packets [Po06] and Duplicate Packets [Po06]. 742 Optionally, Route-Specific Convergence Times [Po07t] MAY be 743 measured. 744 8. Stop offered load. Wait 30 seconds for queues to drain. 745 Restart offered load. 746 9. Restore link on Tester's Neighbor Interface to R2 or R3. 750 10. Measure Reversion Convergence Time [Po07t], and optionally 751 measure First Route Convergence Time [Po07t] and 752 Route-Specific Convergence Times [Po07t], as SUT detects 753 the link up event and converges IGP routes and some 754 distribution of traffic over the restored ECMP member. 756 Results 757 The measured IGP Convergence time is influenced by Local link 758 failure indication, Tree Build Time, and Hardware Update Time 759 [Po07a]. 761 4.10 Convergence Due to Parallel Link Interface Failure 763 Objective 764 To obtain the IGP Route Convergence due to a local link failure 765 event for a Member of a Parallel Link. The links can be used 766 for data Load Balancing. 768 Procedure 769 1. Configure Parallel Link as shown in Figure 4. 770 2. Advertise matching IGP routes from Tester to DUT on 771 each Parallel Link member. 772 3. Send offered load at measured Throughput with fixed packet 773 size to destinations matching all IGP routes from Tester to 774 DUT on Ingress Interface [Po07t]. 775 4. Verify traffic is routed over all members of Parallel Link. 776 5. Remove link on Tester's Neighbor Interface [Po07t] connected to 777 one of the DUT's Parallel Link member interfaces. 778 6. Measure First Route Convergence Time [Po07t] as DUT detects the 779 link down event and begins to converge IGP routes and traffic 780 over the other Parallel Link members. 781 7.
Measure Rate-Derived Convergence Time [Po07t] as DUT detects the 782 link down event and converges all IGP routes and traffic over 783 the other Parallel Link members. At the same time measure 784 Out-of-Order Packets [Po06] and Duplicate Packets [Po06]. 785 Optionally, Route-Specific Convergence Times [Po07t] MAY be 786 measured. 787 8. Stop offered load. Wait 30 seconds for queues to drain. 788 Restart offered load. 789 9. Restore link on Tester's Neighbor Interface connected to 790 DUT's Parallel Link member interface. 791 10. Measure Reversion Convergence Time [Po07t], and optionally 792 measure First Route Convergence Time [Po07t] and 793 Route-Specific Convergence Times [Po07t], as DUT 794 detects the link up event and converges IGP routes and some 795 distribution of traffic over the restored Parallel Link member. 797 Results 798 The measured IGP Convergence time is influenced by the Local 799 link failure indication, Tree Build Time, and Hardware Update 800 Time [Po07a]. 804 5. IANA Considerations 806 This document requires no IANA considerations. 808 6. Security Considerations 809 Documents of this type do not directly affect the security of 810 the Internet or corporate networks as long as benchmarking 811 is not performed on devices or systems connected to operating 812 networks. 814 7. Acknowledgements 815 Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward, 816 Kris Michielsen and the BMWG for their contributions to this work. 818 8. References 819 8.1 Normative References 821 [Br91] Bradner, S., "Benchmarking Terminology for Network 822 Interconnection Devices", RFC 1242, IETF, March 1991. 824 [Br97] Bradner, S., "Key words for use in RFCs to Indicate 825 Requirement Levels", RFC 2119, IETF, March 1997. 827 [Br99] Bradner, S. and McQuaid, J., "Benchmarking Methodology for 828 Network Interconnect Devices", RFC 2544, IETF, March 1999.
830 [Ca90] Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and Dual 831 Environments", RFC 1195, IETF, December 1990. 833 [Ma98] Mandeville, R., "Benchmarking Terminology for LAN 834 Switching Devices", RFC 2285, IETF, February 1998. 836 [Mo98] Moy, J., "OSPF Version 2", RFC 2328, IETF, April 1998. 838 [Po06] Poretsky, S., et al., "Terminology for Benchmarking 839 Network-layer Traffic Control Mechanisms", RFC 4689, IETF, 840 November 2006. 842 [Po07a] Poretsky, S., "Considerations for Benchmarking Link-State 843 IGP Convergence", draft-ietf-bmwg-igp-dataplane-conv-app-15, 844 work in progress, February 2008. 846 [Po07t] Poretsky, S. and Imhoff, B., "Benchmarking Terminology for 847 Link-State IGP Convergence", 848 draft-ietf-bmwg-igp-dataplane-conv-term-15, work in 849 progress, February 2008. 851 8.2 Informative References 852 None 855 9. Authors' Addresses 857 Scott Poretsky 858 NextPoint Networks 859 3 Federal Street 860 Billerica, MA 01821 861 USA 862 Phone: + 1 508 439 9008 863 EMail: sporetsky@nextpointnetworks.com 865 Brent Imhoff 866 Juniper Networks 867 1194 North Mathilda Ave 868 Sunnyvale, CA 94089 869 USA 870 Phone: + 1 314 378 2571 871 EMail: bimhoff@planetspork.com 873 Full Copyright Statement 875 Copyright (C) The IETF Trust (2008). 877 This document is subject to the rights, licenses and restrictions 878 contained in BCP 78, and except as set forth therein, the authors 879 retain all their rights. 881 This document and the information contained herein are provided 882 on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE 883 REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE 884 IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL 885 WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY 886 WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE 887 ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS 888 FOR A PARTICULAR PURPOSE.
890 Intellectual Property 892 The IETF takes no position regarding the validity or scope of any 893 Intellectual Property Rights or other rights that might be claimed to 894 pertain to the implementation or use of the technology described in 895 this document or the extent to which any license under such rights 896 might or might not be available; nor does it represent that it has 897 made any independent effort to identify any such rights. Information 898 on the procedures with respect to rights in RFC documents can be 899 found in BCP 78 and BCP 79. 901 Copies of IPR disclosures made to the IETF Secretariat and any 902 assurances of licenses to be made available, or the result of an 903 attempt made to obtain a general license or permission for the use of 904 such proprietary rights by implementers or users of this 905 specification can be obtained from the IETF on-line IPR repository at 906 http://www.ietf.org/ipr. 910 The IETF invites any interested party to bring to its attention any 911 copyrights, patents or patent applications, or other proprietary 912 rights that may cover technology that may be required to implement 913 this standard. Please address the information to the IETF at 914 ietf-ipr@ietf.org. 916 Acknowledgement 917 Funding for the RFC Editor function is currently provided by the 918 Internet Society.