idnits 2.17.1

draft-ietf-bmwg-igp-dataplane-conv-meth-14.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

  ** It looks like you're using RFC 3978 boilerplate. You should update this
     to the boilerplate described in the IETF Trust License Policy document
     (see https://trustee.ietf.org/license-info), which is required now.

  -- Found old boilerplate from RFC 3978, Section 5.1 on line 22.
  -- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on
     line 837.
  -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 848.
  -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 855.
  -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 861.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------
  No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------
  No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust Copyright Line does not match the
     current year

  == Line 321 has weird spacing: '... Pacing  seco...'
  == Line 325 has weird spacing: '...ce Time  sec...'
  == Line 326 has weird spacing: '...ce Time  sec...'

  -- The document seems to lack a disclaimer for pre-RFC5378 work, but may
     have content which was first submitted before 10 November 2008. If you
     have contacted all the original authors and they are all willing to
     grant the BCP78 rights to the IETF Trust, then this is fine, and you can
     ignore this comment. If not, you may need to add the pre-RFC5378
     disclaimer. (See the Legal Provisions document at
     https://trustee.ietf.org/license-info for more information.)
  -- The document date (May 2008) is 5824 days in the past. Is this
     intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Outdated reference: A later version (-17) exists of
     draft-ietf-bmwg-igp-dataplane-conv-app-14

  == Outdated reference: A later version (-23) exists of
     draft-ietf-bmwg-igp-dataplane-conv-term-14

  Summary: 1 error (**), 0 flaws (~~), 6 warnings (==), 7 comments (--).

  Run idnits with the --verbose option for more detailed information about
  the items above.

--------------------------------------------------------------------------------

1 Network Working Group
2 INTERNET-DRAFT
3 Expires in: May 2008
4 Intended Status: Informational

5 Scott Poretsky
6 Reef Point Systems

8 Brent Imhoff
9 Juniper Networks

11 November 2007

13 Benchmarking Methodology for
14 Link-State IGP Data Plane Route Convergence

16

18 Intellectual Property Rights (IPR) statement:
19 By submitting this Internet-Draft, each author represents that any
20 applicable patent or other IPR claims of which he or she is aware
21 have been or will be disclosed, and any of which he or she becomes
22 aware will be disclosed, in accordance with Section 6 of BCP 79.

24 Status of this Memo

26 Internet-Drafts are working documents of the Internet Engineering
27 Task Force (IETF), its areas, and its working groups.  Note that
28 other groups may also distribute working documents as
29 Internet-Drafts.

31 Internet-Drafts are draft documents valid for a maximum of six months
32 and may be updated, replaced, or obsoleted by other documents at any
33 time.  It is inappropriate to use Internet-Drafts as reference
34 material or to cite them other than as "work in progress."

36 The list of current Internet-Drafts can be accessed at
37 http://www.ietf.org/ietf/1id-abstracts.txt.

39 The list of Internet-Draft Shadow Directories can be accessed at
40 http://www.ietf.org/shadow.html.
42 Copyright Notice
43 Copyright (C) The IETF Trust (2007).

45 ABSTRACT

46 This document describes the methodology for benchmarking Interior
47 Gateway Protocol (IGP) Route Convergence.  The methodology is to
48 be used for benchmarking IGP convergence time through externally
49 observable (black box) data plane measurements.  The methodology
50 can be applied to any link-state IGP, such as ISIS and OSPF.

52 IGP Data Plane Route Convergence

54 Table of Contents
55 1. Introduction ...............................................2
56 2. Existing definitions .......................................2
57 3. Test Setup..................................................3
58 3.1 Test Topologies............................................3
59 3.2 Test Considerations........................................5
60 3.3 Reporting Format...........................................7
61 4. Test Cases..................................................7
62 4.1 Convergence Due to Link Failure............................8
63 4.1.1 Convergence Due to Local Interface Failure...............8
64 4.1.2 Convergence Due to Neighbor Interface Failure............8
65 4.1.3 Convergence Due to Remote Interface Failure..............9
66 4.2 Convergence Due to Local Administrative Shutdown...........10
67 4.3 Convergence Due to Layer 2 Session Failure.................11
68 4.4 Convergence Due to IGP Adjacency Failure...................11
69 4.5 Convergence Due to Route Withdrawal........................12
70 4.6 Convergence Due to Cost Change.............................13
71 4.7 Convergence Due to ECMP Member Interface Failure...........13
72 4.8 Convergence Due to ECMP Member Remote Interface Failure....14
73 4.9 Convergence Due to Parallel Link Interface Failure.........15
74 5. IANA Considerations.........................................15
75 6. Security Considerations.....................................15
76 7. Acknowledgements............................................15
77 8.
Normative References........................................16
78 9. Author's Address............................................16

80 1. Introduction

81 This document describes the methodology for benchmarking Interior
82 Gateway Protocol (IGP) Route Convergence.  The applicability of this
83 testing is described in [Po07a] and the new terminology that it
84 introduces is defined in [Po07t].  Service Providers use IGP
85 Convergence time as a key metric of router design and architecture.
86 Customers of Service Providers observe convergence time by packet
87 loss, so IGP Route Convergence is considered a Direct Measure of
88 Quality (DMOQ).  The test cases in this document are black-box tests
89 that emulate the network events that cause route convergence, as
90 described in [Po07a].  The black-box test designs benchmark the data
91 plane and account for all of the factors contributing to convergence
92 time, as discussed in [Po07a].  The methodology (and terminology) for
93 benchmarking route convergence can be applied to any link-state IGP
94 such as ISIS [Ca90] and OSPF [Mo98] and other IGPs such as RIP.
95 These methodologies apply to IPv4 and IPv6 traffic and IGPs.

97 2. Existing definitions

98 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
99 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
100 document are to be interpreted as described in BCP 14, RFC 2119
101 [Br97].  RFC 2119 defines the use of these key words to help make the
102 intent of standards track documents as clear as possible.  While this
103 document uses these keywords, this document is not a standards track
104 document.

108 This document uses much of the terminology defined in [Po07t].
109 This document uses existing terminology defined in other BMWG
110 work.
Examples include, but are not limited to:

112 Throughput [Ref.[Br91], section 3.17]
113 Device Under Test (DUT) [Ref.[Ma98], section 3.1.1]
114 System Under Test (SUT) [Ref.[Ma98], section 3.1.2]
115 Out-of-order Packet [Ref.[Po06], section 3.3.2]
116 Duplicate Packet [Ref.[Po06], section 3.3.3]
117 Packet Loss [Ref.[Po07t], Section 3.5]

119 This document adopts the definition format in Section 2 of RFC 1242
120 [Br91].

122 3. Test Setup

124 3.1 Test Topologies

126 Figure 1 shows the test topology to measure IGP Route Convergence
127 due to local Convergence Events such as Link Failure, Layer 2
128 Session Failure, IGP Adjacency Failure, Route Withdrawal, and route
129 cost change.  The test cases discussed in section 4 provide route
130 convergence times that account for the Event Detection time, SPF
131 Processing time, and FIB Update time.  These times are measured
132 by observing packet loss in the data plane at the Tester.

134 Figure 2 shows the test topology to measure IGP Route Convergence
135 time due to remote changes in the network topology.  These times
136 are measured by observing packet loss in the data plane at the
137 Tester.  In this topology the three routers are considered a System
138 Under Test (SUT).  A Remote Interface [Po07t] failure on router R2
139 MUST result in convergence of traffic to router R3.  NOTE: All
140 routers in the SUT must be the same model and identically
141 configured.

143 Figure 3 shows the test topology to measure IGP Route Convergence
144 time with members of an Equal Cost Multipath (ECMP) Set.  These
145 times are measured by observing packet loss in the data plane at
146 the Tester.  In this topology, the DUT is configured with each
147 Egress interface as a member of an ECMP set and the Tester emulates
148 multiple next-hop routers (emulates one router for each member).

150 Figure 4 shows the test topology to measure IGP Route Convergence
151 time with members of a Parallel Link.
These times are measured by
152 observing packet loss in the data plane at the Tester.  In this
153 topology, the DUT is configured with each Egress interface as a
154 member of a Parallel Link and the Tester emulates the single
155 next-hop router.

159    ---------      Ingress Interface      ---------
160    |       |<--------------------------------|       |
161    |       |                                 |       |
162    |       |   Preferred Egress Interface    |       |
163    |  DUT  |-------------------------------->| Tester|
164    |       |                                 |       |
165    |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
166    |       |   Next-Best Egress Interface    |       |
167    ---------                                 ---------

169    Figure 1. Test Topology 1: IGP Convergence Test Topology
170              for Local Changes

172            -----                       ---------
173            |   |      Preferred        |       |
174    -----   |R2 |---------------------->|       |
175    |   |-->|   |   Egress Interface    |       |
176    |   |   -----                       |       |
177    |R1 |                               |Tester |
178    |   |   -----                       |       |
179    |   |-->|   |      Next-Best        |       |
180    -----   |R3 |~~~~~~~~~~~~~~~~~~~~~~>|       |
181      ^     |   |   Egress Interface    |       |
182      |     -----                       ---------
183      |                                     |
184      |--------------------------------------
185               Ingress Interface

187    Figure 2. Test Topology 2: IGP Convergence Test Topology
188              for Convergence Due to Remote Changes

190    ---------      Ingress Interface      ---------
191    |       |<--------------------------------|       |
192    |       |                                 |       |
193    |       |      ECMP Set Interface 1       |       |
194    |  DUT  |-------------------------------->| Tester|
195    |       |                .                |       |
196    |       |                .                |       |
197    |       |                .                |       |
198    |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
199    |       |      ECMP Set Interface N       |       |
200    ---------                                 ---------

202    Figure 3. Test Topology 3: IGP Convergence Test Topology
203              for ECMP Convergence

206    ---------      Ingress Interface      ---------
207    |       |<--------------------------------|       |
208    |       |                                 |       |
209    |       |   Parallel Link Interface 1     |       |
210    |  DUT  |-------------------------------->| Tester|
211    |       |                .                |       |
212    |       |                .                |       |
213    |       |                .                |       |
214    |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
215    |       |   Parallel Link Interface N     |       |
216    ---------                                 ---------

218    Figure 4.
Test Topology 4: IGP Convergence Test Topology
219              for Parallel Link Convergence

221 3.2 Test Considerations

222 3.2.1 IGP Selection

223 The test cases described in section 4 can be used for ISIS or
224 OSPF.  The Route Convergence test methodology for both is
225 identical.  The IGP adjacencies are established on the Preferred
226 Egress Interface and Next-Best Egress Interface.

228 3.2.2 Routing Protocol Configuration

229 The obtained results for IGP Route Convergence may vary if
230 other routing protocols are enabled and routes learned via those
231 protocols are installed.  IGP convergence times MUST be benchmarked
232 without routes installed from other protocols.

234 3.2.3 IGP Route Scaling

235 The number of IGP routes will impact the measured IGP Route
236 Convergence.  To obtain results similar to those that would be
237 observed in an operational network, it is recommended that the
238 number of installed routes and nodes closely approximates that
239 of the network (e.g. thousands of routes with tens of nodes).
240 The number of areas (for OSPF) and levels (for ISIS) can impact
241 the benchmark results.

243 3.2.4 Timers

244 There are some timers that will impact the measured IGP Convergence
245 time.  Benchmarking metrics may be measured at any fixed values for
246 these timers.  It is RECOMMENDED that the following timers be
247 configured to the minimum values listed:

249 Timer                               Recommended Value
250 -----                               -----------------
251 Link Failure Indication Delay       < 10 milliseconds
252 IGP Hello Timer                     1 second
253 IGP Dead-Interval                   3 seconds
254 LSA Generation Delay                0
255 LSA Flood Packet Pacing             0
256 LSA Retransmission Packet Pacing    0
257 SPF Delay                           0

260 3.2.5 Convergence Time Metrics

261 The Packet Sampling Interval [Po07t] value is the fastest
262 measurable convergence time.  The RECOMMENDED value for the
263 Packet Sampling Interval is 10 milliseconds.
Rate-Derived
264 Convergence Time [Po07t] is the preferred benchmark for IGP
265 Route Convergence.  This benchmark MUST always be reported
266 when the Packet Sampling Interval is set <= 10 milliseconds
267 on the test equipment.  If the test equipment does not permit
268 the Packet Sampling Interval to be set as low as 10
269 milliseconds, then both the Rate-Derived Convergence Time and
270 Loss-Derived Convergence Time [Po07t] MUST be reported.

272 3.2.6 Interface Types

273 All test cases in this methodology document may be executed with
274 any interface type.  All interfaces MUST be the same media and
275 Throughput [Br91][Br99] for each test case.  The type of media
276 may dictate which test cases may be executed.  This is because
277 each interface type has a unique mechanism for detecting link
278 failures, and the speed at which that mechanism operates will
279 influence the measured results.  Media and protocols MUST be
280 configured for minimum failure detection delay to minimize the
281 contribution to the measured Convergence time.  For example,
282 configure SONET with the minimum carrier-loss-delay.  All
283 interfaces SHOULD be configured as point-to-point.

285 3.2.7 Offered Load

286 The offered load MUST be the Throughput of the device as defined
287 in [Br91] and benchmarked in [Br99] at a fixed packet size.
288 Packet size is measured in bytes and includes the IP header and
289 payload.  The packet size is selectable and MUST be recorded.
290 The Forwarding Rate [Ma98] MUST be measured at the Preferred Egress
291 Interface and the Next-Best Egress Interface.  The duration of
292 offered load MUST be greater than the convergence time.  The
293 destination addresses for the offered load MUST be distributed
294 such that all routes are matched.  This enables Full Convergence
295 [Po07t] to be observed.
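The arithmetic behind the two convergence benchmarks can be sketched as follows.  This is a non-normative illustration: the function names and the per-interval sampling model are assumptions of this sketch, and [Po07t] remains the authoritative definition of both metrics.

```python
# Non-normative sketch of the convergence time benchmarks described in
# Section 3.2.5, assuming the Tester exports a per-interval forwarding
# rate.  Names here are illustrative, not taken from [Po07t].

def loss_derived_convergence_time(packets_lost, offered_load_pps):
    """Loss-Derived Convergence Time: total packet loss divided by
    the offered load rate (packets per second)."""
    return packets_lost / offered_load_pps

def rate_derived_convergence_time(forwarding_rate_samples,
                                  offered_load_pps,
                                  sampling_interval=0.010):
    """Rate-Derived Convergence Time: the span of Packet Sampling
    Intervals (10 ms RECOMMENDED) during which the measured
    forwarding rate is below the offered load."""
    degraded = [i for i, rate in enumerate(forwarding_rate_samples)
                if rate < offered_load_pps]
    if not degraded:
        return 0.0
    return (degraded[-1] - degraded[0] + 1) * sampling_interval

# Example: 100,000 pps offered load, 50,000 packets lost in total,
# and three consecutive degraded 10 ms sampling intervals.
loss_time = loss_derived_convergence_time(50_000, 100_000)   # 0.5 s
rate_time = rate_derived_convergence_time(
    [100_000, 60_000, 0, 40_000, 100_000], 100_000)          # ~0.03 s
```

Note how the example shows why the 10 ms Packet Sampling Interval matters: the rate-derived result can never be finer than one sampling interval, which is why the loss-derived benchmark must also be reported when the interval cannot be set that low.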
299 3.3 Reporting Format

300 For each test case, it is recommended that the reporting table below
301 is completed and all time values SHOULD be reported with resolution
302 as specified in [Po07t].

304 Parameter                               Units
305 ---------                               -----
306 IGP                                     (ISIS or OSPF)
307 Interface Type                          (GigE, POS, ATM, etc.)
308 Test Topology                           (1, 2, 3, or 4)
309 Packet Size offered to DUT              bytes
310 Total Packets Offered to DUT            number of Packets
311 Total Packets Routed by DUT             number of Packets
312 IGP Routes advertised to DUT            number of IGP routes
313 Nodes in emulated network               number of nodes
314 Packet Sampling Interval on Tester      milliseconds
315 IGP Timer Values configured on DUT
316   Interface Failure Indication Delay    seconds
317   IGP Hello Timer                       seconds
318   IGP Dead-Interval                     seconds
319   LSA Generation Delay                  seconds
320   LSA Flood Packet Pacing               seconds
321   LSA Retransmission Packet Pacing      seconds
322   SPF Delay                             seconds
323 Benchmarks
324   First Prefix Convergence Time         seconds
325   Rate-Derived Convergence Time         seconds
326   Loss-Derived Convergence Time         seconds
327   Reversion Convergence Time            seconds

329 4. Test Cases

331 The test cases follow a generic procedure tailored to the specific
332 DUT configuration and Convergence Event.  This generic procedure is
333 as follows:

335 1. Establish DUT configuration and install routes.
336 2. Send offered load with traffic traversing Preferred Egress
337    Interface [Po07t].
338 3. Introduce Convergence Event to force traffic to Next-Best
339    Egress Interface [Po07t].
340 4. Measure First Prefix Convergence Time.
341 5. Measure Rate-Derived Convergence Time.
342 6. Recover from Convergence Event.
343 7. Measure Reversion Convergence Time.

346 4.1 Convergence Due to Link Failure

348 4.1.1 Convergence Due to Local Interface Failure

349 Objective
350 To obtain the IGP Route Convergence due to a local link failure event
351 at the DUT's Local Interface.

353 Procedure
354 1.
Advertise matching IGP routes from Tester to DUT on
355    Preferred Egress Interface [Po07t] and Next-Best Egress Interface
356    [Po07t] using the topology shown in Figure 1.  Set the cost of
357    the routes so that the Preferred Egress Interface is the
358    preferred next-hop.
359 2. Send offered load at measured Throughput with fixed packet
360    size to destinations matching all IGP routes from Tester to
361    DUT on Ingress Interface [Po07t].
362 3. Verify traffic routed over Preferred Egress Interface.
363 4. Remove link on DUT's Preferred Egress Interface.
364 5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
365    link down event and begins to converge IGP routes and traffic
366    over the Next-Best Egress Interface.
367 6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
368    link down event and converges all IGP routes and traffic over
369    the Next-Best Egress Interface.
370 7. Stop offered load.  Wait 30 seconds for queues to drain.
371    Restart offered load.
372 8. Restore link on DUT's Preferred Egress Interface.
373 9. Measure Reversion Convergence Time [Po07t] as DUT detects the
374    link up event and converges all IGP routes and traffic back
375    to the Preferred Egress Interface.

377 Results
378 The measured IGP Convergence time is influenced by the Local
379 link failure indication, SPF delay, SPF Hold time, SPF Execution
380 Time, Tree Build Time, and Hardware Update Time [Po07a].

382 4.1.2 Convergence Due to Neighbor Interface Failure

383 Objective
384 To obtain the IGP Route Convergence due to a local link
385 failure event at the Tester's Neighbor Interface.

387 Procedure
388 1. Advertise matching IGP routes from Tester to DUT on
389    Preferred Egress Interface [Po07t] and Next-Best Egress Interface
390    [Po07t] using the topology shown in Figure 1.  Set the cost of
391    the routes so that the Preferred Egress Interface is the
392    preferred next-hop.
393 2.
Send offered load at measured Throughput with fixed packet
394    size to destinations matching all IGP routes from Tester to
395    DUT on Ingress Interface [Po07t].

399 3. Verify traffic routed over Preferred Egress Interface.
400 4. Remove link on Tester's Neighbor Interface [Po07t] connected to
401    DUT's Preferred Egress Interface.
402 5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
403    link down event and begins to converge IGP routes and traffic
404    over the Next-Best Egress Interface.
405 6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
406    link down event and converges all IGP routes and traffic over
407    the Next-Best Egress Interface.
408 7. Stop offered load.  Wait 30 seconds for queues to drain.
409    Restart offered load.
410 8. Restore link on Tester's Neighbor Interface connected to
411    DUT's Preferred Egress Interface.
412 9. Measure Reversion Convergence Time [Po07t] as DUT detects the
413    link up event and converges all IGP routes and traffic back
414    to the Preferred Egress Interface.

416 Results
417 The measured IGP Convergence time is influenced by the Local
418 link failure indication, SPF delay, SPF Hold time, SPF Execution
419 Time, Tree Build Time, and Hardware Update Time [Po07a].

421 4.1.3 Convergence Due to Remote Interface Failure

422 Objective
423 To obtain the IGP Route Convergence due to a Remote Interface
424 Failure event.

426 Procedure
427 1. Advertise matching IGP routes from Tester to SUT on
428    Preferred Egress Interface [Po07t] and Next-Best Egress
429    Interface [Po07t] using the topology shown in Figure 2.
430    Set the cost of the routes so that the Preferred Egress
431    Interface is the preferred next-hop.
432 2. Send offered load at measured Throughput with fixed packet
433    size to destinations matching all IGP routes from Tester to
434    SUT on Ingress Interface [Po07t].
435 3. Verify traffic is routed over Preferred Egress Interface.
436 4.
Remove link on Tester's Neighbor Interface [Po07t] connected to
437    SUT's Preferred Egress Interface.
438 5. Measure First Prefix Convergence Time [Po07t] as SUT detects the
439    link down event and begins to converge IGP routes and traffic
440    over the Next-Best Egress Interface.
441 6. Measure Rate-Derived Convergence Time [Po07t] as SUT detects
442    the link down event and converges all IGP routes and traffic
443    over the Next-Best Egress Interface.
444 7. Stop offered load.  Wait 30 seconds for queues to drain.
445    Restart offered load.
446 8. Restore link on Tester's Neighbor Interface connected to
447    SUT's Preferred Egress Interface.
448 9. Measure Reversion Convergence Time [Po07t] as SUT detects the
449    link up event and converges all IGP routes and traffic back
450    to the Preferred Egress Interface.

454 Results
455 The measured IGP Convergence time is influenced by the link failure
456 indication, LSA/LSP Flood Packet Pacing, LSA/LSP Retransmission
457 Packet Pacing, LSA/LSP Generation time, SPF delay, SPF Hold time,
458 SPF Execution Time, Tree Build Time, and Hardware Update Time
459 [Po07a].  This test case may produce Stale Forwarding [Po07t] due to
460 microloops, which may increase the Rate-Derived Convergence Time.

462 4.2 Convergence Due to Local Administrative Shutdown

463 Objective
464 To obtain the IGP Route Convergence due to a local administrative
465 shutdown event at the DUT's Local Interface.

467 Procedure
468 1. Advertise matching IGP routes from Tester to DUT on
469    Preferred Egress Interface [Po07t] and Next-Best Egress Interface
470    [Po07t] using the topology shown in Figure 1.  Set the cost of
471    the routes so that the Preferred Egress Interface is the
472    preferred next-hop.
473 2. Send offered load at measured Throughput with fixed packet
474    size to destinations matching all IGP routes from Tester to
475    DUT on Ingress Interface [Po07t].
476 3. Verify traffic routed over Preferred Egress Interface.
477 4.
Perform administrative shutdown on the DUT's Preferred Egress
478    Interface.
479 5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
480    link down event and begins to converge IGP routes and traffic
481    over the Next-Best Egress Interface.
482 6. Measure Rate-Derived Convergence Time [Po07t] as DUT converges
483    all IGP routes and traffic over the Next-Best Egress Interface.
484 7. Stop offered load.  Wait 30 seconds for queues to drain.
485    Restart offered load.
486 8. Restore Preferred Egress Interface by administratively enabling
487    the interface.
488 9. Measure Reversion Convergence Time [Po07t] as DUT converges all
489    IGP routes and traffic back to the Preferred Egress Interface.

491 Results
492 The measured IGP Convergence time is influenced by SPF delay,
493 SPF Hold time, SPF Execution Time, Tree Build Time, and Hardware
494 Update Time [Po07a].

498 4.3 Convergence Due to Layer 2 Session Failure

499 Objective
500 To obtain the IGP Route Convergence due to a Local Layer 2
501 Session failure event, such as PPP session loss.

503 Procedure
504 1. Advertise matching IGP routes from Tester to DUT on
505    Preferred Egress Interface [Po07t] and Next-Best Egress Interface
506    [Po07t] using the topology shown in Figure 1.  Set the cost of
507    the routes so that the Preferred Egress
508    Interface is the preferred next-hop.
509 2. Send offered load at measured Throughput with fixed packet
510    size to destinations matching all IGP routes from Tester to
511    DUT on Ingress Interface [Po07t].
512 3. Verify traffic routed over Preferred Egress Interface.
513 4. Remove Layer 2 session from Tester's Neighbor Interface [Po07t]
514    connected to Preferred Egress Interface.
515 5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
516    Layer 2 session down event and begins to converge IGP routes and
517    traffic over the Next-Best Egress Interface.
518 6.
Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
519    Layer 2 session down event and converges all IGP routes and
520    traffic over the Next-Best Egress Interface.
521 7. Stop offered load.  Wait 30 seconds for queues to drain.
522    Restart offered load.
523 8. Restore Layer 2 session on DUT's Preferred Egress Interface.
524 9. Measure Reversion Convergence Time [Po07t] as DUT detects the
525    session up event and converges all IGP routes and traffic
526    over the Preferred Egress Interface.

528 Results
529 The measured IGP Convergence time is influenced by the Layer 2
530 failure indication, SPF delay, SPF Hold time, SPF Execution
531 Time, Tree Build Time, and Hardware Update Time [Po07a].

533 4.4 Convergence Due to IGP Adjacency Failure

535 Objective
536 To obtain the IGP Route Convergence due to a Local IGP Adjacency
537 failure event.

539 Procedure
540 1. Advertise matching IGP routes from Tester to DUT on
541    Preferred Egress Interface [Po07t] and Next-Best Egress Interface
542    [Po07t] using the topology shown in Figure 1.  Set the cost of
543    the routes so that the Preferred Egress Interface is the
544    preferred next-hop.
545 2. Send offered load at measured Throughput with fixed packet
546    size to destinations matching all IGP routes from Tester to
547    DUT on Ingress Interface [Po07t].
548 3. Verify traffic routed over Preferred Egress Interface.

552 4. Remove IGP adjacency from Tester's Neighbor Interface [Po07t]
553    connected to Preferred Egress Interface.
554 5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
555    adjacency down event and begins to converge IGP routes and
556    traffic over the Next-Best Egress Interface.
557 6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
558    IGP session failure event and converges all IGP routes and
559    traffic over the Next-Best Egress Interface.
560 7. Stop offered load.  Wait 30 seconds for queues to drain.
561    Restart offered load.
562 8.
Restore IGP session on DUT's Preferred Egress Interface.
563 9. Measure Reversion Convergence Time [Po07t] as DUT detects the
564    session up event and converges all IGP routes and traffic
565    over the Preferred Egress Interface.

567 Results
568 The measured IGP Convergence time is influenced by the IGP Hello
569 Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF
570 Execution Time, Tree Build Time, and Hardware Update Time [Po07a].

572 4.5 Convergence Due to Route Withdrawal

574 Objective
575 To obtain the IGP Route Convergence due to Route Withdrawal.

577 Procedure
578 1. Advertise matching IGP routes from Tester to DUT on Preferred
579    Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
580    using the topology shown in Figure 1.  Set the cost of the routes
581    so that the Preferred Egress Interface is the preferred next-hop.
582 2. Send offered load at measured Throughput with fixed packet
583    size to destinations matching all IGP routes from Tester to
584    DUT on Ingress Interface [Po07t].
585 3. Verify traffic routed over Preferred Egress Interface.
586 4. Tester withdraws all IGP routes from DUT's Local Interface
587    on Preferred Egress Interface.
588 5. Measure First Prefix Convergence Time [Po07t] as DUT processes
589    the route withdrawal and begins to converge IGP routes and
590    traffic over the Next-Best Egress Interface.
591 6. Measure Rate-Derived Convergence Time [Po07t] as DUT withdraws
592    routes and converges all IGP routes and traffic over the
593    Next-Best Egress Interface.
594 7. Stop offered load.  Wait 30 seconds for queues to drain.
595    Restart offered load.
596 8. Re-advertise IGP routes to DUT's Preferred Egress Interface.
597 9. Measure Reversion Convergence Time [Po07t] as DUT converges all
598    IGP routes and traffic over the Preferred Egress Interface.
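The nine-step procedures in this section all share the same shape: advertise routes, offer load, verify the preferred path, trigger the Convergence Event, take the convergence measurements, recover, and measure reversion.  As a non-normative sketch, with an entirely hypothetical Tester API invented for illustration:

```python
# Non-normative skeleton of the per-test-case sequence used throughout
# Section 4.  The Tester interface here is hypothetical; real test
# equipment exposes vendor-specific APIs.

class MockTester:
    """Stand-in for real test equipment, returning canned measurements
    so the sketch is self-contained and runnable."""
    def advertise_routes(self): pass
    def start_offered_load(self): pass
    def stop_offered_load(self): pass
    def traffic_on_preferred_egress(self): return True
    def first_prefix_convergence_time(self): return 0.02
    def rate_derived_convergence_time(self): return 0.75
    def reversion_convergence_time(self): return 0.80

def run_convergence_test(tester, convergence_event, recovery_event):
    tester.advertise_routes()                    # step 1: advertise IGP routes
    tester.start_offered_load()                  # step 2: offer load at Throughput
    assert tester.traffic_on_preferred_egress()  # step 3: verify preferred path
    convergence_event()                          # step 4: e.g. withdraw routes
    first = tester.first_prefix_convergence_time()   # step 5
    rate = tester.rate_derived_convergence_time()    # step 6
    tester.stop_offered_load()                   # step 7: wait 30 s for queues
    tester.start_offered_load()                  #         to drain, then restart
    recovery_event()                             # step 8: re-advertise / restore
    reversion = tester.reversion_convergence_time()  # step 9
    return first, rate, reversion

results = run_convergence_test(MockTester(), lambda: None, lambda: None)
```

Each test case substitutes its own Convergence Event and recovery action (link removal, administrative shutdown, route withdrawal, cost change, and so on) while the measurement sequence stays fixed.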
600 Results
601 The measured IGP Convergence time is the SPF Processing and FIB
602 Update time as influenced by the SPF delay, SPF Hold time, SPF
603 Execution Time, Tree Build Time, and Hardware Update Time [Po07a].

607 4.6 Convergence Due to Cost Change

608 Objective
609 To obtain the IGP Route Convergence due to route cost change.

611 Procedure
612 1. Advertise matching IGP routes from Tester to DUT on Preferred
613    Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
614    using the topology shown in Figure 1.  Set the cost of the routes
615    so that the Preferred Egress Interface is the preferred next-hop.
616 2. Send offered load at measured Throughput with fixed packet
617    size to destinations matching all IGP routes from Tester to
618    DUT on Ingress Interface [Po07t].
619 3. Verify traffic routed over Preferred Egress Interface.
620 4. Tester increases cost for all IGP routes at DUT's Preferred
621    Egress Interface so that the Next-Best Egress Interface
622    has lower cost and becomes the preferred path.
623 5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
624    cost change event and begins to converge IGP routes and traffic
625    over the Next-Best Egress Interface.
626 6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
627    cost change event and converges all IGP routes and traffic
628    over the Next-Best Egress Interface.
629 7. Stop offered load.  Wait 30 seconds for queues to drain.
630    Restart offered load.
631 8. Re-advertise IGP routes to DUT's Preferred Egress Interface
632    with original lower cost metric.
633 9. Measure Reversion Convergence Time [Po07t] as DUT converges all
634    IGP routes and traffic over the Preferred Egress Interface.

636 Results
637 There should be no measured packet loss for this case.

639 4.7 Convergence Due to ECMP Member Interface Failure

641 Objective
642 To obtain the IGP Route Convergence due to a local link failure event
643 of an ECMP Member.
645 Procedure
646 1. Configure ECMP Set as shown in Figure 3.
647 2. Advertise matching IGP routes from Tester to DUT on each ECMP
648    member.
649 3. Send offered load at measured Throughput with fixed packet size to
650    destinations matching all IGP routes from Tester to DUT on Ingress
651    Interface [Po07t].
652 4. Verify traffic routed over all members of ECMP Set.
653 5. Remove link on Tester's Neighbor Interface [Po07t] connected to
654    one of the DUT's ECMP member interfaces.
655 6. Measure First Prefix Convergence Time [Po07t] as DUT detects the
656    link down event and begins to converge IGP routes and traffic
657    over the other members of the ECMP Set.

661 7. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
662    link down event and converges all IGP routes and traffic
663    over the other ECMP members.  At the same time measure
664    Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
665 8. Stop offered load.  Wait 30 seconds for queues to drain.
666    Restart offered load.
667 9. Restore link on Tester's Neighbor Interface connected to
668    DUT's ECMP member interface.
669 10. Measure Reversion Convergence Time [Po07t] as DUT detects the
670    link up event and converges IGP routes and some distribution
671    of traffic over the restored ECMP member.

673 Results
674 The measured IGP Convergence time is influenced by Local link
675 failure indication, Tree Build Time, and Hardware Update Time
676 [Po07a].

678 4.8 Convergence Due to ECMP Member Remote Interface Failure

680 Objective
681 To obtain the IGP Route Convergence due to a remote interface
682 failure event for an ECMP Member.

684 Procedure
685 1. Configure ECMP Set as shown in Figure 2 in which the links
686    from R1 to R2 and R1 to R3 are members of an ECMP Set.
687 2. Advertise matching IGP routes from Tester to SUT to balance
688    traffic to each ECMP member.
689 3.
Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to SUT on
      the Ingress Interface [Po07t].
   4. Verify traffic is routed over all members of the ECMP Set.
   5. Remove the link on the Tester's Neighbor Interface to R2 or R3.
   6. Measure First Prefix Convergence Time [Po07t] as the SUT detects
      the link down event and begins to converge IGP routes and
      traffic over the other ECMP members.
   7. Measure Rate-Derived Convergence Time [Po07t] as the SUT detects
      the link down event and converges all IGP routes and traffic
      over the other ECMP members. At the same time measure
      Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
   8. Stop offered load. Wait 30 seconds for queues to drain. Restart
      offered load.
   9. Restore the link on the Tester's Neighbor Interface to R2 or R3.
   10. Measure Reversion Convergence Time [Po07t] as the SUT detects
      the link up event and converges IGP routes and some distribution
      of traffic over the restored ECMP member.

Results
   The measured IGP Convergence time is influenced by the Remote link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po07a].

4.9 Convergence Due to Parallel Link Interface Failure

Objective
   To obtain the IGP Route Convergence due to a local link failure
   event for a member of a Parallel Link. The links can be used for
   data load balancing.

Procedure
   1. Configure the Parallel Link as shown in Figure 4.
   2. Advertise matching IGP routes from Tester to DUT on each
      Parallel Link member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      the Ingress Interface [Po07t].
   4. Verify traffic is routed over all members of the Parallel Link.
   5. Remove the link on the Tester's Neighbor Interface [Po07t]
      connected to one of the DUT's Parallel Link member interfaces.
   6.
Measure First Prefix Convergence Time [Po07t] as the DUT detects
      the link down event and begins to converge IGP routes and
      traffic over the other Parallel Link members.
   7. Measure Rate-Derived Convergence Time [Po07t] as the DUT detects
      the link down event and converges all IGP routes and traffic
      over the other Parallel Link members. At the same time measure
      Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
   8. Stop offered load. Wait 30 seconds for queues to drain. Restart
      offered load.
   9. Restore the link on the Tester's Neighbor Interface connected to
      the DUT's Parallel Link member interface.
   10. Measure Reversion Convergence Time [Po07t] as the DUT detects
      the link up event and converges IGP routes and some distribution
      of traffic over the restored Parallel Link member.

Results
   The measured IGP Convergence time is influenced by the Local link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po07a].

5. IANA Considerations

   This document requires no IANA considerations.

6. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

7. Acknowledgements

   Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David
   Ward, and the BMWG for their contributions to this work.

8. References

8.1 Normative References

   [Br91]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, IETF, March 1991.

   [Br97]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", RFC 2119, IETF, March 1997.

   [Br99]  Bradner, S. and McQuaid, J., "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544, IETF, March 1999.
   [Ca90]  Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and
           Dual Environments", RFC 1195, IETF, December 1990.

   [Ma98]  Mandeville, R., "Benchmarking Terminology for LAN Switching
           Devices", RFC 2285, IETF, February 1998.

   [Mo98]  Moy, J., "OSPF Version 2", RFC 2328, IETF, April 1998.

   [Po06]  Poretsky, S., et al., "Terminology for Benchmarking
           Network-layer Traffic Control Mechanisms", RFC 4689, IETF,
           November 2006.

   [Po07a] Poretsky, S., "Considerations for Benchmarking Link-State
           IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-app-14, work in
           progress, November 2007.

   [Po07t] Poretsky, S. and Imhoff, B., "Benchmarking Terminology for
           Link-State IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-term-14, work in
           progress, November 2007.

8.2 Informative References

   None

9. Authors' Addresses

   Scott Poretsky
   Reef Point Systems
   3 Federal Street
   Billerica, MA 01821
   USA
   Phone: + 1 508 439 9008
   EMail: sporetsky@reefpoint.com

   Brent Imhoff
   Juniper Networks
   1194 North Mathilda Ave
   Sunnyvale, CA 94089
   USA
   Phone: + 1 314 378 2571
   EMail: bimhoff@planetspork.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.
Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard. Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.