Network Working Group
INTERNET-DRAFT
Expires in: April 2006
                                                          Scott Poretsky
                                                      Reef Point Systems

                                                            Brent Imhoff

                                                            October 2005

                      Benchmarking Methodology for
                    IGP Data Plane Route Convergence

Intellectual Property Rights (IPR) statement:
   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice
   Copyright (C) The Internet Society (2005).  All Rights Reserved.

ABSTRACT
   This draft describes the methodology for benchmarking IGP Route
   Convergence as described in the Applicability document [1] and the
   Terminology document [2].
   The methodology and terminology are to be used for benchmarking
   route convergence and can be applied to any link-state IGP, such as
   ISIS [3] and OSPF [4].  The terms used in the procedures provided
   within this document are defined in [2].

IGP Data Plane Route Convergence

Table of Contents
   1. Introduction
   2. Existing Definitions
   3. Test Setup
      3.1 Test Topologies
      3.2 Test Considerations
      3.3 Reporting Format
   4. Test Cases
      4.1 Convergence Due to Link Failure
          4.1.1 Convergence Due to Local Interface Failure
          4.1.2 Convergence Due to Neighbor Interface Failure
          4.1.3 Convergence Due to Remote Interface Failure
      4.2 Convergence Due to Layer 2 Session Failure
      4.3 Convergence Due to IGP Adjacency Failure
      4.4 Convergence Due to Route Withdrawal
      4.5 Convergence Due to Cost Change
      4.6 Convergence Due to ECMP Member Interface Failure
      4.7 Convergence Due to Parallel Link Interface Failure
   5. IANA Considerations
   6. Security Considerations
   7. Normative References
   8. Authors' Addresses

1. Introduction
   This draft describes the methodology for benchmarking IGP Route
   Convergence.  The applicability of this testing is described in
   [1], and the new terminology that it introduces is defined in [2].
   Service Providers use IGP Convergence time as a key metric of
   router design and architecture.  Customers of Service Providers
   observe convergence time as packet loss, so IGP Route Convergence
   is considered a Direct Measure of Quality (DMOQ).  The test cases
   in this document are black-box tests that emulate the network
   events that cause route convergence, as described in [1].  The
   black-box test designs benchmark the data plane and account for
   all of the factors contributing to convergence time, as discussed
   in [1].  The methodology (and terminology) for benchmarking route
   convergence can be applied to any link-state IGP, such as ISIS [3]
   and OSPF [4].  These methodologies apply to IPv4 and IPv6 traffic
   as well as IPv4 and IPv6 IGPs.

2. Existing Definitions
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in BCP 14, RFC
   2119.  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.
   While this document uses these key words, this document is not a
   standards track document.

3. Test Setup
3.1 Test Topologies
   Figure 1 shows the test topology used to measure IGP Route
   Convergence due to local Convergence Events such as SONET Link
   Failure, Layer 2 Session Failure, IGP Adjacency Failure, Route
   Withdrawal, and route cost change.  The test cases discussed in
   Section 4 provide route convergence times that account for the
   Event Detection time, SPF Processing time, and FIB Update time.
   These times are measured by observing packet loss in the data
   plane.
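Because the benchmarks are derived purely from data-plane packet loss, the simplest of them, Loss-Derived Convergence Time [2], reduces to an arithmetic check. The following is a minimal illustrative sketch; the function name and the packet-counter inputs are assumptions about the test equipment, not part of this methodology:

```python
def loss_derived_convergence_time(offered_pkts, received_pkts, offered_pps):
    """Loss-Derived Convergence Time: total data-plane packet loss
    divided by the offered rate, i.e., seconds of equivalent
    forwarding outage.  Inputs are hypothetical tester counters."""
    lost = offered_pkts - received_pkts
    if lost < 0:
        raise ValueError("received more packets than offered")
    return lost / offered_pps

# Example: 100,000 packets offered at 50,000 pps, 75,000 received,
# so 25,000 packets were lost -> 0.5 seconds of convergence time.
```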
      ---------      Ingress Interface      ---------
      |       |<---------------------------|       |
      |       |                            |       |
      |       | Preferred Egress Interface |       |
      |  DUT  |--------------------------->| Tester|
      |       |                            |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       | Next-Best Egress Interface |       |
      ---------                            ---------

     Figure 1.  IGP Route Convergence Test Topology for Local Changes

   Figure 2 shows the test topology used to measure IGP Route
   Convergence time due to remote changes in the network topology.
   These times are measured by observing packet loss in the data
   plane.  In this topology the three routers are considered a System
   Under Test (SUT).  NOTE: All routers in the SUT must be the same
   model and identically configured.

               -----                          ---------
               |   |        Preferred         |       |
      -----    |R2 |------------------------->|       |
      |   |--->|   |     Egress Interface     |       |
      |   |    -----                          |       |
      |R1 |                                   | Tester|
      |   |    -----                          |       |
      |   |--->|   |        Next-Best         |       |
      -----    |R3 |~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        ^      |   |     Egress Interface     |       |
        |      -----                          ---------
        |                                         |
        -------------------------------------------
                     Ingress Interface

    Figure 2.  IGP Route Convergence Test Topology for Remote Changes

   Figure 3 shows the test topology used to measure IGP Route
   Convergence time with members of an Equal Cost Multipath (ECMP)
   Set.  These times are measured by observing packet loss in the
   data plane.  In this topology, the DUT is configured with each
   Egress interface as a member of an ECMP set, and the Tester
   emulates multiple next-hop routers (one emulated router for each
   member).

      ---------      Ingress Interface      ---------
      |       |<---------------------------|       |
      |       |                            |       |
      |       |    ECMP Set Interface 1    |       |
      |  DUT  |--------------------------->| Tester|
      |       |             .              |       |
      |       |             .              |       |
      |       |             .              |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       |    ECMP Set Interface N    |       |
      ---------                            ---------

     Figure 3.  IGP Route Convergence Test Topology for ECMP
                Convergence

   Figure 4 shows the test topology used to measure IGP Route
   Convergence time with members of a Parallel Link.  These times are
   measured by observing packet loss in the data plane.  In this
   topology, the DUT is configured with each Egress interface as a
   member of a Parallel Link, and the Tester emulates the single
   next-hop router.

      ---------      Ingress Interface      ---------
      |       |<---------------------------|       |
      |       |                            |       |
      |       | Parallel Link Interface 1  |       |
      |  DUT  |--------------------------->| Tester|
      |       |             .              |       |
      |       |             .              |       |
      |       |             .              |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       | Parallel Link Interface N  |       |
      ---------                            ---------

     Figure 4.  IGP Route Convergence Test Topology for Parallel Link
                Convergence

3.2 Test Considerations
3.2.1 IGP Selection
   The test cases described in Section 4 can be used for ISIS or
   OSPF.  The Route Convergence test methodology for both is
   identical.  The IGP adjacencies are established on the Preferred
   Egress Interface and the Next-Best Egress Interface.

3.2.2 BGP Configuration
   The obtained results for IGP Route Convergence may vary if BGP
   routes are installed.  It is recommended that the IGP Convergence
   times be benchmarked without BGP routes installed.

3.2.3 IGP Route Scaling
   The number of IGP routes will impact the measured IGP Route
   Convergence because convergence for the entire IGP route table is
   measured.  For results similar to those that would be observed in
   an operational network, it is recommended that the number of
   installed routes closely approximate that for routers in the
   network.  The number of areas (for OSPF) and levels (for ISIS) can
   also impact the benchmark results.

3.2.4 Timers
   Some timers will impact the measured IGP Convergence time.  The
   following timers should be configured to the minimum value prior
   to beginning execution of the test cases:

      Timer                               Recommended Value
      -----                               -----------------
      Failure Indication Delay            <10 milliseconds
      IGP Hello Timer                     1 second
      IGP Dead-Interval                   3 seconds
      LSA Generation Delay                0
      LSA Flood Packet Pacing             0
      LSA Retransmission Packet Pacing    0
      SPF Delay                           0

3.2.5 Convergence Time Metrics
   The recommended value for the Packet Sampling Interval [2] is 100
   milliseconds.  Rate-Derived Convergence Time [2] is the preferred
   benchmark for IGP Route Convergence.  This benchmark MUST always
   be reported when the Packet Sampling Interval is <= 100
   milliseconds.  If the test equipment does not permit the Packet
   Sampling Interval to be set as low as 100 milliseconds, then both
   the Rate-Derived Convergence Time and the Loss-Derived Convergence
   Time [2] MUST be reported.  The Packet Sampling Interval bounds
   the smallest convergence time that can be measured.

3.2.6 Interface Types
   All test cases in this methodology document may be executed with
   any interface type.  All interfaces MUST be the same media and
   Throughput [5,6] for each test case.  Media and protocols MUST be
   configured for the minimum failure detection delay to minimize the
   contribution to the measured Convergence time.  For example,
   configure SONET with the minimum carrier-loss-delay, or use
   Bidirectional Forwarding Detection (BFD) [7].

3.2.7 Offered Load
   The offered load MUST be the Throughput of the device as defined
   in [5] and benchmarked in [6] at a fixed packet size.  The packet
   size is selectable and MUST be recorded.  The Throughput MUST be
   measured at the Preferred Egress Interface and the Next-Best
   Egress Interface.  The duration of the offered load MUST be
   greater than the convergence time.
   The destination addresses for the offered load MUST be distributed
   such that all routes are matched.  This enables Full Convergence
   [2] to be observed.

3.3 Reporting Format
   For each test case, it is recommended that the following reporting
   format be completed:

      Parameter                             Units
      ---------                             -----
      IGP                                   (ISIS or OSPF)
      Interface Type                        (GigE, POS, ATM, etc.)
      Packet Size offered to DUT            bytes
      IGP Routes advertised to DUT          number of IGP routes
      Packet Sampling Interval on Tester    seconds or milliseconds
      IGP Timer Values configured on DUT
        SONET Failure Indication Delay      seconds or milliseconds
        IGP Hello Timer                     seconds or milliseconds
        IGP Dead-Interval                   seconds or milliseconds
        LSA Generation Delay                seconds or milliseconds
        LSA Flood Packet Pacing             seconds or milliseconds
        LSA Retransmission Packet Pacing    seconds or milliseconds
        SPF Delay                           seconds or milliseconds
      Benchmarks
        Rate-Derived Convergence Time       seconds or milliseconds
        Loss-Derived Convergence Time       seconds or milliseconds
        Restoration Convergence Time        seconds or milliseconds

4. Test Cases
4.1 Convergence Due to Link Failure
4.1.1 Convergence Due to Local Interface Failure
   Objective
      To obtain the IGP Route Convergence due to a local link failure
      event at the DUT's Local Interface.

   Procedure
      1. Advertise matching IGP routes from Tester to DUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 1.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. Remove the link on the DUT's Local Interface [2] by
         performing an administrative shutdown of the interface.
      5. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the link down event and converges all IGP routes and traffic
         over the Next-Best Egress Interface.
      6. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      7. Restore the link on the DUT's Local Interface by
         administratively enabling the interface.
      8. Measure Restoration Convergence Time [2] as the DUT detects
         the link up event and converges all IGP routes and traffic
         back to the Preferred Egress Interface.

   Results
      The measured IGP Convergence time is influenced by the local
      link failure indication, SPF delay, SPF Holdtime, SPF Execution
      Time, Tree Build Time, and Hardware Update Time.

4.1.2 Convergence Due to Neighbor Interface Failure
   Objective
      To obtain the IGP Route Convergence due to a local link failure
      event at the Tester's Neighbor Interface.

   Procedure
      1. Advertise matching IGP routes from Tester to DUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 1.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. Remove the link on the Tester's Neighbor Interface [2]
         connected to the DUT's Preferred Egress Interface.
      5. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the link down event and converges all IGP routes and traffic
         over the Next-Best Egress Interface.
      6. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      7. Restore the link on the Tester's Neighbor Interface
         connected to the DUT's Preferred Egress Interface.
      8. Measure Restoration Convergence Time [2] as the DUT detects
         the link up event and converges all IGP routes and traffic
         back to the Preferred Egress Interface.

   Results
      The measured IGP Convergence time is influenced by the local
      link failure indication, SPF delay, SPF Holdtime, SPF Execution
      Time, Tree Build Time, and Hardware Update Time.

4.1.3 Convergence Due to Remote Interface Failure
   Objective
      To obtain the IGP Route Convergence due to a Remote Interface
      Failure event.

   Procedure
      1. Advertise matching IGP routes from Tester to SUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 2.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         SUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. Remove the link on the Tester's Neighbor Interface [2]
         connected to the SUT's Preferred Egress Interface.
      5. Measure Rate-Derived Convergence Time [2] as the SUT detects
         the link down event and converges all IGP routes and traffic
         over the Next-Best Egress Interface.
      6. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      7. Restore the link on the Tester's Neighbor Interface
         connected to the SUT's Preferred Egress Interface.
      8. Measure Restoration Convergence Time [2] as the SUT detects
         the link up event and converges all IGP routes and traffic
         back to the Preferred Egress Interface.
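The Rate-Derived measurement in step 5 of these procedures scans the per-interval receive-rate samples for the span during which the forwarding rate is below the Throughput. The sketch below is a hypothetical illustration of that derivation; the function name and the recovery criterion (first sample back at or above Throughput) are assumptions, and [2] remains the authoritative definition:

```python
def rate_derived_convergence_time(samples, throughput_pps, interval_s):
    """samples: received rate (pps) observed in each Packet Sampling
    Interval.  Returns the seconds between the first sample below
    Throughput and the first later sample back at Throughput."""
    start = end = None
    for i, rate in enumerate(samples):
        if start is None and rate < throughput_pps:
            start = i                      # convergence event observed
        elif start is not None and rate >= throughput_pps:
            end = i                        # forwarding rate recovered
            break
    if start is None:
        return 0.0                         # no loss observed at all
    if end is None:
        raise RuntimeError("rate never recovered to Throughput")
    return (end - start) * interval_s

# With a 100 ms Packet Sampling Interval and a three-interval dip:
# rate_derived_convergence_time([1000, 400, 0, 600, 1000], 1000, 0.1)
# gives 0.3 seconds.
```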
   Results
      The measured IGP Convergence time is influenced by the link
      failure indication, LSA/LSP Flood Packet Pacing, LSA/LSP
      Retransmission Packet Pacing, LSA/LSP Generation time, SPF
      delay, SPF Holdtime, SPF Execution Time, Tree Build Time, and
      Hardware Update Time.  The additional convergence time
      contributed by LSP Propagation can be obtained by subtracting
      the Rate-Derived Convergence Time measured in 4.1.2
      (Convergence Due to Neighbor Interface Failure) from the
      Rate-Derived Convergence Time measured in this test case.

4.2 Convergence Due to Layer 2 Session Failure
   Objective
      To obtain the IGP Route Convergence due to a local Layer 2
      Session failure event.

   Procedure
      1. Advertise matching IGP routes from Tester to DUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 1.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. Remove the Layer 2 session from the Tester's Neighbor
         Interface [2] connected to the Preferred Egress Interface.
      5. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the Layer 2 session down event and converges all IGP routes
         and traffic over the Next-Best Egress Interface.
      6. Restore the Layer 2 session on the DUT's Preferred Egress
         Interface.
      7. Measure Restoration Convergence Time [2] as the DUT detects
         the session up event and converges all IGP routes and
         traffic over the Preferred Egress Interface.
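The test cases in this section share a common fail/measure/restore skeleton; most of them also include a 30-second queue-drain pause between the two measurements. A hypothetical driver illustrating that flow is sketched below; the tester object and the two callbacks are placeholders for test-equipment hooks that this document does not define:

```python
import time

def run_convergence_case(tester, trigger_failure, restore, drain_s=30):
    """Generic fail/measure/restore flow shared by the Section 4
    cases.  `tester`, `trigger_failure`, and `restore` are
    hypothetical hooks onto the test equipment."""
    tester.start_offered_load()        # offer load at measured Throughput
    trigger_failure()                  # the Convergence Event under test
    t_conv = tester.measure_rate_derived_convergence_time()
    tester.stop_offered_load()         # stop and let queues drain
    time.sleep(drain_s)
    tester.start_offered_load()
    restore()                          # undo the failure
    t_rest = tester.measure_restoration_convergence_time()
    return t_conv, t_rest
```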
   Results
      The measured IGP Convergence time is influenced by the Layer 2
      failure indication, SPF delay, SPF Holdtime, SPF Execution
      Time, Tree Build Time, and Hardware Update Time.

4.3 Convergence Due to IGP Adjacency Failure
   Objective
      To obtain the IGP Route Convergence due to a local IGP
      Adjacency failure event.

   Procedure
      1. Advertise matching IGP routes from Tester to DUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 1.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. Remove the IGP adjacency from the Tester's Neighbor
         Interface [2] connected to the Preferred Egress Interface.
      5. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the IGP session failure event and converges all IGP routes
         and traffic over the Next-Best Egress Interface.
      6. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      7. Restore the IGP session on the DUT's Preferred Egress
         Interface.
      8. Measure Restoration Convergence Time [2] as the DUT detects
         the session up event and converges all IGP routes and
         traffic over the Preferred Egress Interface.

   Results
      The measured IGP Convergence time is influenced by the IGP
      Hello Interval, IGP Dead Interval, SPF delay, SPF Holdtime, SPF
      Execution Time, Tree Build Time, and Hardware Update Time.

4.4 Convergence Due to Route Withdrawal
   Objective
      To obtain the IGP Route Convergence due to a Route Withdrawal.

   Procedure
      1. Advertise matching IGP routes from Tester to DUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 1.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. The Tester withdraws all IGP routes from the DUT's Local
         Interface on the Preferred Egress Interface.
      5. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the route withdrawal and converges all IGP routes and
         traffic over the Next-Best Egress Interface.
      6. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      7. Re-advertise the IGP routes to the DUT's Preferred Egress
         Interface.
      8. Measure Restoration Convergence Time [2] as the DUT
         converges all IGP routes and traffic over the Preferred
         Egress Interface.

   Results
      The measured IGP Convergence time is the SPF Processing and FIB
      Update time, as influenced by the SPF delay, SPF Holdtime, SPF
      Execution Time, Tree Build Time, and Hardware Update Time.

4.5 Convergence Due to Cost Change
   Objective
      To obtain the IGP Route Convergence due to a route cost change.

   Procedure
      1. Advertise matching IGP routes from Tester to DUT on the
         Preferred Egress Interface [2] and the Next-Best Egress
         Interface [2] using the topology shown in Figure 1.  Set the
         cost of the routes so that the Preferred Egress Interface is
         the preferred next-hop.
      2. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      3. Verify traffic is routed over the Preferred Egress Interface.
      4. The Tester increases the cost for all IGP routes at the
         DUT's Preferred Egress Interface so that the Next-Best
         Egress Interface has lower cost and becomes the preferred
         path.
      5. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the cost change event and converges all IGP routes and
         traffic over the Next-Best Egress Interface.
      6. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      7. Re-advertise the IGP routes to the DUT's Preferred Egress
         Interface with the original lower cost metric.
      8. Measure Restoration Convergence Time [2] as the DUT
         converges all IGP routes and traffic over the Preferred
         Egress Interface.

   Results
      There should be no measured packet loss for this case.

4.6 Convergence Due to ECMP Member Interface Failure
   Objective
      To obtain the IGP Route Convergence due to a local link failure
      event of an ECMP Member.

   Procedure
      1. Configure the ECMP Set as shown in Figure 3.
      2. Advertise matching IGP routes from Tester to DUT on each
         ECMP member.
      3. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      4. Verify traffic is routed over all members of the ECMP Set.
      5. Remove the link on the Tester's Neighbor Interface [2]
         connected to one of the DUT's ECMP member interfaces.
      6. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the link down event and converges all IGP routes and traffic
         over the other ECMP members.
      7. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      8. Restore the link on the Tester's Neighbor Interface
         connected to the DUT's ECMP member interface.
      9. Measure Restoration Convergence Time [2] as the DUT detects
         the link up event and converges IGP routes and some
         distribution of traffic over the restored ECMP member.
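Step 4's check that traffic is carried by every member of the ECMP Set can be automated from per-member receive counters. The sketch below is a hypothetical helper; the counter inputs and the tolerance for hashing imbalance are assumptions, since this document does not specify how evenly the load must be split:

```python
def verify_ecmp_distribution(per_member_counts, tolerance=0.1):
    """Return True if every ECMP member carries roughly its 1/N share
    of received traffic, within +/- `tolerance` (fraction of the
    expected share).  Inputs are hypothetical tester counters."""
    total = sum(per_member_counts)
    n = len(per_member_counts)
    if n == 0 or total == 0:
        return False
    expected = total / n
    return all(abs(count - expected) / expected <= tolerance
               for count in per_member_counts)

# A near-even split passes; a dead member fails the check:
# verify_ecmp_distribution([98, 102, 100, 100]) -> True
# verify_ecmp_distribution([0, 200, 100, 100]) -> False
```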
   Results
      The measured IGP Convergence time is influenced by the local
      link failure indication, Tree Build Time, and Hardware Update
      Time.

4.7 Convergence Due to Parallel Link Interface Failure
   Objective
      To obtain the IGP Route Convergence due to a local link failure
      event for a member of a Parallel Link.  The links can be used
      for data Load Balancing.

   Procedure
      1. Configure the Parallel Link as shown in Figure 4.
      2. Advertise matching IGP routes from Tester to DUT on each
         Parallel Link member.
      3. Send offered load at measured Throughput with fixed packet
         size to destinations matching all IGP routes from Tester to
         DUT on the Ingress Interface [2].
      4. Verify traffic is routed over all members of the Parallel
         Link.
      5. Remove the link on the Tester's Neighbor Interface [2]
         connected to one of the DUT's Parallel Link member
         interfaces.
      6. Measure Rate-Derived Convergence Time [2] as the DUT detects
         the link down event and converges all IGP routes and traffic
         over the other Parallel Link members.
      7. Stop offered load.  Wait 30 seconds for queues to drain.
         Restart offered load.
      8. Restore the link on the Tester's Neighbor Interface
         connected to the DUT's Parallel Link member interface.
      9. Measure Restoration Convergence Time [2] as the DUT detects
         the link up event and converges IGP routes and some
         distribution of traffic over the restored Parallel Link
         member.

   Results
      The measured IGP Convergence time is influenced by the local
      link failure indication, Tree Build Time, and Hardware Update
      Time.

5. IANA Considerations
   This document requires no IANA considerations.

6. Security Considerations
   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

7. Normative References
   [1] Poretsky, S., "Benchmarking Applicability for IGP
       Convergence", draft-ietf-bmwg-igp-dataplane-conv-app-08, work
       in progress, October 2005.

   [2] Poretsky, S. and Imhoff, B., "Benchmarking Terminology for IGP
       Convergence", draft-ietf-bmwg-igp-dataplane-conv-term-08, work
       in progress, October 2005.

   [3] Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and Dual
       Environments", RFC 1195, December 1990.

   [4] Moy, J., "OSPF Version 2", RFC 2328, April 1998.

   [5] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, October 1991.

   [6] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

   [7] Katz, D. and Ward, D., "Bidirectional Forwarding Detection",
       draft-ietf-bfd-base-02, work in progress, March 2005.

8. Authors' Addresses
   Scott Poretsky
   Reef Point Systems
   8 New England Executive Park
   Burlington, MA 01803
   USA
   Phone: + 1 508 439 9008
   EMail: sporetsky@reefpoint.com

   Brent Imhoff
   USA
   EMail: bimhoff@planetspork.com

Full Copyright Statement

   Copyright (C) The Internet Society (2005).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology
   described in this document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.  Information on the procedures with respect to rights
   in RFC documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention
   any copyrights, patents or patent applications, or other
   proprietary rights that may cover technology that may be required
   to implement this standard.  Please address the information to the
   IETF at ietf-ipr@ietf.org.

Acknowledgement
   Funding for the RFC Editor function is currently provided by the
   Internet Society.