idnits 2.17.1 draft-ietf-bmwg-igp-dataplane-conv-meth-12.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- ** It looks like you're using RFC 3978 boilerplate. You should update this to the boilerplate described in the IETF Trust License Policy document (see https://trustee.ietf.org/license-info), which is required now. -- Found old boilerplate from RFC 3978, Section 5.1 on line 22. -- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on line 686. -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 697. -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 704. -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 712. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust Copyright Line does not match the current year == Line 283 has weird spacing: '...n Delay seco...' == Line 291 has weird spacing: '...ce Time secon...' == Line 292 has weird spacing: '...ce Time secon...' == Line 293 has weird spacing: '...ce Time seco...' -- The document seems to lack a disclaimer for pre-RFC5378 work, but may have content which was first submitted before 10 November 2008. If you have contacted all the original authors and they are all willing to grant the BCP78 rights to the IETF Trust, then this is fine, and you can ignore this comment. If not, you may need to add the pre-RFC5378 disclaimer. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (August 2007) is 6096 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-17) exists of draft-ietf-bmwg-igp-dataplane-conv-app-12 == Outdated reference: A later version (-23) exists of draft-ietf-bmwg-igp-dataplane-conv-term-12 Summary: 1 error (**), 0 flaws (~~), 7 warnings (==), 7 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 Network Working Group 2 INTERNET-DRAFT 3 Expires in: August 2007 4 Intended Status: Informational 5 Scott Poretsky 6 Reef Point Systems 8 Brent Imhoff 9 Juniper Networks 11 February 2007 13 Benchmarking Methodology for 14 IGP Data Plane Route Convergence 16 18 Intellectual Property Rights (IPR) statement: 19 By submitting this Internet-Draft, each author represents that any 20 applicable patent or other IPR claims of which he or she is aware 21 have been or will be disclosed, and any of which he or she becomes 22 aware will be disclosed, in accordance with Section 6 of BCP 79. 24 Status of this Memo 26 Internet-Drafts are working documents of the Internet Engineering 27 Task Force (IETF), its areas, and its working groups. Note that 28 other groups may also distribute working documents as 29 Internet-Drafts. 
31 Internet-Drafts are draft documents valid for a maximum of six months 32 and may be updated, replaced, or obsoleted by other documents at any 33 time. It is inappropriate to use Internet-Drafts as reference 34 material or to cite them other than as "work in progress." 36 The list of current Internet-Drafts can be accessed at 37 http://www.ietf.org/ietf/1id-abstracts.txt. 39 The list of Internet-Draft Shadow Directories can be accessed at 40 http://www.ietf.org/shadow.html. 42 Copyright Notice 43 Copyright (C) The IETF Trust (2007). 45 ABSTRACT 46 This document describes the methodology for benchmarking Interior 47 Gateway Protocol (IGP) Route Convergence. The methodology is to 48 be used for benchmarking IGP convergence time through externally 49 observable (black box) data plane measurements. The methodology 50 can be applied to any link-state IGP, such as ISIS and OSPF. 52 IGP Data Plane Route Convergence 54 Table of Contents 55 1. Introduction ...............................................2 56 2. Existing definitions .......................................2 57 3. Test Setup..................................................3 58 3.1 Test Topologies............................................3 59 3.2 Test Considerations........................................4 60 3.3 Reporting Format...........................................6 61 4. Test Cases..................................................7 62 4.1 Convergence Due to Link Failure............................7 63 4.1.1 Convergence Due to Local Interface Failure...............7 64 4.1.2 Convergence Due to Neighbor Interface Failure............7 65 4.1.3 Convergence Due to Remote Interface Failure..............8 66 4.2 Convergence Due to Layer 2 Session Failure.................9 67 4.3 Convergence Due to IGP Adjacency Failure...................10 68 4.4 Convergence Due to Route Withdrawal........................10 69 4.5 Convergence Due to Cost Change.............................11 70 4.6 Convergence Due to ECMP Member Interface Failure...........11 71 4.7 Convergence Due to Parallel Link Interface Failure.........12 72 5. IANA Considerations.........................................13 73 6. Security Considerations.....................................13 74 7. Acknowledgements............................................13 75 8. Normative References........................................13 76 9. Author's Address............................................14 78 1. Introduction 79 This draft describes the methodology for benchmarking IGP Route 80 Convergence. The applicability of this testing is described in 81 [Po07a] and the new terminology that it introduces is defined in 82 [Po07t]. Service Providers use IGP Convergence time as a key metric 83 of router design and architecture. Customers of Service Providers 84 observe convergence time by packet loss, so IGP Route Convergence 85 is considered a Direct Measure of Quality (DMOQ). The test cases 86 in this document are black-box tests that emulate the network 87 events that cause route convergence, as described in [Po07a]. The 88 black-box test designs benchmark the data plane and account for 89 all of the factors contributing to convergence time, as discussed 90 in [Po07a]. The methodology (and terminology) for benchmarking route 91 convergence can be applied to any link-state IGP such as ISIS [Ca90] 92 and OSPF [Mo98]. These methodologies apply to IPv4 and IPv6 traffic 93 as well as IPv4 and IPv6 IGPs. 95 2. 
Existing definitions
 96 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
 97 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
 98 document are to be interpreted as described in BCP 14, RFC 2119
 99 [Br97]. RFC 2119 defines the use of these key words to help make the
100 intent of standards track documents as clear as possible. While this
101 document uses these keywords, this document is not a standards track
102 document. The term Throughput is defined in RFC 2544 [Br99].

104                IGP Data Plane Route Convergence

106 3. Test Setup
107 3.1 Test Topologies
108 Figure 1 shows the test topology to measure IGP Route Convergence due
109 to local Convergence Events such as SONET Link Failure, Layer 2
110 Session Failure, IGP Adjacency Failure, Route Withdrawal, and route
111 cost change. These test cases discussed in section 4 provide route
112 convergence times that account for the Event Detection time, SPF
113 Processing time, and FIB Update time. These times are measured
114 by observing packet loss in the data plane.

116    ---------        Ingress Interface        ---------
117    |       |<--------------------------------|       |
118    |       |                                 |       |
119    |       |   Preferred Egress Interface    |       |
120    |  DUT  |-------------------------------->| Tester|
121    |       |                                 |       |
122    |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
123    |       |   Next-Best Egress Interface    |       |
124    ---------                                 ---------

126    Figure 1. IGP Route Convergence Test Topology for Local Changes

128 Figure 2 shows the test topology to measure IGP Route Convergence
129 time due to remote changes in the network topology. These times are
130 measured by observing packet loss in the data plane. In this
131 topology the three routers are considered a System Under Test (SUT).
132 NOTE: All routers in the SUT must be the same model and identically
133 configured.

135             -----                       ---------
136             |   |      Preferred        |       |
137    -----    |R2 |---------------------->|       |
138    |   |--->|   |   Egress Interface    |       |
139    |   |    -----                       |       |
140    |R1 |                                |Tester |
141    |   |    -----                       |       |
142    |   |--->|   |      Next-Best        |       |
143    -----    |R3 |~~~~~~~~~~~~~~~~~~~~~~>|       |
144      ^      |   |   Egress Interface    |       |
145      |      -----                       ---------
146      |                                      |
147      |--------------------------------------
148                 Ingress Interface

150    Figure 2. IGP Route Convergence Test Topology
151               for Remote Changes

153 Figure 3 shows the test topology to measure IGP Route Convergence
154 time with members of an Equal Cost Multipath (ECMP) Set. These
155 times are measured by observing packet loss in the data plane.
156 In this topology, the DUT is configured with each Egress interface

157                IGP Data Plane Route Convergence

159 as a member of an ECMP set and the Tester emulates multiple
160 next-hop routers (emulates one router for each member).

162    ---------        Ingress Interface        ---------
163    |       |<--------------------------------|       |
164    |       |                                 |       |
165    |       |     ECMP Set Interface 1        |       |
166    |  DUT  |-------------------------------->| Tester|
167    |       |                .                |       |
168    |       |                .                |       |
169    |       |                .                |       |
170    |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
171    |       |     ECMP Set Interface N        |       |
172    ---------                                 ---------

174    Figure 3. IGP Route Convergence Test Topology
175               for ECMP Convergence

177 Figure 4 shows the test topology to measure IGP Route Convergence
178 time with members of a Parallel Link. These times are measured by
179 observing packet loss in the data plane. In this topology, the DUT
180 is configured with each Egress interface as a member of a Parallel
181 Link and the Tester emulates the single next-hop router.

183    ---------        Ingress Interface        ---------
184    |       |<--------------------------------|       |
185    |       |                                 |       |
186    |       |   Parallel Link Interface 1     |       |
187    |  DUT  |-------------------------------->| Tester|
188    |       |                .                |       |
189    |       |                .                |       |
190    |       |                .                |       |
191    |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
192    |       |   Parallel Link Interface N     |       |
193    ---------                                 ---------

195    Figure 4. IGP Route Convergence Test Topology
196               for Parallel Link Convergence
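
The common Tester configuration behind Figures 1 through 4 can be
illustrated with a short, non-normative sketch. In the Python fragment
below, the route-set generation uses only the standard library, while
the "tester" object and its advertise/add_flow methods are hypothetical
placeholders for whatever control API the actual test equipment
provides; the prefix count and metric values are arbitrary examples.

   # Non-normative sketch of the common Tester setup for Figure 1:
   # the same IGP route set is advertised on both egress interfaces,
   # with costs chosen so that one interface is the preferred
   # next-hop, and the offered load contains one flow per advertised
   # prefix so that every route is exercised.
   from ipaddress import ip_network

   def build_route_set(base="10.0.0.0/8", new_prefix=24, count=1000):
       """Generate `count` prefixes to advertise on both egress links."""
       subnets = ip_network(base).subnets(new_prefix=new_prefix)
       return [str(net) for net, _ in zip(subnets, range(count))]

   def configure_local_change_topology(tester, routes):
       """Figure 1 setup: preferred vs. next-best egress interface."""
       # The lower metric makes the Preferred Egress Interface the
       # preferred next-hop for every advertised route.
       tester.advertise(routes, interface="preferred_egress", metric=10)
       tester.advertise(routes, interface="next_best_egress", metric=20)
       # One flow per advertised route on the Ingress Interface, so the
       # data plane observation covers the entire route table.
       for prefix in routes:
           tester.add_flow(ingress_interface="ingress", dest_prefix=prefix)
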
198 3.2 Test Considerations
199 3.2.1 IGP Selection
200 The test cases described in section 4 can be used for ISIS or
201 OSPF. The Route Convergence test methodology for both is
202 identical. The IGP adjacencies are established on the Preferred
203 Egress Interface and Next-Best Egress Interface.

205 3.2.2 BGP Configuration
206 The obtained results for IGP Route Convergence may vary if
207 BGP routes are installed. It is recommended that the IGP
208 Convergence times be benchmarked without BGP routes installed.

210                IGP Data Plane Route Convergence

212 3.2.3 IGP Route Scaling
213 The number of IGP routes will impact the measured IGP Route
214 Convergence because convergence for the entire IGP route table
215 is measured. To obtain results similar to those that would be
216 observed in an operational network, it is recommended that the
217 number of installed routes closely approximate that for routers
218 in the network. The number of areas (for OSPF) and levels (for
219 ISIS) can impact the benchmark results.

221 3.2.4 Timers
222 There are some timers that will impact the measured IGP Convergence
223 time. The following timers should be configured to the minimum value
224 prior to beginning execution of the test cases:

226    Timer                                Recommended Value
227    -----                                -----------------
228    Link Failure Indication Delay        < 10 milliseconds
229    IGP Hello Timer                      1 second
230    IGP Dead-Interval                    3 seconds
231    LSA Generation Delay                 0
232    LSA Flood Packet Pacing              0
233    LSA Retransmission Packet Pacing     0
234    SPF Delay                            0

236 3.2.5 Convergence Time Metrics
237 The recommended value for the Packet Sampling Interval [Po07t] is
238 100 milliseconds. Rate-Derived Convergence Time [Po07t] is the
239 preferred benchmark for IGP Route Convergence. This benchmark
240 must always be reported when the Packet Sampling Interval [Po07t]
241 is <= 100 milliseconds. If the test equipment does not permit
242 the Packet Sampling Interval to be set as low as 100 milliseconds,
243 then both the Rate-Derived Convergence Time and Loss-Derived
244 Convergence Time [Po07t] must be reported. The Packet Sampling
245 Interval value MUST be reported, since it is the smallest measurable
246 convergence time.
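
As an informative illustration of how these benchmarks can be derived
from data-plane observations (and not as a replacement for the
definitions in [Po07t]), the following Python sketch assumes the Tester
reports one forwarding-rate sample per Packet Sampling Interval; the
sample values and the simplified start/end conditions are illustrative
assumptions only.

   # Non-normative sketch: deriving the convergence benchmarks from
   # samples collected on the egress interfaces by the Tester.

   def rate_derived_convergence_time(samples, offered_rate, interval):
       """samples: forwarding rate (packets/s), one value per Packet
       Sampling Interval. Returns the time from the first sample below
       the offered rate until the rate returns to the offered rate."""
       start = end = None
       for i, rate in enumerate(samples):
           if start is None and rate < offered_rate:
               start = i                  # Convergence Event observed
           elif start is not None and rate >= offered_rate:
               end = i                    # Full Convergence restored
               break
       if start is None or end is None:
           return None                    # no event, or no recovery seen
       return (end - start) * interval

   def loss_derived_convergence_time(packets_lost, offered_rate):
       """Total packet loss divided by the offered load rate."""
       return packets_lost / offered_rate

   # Example: 100 millisecond sampling interval, 100,000 packets/s load.
   samples = [100000, 100000, 40000, 0, 0, 55000, 100000, 100000]
   print(rate_derived_convergence_time(samples, 100000, 0.100))  # 0.4 s
   print(loss_derived_convergence_time(30500, 100000))           # 0.305 s
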
248 3.2.6 Interface Types
249 All test cases in this methodology document may be executed with
250 any interface type. All interfaces MUST be of the same media and
251 have the same Throughput [Br91][Br99] for each test case. Media and
252 protocols MUST be configured for minimum failure detection delay to
253 minimize the contribution to the measured Convergence time. For
254 example, configure SONET with minimum carrier-loss-delay or
255 Bi-directional Forwarding Detection (BFD).

257                IGP Data Plane Route Convergence

259 3.2.7 Offered Load
260 The offered load MUST be the Throughput of the device as defined
261 in [Br91] and benchmarked in [Br99] at a fixed packet size.
262 Packet size is measured in bytes and includes the IP header and
263 payload. The packet size is selectable and MUST be recorded.
264 The Forwarding Rate [Ma98] MUST be measured at the Preferred Egress
265 Interface and the Next-Best Egress Interface. The duration of
266 offered load MUST be greater than the convergence time. The
267 destination addresses for the offered load MUST be distributed
268 such that all routes are matched. This enables Full Convergence
269 [Po07t] to be observed.

271 3.3 Reporting Format
272 For each test case, it is recommended that the following reporting
273 format be completed:

275    Parameter                               Units
276    ---------                               -----
277    IGP                                     (ISIS or OSPF)
278    Interface Type                          (GigE, POS, ATM, etc.)
279    Packet Size offered to DUT              bytes
280    IGP Routes advertised to DUT            number of IGP routes
281    Packet Sampling Interval on Tester      seconds or milliseconds
282    IGP Timer Values configured on DUT
283      SONET Failure Indication Delay        seconds or milliseconds
284      IGP Hello Timer                       seconds or milliseconds
285      IGP Dead-Interval                     seconds or milliseconds
286      LSA Generation Delay                  seconds or milliseconds
287      LSA Flood Packet Pacing               seconds or milliseconds
288      LSA Retransmission Packet Pacing      seconds or milliseconds
289      SPF Delay                             seconds or milliseconds
290    Benchmarks
291      Rate-Derived Convergence Time         seconds or milliseconds
292      Loss-Derived Convergence Time         seconds or milliseconds
293      Restoration Convergence Time          seconds or milliseconds
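
Purely as an informative aid (the table above remains the recommended
reporting format), results can be captured in a structured record so
that runs are easy to archive and compare. The field names and example
values in this Python sketch simply mirror the table and are not
mandated by this methodology.

   # Non-normative sketch: one way to capture the Section 3.3 report.
   from dataclasses import dataclass, field
   from typing import Optional

   @dataclass
   class IgpConvergenceReport:
       igp: str                             # "ISIS" or "OSPF"
       interface_type: str                  # e.g. "GigE", "POS", "ATM"
       packet_size_bytes: int               # packet size offered to DUT
       igp_routes_advertised: int           # number of IGP routes
       packet_sampling_interval_s: float    # Tester sampling interval
       timers_s: dict = field(default_factory=dict)   # DUT timer values
       # Benchmarks, in seconds; None when not measured in a test case.
       rate_derived_convergence_time_s: Optional[float] = None
       loss_derived_convergence_time_s: Optional[float] = None
       restoration_convergence_time_s: Optional[float] = None

   report = IgpConvergenceReport(
       igp="OSPF", interface_type="POS", packet_size_bytes=64,
       igp_routes_advertised=10000, packet_sampling_interval_s=0.1,
       timers_s={"spf_delay": 0.0, "igp_hello": 1.0, "igp_dead": 3.0},
       rate_derived_convergence_time_s=0.4,
       restoration_convergence_time_s=0.3,
   )
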
294                IGP Data Plane Route Convergence

296 4. Test Cases
297 4.1 Convergence Due to Link Failure
298 4.1.1 Convergence Due to Local Interface Failure
299 Objective
300 To obtain the IGP Route Convergence due to a local link failure event
301 at the DUT's Local Interface.

303 Procedure
304    1. Advertise matching IGP routes from Tester to DUT on
305       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
306       [Po07t] using the topology shown in Figure 1. Set the cost of
307       the routes so that the Preferred Egress Interface is the
308       preferred next-hop.
309    2. Send offered load at measured Throughput with fixed packet
310       size to destinations matching all IGP routes from Tester to
311       DUT on Ingress Interface [Po07t].
312    3. Verify traffic routed over Preferred Egress Interface.
313    4. Remove Preferred Egress link on DUT's Local Interface [Po07t] by
314       performing an administrative shutdown of the interface.
315    5. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
316       link down event and converges all IGP routes and traffic over
317       the Next-Best Egress Interface.
318    6. Stop offered load. Wait 30 seconds for queues to drain.
319       Restart Offered Load.
320    7. Restore Preferred Egress link on DUT's Local Interface by
321       administratively enabling the interface.
322    8. Measure Restoration Convergence Time [Po07t] as DUT detects the
323       link up event and converges all IGP routes and traffic back
324       to the Preferred Egress Interface.

326 Results
327 The measured IGP Convergence time is influenced by the Local
328 link failure indication, SPF delay, SPF Hold time, SPF Execution
329 Time, Tree Build Time, and Hardware Update Time [Po07a].
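
The procedure above is the template that the remaining test cases
follow; only the Convergence Event in step 4 and its restoration in
step 7 change. As a non-normative illustration, the sketch below
factors that shared flow into a single routine; the tester control
methods are hypothetical placeholders for real test-equipment APIs,
and the 30-second drain time is taken from step 6.

   # Non-normative sketch of the flow shared by the Section 4 test
   # cases. trigger_event/restore_event are callables implementing
   # step 4 and step 7 of a specific test case (e.g. administrative
   # shutdown and re-enable of the DUT's Preferred Egress Interface).
   import time

   def run_convergence_case(tester, trigger_event, restore_event,
                            drain_seconds=30):
       # Steps 1-2 (route advertisement and offered load) are assumed
       # to have been performed already; step 3 verifies the baseline.
       tester.verify_traffic_on("preferred_egress")

       trigger_event()                                        # step 4
       rate_derived = tester.measure_rate_derived_convergence_time()  # 5

       tester.stop_offered_load()                             # step 6
       time.sleep(drain_seconds)
       tester.start_offered_load()

       restore_event()                                        # step 7
       restoration = tester.measure_restoration_convergence_time()    # 8
       return rate_derived, restoration
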
331 4.1.2 Convergence Due to Neighbor Interface Failure
332 Objective
333 To obtain the IGP Route Convergence due to a local link
334 failure event at the Tester's Neighbor Interface.

336 Procedure
337    1. Advertise matching IGP routes from Tester to DUT on
338       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
339       [Po07t] using the topology shown in Figure 1. Set the cost of
340       the routes so that the Preferred Egress Interface is the
341       preferred next-hop.
342    2. Send offered load at measured Throughput with fixed packet
343       size to destinations matching all IGP routes from Tester to
344       DUT on Ingress Interface [Po07t].

346                IGP Data Plane Route Convergence

348    3. Verify traffic routed over Preferred Egress Interface.
349    4. Remove link on Tester's Neighbor Interface [Po07t] connected to
350       DUT's Preferred Egress Interface.
351    5. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
352       link down event and converges all IGP routes and traffic over
353       the Next-Best Egress Interface.
354    6. Stop offered load. Wait 30 seconds for queues to drain.
355       Restart Offered Load.
356    7. Restore link on Tester's Neighbor Interface connected to
357       DUT's Preferred Egress Interface.
358    8. Measure Restoration Convergence Time [Po07t] as DUT detects the
359       link up event and converges all IGP routes and traffic back
360       to the Preferred Egress Interface.

362 Results
363 The measured IGP Convergence time is influenced by the Local
364 link failure indication, SPF delay, SPF Hold time, SPF Execution
365 Time, Tree Build Time, and Hardware Update Time [Po07a].

367 4.1.3 Convergence Due to Remote Interface Failure
368 Objective
369 To obtain the IGP Route Convergence due to a Remote Interface
370 Failure event.

372 Procedure
373    1. Advertise matching IGP routes from Tester to SUT on
374       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
375       [Po07t] using the topology shown in Figure 2. Set the cost of
376       the routes so that the Preferred Egress Interface is the
377       preferred next-hop.
378    2. Send offered load at measured Throughput with fixed packet
379       size to destinations matching all IGP routes from Tester to
380       SUT on Ingress Interface [Po07t].
381    3. Verify traffic is routed over Preferred Egress Interface.
382    4. Remove link on Tester's Neighbor Interface [Po07t] connected to
383       SUT's Preferred Egress Interface.
384    5. Measure Rate-Derived Convergence Time [Po07t] as SUT detects
385       the link down event and converges all IGP routes and traffic
386       over the Next-Best Egress Interface.
387    6. Stop offered load. Wait 30 seconds for queues to drain.
388       Restart Offered Load.
389    7. Restore link on Tester's Neighbor Interface connected to
390       SUT's Preferred Egress Interface.
391    8. Measure Restoration Convergence Time [Po07t] as SUT detects the
392       link up event and converges all IGP routes and traffic back
393       to the Preferred Egress Interface.

395                IGP Data Plane Route Convergence

397 Results
398 The measured IGP Convergence time is influenced by the
399 link failure indication, LSA/LSP Flood Packet Pacing,
400 LSA/LSP Retransmission Packet Pacing, LSA/LSP Generation
401 time, SPF delay, SPF Hold time, SPF Execution Time, Tree
402 Build Time, and Hardware Update Time [Po07a]. The additional
403 convergence time contributed by LSP Propagation can be
404 obtained by subtracting the Rate-Derived Convergence Time
405 measured in 4.1.2 (Convergence Due to Neighbor Interface
406 Failure) from the Rate-Derived Convergence Time measured in
407 this test case.
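
The subtraction described in the Results above can be written out
explicitly. The two input values in this short, non-normative sketch
are invented solely to show the arithmetic; only the difference between
the two measured Rate-Derived Convergence Times is meaningful.

   # Non-normative illustration of isolating the LSA/LSP propagation
   # contribution from the benchmarks measured in 4.1.2 and 4.1.3.
   rdct_neighbor_failure = 0.85   # seconds, from test case 4.1.2
   rdct_remote_failure = 1.10     # seconds, from test case 4.1.3

   lsp_propagation = rdct_remote_failure - rdct_neighbor_failure
   print(f"LSP propagation contribution: {lsp_propagation:.2f} s")  # 0.25 s
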
409 4.2 Convergence Due to Layer 2 Session Failure
410 Objective
411 To obtain the IGP Route Convergence due to a Local Layer 2
412 Session failure event.

414 Procedure
415    1. Advertise matching IGP routes from Tester to DUT on
416       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
417       [Po07t] using the topology shown in Figure 1. Set the cost of
418       the routes so that the Preferred Egress Interface is the
419       preferred next-hop.
420    2. Send offered load at measured Throughput with fixed packet
421       size to destinations matching all IGP routes from Tester to
422       DUT on Ingress Interface [Po07t].
423    3. Verify traffic routed over Preferred Egress Interface.
424    4. Remove Layer 2 session from Tester's Neighbor Interface [Po07t]
425       connected to Preferred Egress Interface.
426    5. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
427       Layer 2 session down event and converges all IGP routes and
428       traffic over the Next-Best Egress Interface.
429    6. Restore Layer 2 session on DUT's Preferred Egress Interface.
430    7. Measure Restoration Convergence Time [Po07t] as DUT detects the
431       session up event and converges all IGP routes and traffic
432       over the Preferred Egress Interface.

434 Results
435 The measured IGP Convergence time is influenced by the Layer 2
436 failure indication, SPF delay, SPF Hold time, SPF Execution
437 Time, Tree Build Time, and Hardware Update Time [Po07a].

439                IGP Data Plane Route Convergence

441 4.3 Convergence Due to IGP Adjacency Failure

443 Objective
444 To obtain the IGP Route Convergence due to a Local IGP Adjacency
445 failure event.

447 Procedure
448    1. Advertise matching IGP routes from Tester to DUT on
449       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
450       [Po07t] using the topology shown in Figure 1. Set the cost of
451       the routes so that the Preferred Egress Interface is the
452       preferred next-hop.
453    2. Send offered load at measured Throughput with fixed packet
454       size to destinations matching all IGP routes from Tester to
455       DUT on Ingress Interface [Po07t].
456    3. Verify traffic routed over Preferred Egress Interface.
457    4. Remove IGP adjacency from Tester's Neighbor Interface [Po07t]
458       connected to Preferred Egress Interface.
459    5. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
460       IGP session failure event and converges all IGP routes and
461       traffic over the Next-Best Egress Interface.
462    6. Stop offered load. Wait 30 seconds for queues to drain.
463       Restart Offered Load.
464    7. Restore IGP session on DUT's Preferred Egress Interface.
465    8. Measure Restoration Convergence Time [Po07t] as DUT detects the
466       session up event and converges all IGP routes and traffic
467       over the Preferred Egress Interface.

469 Results
470 The measured IGP Convergence time is influenced by the IGP Hello
471 Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF
472 Execution Time, Tree Build Time, and Hardware Update Time [Po07a].

474 4.4 Convergence Due to Route Withdrawal

476 Objective
477 To obtain the IGP Route Convergence due to Route Withdrawal.

479 Procedure
480    1. Advertise matching IGP routes from Tester to DUT on
481       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
482       [Po07t] using the topology shown in Figure 1. Set the cost of
483       the routes so that the Preferred Egress Interface is the
484       preferred next-hop.
485    2. Send offered load at measured Throughput with fixed packet
486       size to destinations matching all IGP routes from Tester to
487       DUT on Ingress Interface [Po07t].
488    3. Verify traffic routed over Preferred Egress Interface.
489    4. Tester withdraws all IGP routes from DUT's Local Interface
490       on Preferred Egress Interface.

492                IGP Data Plane Route Convergence

494    5. Measure Rate-Derived Convergence Time [Po07t] as DUT withdraws
495       routes and converges all IGP routes and traffic over the
496       Next-Best Egress Interface.
497    6. Stop offered load. Wait 30 seconds for queues to drain.
498       Restart Offered Load.
499    7. Re-advertise IGP routes to DUT's Preferred Egress Interface.
500    8. Measure Restoration Convergence Time [Po07t] as DUT converges all
501       IGP routes and traffic over the Preferred Egress Interface.

503 Results
504 The measured IGP Convergence time is the SPF Processing and FIB
505 Update time as influenced by the SPF delay, SPF Hold time, SPF
506 Execution Time, Tree Build Time, and Hardware Update Time [Po07a].

508 4.5 Convergence Due to Cost Change
509 Objective
510 To obtain the IGP Route Convergence due to route cost change.

512 Procedure
513    1. Advertise matching IGP routes from Tester to DUT on
514       Preferred Egress Interface [Po07t] and Next-Best Egress Interface
515       [Po07t] using the topology shown in Figure 1. Set the cost of
516       the routes so that the Preferred Egress Interface is the
517       preferred next-hop.
518    2. Send offered load at measured Throughput with fixed packet
519       size to destinations matching all IGP routes from Tester to
520       DUT on Ingress Interface [Po07t].
521    3. Verify traffic routed over Preferred Egress Interface.
522    4. Tester increases cost for all IGP routes at DUT's Preferred
523       Egress Interface so that the Next-Best Egress Interface
524       has lower cost and becomes preferred path.
525    5. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
526       cost change event and converges all IGP routes and traffic
527       over the Next-Best Egress Interface.
528    6. Stop offered load. Wait 30 seconds for queues to drain.
529       Restart Offered Load.
530    7. Re-advertise IGP routes to DUT's Preferred Egress Interface
531       with original lower cost metric.
532    8. Measure Restoration Convergence Time [Po07t] as DUT converges all
533       IGP routes and traffic over the Preferred Egress Interface.

535 Results
536 There should be no externally observable IGP Route Convergence
537 and no measured packet loss for this case.

539 4.6 Convergence Due to ECMP Member Interface Failure
540 Objective
541 To obtain the IGP Route Convergence due to a local link failure event
542 of an ECMP Member.

544                IGP Data Plane Route Convergence

546 Procedure
547    1. Configure ECMP Set as shown in Figure 3.
548    2. Advertise matching IGP routes from Tester to DUT on
549       each ECMP member.
550    3. Send offered load at measured Throughput with fixed packet
551       size to destinations matching all IGP routes from Tester to
552       DUT on Ingress Interface [Po07t].
553    4. Verify traffic routed over all members of ECMP Set.
554    5. Remove link on Tester's Neighbor Interface [Po07t] connected to
555       one of the DUT's ECMP member interfaces.
556    6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
557       link down event and converges all IGP routes and traffic
558       over the other ECMP members.
559    7. Stop offered load. Wait 30 seconds for queues to drain.
560       Restart Offered Load.
561    8. Restore link on Tester's Neighbor Interface connected to
562       DUT's ECMP member interface.
563    9. Measure Restoration Convergence Time [Po07t] as DUT detects the
564       link up event and converges IGP routes and some distribution
565       of traffic over the restored ECMP member.

567 Results
568 The measured IGP Convergence time is influenced by Local link
569 failure indication, Tree Build Time, and Hardware Update Time
570 [Po07a].
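
For the ECMP case above, the Tester must see traffic on every emulated
member before the Convergence Event and then observe the failed
member's share move to the surviving members. The following
non-normative sketch shows one way such per-member accounting might
look; the member names, rates, and tolerance are illustrative
assumptions only.

   # Non-normative sketch: per-member forwarding-rate accounting for
   # the ECMP test case (step 4 verification and convergence check).

   def verify_ecmp_distribution(rates_by_member, offered_rate,
                                tolerance=0.2):
       """rates_by_member: {member name: measured rate in packets/s}.
       True when every member carries roughly offered_rate / N."""
       expected = offered_rate / len(rates_by_member)
       return all(abs(rate - expected) <= tolerance * expected
                  for rate in rates_by_member.values())

   # Example: 100,000 packets/s spread over four ECMP members.
   before = {"ecmp1": 25100, "ecmp2": 24800, "ecmp3": 25050,
             "ecmp4": 25050}
   print(verify_ecmp_distribution(before, 100000))       # True

   # After the link to ecmp3 is removed, Full Convergence is reached
   # once the aggregate received rate returns to the offered rate.
   after = {"ecmp1": 33400, "ecmp2": 33300, "ecmp3": 0, "ecmp4": 33300}
   print(sum(after.values()) >= 100000)                  # True
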
572 4.7 Convergence Due to Parallel Link Interface Failure
573 Objective
574 To obtain the IGP Route Convergence due to a local link failure
575 event for a Member of a Parallel Link. The links can be used
576 for data Load Balancing.

578 Procedure
579    1. Configure Parallel Link as shown in Figure 4.
580    2. Advertise matching IGP routes from Tester to DUT on
581       each Parallel Link member.
582    3. Send offered load at measured Throughput with fixed packet
583       size to destinations matching all IGP routes from Tester to
584       DUT on Ingress Interface [Po07t].
585    4. Verify traffic routed over all members of Parallel Link.
586    5. Remove link on Tester's Neighbor Interface [Po07t] connected to
587       one of the DUT's Parallel Link member interfaces.
588    6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
589       link down event and converges all IGP routes and traffic over
590       the other Parallel Link members.
591    7. Stop offered load. Wait 30 seconds for queues to drain.
592       Restart Offered Load.
593    8. Restore link on Tester's Neighbor Interface connected to
594       DUT's Parallel Link member interface.

596                IGP Data Plane Route Convergence

598    9. Measure Restoration Convergence Time [Po07t] as DUT detects the
599       link up event and converges IGP routes and some distribution
600       of traffic over the restored Parallel Link member.

602 Results
603 The measured IGP Convergence time is influenced by the Local
604 link failure indication, Tree Build Time, and Hardware Update
605 Time [Po07a].

607 5. IANA Considerations

609 This document requires no IANA considerations.

611 6. Security Considerations
612 Documents of this type do not directly affect the security of
613 the Internet or corporate networks as long as benchmarking
614 is not performed on devices or systems connected to operating
615 networks.

617 7. Acknowledgements
618 Thanks to Sue Hares, Al Morton, Kevin Dubray, and participants of
619 the BMWG for their contributions to this work.

621 8. References
622 8.1 Normative References

624 [Br91] Bradner, S., "Benchmarking Terminology for Network
625        Interconnection Devices", RFC 1242, IETF, March 1991.

627 [Br97] Bradner, S., "Key words for use in RFCs to Indicate
628        Requirement Levels", RFC 2119, IETF, March 1997.

630 [Br99] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
631        Network Interconnect Devices", RFC 2544, IETF, March 1999.

633 [Ca90] Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and Dual
634        Environments", RFC 1195, IETF, December 1990.

636 [Ma98] Mandeville, R., "Benchmarking Terminology for LAN
637        Switching Devices", RFC 2285, IETF, February 1998.

639 [Mo98] Moy, J., "OSPF Version 2", RFC 2328, IETF, April 1998.

641 [Po07a] Poretsky, S., "Considerations for Benchmarking IGP
642         Convergence", draft-ietf-bmwg-igp-dataplane-conv-app-12,
643         work in progress, February 2007.

645 [Po07t] Poretsky, S. and Imhoff, B., "Benchmarking Terminology for
646         IGP Convergence", draft-ietf-bmwg-igp-dataplane-conv-term-12,
647         work in progress, February 2007.

649 8.2 Informative References
650 None

651                IGP Data Plane Route Convergence

653 9. Author's Address

655 Scott Poretsky
656 Reef Point Systems
657 8 New England Executive Park
658 Burlington, MA 01803
659 USA
660 Phone: + 1 508 439 9008
661 EMail: sporetsky@reefpoint.com

663 Brent Imhoff
664 Juniper Networks
665 1194 North Mathilda Ave
666 Sunnyvale, CA 94089
667 USA
668 Phone: + 1 314 378 2571
669 EMail: bimhoff@planetspork.com

671 Full Copyright Statement

673 Copyright (C) The IETF Trust (2007).

675 This document is subject to the rights, licenses and restrictions
676 contained in BCP 78, and except as set forth therein, the authors
677 retain all their rights.
679 This document and the information contained herein are provided 680 on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE 681 REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE 682 IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL 683 WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY 684 WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE 685 ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS 686 FOR A PARTICULAR PURPOSE. 688 Intellectual Property 690 The IETF takes no position regarding the validity or scope of any 691 Intellectual Property Rights or other rights that might be claimed to 692 pertain to the implementation or use of the technology described in 693 this document or the extent to which any license under such rights 694 might or might not be available; nor does it represent that it has 695 made any independent effort to identify any such rights. Information 696 on the procedures with respect to rights in RFC documents can be 697 found in BCP 78 and BCP 79. 699 Copies of IPR disclosures made to the IETF Secretariat and any 700 assurances of licenses to be made available, or the result of an 701 attempt made to obtain a general license or permission for the use of 702 such proprietary rights by implementers or users of this 703 specification can be obtained from the IETF on-line IPR repository at 704 http://www.ietf.org/ipr. 706 IGP Data Plane Route Convergence 708 The IETF invites any interested party to bring to its attention any 709 copyrights, patents or patent applications, or other proprietary 710 rights that may cover technology that may be required to implement 711 this standard. Please address the information to the IETF at ietf- 712 ipr@ietf.org. 714 Acknowledgement 715 Funding for the RFC Editor function is currently provided by the 716 Internet Society.