Network Working Group                                        S. Poretsky
Internet Draft                                      Allot Communications
Expires: April 2009
Intended Status: Informational                              Brent Imhoff
                                                        Juniper Networks

                                                        October 15, 2008

                      Benchmarking Methodology for
              Link-State IGP Data Plane Route Convergence

Intellectual Property Rights (IPR) statement:
   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice
   Copyright (C) The IETF Trust (2008).

ABSTRACT
   This document describes the methodology for benchmarking Interior
   Gateway Protocol (IGP) Route Convergence.  The methodology is to be
   used for benchmarking IGP convergence time through externally
   observable (black box) data plane measurements.  The methodology can
   be applied to any link-state IGP, such as ISIS and OSPF.

Table of Contents
   1. Introduction
   2. Existing Definitions
   3. Test Setup
   3.1 Test Topologies
   3.2 Test Considerations
   3.3 Reporting Format
   4. Test Cases
   4.1 Convergence Due to Local Interface Failure
   4.2 Convergence Due to Remote Interface Failure
   4.3 Convergence Due to Local Administrative Shutdown
   4.4 Convergence Due to Layer 2 Session Loss
   4.5 Convergence Due to Loss of IGP Adjacency
   4.6 Convergence Due to Route Withdrawal
   4.7 Convergence Due to Cost Change
   4.8 Convergence Due to ECMP Member Interface Failure
   4.9 Convergence Due to ECMP Member Remote Interface Failure
   4.10 Convergence Due to Parallel Link Interface Failure
   5. IANA Considerations
   6. Security Considerations
   7. Acknowledgements
   8. References
   9. Authors' Addresses

1. Introduction
   This document describes the methodology for benchmarking Interior
   Gateway Protocol (IGP) Route Convergence.  The applicability of this
   testing is described in [Po07a], and the new terminology that it
   introduces is defined in [Po07t].  Service Providers use IGP
   Convergence time as a key metric of router design and architecture.
   Customers of Service Providers observe convergence time as packet
   loss, so IGP Route Convergence is considered a Direct Measure of
   Quality (DMOQ).  The test cases in this document are black-box tests
   that emulate the network events that cause route convergence, as
   described in [Po07a].  The black-box test designs benchmark the data
   plane and account for all of the factors contributing to convergence
   time, as discussed in [Po07a].  The methodology (and terminology)
   for benchmarking route convergence can be applied to any link-state
   IGP, such as ISIS [Ca90] and OSPF [Mo98].  These methodologies apply
   to IPv4 and IPv6 traffic and IGPs.
2. Existing Definitions
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [Br97].  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.  While
   this document uses these keywords, this document is not a standards
   track document.

   This document adopts the definition format in Section 2 of RFC 1242
   [Br91].  This document uses much of the terminology defined in
   [Po07t] and uses existing terminology defined in other BMWG work.
   Examples include, but are not limited to:

   Throughput                [Ref.[Br91], Section 3.17]
   Device Under Test (DUT)   [Ref.[Ma98], Section 3.1.1]
   System Under Test (SUT)   [Ref.[Ma98], Section 3.1.2]
   Out-of-order Packet       [Ref.[Po06], Section 3.3.2]
   Duplicate Packet          [Ref.[Po06], Section 3.3.3]
   Packet Loss               [Ref.[Po07t], Section 3.5]

3. Test Setup

3.1 Test Topologies

   Figure 1 shows the test topology to measure IGP Route Convergence
   due to local Convergence Events such as Link Failure, Layer 2
   Session Failure, IGP Adjacency Failure, Route Withdrawal, and route
   cost change.  The test cases discussed in Section 4 provide route
   convergence times that include the Event Detection time, SPF
   Processing time, and FIB Update time.  These times are measured by
   observing packet loss in the data plane at the Tester.

   Figure 2 shows the test topology to measure IGP Route Convergence
   time due to remote changes in the network topology.  These times
   are measured by observing packet loss in the data plane at the
   Tester.  In this topology the three routers are considered a System
   Under Test (SUT).  A Remote Interface [Po07t] failure on router R2
   MUST result in convergence of traffic to router R3.  NOTE: All
   routers in the SUT must be the same model and identically
   configured.

   Figure 3 shows the test topology to measure IGP Route Convergence
   time with members of an Equal Cost Multipath (ECMP) Set.  These
   times are measured by observing packet loss in the data plane at
   the Tester.  In this topology, the DUT is configured with each
   Egress interface as a member of an ECMP set, and the Tester
   emulates multiple next-hop routers (one emulated router for each
   member).

   ---------      Ingress Interface       ---------
   |       |<--------------------------------|       |
   |       |                                 |       |
   |       |   Preferred Egress Interface    |       |
   |  DUT  |-------------------------------->| Tester|
   |       |                                 |       |
   |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
   |       |   Next-Best Egress Interface    |       |
   ---------                                 ---------

       Figure 1.  Test Topology 1: IGP Convergence Test Topology
                  for Local Changes

                -----                     ---------
                |   |  Preferred          |       |
     -----      |R2 |-------------------->|       |
     |   |----->|   |  Egress Interface   |       |
     |   |      -----                     |       |
     |R1 |                                |Tester |
     |   |      -----                     |       |
     |   |----->|   |  Next-Best          |       |
     -----      |R3 |~~~~~~~~~~~~~~~~~~~~>|       |
       ^        |   |  Egress Interface   |       |
       |        -----                     ---------
       |                                      |
       ----------------------------------------
                 Ingress Interface

       Figure 2.  Test Topology 2: IGP Convergence Test Topology
                  for Convergence Due to Remote Changes
   ---------      Ingress Interface       ---------
   |       |<--------------------------------|       |
   |       |                                 |       |
   |       |      ECMP Set Interface 1       |       |
   |  DUT  |-------------------------------->| Tester|
   |       |                .                |       |
   |       |                .                |       |
   |       |                .                |       |
   |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
   |       |      ECMP Set Interface N       |       |
   ---------                                 ---------

       Figure 3.  Test Topology 3: IGP Convergence Test Topology
                  for ECMP Convergence

   ---------      Ingress Interface       ---------
   |       |<--------------------------------|       |
   |       |                                 |       |
   |       |    Parallel Link Interface 1    |       |
   |  DUT  |-------------------------------->| Tester|
   |       |                .                |       |
   |       |                .                |       |
   |       |                .                |       |
   |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
   |       |    Parallel Link Interface N    |       |
   ---------                                 ---------

       Figure 4.  Test Topology 4: IGP Convergence Test Topology
                  for Parallel Link Convergence

   Figure 4 shows the test topology to measure IGP Route Convergence
   time with members of a Parallel Link.  These times are measured by
   observing packet loss in the data plane at the Tester.  In this
   topology, the DUT is configured with each Egress interface as a
   member of a Parallel Link, and the Tester emulates the single
   next-hop router.

3.2 Test Considerations

3.2.1 IGP Selection
   The test cases described in Section 4 MAY be used for link-state
   IGPs such as ISIS or OSPF.  The Route Convergence test methodology
   is identical for either protocol.  The IGP adjacencies are
   established on the Preferred Egress Interface and Next-Best Egress
   Interface.

3.2.2 Routing Protocol Configuration
   The obtained results for IGP Route Convergence may vary if other
   routing protocols are enabled and routes learned via those
   protocols are installed.  IGP convergence times MUST be benchmarked
   without routes installed from other protocols.

3.2.3 IGP Route Scaling
   The number of IGP routes will impact the measured IGP Route
   Convergence.  To obtain results similar to those that would be
   observed in an operational network, it is RECOMMENDED that the
   number of installed routes and nodes closely approximates that of
   the network (e.g. thousands of routes with tens of nodes).  The
   number of areas (for OSPF) and levels (for ISIS) can impact the
   benchmark results.

3.2.4 Timers
   There are some timers that will impact the measured IGP Convergence
   time.  Benchmarking metrics may be measured at any fixed values for
   these timers.  It is RECOMMENDED that the following timers be
   configured to the minimum values listed:

   Timer                               Recommended Value
   -----                               -----------------
   Link Failure Indication Delay       <10 milliseconds
   IGP Hello Timer                     1 second
   IGP Dead-Interval                   3 seconds
   LSA Generation Delay                0
   LSA Flood Packet Pacing             0
   LSA Retransmission Packet Pacing    0
   SPF Delay                           0
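   A test harness can record these recommended values and flag a DUT
   configuration that exceeds them before a benchmark run begins.  The
   following Python sketch is non-normative and purely illustrative:
   the timer names simply mirror the table above, and how configured
   values are retrieved from the DUT (CLI, NETCONF, or otherwise) is
   an assumption left entirely to the test harness.

   # Non-normative sketch: recommended timer minimums from Section
   # 3.2.4, expressed in seconds.  Retrieval of the DUT's configured
   # values is assumed to happen elsewhere in the harness.
   RECOMMENDED_TIMERS = {
       "link_failure_indication_delay": 0.010,   # <10 milliseconds
       "igp_hello_timer": 1.0,
       "igp_dead_interval": 3.0,
       "lsa_generation_delay": 0.0,
       "lsa_flood_packet_pacing": 0.0,
       "lsa_retransmission_packet_pacing": 0.0,
       "spf_delay": 0.0,
   }

   def check_timers(configured):
       """Warn about configured timers above the recommended values."""
       for name, recommended in RECOMMENDED_TIMERS.items():
           value = configured.get(name)
           if value is not None and value > recommended:
               print("WARNING: %s = %ss exceeds recommended %ss"
                     % (name, value, recommended))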
3.2.5 Interface Types
   All test cases in this methodology document may be executed with
   any interface type.  All interfaces MUST be of the same media and
   Throughput [Br91][Br99] for each test case.  The type of media may
   dictate which test cases may be executed, because each interface
   type has a unique mechanism for detecting link failures, and the
   speed at which that mechanism operates will influence the measured
   results.  Media and protocols MUST be configured for minimum
   failure detection delay to minimize the contribution to the
   measured Convergence time.  For example, configure SONET with the
   minimum carrier-loss-delay.  All interfaces SHOULD be configured as
   point-to-point.

3.2.6 Packet Sampling Interval
   The Packet Sampling Interval [Po07t] value is the fastest
   measurable Rate-Derived Convergence Time [Po07t].  The RECOMMENDED
   value for the Packet Sampling Interval is 10 milliseconds.
   Rate-Derived Convergence Time is the preferred benchmark for IGP
   Route Convergence.  This benchmark must always be reported when the
   Packet Sampling Interval is set <= 10 milliseconds on the test
   equipment.  If the test equipment does not permit the Packet
   Sampling Interval to be set as low as 10 milliseconds, then both
   the Rate-Derived Convergence Time and Loss-Derived Convergence Time
   [Po07t] MUST be reported.

3.2.7 Offered Load
   The offered load MUST be the Throughput of the device, as defined
   in [Br91] and benchmarked in [Br99], at a fixed packet size.  At
   least one packet per route, for all routes in the FIB, MUST be
   offered to the DUT within each Packet Sampling Interval.  Packet
   size is measured in bytes and includes the IP header and payload.
   The packet size is selectable and MUST be recorded.  The Forwarding
   Rate [Ma98] MUST be measured at the Preferred Egress Interface and
   the Next-Best Egress Interface.  The duration of offered load MUST
   be greater than the convergence time.  The destination addresses
   for the offered load MUST be distributed such that all routes are
   matched and each route is offered an equal share of the total
   Offered Load.  This requirement to distribute the Offered Load
   across all destinations in the route table creates a separate flow
   for each route offered to the DUT.  The capability of the Tester to
   measure packet loss for each individual flow (identified by the
   destination address matching a route entry), and the number of
   individual flows for which it can measure packet loss, should be
   considered when benchmarking Route-Specific Convergence [Po07t].
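   The per-route packet requirement above couples the offered load,
   the route scale, and the Packet Sampling Interval.  The following
   non-normative Python sketch makes the arithmetic explicit; the
   function name and its use are illustrative only:

   # Non-normative check of the Section 3.2.7 requirement: with the
   # offered load split equally across routes, each flow must receive
   # at least one packet within every Packet Sampling Interval.
   def load_is_sufficient(throughput_pps, num_routes,
                          sampling_interval_s):
       per_flow_pps = throughput_pps / num_routes
       return per_flow_pps * sampling_interval_s >= 1.0

   # Example: 10,000 routes with the RECOMMENDED 10 ms interval need
   # an aggregate rate of at least 1,000,000 packets per second.
   assert load_is_sufficient(1_000_000, 10_000, 0.010)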
3.2.8 Selection of Convergence Time Benchmark Metrics
   The methodologies in the Section 4 test cases MAY be applied to
   benchmark Full Convergence and Route-Specific Convergence with the
   benchmarking metrics First Route Convergence Time, Loss-Derived
   Convergence Time, Rate-Derived Convergence Time, Reversion
   Convergence Time, and Route-Specific Convergence Times [Po07t].

   When benchmarking Full Convergence, the Rate-Derived Convergence
   Time benchmarking metric MAY be measured.  When benchmarking
   Route-Specific Convergence, the Route-Specific Convergence Time
   benchmarking metric MUST be measured, and Full Convergence MAY be
   obtained as max(Route-Specific Convergence Time).  The First Route
   Convergence Time benchmarking metric MAY be measured when
   benchmarking either Full Convergence or Route-Specific Convergence.

3.2.9 Tester Capabilities
   It is RECOMMENDED that the Tester used to execute each test case
   have the following capabilities:
   1. Ability to insert a timestamp in each data packet's IP payload.
   2. An internal time clock to control timestamping, time
      measurements, and time calculations.
   3. Ability to distinguish traffic load received on the Preferred
      and Next-Best interfaces.
   4. Ability to disable or tune specific Layer-2 and Layer-3 protocol
      functions on any interface(s).

3.3 Reporting Format
   For each test case, it is recommended that the reporting table
   below be completed, and all time values SHOULD be reported with the
   resolution specified in [Po07t].

   Parameter                                Units
   ---------                                -----
   Test Case                                test case number
   Test Topology                            (1, 2, 3, or 4)
   IGP                                      (ISIS, OSPF, other)
   Interface Type                           (GigE, POS, ATM, other)
   Packet Size offered to DUT               bytes
   IGP Routes advertised to DUT             number of IGP routes
   Nodes in emulated network                number of nodes
   Packet Sampling Interval on Tester       milliseconds
   IGP Timer Values configured on DUT:
     Interface Failure Indication Delay     seconds
     IGP Hello Timer                        seconds
     IGP Dead-Interval                      seconds
     LSA Generation Delay                   seconds
     LSA Flood Packet Pacing                seconds
     LSA Retransmission Packet Pacing       seconds
     SPF Delay                              seconds
   Forwarding Metrics
     Total Packets Offered to DUT           number of packets
     Total Packets Routed by DUT            number of packets
     Convergence Packet Loss                number of packets
     Out-of-Order Packets                   number of packets
     Duplicate Packets                      number of packets
   Convergence Benchmarks
     Full Convergence
       First Route Convergence Time         seconds
       Rate-Derived Convergence Time        seconds
       Loss-Derived Convergence Time        seconds
     Route-Specific Convergence
       Number of Routes Measured            number of flows
       Route-Specific Convergence Time[n]   array of seconds
       Minimum R-S Convergence Time         seconds
       Maximum R-S Convergence Time         seconds
       Median R-S Convergence Time          seconds
       Average R-S Convergence Time         seconds
   Reversion
     Reversion Convergence Time             seconds
     First Route Convergence Time           seconds
     Route-Specific Convergence
       Number of Routes Measured            number of flows
       Route-Specific Convergence Time[n]   array of seconds
       Minimum R-S Convergence Time         seconds
       Maximum R-S Convergence Time         seconds
       Median R-S Convergence Time          seconds
       Average R-S Convergence Time         seconds

4. Test Cases
   It is RECOMMENDED that all applicable test cases be executed for
   best characterization of the DUT.  The test cases follow a generic
   procedure tailored to the specific DUT configuration and
   Convergence Event [Po07t].  This generic procedure is as follows:

   1. Establish DUT configuration and install routes.
   2. Send offered load with traffic traversing the Preferred Egress
      Interface [Po07t].
   3. Introduce Convergence Event to force traffic to the Next-Best
      Egress Interface [Po07t].
   4. Measure First Route Convergence Time.
   5. Measure Full Convergence from the Loss-Derived Convergence Time,
      Rate-Derived Convergence Time, and optionally the Route-Specific
      Convergence Times.
   6. Wait the Sustained Convergence Validation Time to ensure there
      is no residual packet loss.
   7. Recover from the Convergence Event.
   8. Measure Reversion Convergence Time, and optionally the First
      Route Convergence Time and Route-Specific Convergence Times.
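   For automated execution, the eight steps above map naturally onto a
   per-test-case driver.  The Python skeleton below is a non-normative
   sketch only: the tester, dut, and convergence_event objects and
   every one of their method names are hypothetical stand-ins for
   whatever automation interface a given Tester actually provides.

   # Non-normative sketch of the generic procedure; all tester.*,
   # dut.*, and convergence_event.* names are hypothetical.
   def run_test_case(tester, dut, convergence_event):
       tester.install_routes(dut)                        # step 1
       tester.start_offered_load(dut.ingress_interface)  # step 2
       convergence_event.trigger()                       # step 3
       first = tester.first_route_convergence_time()     # step 4
       full = tester.full_convergence_times()            # step 5
       tester.wait_sustained_convergence_validation()    # step 6
       convergence_event.recover()                       # step 7
       reversion = tester.reversion_convergence_time()   # step 8
       return first, full, reversion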
4.1 Convergence Due to Local Interface Failure
   Objective
   To obtain the IGP Route Convergence due to a local link failure
   event at the DUT's Local Interface.

   Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 1.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove link on DUT's Preferred Egress Interface.  This is the
      Convergence Event [Po07t] that produces the Convergence Event
      Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
      link down event and converges all IGP routes and traffic over
      the Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po07t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore link on DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT detects the link up event and
      converges all IGP routes and traffic back to the Preferred
      Egress Interface.

   Results
   The measured IGP Convergence time is influenced by the Local link
   failure indication, SPF delay, SPF Hold time, SPF Execution Time,
   Tree Build Time, and Hardware Update Time [Po07a].
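   Steps 5 and 6 of this and the following test cases derive the
   convergence benchmarks from data plane observations alone.  The
   non-normative sketch below shows one way to post-process the
   Tester's measurements, assuming the Tester exports the forwarding
   rate observed in each Packet Sampling Interval and a total
   convergence packet loss count, and that convergence is the only
   cause of loss:

   # Non-normative sketch of the benchmark arithmetic, approximated
   # from per-interval forwarding-rate samples.
   def loss_derived_convergence_time(packets_lost, offered_rate_pps):
       """Loss-Derived Convergence Time [Po07t]: total convergence
       packet loss divided by the offered rate."""
       return packets_lost / offered_rate_pps

   def rate_derived_convergence_time(rate_samples, offered_rate_pps,
                                     sampling_interval_s):
       """Rate-Derived Convergence Time [Po07t], approximated as the
       span of sampling intervals whose measured forwarding rate is
       below the offered rate."""
       below = [i for i, rate in enumerate(rate_samples)
                if rate < offered_rate_pps]
       if not below:
           return 0.0
       return (below[-1] - below[0] + 1) * sampling_interval_s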
4.2 Convergence Due to Remote Interface Failure

   Objective
   To obtain the IGP Route Convergence due to a Remote Interface
   Failure event.

   Procedure
   1. Advertise matching IGP routes from Tester to SUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 2.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to SUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove link on Tester's Neighbor Interface [Po07t] connected to
      SUT's Preferred Egress Interface.  This is the Convergence Event
      [Po07t] that produces the Convergence Event Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as SUT detects the
      link down event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as SUT detects the
      link down event and converges all IGP routes and traffic over
      the Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po07t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore link on Tester's Neighbor Interface connected to SUT's
      Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as SUT detects the link up event and
      converges all IGP routes and traffic back to the Preferred
      Egress Interface.

   Results
   The measured IGP Convergence time is influenced by the link failure
   indication, LSA/LSP Flood Packet Pacing, LSA/LSP Retransmission
   Packet Pacing, LSA/LSP Generation time, SPF delay, SPF Hold time,
   SPF Execution Time, Tree Build Time, and Hardware Update Time
   [Po07a].  This test case may produce Stale Forwarding [Po07t] due
   to microloops, which may increase the measured convergence times.

4.3 Convergence Due to Local Administrative Shutdown
   Objective
   To obtain the IGP Route Convergence due to an administrative
   shutdown at the DUT's Local Interface.

   Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 1.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Perform administrative shutdown on the DUT's Preferred Egress
      Interface.  This is the Convergence Event [Po07t] that produces
      the Convergence Event Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT converges
      all IGP routes and traffic over the Next-Best Egress Interface.
      Optionally, Route-Specific Convergence Times [Po07t] MAY be
      measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore Preferred Egress Interface by administratively enabling
      the interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT detects the link up event and
      converges all IGP routes and traffic back to the Preferred
      Egress Interface.

   Results
   The measured IGP Convergence time is influenced by SPF delay, SPF
   Hold time, SPF Execution Time, Tree Build Time, and Hardware Update
   Time [Po07a].
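   Several test cases optionally measure Route-Specific Convergence
   Times, and the Section 3.3 reporting table asks for their minimum,
   maximum, median, and average.  The following non-normative sketch
   shows that reduction, under the stated assumptions that the Tester
   reports convergence packet loss per flow (one flow per route) and
   that each flow is offered an equal rate:

   # Non-normative sketch: per-route convergence times derived from
   # per-flow convergence packet loss, plus the Section 3.3 summary
   # statistics.
   from statistics import mean, median

   def route_specific_times(loss_per_flow, per_flow_rate_pps):
       """One Route-Specific Convergence Time [Po07t] per flow."""
       return [lost / per_flow_rate_pps for lost in loss_per_flow]

   def summarize(times):
       return {
           "Number of Routes Measured": len(times),
           "Minimum R-S Convergence Time": min(times),
           "Maximum R-S Convergence Time": max(times),  # ~Full Conv.
           "Median R-S Convergence Time": median(times),
           "Average R-S Convergence Time": mean(times),
       }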
4.4 Convergence Due to Layer 2 Session Loss
   Objective
   To obtain the IGP Route Convergence due to a local Layer 2 session
   loss.

   Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 1.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Tester removes Layer 2 session from DUT's Preferred Egress
      Interface [Po07t].  It is RECOMMENDED that this be achieved with
      messaging, but the method MAY vary with the Layer 2 protocol.
      This is the Convergence Event [Po07t] that produces the
      Convergence Event Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as DUT detects the
      Layer 2 session down event and begins to converge IGP routes
      and traffic over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
      Layer 2 session down event and converges all IGP routes and
      traffic over the Next-Best Egress Interface.  Optionally,
      Route-Specific Convergence Times [Po07t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore Layer 2 session on DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT detects the session up event
      and converges all IGP routes and traffic over the Preferred
      Egress Interface.

   Results
   The measured IGP Convergence time is influenced by the Layer 2
   failure indication, SPF delay, SPF Hold time, SPF Execution Time,
   Tree Build Time, and Hardware Update Time [Po07a].

4.5 Convergence Due to Loss of IGP Adjacency
   Objective
   To obtain the IGP Route Convergence due to loss of the IGP
   Adjacency.

   Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 1.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove IGP adjacency from Tester's Neighbor Interface [Po07t]
      connected to the Preferred Egress Interface.  The Layer 2
      session MUST be maintained.  This is the Convergence Event
      [Po07t] that produces the Convergence Event Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as DUT detects the
      loss of IGP adjacency and begins to converge IGP routes and
      traffic over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
      IGP session failure event and converges all IGP routes and
      traffic over the Next-Best Egress Interface.  Optionally,
      Route-Specific Convergence Times [Po07t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore IGP session on DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT detects the session recovery
      event and converges all IGP routes and traffic over the
      Preferred Egress Interface.

   Results
   The measured IGP Convergence time is influenced by the IGP Hello
   Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF
   Execution Time, Tree Build Time, and Hardware Update Time [Po07a].
4.6 Convergence Due to Route Withdrawal

   Objective
   To obtain the IGP Route Convergence due to Route Withdrawal.

   Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 1.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.  It is RECOMMENDED that the IGP routes be IGP external
      routes for which the Tester emulates a preferred and a next-best
      Autonomous System Border Router (ASBR).
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Tester withdraws all IGP routes from DUT's Local Interface on
      the Preferred Egress Interface.  The Tester records the time it
      sends the withdrawal message(s); this MAY be achieved with the
      inclusion of a timestamp in the traffic payload.  This is the
      Convergence Event [Po07t] that produces the Convergence Event
      Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as DUT detects the
      route withdrawal event and begins to converge IGP routes and
      traffic over the Next-Best Egress Interface.  This is measured
      from the time that the Tester sent the withdrawal message(s).
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT withdraws
      routes and converges all IGP routes and traffic over the
      Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po07t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Re-advertise IGP routes to DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT converges all IGP routes and
      traffic over the Preferred Egress Interface.

   Results
   The measured IGP Convergence time is the SPF Processing and FIB
   Update time, as influenced by the SPF or route calculation delay,
   Hold time, Execution Time, and Hardware Update Time [Po07a].
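   Because this Convergence Event is signaled by the Tester rather
   than detected at a failed link, the measurement clock starts when
   the withdrawal is sent (steps 4 and 5).  A non-normative sketch of
   that timing follows; every tester.* method name is a hypothetical
   stand-in for a real Tester automation API:

   # Non-normative sketch of Section 4.6, steps 4-5: start the clock
   # when the withdrawal message(s) are sent, stop it at the first
   # packet observed on the Next-Best Egress Interface.
   import time

   def measure_withdrawal_first_route_convergence(tester):
       t_withdraw = time.monotonic()   # Convergence Event Instant
       tester.send_route_withdrawals()
       tester.wait_first_packet_on_next_best_interface()
       # First Route Convergence Time, measured from the withdrawal
       return time.monotonic() - t_withdraw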
4.7 Convergence Due to Cost Change
   Objective
   To obtain the IGP Route Convergence due to route cost change.

   Procedure
   1. Advertise matching IGP routes from Tester to DUT on Preferred
      Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
      using the topology shown in Figure 1.  Set the cost of the
      routes so that the Preferred Egress Interface is the preferred
      next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Tester increases cost for all IGP routes at DUT's Preferred
      Egress Interface so that the Next-Best Egress Interface has
      lower cost and becomes the preferred path.  This is the
      Convergence Event [Po07t] that produces the Convergence Event
      Instant [Po07t].
   5. Measure First Route Convergence Time [Po07t] as DUT detects the
      cost change event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
      cost change event and converges all IGP routes and traffic over
      the Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po07t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Re-advertise IGP routes to DUT's Preferred Egress Interface with
      the original lower cost metric.
   9. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT converges all IGP routes and
      traffic over the Preferred Egress Interface.

   Results
   There should be no measured packet loss for this test case.

4.8 Convergence Due to ECMP Member Interface Failure

   Objective
   To obtain the IGP Route Convergence due to a local link failure
   event of an ECMP Member.

   Procedure
   1. Configure ECMP Set as shown in Figure 3.
   2. Advertise matching IGP routes from Tester to DUT on each ECMP
      member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   4. Verify traffic is routed over all members of ECMP Set.
   5. Remove link on Tester's Neighbor Interface [Po07t] connected to
      one of the DUT's ECMP member interfaces.  This is the
      Convergence Event [Po07t] that produces the Convergence Event
      Instant [Po07t].
   6. Measure First Route Convergence Time [Po07t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the other ECMP members.
   7. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
      link down event and converges all IGP routes and traffic over
      the other ECMP members.  At the same time, measure Out-of-Order
      Packets [Po06] and Duplicate Packets [Po06].  Optionally,
      Route-Specific Convergence Times [Po07t] MAY be measured.
   8. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   9. Restore link on Tester's Neighbor Interface connected to the
      DUT's ECMP member interface.
   10. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT detects the link up event and
      converges IGP routes and some distribution of traffic over the
      restored ECMP member.

   Results
   The measured IGP Convergence time is influenced by the Local link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po07a].
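   Step 7 adds Out-of-Order and Duplicate Packet measurements, since
   traffic redistributes across the remaining ECMP members during
   convergence.  The non-normative sketch below shows one way a
   harness could count both for a single flow, under the stated
   assumption that the Tester stamps each packet of the flow with a
   monotonically increasing sequence number:

   # Non-normative sketch: count Out-of-Order and Duplicate Packets
   # [Po06] for one flow from the sequence numbers in arrival order.
   def count_ooo_and_duplicates(arrived_seq_numbers):
       seen = set()
       out_of_order = duplicates = 0
       highest = -1
       for seq in arrived_seq_numbers:
           if seq in seen:
               duplicates += 1     # already received once
               continue
           seen.add(seq)
           if seq < highest:
               out_of_order += 1   # arrived after a later packet
           else:
               highest = seq
       return out_of_order, duplicates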
4.9 Convergence Due to ECMP Member Remote Interface Failure

   Objective
   To obtain the IGP Route Convergence due to a remote interface
   failure event for an ECMP Member.

   Procedure
   1. Configure ECMP Set as shown in Figure 2, in which the links from
      R1 to R2 and R1 to R3 are members of an ECMP Set.
   2. Advertise matching IGP routes from Tester to SUT to balance
      traffic to each ECMP member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to SUT on
      Ingress Interface [Po07t].
   4. Verify traffic is routed over all members of ECMP Set.
   5. Remove link on Tester's Neighbor Interface to R2 or R3.  This is
      the Convergence Event [Po07t] that produces the Convergence
      Event Instant [Po07t].
   6. Measure First Route Convergence Time [Po07t] as SUT detects the
      link down event and begins to converge IGP routes and traffic
      over the other ECMP members.
   7. Measure Rate-Derived Convergence Time [Po07t] as SUT detects the
      link down event and converges all IGP routes and traffic over
      the other ECMP members.  At the same time, measure Out-of-Order
      Packets [Po06] and Duplicate Packets [Po06].  Optionally,
      Route-Specific Convergence Times [Po07t] MAY be measured.
   8. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   9. Restore link on Tester's Neighbor Interface to R2 or R3.
   10. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as SUT detects the link up event and
      converges IGP routes and some distribution of traffic over the
      restored ECMP member.

   Results
   The measured IGP Convergence time is influenced by the Local link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po07a].

4.10 Convergence Due to Parallel Link Interface Failure

   Objective
   To obtain the IGP Route Convergence due to a local link failure
   event for a member of a Parallel Link.  The links can be used for
   data load balancing.

   Procedure
   1. Configure Parallel Link as shown in Figure 4.
   2. Advertise matching IGP routes from Tester to DUT on each
      Parallel Link member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po07t].
   4. Verify traffic is routed over all members of Parallel Link.
   5. Remove link on Tester's Neighbor Interface [Po07t] connected to
      one of the DUT's Parallel Link member interfaces.  This is the
      Convergence Event [Po07t] that produces the Convergence Event
      Instant [Po07t].
   6. Measure First Route Convergence Time [Po07t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the other Parallel Link members.
   7. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
      link down event and converges all IGP routes and traffic over
      the other Parallel Link members.  At the same time, measure
      Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
      Optionally, Route-Specific Convergence Times [Po07t] MAY be
      measured.
   8. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   9. Restore link on Tester's Neighbor Interface connected to the
      DUT's Parallel Link member interface.
   10. Measure Reversion Convergence Time [Po07t], and optionally
      measure First Route Convergence Time [Po07t] and Route-Specific
      Convergence Times [Po07t], as DUT detects the link up event and
      converges IGP routes and some distribution of traffic over the
      restored Parallel Link member.

   Results
   The measured IGP Convergence time is influenced by the Local link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po07a].

5. IANA Considerations

   This document requires no IANA considerations.

6. Security Considerations
   Documents of this type do not directly affect the security of the
   Internet or corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

7. Acknowledgements
   Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David
   Ward, Kris Michielsen, and the BMWG for their contributions to this
   work.
8. References
8.1 Normative References

   [Br91]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, March 1991.

   [Br97]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

   [Br99]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544, March 1999.

   [Ca90]  Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and
           Dual Environments", RFC 1195, December 1990.

   [Ma98]  Mandeville, R., "Benchmarking Terminology for LAN Switching
           Devices", RFC 2285, February 1998.

   [Mo98]  Moy, J., "OSPF Version 2", RFC 2328, April 1998.

   [Po06]  Poretsky, S., et al., "Terminology for Benchmarking
           Network-layer Traffic Control Mechanisms", RFC 4689,
           November 2006.

   [Po07a] Poretsky, S., "Considerations for Benchmarking Link-State
           IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-app-16, work in
           progress, October 2008.

   [Po07t] Poretsky, S. and B. Imhoff, "Benchmarking Terminology for
           Link-State IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-term-16, work in
           progress, October 2008.

8.2 Informative References
   None

9. Authors' Addresses

   Scott Poretsky
   Allot Communications
   67 South Bedford Street, Suite 400
   Burlington, MA 01803
   USA
   Phone: + 1 508 309 2179
   Email: sporetsky@allot.com

   Brent Imhoff
   Juniper Networks
   1194 North Mathilda Ave
   Sunnyvale, CA 94089
   USA
   Phone: + 1 314 378 2571
   Email: bimhoff@planetspork.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.
   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement
   Funding for the RFC Editor function is currently provided by the
   Internet Society.