2 Network Working Group S. Poretsky 3 Internet-Draft Allot Communications 4 Intended status: Informational B. Imhoff 5 Expires: August 13, 2011 Juniper Networks 6 K. Michielsen 7 Cisco Systems 8 February 16, 2011 10 Benchmarking Methodology for Link-State IGP Data Plane Route Convergence 11 draft-ietf-bmwg-igp-dataplane-conv-meth-23 13 Abstract 15 This document describes the methodology for benchmarking Link-State 16 Interior Gateway Protocol (IGP) Route Convergence. The methodology 17 is to be used for benchmarking IGP convergence time through 18 externally observable (black box) data plane measurements. The 19 methodology can be applied to any link-state IGP, such as IS-IS and 20 OSPF. 22 Status of this Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress."
37 This Internet-Draft will expire on August 13, 2011. 39 Copyright Notice 41 Copyright (c) 2011 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 This document may contain material from IETF Documents or IETF 55 Contributions published or made publicly available before November 56 10, 2008. The person(s) controlling the copyright in some of this 57 material may not have granted the IETF Trust the right to allow 58 modifications of such material outside the IETF Standards Process. 59 Without obtaining an adequate license from the person(s) controlling 60 the copyright in such materials, this document may not be modified 61 outside the IETF Standards Process, and derivative works of it may 62 not be created outside the IETF Standards Process, except to format 63 it for publication as an RFC or to translate it into languages other 64 than English. 66 Table of Contents 68 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 69 1.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . . 5 70 1.2. Factors for IGP Route Convergence Time . . . . . . . . . . 5 71 1.3. Use of Data Plane for IGP Route Convergence 72 Benchmarking . . . . . . . . . . . . . . . . . . . . . . . 6 73 1.4. Applicability and Scope . . . . . . . . . . . . . . . . . 7 74 2. Existing Definitions . . . . . . . . . . . . . . . . . . . . . 7 75 3. Test Topologies . . . . . . . . . . . . . . . . . . . . . . . 8 76 3.1. Test topology for local changes . . . . . . . . . . . . . 8 77 3.2. Test topology for remote changes . . . . . . . . . . . . . 9 78 3.3. Test topology for local ECMP changes . . . . . . . . . . . 11 79 3.4. Test topology for remote ECMP changes . . . . . . . . . . 11 80 3.5. Test topology for Parallel Link changes . . . . . . . . . 12 81 4. Convergence Time and Loss of Connectivity Period . . . . . . . 13 82 4.1. Convergence Events without instant traffic loss . . . . . 14 83 4.2. Loss of Connectivity (LoC) . . . . . . . . . . . . . . . . 16 84 5. Test Considerations . . . . . . . . . . . . . . . . . . . . . 17 85 5.1. IGP Selection . . . . . . . . . . . . . . . . . . . . . . 17 86 5.2. Routing Protocol Configuration . . . . . . . . . . . . . . 17 87 5.3. IGP Topology . . . . . . . . . . . . . . . . . . . . . . . 17 88 5.4. Timers . . . . . . . . . . . . . . . . . . . . . . . . . . 18 89 5.5. Interface Types . . . . . . . . . . . . . . . . . . . . . 18 90 5.6. Offered Load . . . . . . . . . . . . . . . . . . . . . . . 19 91 5.7. Measurement Accuracy . . . . . . . . . . . . . . . . . . . 20 92 5.8. Measurement Statistics . . . . . . . . . . . . . . . . . . 20 93 5.9. Tester Capabilities . . . . . . . . . . . . . . . . . . . 20 94 6. Selection of Convergence Time Benchmark Metrics and Methods . 21 95 6.1. Loss-Derived Method . . . . . . . . . . . . . . . . . . . 21 96 6.1.1. Tester capabilities . . . . . . . . . . . . . . . . . 21 97 6.1.2. Benchmark Metrics . . . . . . . . . . . . . . . . . . 21 98 6.1.3. 
Measurement Accuracy . . . . . . . . . . . . . . . . . 21 99 6.2. Rate-Derived Method . . . . . . . . . . . . . . . . . . . 22 100 6.2.1. Tester Capabilities . . . . . . . . . . . . . . . . . 22 101 6.2.2. Benchmark Metrics . . . . . . . . . . . . . . . . . . 23 102 6.2.3. Measurement Accuracy . . . . . . . . . . . . . . . . . 23 103 6.3. Route-Specific Loss-Derived Method . . . . . . . . . . . . 24 104 6.3.1. Tester Capabilities . . . . . . . . . . . . . . . . . 24 105 6.3.2. Benchmark Metrics . . . . . . . . . . . . . . . . . . 24 106 6.3.3. Measurement Accuracy . . . . . . . . . . . . . . . . . 24 107 7. Reporting Format . . . . . . . . . . . . . . . . . . . . . . . 24 108 8. Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 26 109 8.1. Interface Failure and Recovery . . . . . . . . . . . . . . 27 110 8.1.1. Convergence Due to Local Interface Failure and 111 Recovery . . . . . . . . . . . . . . . . . . . . . . . 27 112 8.1.2. Convergence Due to Remote Interface Failure and 113 Recovery . . . . . . . . . . . . . . . . . . . . . . . 28 115 8.1.3. Convergence Due to ECMP Member Local Interface 116 Failure and Recovery . . . . . . . . . . . . . . . . . 30 117 8.1.4. Convergence Due to ECMP Member Remote Interface 118 Failure and Recovery . . . . . . . . . . . . . . . . . 31 119 8.1.5. Convergence Due to Parallel Link Interface Failure 120 and Recovery . . . . . . . . . . . . . . . . . . . . . 32 121 8.2. Other Failures and Recoveries . . . . . . . . . . . . . . 33 122 8.2.1. Convergence Due to Layer 2 Session Loss and 123 Recovery . . . . . . . . . . . . . . . . . . . . . . . 33 124 8.2.2. Convergence Due to Loss and Recovery of IGP 125 Adjacency . . . . . . . . . . . . . . . . . . . . . . 34 126 8.2.3. Convergence Due to Route Withdrawal and 127 Re-advertisement . . . . . . . . . . . . . . . . . . . 35 128 8.3. Administrative changes . . . . . . . . . . . . . . . . . . 37 129 8.3.1. Convergence Due to Local Interface Adminstrative 130 Changes . . . . . . . . . . . . . . . . . . . . . . . 37 131 8.3.2. Convergence Due to Cost Change . . . . . . . . . . . . 38 132 9. Security Considerations . . . . . . . . . . . . . . . . . . . 39 133 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 40 134 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 40 135 12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 40 136 12.1. Normative References . . . . . . . . . . . . . . . . . . . 40 137 12.2. Informative References . . . . . . . . . . . . . . . . . . 41 138 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 42 140 1. Introduction 142 1.1. Motivation 144 Convergence time is a critical performance parameter. Service 145 Providers use IGP convergence time as a key metric of router design 146 and architecture. Fast network convergence can be optimally achieved 147 through deployment of fast converging routers. Customers of Service 148 Providers use packet loss due to Interior Gateway Protocol (IGP) 149 convergence as a key metric of their network service quality. IGP 150 route convergence is a Direct Measure of Quality (DMOQ) when 151 benchmarking the data plane. The fundamental basis by which network 152 users and operators benchmark convergence is packet loss and other 153 packet impairments, which are externally observable events having 154 direct impact on their application performance. 
For this reason it 155 is important to develop a standard methodology for benchmarking link- 156 state IGP convergence time through externally observable (black-box) 157 data plane measurements. All factors contributing to convergence 158 time are accounted for by measuring on the data plane. 160 1.2. Factors for IGP Route Convergence Time 162 There are four major categories of factors contributing to the 163 measured IGP convergence time. As discussed in [Vi02], [Ka02], 164 [Fi02], [Al00], [Al02], and [Fr05], these categories are Event 165 Detection, Shortest Path First (SPF) Processing, Link State 166 Advertisement (LSA) / Link State Packet (LSP) Advertisement, and 167 Forwarding Information Base (FIB) Update. These have numerous 168 components that influence the convergence time, including but not 169 limited to the list below: 171 o Event Detection 173 * Physical Layer failure/recovery indication time 175 * Layer 2 failure/recovery indication time 177 * IGP Hello Dead Interval 179 o SPF Processing 181 * SPF Delay Time 183 * SPF Hold time 185 * SPF Execution time 187 o LSA/LSP Advertisement 189 * LSA/LSP Generation time 191 * LSA/LSP Flood Packet Pacing 193 * LSA/LSP Retransmission Packet Pacing 195 o FIB Update 197 * Tree Build time 199 * Hardware Update time 201 o Increased Forwarding Delay due to Queueing 203 The contribution of each of these factors listed above will vary with 204 each router vendors' architecture and IGP implementation. Routers 205 may have a centralized forwarding architecture, in which one 206 forwarding table is calculated and referenced for all arriving 207 packets, or a distributed forwarding architecture, in which the 208 central forwarding table is calculated and distributed to the 209 interfaces for local look-up as packets arrive. The distributed 210 forwarding tables are typically maintained in hardware. 212 The variation in router architecture and implementation necessitates 213 the design of a convergence test that considers all of these 214 components contributing to convergence time and is independent of the 215 Device Under Test (DUT) architecture and implementation. The benefit 216 of designing a test for these considerations is that it enables 217 black-box testing in which knowledge of the routers' internal 218 implementation is not required. It is then possible to make valid 219 use of the convergence benchmarking metrics when comparing routers 220 from different vendors. 222 Convergence performance is tightly linked to the number of tasks a 223 router has to deal with. As the most impacting tasks are mainly 224 related to the control plane and the data plane, the more the DUT is 225 stressed as in a live production environment, the closer performance 226 measurement results match the ones that would be observed in a live 227 production environment. 229 1.3. Use of Data Plane for IGP Route Convergence Benchmarking 231 Customers of Service Providers use packet loss and other packet 232 impairments as metrics to calculate convergence time. Packet loss 233 and other packet impairments are externally observable events having 234 direct impact on customers' application performance. For this reason 235 it is important to develop a standard router benchmarking methodology 236 that is a Direct Measure of Quality (DMOQ) for measuring IGP 237 convergence. An additional benefit of using packet loss for 238 calculation of IGP Route Convergence time is that it enables black- 239 box tests to be designed. 
Data traffic can be offered to the Device 240 Under Test (DUT), an emulated network event can be forced to occur, 241 and packet loss and other impaired packets can be externally measured 242 to calculate the convergence time. Knowledge of the DUT architecture 243 and IGP implementation is not required. There is no need to rely on 244 the DUT to produce the test results. There is no need to build 245 intrusive test harnesses for the DUT. All factors contributing to 246 convergence time are accounted for by measuring on the dataplane. 248 Other work of the Benchmarking Methodology Working Group (BMWG) 249 focuses on characterizing single router control plane convergence. 250 See [Ma05], [Ma05t], and [Ma05c]. 252 1.4. Applicability and Scope 254 The methodology described in this document can be applied to IPv4 and 255 IPv6 traffic and link-state IGPs such as IS-IS [Ca90][Ho08], OSPF 256 [Mo98][Co08], and others. IGP adjacencies established over any kind 257 of tunnel (such as Traffic Engineering tunnels) are outside the scope 258 of this document. Convergence time benchmarking in topologies with 259 non point-to-point IGP adjacencies will be covered in a later 260 document. Convergence from Bidirectional Forwarding Detection (BFD) 261 is outside the scope of this document. Non-Stop Forwarding (NSF), 262 Non-Stop Routing (NSR), Graceful Restart (GR), or any other High 263 Availability mechanism are outside the scope of this document. Fast 264 reroute mechanisms such as IP Fast-Reroute [Sh10i] or MPLS Fast- 265 Reroute [Pa05] are outside the scope of this document. 267 2. Existing Definitions 269 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 270 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 271 document are to be interpreted as described in BCP 14, RFC 2119 272 [Br97]. RFC 2119 defines the use of these key words to help make the 273 intent of standards track documents as clear as possible. While this 274 document uses these keywords, this document is not a standards track 275 document. 277 This document uses much of the terminology defined in [Po11t]. For 278 any conflicting content, this document supersedes [Po11t]. This 279 document uses existing terminology defined in other documents issued 280 by the Benchmarking Methodology Working Group (BMWG). Examples 281 include, but are not limited to: 283 Throughput [Ref.[Br91], section 3.17] 284 Device Under Test (DUT) [Ref.[Ma98], section 3.1.1] 285 System Under Test (SUT) [Ref.[Ma98], section 3.1.2] 286 Out-of-Order Packet [Ref.[Po06], section 3.3.4] 287 Duplicate Packet [Ref.[Po06], section 3.3.5] 288 Stream [Ref.[Po06], section 3.3.2] 289 Loss Period [Ref.[Ko02], section 4] 290 Forwarding Delay [Ref.[Po06], section 3.2.4] 291 IP Packet Delay Variation (IPDV) [Ref.[De02], section 1.2] 293 3. Test Topologies 295 3.1. Test topology for local changes 297 Figure 1 shows the test topology to measure IGP convergence time due 298 to local Convergence Events such as Local Interface failure and 299 recovery (Section 8.1.1), layer 2 session failure and recovery 300 (Section 8.2.1), and IGP adjacency failure and recovery 301 (Section 8.2.2). This topology is also used to measure IGP 302 convergence time due to route withdrawal and readvertisement 303 (Section 8.2.3), and route cost change (Section 8.3.2) Convergence 304 Events. IGP adjacencies MUST be established between Tester and DUT: 305 one on the Ingress Interface, one on the Preferred Egress Interface, 306 and one on the Next-Best Egress Interface. 
For this purpose the 307 Tester emulates three routers (RTa, RTb, and RTc), each establishing 308 one adjacency with the DUT. 310 ------- 311 | | Preferred ....... 312 | |------------------. RTb . 313 ....... Ingress | | Egress Interface ....... 314 . RTa .------------| DUT | 315 ....... Interface | | Next-Best ....... 316 | |------------------. RTc . 317 | | Egress Interface ....... 318 ------- 320 Figure 1: IGP convergence test topology for local changes 322 Figure 2 shows the test topology to measure IGP convergence time due 323 to local Convergence Events with a non-Equal Cost Multipath (ECMP) 324 Preferred Egress Interface and Equal Cost Multipath (ECMP) Next-Best 325 Egress Interfaces (Section 8.1.1). In this topology, the DUT is 326 configured with each Next-Best Egress interface as a member of a 327 single ECMP set. The Preferred Egress Interface is not a member of 328 an ECMP set. The Tester emulates N+2 neighbor routers (N>0): one 329 router for the Ingress Interface (RTa), one router for the Preferred 330 Egress Interface (RTb), and N routers for the members of the ECMP set 331 (RTc1...RTcN). IGP adjacencies MUST be established between Tester 332 and DUT: one on the Ingress Interface, one on the Preferred Egress 333 Interface, and one on each member of the ECMP set. When the test 334 specifies to observe the Next-Best Egress Interface statistics, the 335 combined statistics for all ECMP members should be observed. 337 ------- 338 | | Preferred ....... 339 | |------------------. RTb . 340 | | Egress Interface ....... 341 | | 342 | | ECMP Set ........ 343 ....... Ingress | |------------------. RTc1 . 344 . RTa .------------| DUT | Interface 1 ........ 345 ....... Interface | | . 346 | | . 347 | | . 348 | | ECMP Set ........ 349 | |------------------. RTcN . 350 | | Interface N ........ 351 ------- 353 Figure 2: IGP convergence test topology for local changes with non- 354 ECMP to ECMP convergence 356 3.2. Test topology for remote changes 358 Figure 3 shows the test topology to measure IGP convergence time due 359 to Remote Interface failure and recovery (Section 8.1.2). In this 360 topology the two routers DUT1 and DUT2 are considered System Under 361 Test (SUT) and SHOULD be identically configured devices of the same 362 model. IGP adjacencies MUST be established between Tester and SUT, 363 one on the Ingress Interface, one on the Preferred Egress Interface, 364 and one on the Next-Best Egress Interface. For this purpose the 365 Tester emulates three routers (RTa, RTb, and RTc). In this topology 366 there is a possibility of a packet forwarding loop that may occur 367 transiently between DUT1 and DUT2 during convergence (micro-loop, see 368 [Sh10]). 370 -------- 371 | | -------- Preferred ....... 372 | |--| DUT2 |------------------. RTb . 373 ....... Ingress | | -------- Egress Interface ....... 374 . RTa .------------| DUT1 | 375 ....... Interface | | Next-Best ....... 376 | |----------------------------. RTc . 377 | | Egress Interface ....... 378 -------- 380 Figure 3: IGP convergence test topology for remote changes 382 Figure 4 shows the test topology to measure IGP convergence time due 383 to remote Convergence Events with a non-ECMP Preferred Egress 384 Interface and ECMP Next-Best Egress Interfaces (Section 8.1.2). In 385 this topology the two routers DUT1 and DUT2 are considered System 386 Under Test (SUT) and MUST be identically configured devices of the 387 same model. Router DUT1 is configured with the Next-Best Egress 388 Interface as an ECMP set of interfaces.
The Preferred Egress Interface 389 of DUT1 is not a member of an ECMP set. The Tester emulates N+2 390 neighbor routers (N>0), one for the Ingress Interface (RTa), one for 391 DUT2 (RTb) and one for each member of the ECMP set (RTc1...RTcN). 392 IGP adjacencies MUST be established between Tester and SUT, one on 393 each interface of SUT. For this purpose each of the N+2 routers 394 emulated by the Tester establishes one adjacency with the SUT. In 395 this topology there is a possibility of a packet forwarding loop that 396 may occur transiently between DUT1 and DUT2 during convergence 397 (micro-loop, see [Sh10]). When the test specifies to observe the 398 Next-Best Egress Interface statistics, the combined statistics for 399 all members of the ECMP set should be observed. 401 -------- 402 | | -------- Preferred ....... 403 | |--| DUT2 |------------------. RTb . 404 | | -------- Egress Interface ....... 405 | | 406 | | ECMP Set ........ 407 ....... Ingress | |----------------------------. RTc1 . 408 . RTa .------------| DUT1 | Interface 1 ........ 409 ....... Interface | | . 410 | | . 411 | | . 412 | | ECMP Set ........ 413 | |----------------------------. RTcN . 414 | | Interface N ........ 415 -------- 417 Figure 4: IGP convergence test topology for remote changes with non- 418 ECMP to ECMP convergence 420 3.3. Test topology for local ECMP changes 422 Figure 5 shows the test topology to measure IGP convergence time due 423 to local Convergence Events of a member of an Equal Cost Multipath 424 (ECMP) set (Section 8.1.3). In this topology, the DUT is configured 425 with each egress interface as a member of a single ECMP set and the 426 Tester emulates N+1 next-hop routers, one for the Ingress Interface 427 (RTa) and one for each member of the ECMP set (RTb1...RTbN). IGP 428 adjacencies MUST be established between Tester and DUT, one on the 429 Ingress Interface and one on each member of the ECMP set. For this 430 purpose each of the N+1 routers emulated by the Tester establishes 431 one adjacency with the DUT. When the test specifies to observe the 432 Next-Best Egress Interface statistics, the combined statistics for 433 all ECMP members except the one affected by the Convergence Event, 434 should be observed. 436 ------- 437 | | ECMP Set ........ 438 | |-------------. RTb1 . 439 | | Interface 1 ........ 440 ....... Ingress | | . 441 . RTa .------------| DUT | . 442 ....... Interface | | . 443 | | ECMP Set ........ 444 | |-------------. RTbN . 445 | | Interface N ........ 446 ------- 448 Figure 5: IGP convergence test topology for local ECMP changes 450 3.4. Test topology for remote ECMP changes 452 Figure 6 shows the test topology to measure IGP convergence time due 453 to remote Convergence Events of a member of an Equal Cost Multipath 454 (ECMP) set (Section 8.1.4). In this topology the two routers DUT1 455 and DUT2 are considered System Under Test (SUT) and MUST be 456 identically configured devices of the same model. Router DUT1 is 457 configured with each egress interface as a member of a single ECMP 458 set and the Tester emulates N+1 neighbor routers (N>0), one for the 459 Ingress Interface (RTa) and one for each member of the ECMP set 460 (RTb1...RTbN). IGP adjacencies MUST be established between Tester 461 and SUT, one on each interface of SUT. 
For this purpose each of the 462 N+1 routers emulated by the Tester establishes one adjacency with the 463 SUT (N-1 emulated routers are adjacent to DUT1 egress interfaces, one 464 emulated router is adjacent to DUT1 Ingress Interface, and one 465 emulated router is adjacent to DUT2). In this topology there is a 466 possibility of a packet forwarding loop that may occur transiently 467 between DUT1 and DUT2 during convergence (micro-loop, see [Sh10]). 468 When the test specifies to observe the Next-Best Egress Interface 469 statistics, the combined statistics for all ECMP members except the 470 one affected by the Convergence Event, should be observed. 472 -------- 473 | | ECMP Set -------- ........ 474 | |-------------| DUT2 |---. RTb1 . 475 | | Interface 1 -------- ........ 476 | | 477 | | ECMP Set ........ 478 ....... Ingress | |------------------------. RTb2 . 479 . RTa .------------| DUT1 | Interface 2 ........ 480 ....... Interface | | . 481 | | . 482 | | . 483 | | ECMP Set ........ 484 | |------------------------. RTbN . 485 | | Interface N ........ 486 -------- 488 Figure 6: IGP convergence test topology for remote ECMP changes 490 3.5. Test topology for Parallel Link changes 492 Figure 7 shows the test topology to measure IGP convergence time due 493 to local Convergence Events with members of a Parallel Link 494 (Section 8.1.5). In this topology, the DUT is configured with each 495 egress interface as a member of a Parallel Link and the Tester 496 emulates two neighbor routers, one for the Ingress Interface (RTa) 497 and one for the Parallel Link members (RTb). IGP adjacencies MUST be 498 established on the Ingress Interface and on all N members of the 499 Parallel Link between Tester and DUT (N>0). For this purpose the 500 routers emulated by the Tester establishes N+1 adjacencies with the 501 DUT. When the test specifies to observe the Next-Best Egress 502 Interface statistics, the combined statistics for all Parallel Link 503 members except the one affected by the Convergence Event, should be 504 observed. 506 ------- ....... 507 | | Parallel Link . . 508 | |----------------. . 509 | | Interface 1 . . 510 ....... Ingress | | . . . 511 . RTa .------------| DUT | . . RTb . 512 ....... Interface | | . . . 513 | | Parallel Link . . 514 | |----------------. . 515 | | Interface N . . 516 ------- ....... 518 Figure 7: IGP convergence test topology for Parallel Link changes 520 4. Convergence Time and Loss of Connectivity Period 522 Two concepts will be highlighted in this section: convergence time 523 and loss of connectivity period. 525 The Route Convergence [Po11t] time indicates the period in time 526 between the Convergence Event Instant [Po11t] and the instant in time 527 the DUT is ready to forward traffic for a specific route on its Next- 528 Best Egress Interface and maintains this state for the duration of 529 the Sustained Convergence Validation Time [Po11t]. To measure Route 530 Convergence time, the Convergence Event Instant and the traffic 531 received from the Next-Best Egress Interface need to be observed. 533 The Route Loss of Connectivity Period [Po11t] indicates the time 534 during which traffic to a specific route is lost following a 535 Convergence Event until Full Convergence [Po11t] completes. This 536 Route Loss of Connectivity Period can consist of one or more Loss 537 Periods [Ko02]. For the testcases described in this document it is 538 expected to have a single Loss Period. 
To measure Route Loss of 539 Connectivity Period, the traffic received from the Preferred Egress 540 Interface and the traffic received from the Next-Best Egress 541 Interface need to be observed. 543 The Route Loss of Connectivity Period is most important since that 544 has a direct impact on the network user's application performance. 546 In general the Route Convergence time is larger than or equal to the 547 Route Loss of Connectivity Period. Depending on which Convergence 548 Event occurs and how this Convergence Event is applied, traffic for a 549 route may still be forwarded over the Preferred Egress Interface 550 after the Convergence Event Instant, before converging to the Next- 551 Best Egress Interface. In that case the Route Loss of Connectivity 552 Period is shorter than the Route Convergence time. 554 At least one condition needs to be fulfilled for Route Convergence 555 time to be equal to Route Loss of Connectivity Period. The condition 556 is that the Convergence Event causes an instantaneous traffic loss 557 for the measured route. A fiber cut on the Preferred Egress 558 Interface is an example of such a Convergence Event. 560 A second condition applies to Route Convergence time measurements 561 based on Connectivity Packet Loss [Po11t]. This second condition is 562 that there is only a single Loss Period during Route Convergence. 563 For the testcases described in this document this is expected to be 564 the case. 566 4.1. Convergence Events without instant traffic loss 568 To measure convergence time benchmarks for Convergence Events caused 569 by a Tester, such as an IGP cost change, the Tester MAY start to 570 discard all traffic received from the Preferred Egress Interface at 571 the Convergence Event Instant, or MAY separately observe packets 572 received from the Preferred Egress Interface prior to the Convergence 573 Event Instant. This way these Convergence Events can be treated the 574 same as Convergence Events that cause instantaneous traffic loss. 576 To measure convergence time benchmarks without instantaneous traffic 577 loss (either real or induced by the Tester) at the Convergence Event 578 Instant, such as a reversion of a link failure Convergence Event, the 579 Tester SHALL only observe packet statistics on the Next-Best Egress 580 Interface. If using the Rate-Derived method to benchmark convergence 581 times for such Convergence Events, the Tester MUST collect a 582 timestamp at the Convergence Event Instant. If using a loss-derived 583 method to benchmark convergence times for such Convergence Events, 584 the Tester MUST measure the period in time between the Start Traffic 585 Instant and the Convergence Event Instant. To measure this period in 586 time the Tester can collect timestamps at the Start Traffic Instant 587 and the Convergence Event Instant. 589 The Convergence Event Instant together with the receive rate 590 observations on the Next-Best Egress Interface allow to derive the 591 convergence time benchmarks using the Rate-Derived Method [Po11t]. 593 By observing packets on the Next-Best Egress Interface only, the 594 observed Impaired Packet count is the number of Impaired Packets 595 between Traffic Start Instant and Convergence Recovery Instant. To 596 measure convergence times using a loss-derived method, the Impaired 597 Packet count between the Convergence Event Instant and the 598 Convergence Recovery Instant is needed. The time between Traffic 599 Start Instant and Convergence Event Instant must be accounted for. 
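A minimal sketch of this accounting, using hypothetical counter and timestamp names and assuming a constant per-route offered rate, is shown below; it corresponds to Equations 1 and 2 later in this section.

   # Sketch only: per-route convergence time derived from packet counts
   # observed on the Next-Best Egress Interface alone.
   def convergence_time_from_next_best_counts(pkts_offered_to_route,
                                               pkts_received_next_best,
                                               per_route_rate_pps,
                                               t0, cei):
       # Equation 1 below: the loss period observed on the Next-Best
       # Egress Interface spans from the Start Traffic Instant (t0) to
       # the Convergence Recovery Instant.
       next_best_loss_period = (pkts_offered_to_route
                                - pkts_received_next_best) / per_route_rate_pps
       # Equation 2 below: subtract the pre-event interval (cei - t0) so
       # that the result is measured from the Convergence Event Instant.
       return next_best_loss_period - (cei - t0)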
600 An example may clarify this. 602 Figure 8 illustrates a Convergence Event without instantaneous 603 traffic loss for all routes. The top graph shows the Forwarding Rate 604 over all routes, the bottom graph shows the Forwarding Rate for a 605 single route Rta. Some time after the Convergence Event Instant, 606 Forwarding Rate observed on the Preferred Egress Interface starts to 607 decrease. In the example, route Rta is the first route to experience 608 packet loss at time Ta. Some time later, the Forwarding Rate 609 observed on the Next-Best Egress Interface starts to increase. In 610 the example, route Rta is the first route to complete convergence at 611 time Ta'. 613 ^ 614 Fwd | 615 Rate |------------- ............ 616 | \ . 617 | \ . 618 | \ . 619 | \ . 620 |.................-.-.-.-.-.-.---------------- 621 +----+-------+---------------+-----------------> 622 ^ ^ ^ ^ time 623 T0 CEI Ta Ta' 625 ^ 626 Fwd | 627 Rate |------------- ................. 628 Rta | | . 629 | | . 630 |.............-.-.-.-.-.-.-.-.---------------- 631 +----+-------+---------------+-----------------> 632 ^ ^ ^ ^ time 633 T0 CEI Ta Ta' 635 Preferred Egress Interface: --- 636 Next-Best Egress Interface: ... 638 With T0 the Start Traffic Instant; CEI the Convergence Event Instant; 639 Ta the time instant packet loss for route Rta starts; Ta' the time 640 instant packet impairment for route Rta ends. 642 Figure 8 644 If only packets received on the Next-Best Egress Interface are 645 observed, the duration of the loss period for route Rta can be 646 calculated from the received packets as in Equation 1. Since the 647 Convergence Event Instant is the start time for convergence time 648 measurement, the period in time between T0 and CEI needs to be 649 subtracted from the calculated result to become the convergence time, 650 as in Equation 2. 652 Next-Best Egress Interface loss period 653 = (packets transmitted 654 - packets received from Next-Best Egress Interface) / tx rate 655 = Ta' - T0 657 Equation 1 659 convergence time 660 = Next-Best Egress Interface loss period - (CEI - T0) 661 = Ta' - CEI 663 Equation 2 665 4.2. Loss of Connectivity (LoC) 667 Route Loss of Connectivity Period SHOULD be measured using the Route- 668 Specific Loss-Derived Method. Since the start instant and end 669 instant of the Route Loss of Connectivity Period can be different for 670 each route, these can not be accurately derived by only observing 671 global statistics over all routes. An example may clarify this. 673 Following a Convergence Event, route Rta is the first route for which 674 packet impairment starts, the Route Loss of Connectivity Period for 675 route Rta starts at time Ta. Route Rtb is the last route for which 676 packet impairment starts, the Route Loss of Connectivity Period for 677 route Rtb starts at time Tb with Tb>Ta. 679 ^ 680 Fwd | 681 Rate |-------- ----------- 682 | \ / 683 | \ / 684 | \ / 685 | \ / 686 | --------------- 687 +------------------------------------------> 688 ^ ^ ^ ^ time 689 Ta Tb Ta' Tb' 690 Tb'' Ta'' 692 Figure 9: Example Route Loss Of Connectivity Period 694 If the DUT implementation were such that route Rta would be the first 695 route for which traffic loss ends at time Ta' (with Ta'>Tb) and route 696 Rtb would be the last route for which traffic loss ends at time Tb' 697 (with Tb'>Ta'). By only observing global traffic statistics over all 698 routes, the minimum Route Loss of Connectivity Period would be 699 measured as Ta'-Ta. 
The maximum calculated Route Loss of 700 Connectivity Period would be Tb'-Ta. The real minimum and maximum 701 Route Loss of Connectivity Periods are Ta'-Ta and Tb'-Tb. 702 Illustrating this with the numbers Ta=0, Tb=1, Ta'=3, and Tb'=5, 703 would give a Loss of Connectivity Period between 3 and 5 derived from 704 the global traffic statistics, versus the real Loss of Connectivity 705 Period between 3 and 4. 707 If the DUT implementation were such that route Rtb would be the first 708 for which packet loss ends at time Tb'' and route Rta would be the 709 last for which packet impairment ends at time Ta'', then the minimum 710 and maximum Route Loss of Connectivity Periods derived by observing 711 only global traffic statistics would be Tb''-Ta, and Ta''-Ta. The 712 real minimum and maximum Route Loss of Connectivity Periods are 713 Tb''-Tb and Ta''-Ta. Illustrating this with the numbers Ta=0, Tb=1, 714 Ta''=5, Tb''=3, would give a Loss of Connectivity Period between 3 715 and 5 derived from the global traffic statistics, versus the real 716 Loss of Connectivity Period between 2 and 5. 718 The two implementation variations in the above example would result 719 in the same derived minimum and maximum Route Loss of Connectivity 720 Periods when only observing the global packet statistics, while the 721 real Route Loss of Connectivity Periods are different. 723 5. Test Considerations 725 5.1. IGP Selection 727 The test cases described in Section 8 can be used for link-state 728 IGPs, such as IS-IS or OSPF. The IGP convergence time test 729 methodology is identical. 731 5.2. Routing Protocol Configuration 733 The obtained results for IGP convergence time may vary if other 734 routing protocols are enabled and routes learned via those protocols 735 are installed. IGP convergence times SHOULD be benchmarked without 736 routes installed from other protocols. Any enabled IGP routing 737 protocol extension (such as extensions for Traffic Engineering) and 738 any enabled IGP routing protocol security mechanism must be reported 739 with the results. 741 5.3. IGP Topology 743 The Tester emulates a single IGP topology. The DUT establishes IGP 744 adjacencies with one or more of the emulated routers in this single 745 IGP topology emulated by the Tester. See test topology details in 746 Section 3. The emulated topology SHOULD only be advertised on the 747 DUT egress interfaces. 749 The number of IGP routes and number of nodes in the topology, and the 750 type of topology will impact the measured IGP convergence time. To 751 obtain results similar to those that would be observed in an 752 operational network, it is RECOMMENDED that the number of installed 753 routes and nodes closely approximate that of the network (e.g. 754 thousands of routes with tens or hundreds of nodes). 756 The number of areas (for OSPF) and levels (for IS-IS) can impact the 757 benchmark results. 759 5.4. Timers 761 There are timers that may impact the measured IGP convergence times. 762 The benchmark metrics MAY be measured at any fixed values for these 763 timers. To obtain results similar to those that would be observed in 764 an operational network, it is RECOMMENDED to configure the timers 765 with the values as configured in the operational network. 
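Because the configured timer values shape the measured convergence times, they form part of the reporting format in Section 7. The sketch below illustrates how a test harness might record them per trial; the field names mirror the reporting table and the values are purely illustrative.

   # Sketch only: DUT timer values in effect during a trial, recorded so
   # they can be reported together with the results (all in seconds).
   dut_timer_values = {
       "interface_failure_indication_delay": 0.0,
       "igp_hello_timer": 10.0,
       "igp_dead_interval_or_hold_time": 40.0,
       "lsa_lsp_generation_delay": 0.0,
       "lsa_lsp_flood_packet_pacing": 0.033,
       "lsa_lsp_retransmission_packet_pacing": 5.0,
       "route_calculation_delay": 0.0,
   }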
767 Examples of timers that may impact measured IGP convergence time 768 include, but are not limited to: 770 Interface failure indication 772 IGP hello timer 774 IGP dead-interval or hold-timer 776 Link State Advertisement (LSA) or Link State Packet (LSP) 777 generation delay 779 LSA or LSP flood packet pacing 781 route calculation delay 783 5.5. Interface Types 785 All test cases in this methodology document can be executed with any 786 interface type. The type of media may dictate which test cases may 787 be executed. Each interface type has a unique mechanism for 788 detecting link failures and the speed at which that mechanism 789 operates will influence the measurement results. All interfaces MUST 790 be the same media and Throughput [Br91][Br99] for each test case. 791 All interfaces SHOULD be configured as point-to-point. 793 5.6. Offered Load 795 The Throughput of the device, as defined in [Br91] and benchmarked in 796 [Br99] at a fixed packet size, needs to be determined over the 797 preferred path and over the next-best path. The Offered Load SHOULD 798 be the minimum of the measured Throughput of the device over the 799 primary path and over the backup path. The packet size is selectable 800 and MUST be recorded. Packet size is measured in bytes and includes 801 the IP header and payload. 803 The destination addresses for the Offered Load MUST be distributed 804 such that all routes or a statistically representative subset of all 805 routes are matched and each of these routes is offered an equal share 806 of the Offered Load. It is RECOMMENDED to send traffic matching all 807 routes, but a statistically representative subset of all routes can 808 be used if required. 810 Splitting traffic flows across multiple paths (as with ECMP or 811 Parallel Link sets) is in general done by hashing on various fields 812 on the IP or contained headers. The hashing is typically based on 813 the IP source and destination addresses, the protocol ID, and higher- 814 layer flow-dependent fields such as TCP/UDP ports. In practice, 815 within a network core, the hashing is based mainly or exclusively on 816 the IP source and destination addresses. Knowledge of the hashing 817 algorithm used by the DUT is not always possible beforehand, and 818 would violate the black-box spirit of this document. Therefor it is 819 RECOMMENDED to use a randomly distributed range of source and 820 destination IP addresses, protocol IDs, and higher-layer flow- 821 dependent fields for the packets of the Offered Load (see also 822 [Ne07]). The content of the Offered Load MUST remain the same during 823 the test. It is RECOMMENDED to repeat a test multiple times with 824 different random ranges of the header fields such that convergence 825 time benchmarks are measured for different distributions of traffic 826 over the available paths. 828 In the Remote Interface failure testcases using topologies 3, 4, and 829 6 there is a possibility of a packet forwarding loop that may occur 830 transiently between DUT1 and DUT2 during convergence (micro-loop, see 831 [Sh10]). The Time To Live (TTL) or Hop Limit value of the packets 832 sent by the Tester may influence the benchmark measurements since it 833 determines which device in the topology may send an ICMP Time 834 Exceeded Message for looped packets. 836 The duration of the Offered Load MUST be greater than the convergence 837 time plus the Sustained Convergence Validation Time. 
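As an illustration of the recommendations above, the sketch below (field names and address ranges are hypothetical) builds a fixed but randomly distributed set of flows, one per measured route, to be transmitted in the round-robin order recommended in the next paragraph. Re-running the test with a different seed exercises a different distribution of traffic over ECMP or Parallel Link paths.

   import random

   def build_offered_load_flows(route_prefixes, seed=1):
       # The seed stays fixed for the duration of a test so the content of
       # the Offered Load remains the same; repeating the test with other
       # seeds varies the distribution over the available paths.
       rng = random.Random(seed)
       flows = []
       for prefix in route_prefixes:                  # e.g. "198.51.100.0/24"
           flows.append({
               "dst": prefix.split("/")[0],           # one destination per route
               "src": "203.0.113.%d" % rng.randint(1, 254),
               "proto": rng.choice([6, 17]),          # TCP or UDP
               "sport": rng.randint(1024, 65535),
               "dport": rng.randint(1024, 65535),
           })
       return flows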
839 Offered load should send a packet to each destination before sending 840 another packet to the same destination. It is RECOMMENDED that the 841 packets be transmitted in a round-robin fashion with a uniform 842 interpacket delay. 844 5.7. Measurement Accuracy 846 Since Impaired Packet count is observed to measure the Route 847 Convergence Time, the time between two successive packets offered to 848 each individual route is the highest possible accuracy of any 849 Impaired Packet based measurement. The higher the traffic rate 850 offered to each route the higher the possible measurement accuracy. 852 Also see Section 6 for method-specific measurement accuracy. 854 5.8. Measurement Statistics 856 The benchmark measurements may vary for each trial, due to the 857 statistical nature of timer expirations, cpu scheduling, etc. 858 Evaluation of the test data must be done with an understanding of 859 generally accepted testing practices regarding repeatability, 860 variance and statistical significance of a small number of trials. 862 5.9. Tester Capabilities 864 It is RECOMMENDED that the Tester used to execute each test case have 865 the following capabilities: 867 1. Ability to establish IGP adjacencies and advertise a single IGP 868 topology to one or more peers. 870 2. Ability to measure Forwarding Delay, Duplicate Packets and Out- 871 of-Order Packets. 873 3. An internal time clock to control timestamping, time 874 measurements, and time calculations. 876 4. Ability to distinguish traffic load received on the Preferred and 877 Next-Best Interfaces [Po11t]. 879 5. Ability to disable or tune specific Layer-2 and Layer-3 protocol 880 functions on any interface(s). 882 The Tester MAY be capable to make non-data plane convergence 883 observations and use those observations for measurements. The Tester 884 MAY be capable to send and receive multiple traffic Streams [Po06]. 886 Also see Section 6 for method-specific capabilities. 888 6. Selection of Convergence Time Benchmark Metrics and Methods 890 Different convergence time benchmark methods MAY be used to measure 891 convergence time benchmark metrics. The Tester capabilities are 892 important criteria to select a specific convergence time benchmark 893 method. The criteria to select a specific benchmark method include, 894 but are not limited to: 896 Tester capabilities: Sampling Interval, number of 897 Stream statistics to collect 898 Measurement accuracy: Sampling Interval, Offered Load, 899 number of routes 900 Test specification: number of routes 901 DUT capabilities: Throughput, IP Packet Delay 902 Variation 904 6.1. Loss-Derived Method 906 6.1.1. Tester capabilities 908 To enable collecting statistics of Out-of-Order Packets per flow (See 909 [Th00], Section 3) the Offered Load SHOULD consist of multiple 910 Streams [Po06] and each Stream SHOULD consist of a single flow . If 911 sending multiple Streams, the measured traffic statistics for all 912 Streams MUST be added together. 914 In order to verify Full Convergence completion and the Sustained 915 Convergence Validation Time, the Tester MUST measure Forwarding Rate 916 each Packet Sampling Interval. 918 The total number of Impaired Packets between the start of the traffic 919 and the end of the Sustained Convergence Validation Time is used to 920 calculate the Loss-Derived Convergence Time. 922 6.1.2. 
Benchmark Metrics 924 The Loss-Derived Method can be used to measure the Loss-Derived 925 Convergence Time, which is the average convergence time over all 926 routes, and to measure the Loss-Derived Loss of Connectivity Period, 927 which is the average Route Loss of Connectivity Period over all 928 routes. 930 6.1.3. Measurement Accuracy 932 The actual value falls within the accuracy interval [-(number of 933 destinations/Offered Load), +(number of destinations/Offered Load)] 934 around the value as measured using the Loss-Derived Method. 936 6.2. Rate-Derived Method 938 6.2.1. Tester Capabilities 940 To enable collecting statistics of Out-of-Order Packets per flow (See 941 [Th00], Section 3) the Offered Load SHOULD consist of multiple 942 Streams [Po06] and each Stream SHOULD consist of a single flow . If 943 sending multiple Streams, the measured traffic statistics for all 944 Streams MUST be added together. 946 The Tester measures Forwarding Rate each Sampling Interval. The 947 Packet Sampling Interval influences the observation of the different 948 convergence time instants. If the Packet Sampling Interval is large 949 compared to the time between the convergence time instants, then the 950 different time instants may not be easily identifiable from the 951 Forwarding Rate observation. The presence of IP Packet Delay 952 Variation (IPDV) [De02] may cause fluctuations of the Forwarding Rate 953 observation and can prevent correct observation of the different 954 convergence time instants. 956 The Packet Sampling Interval MUST be larger than or equal to the time 957 between two consecutive packets to the same destination. For maximum 958 accuracy the value for the Packet Sampling Interval SHOULD be as 959 small as possible, but the presence of IPDV may enforce using a 960 larger Packet Sampling Interval. The Packet Sampling Interval MUST 961 be reported. 963 IPDV causes fluctuations in the number of received packets during 964 each Packet Sampling Interval. To account for the presence of IPDV 965 in determining if a convergence instant has been reached, Forwarding 966 Delay SHOULD be observed during each Packet Sampling Interval. The 967 minimum and maximum number of packets expected in a Packet Sampling 968 Interval in presence of IPDV can be calculated with Equation 3. 970 number of packets expected in a Packet Sampling Interval 971 in presence of IP Packet Delay Variation 972 = expected number of packets without IP Packet Delay Variation 973 +/-( (maxDelay - minDelay) * Offered Load) 974 with minDelay and maxDelay the minimum resp. maximum Forwarding Delay 975 of packets received during the Packet Sampling Interval 977 Equation 3 979 To determine if a convergence instant has been reached the number of 980 packets received in a Packet Sampling Interval is compared with the 981 range of expected number of packets calculated in Equation 3. 983 6.2.2. Benchmark Metrics 985 The Rate-Derived Method SHOULD be used to measure First Route 986 Convergence Time and Full Convergence Time. It SHOULD NOT be used to 987 measure Loss of Connectivity Period (see Section 4). 989 6.2.3. Measurement Accuracy 991 The measurement accuracy interval of the Rate-Derived Method depends 992 on the metric being measured or calculated and the characteristics of 993 the related transition. IP Packet Delay Variation (IPDV) [De02] adds 994 uncertainty to the amount of packets received in a Packet Sampling 995 Interval and this uncertainty adds to the measurement error. 
The 996 effect of IPDV is not accounted for in the calculation of the 997 accuracy intervals below. IPDV is of importance for the convergence 998 instants where a variation in Forwarding Rate needs to be observed 999 (Convergence Recovery Instant, and for topologies with ECMP also 1000 Convergence Event Instant and First Route Convergence Instant). 1002 If the Convergence Event Instant is observed on the dataplane using 1003 the Rate-Derived Method, it needs to be instantaneous for all routes 1004 (see Section 4.1). The actual value of the Convergence Event Instant 1005 falls within the accuracy interval [-(Packet Sampling Interval + 1006 1/Offered Load), +0] around the value as measured using the Rate- 1007 Derived Method. 1009 If the Convergence Recovery Transition is non-instantaneous for all 1010 routes, then the actual value of the First Route Convergence Instant 1011 falls within the accuracy interval [-(Packet Sampling Interval + time 1012 between two consecutive packets to the same destination), +0] around 1013 the value as measured using the Rate-Derived Method, and the actual 1014 value of the Convergence Recovery Instant falls within the accuracy 1015 interval [-(2 * Packet Sampling Interval), -(Packet Sampling Interval 1016 - time between two consecutive packets to the same destination)] 1017 around the value as measured using the Rate-Derived Method. 1019 The term "time between two consecutive packets to the same 1020 destination" is added in the above accuracy intervals since packets 1021 are sent in a particular order to all destinations in a stream and 1022 when part of the routes experience packet loss, it is unknown where 1023 in the transmit cycle packets to these routes are sent. This 1024 uncertainty adds to the error. 1026 The accuracy intervals of the derived metrics First Route Convergence 1027 Time and Rate-Derived Convergence Time are calculated from the above 1028 convergence instants' accuracy intervals. The actual value of First 1029 Route Convergence Time falls within the accuracy interval [-(Packet 1030 Sampling Interval + time between two consecutive packets to the same 1031 destination), +(Packet Sampling Interval + 1/Offered Load)] around 1032 the calculated value. The actual value of Rate-Derived Convergence 1033 Time falls within the accuracy interval [-(2 * Packet Sampling 1034 Interval), +(time between two consecutive packets to the same 1035 destination + 1/Offered Load)] around the calculated value. 1037 6.3. Route-Specific Loss-Derived Method 1039 6.3.1. Tester Capabilities 1041 The Offered Load consists of multiple Streams. The Tester MUST 1042 measure Impaired Packet count for each Stream separately. 1044 In order to verify Full Convergence completion and the Sustained 1045 Convergence Validation Time, the Tester MUST measure Forwarding Rate 1046 each Packet Sampling Interval. This measurement at each Packet 1047 Sampling Interval MAY be per Stream. 1049 Only the total number of Impaired Packets measured per Stream at the 1050 end of the Sustained Convergence Validation Time is used to calculate 1051 the benchmark metrics with this method. 1053 6.3.2. Benchmark Metrics 1055 The Route-Specific Loss-Derived Method SHOULD be used to measure 1056 Route-Specific Convergence Times. It is the RECOMMENDED method to 1057 measure Route Loss of Connectivity Period. 1059 Under the conditions explained in Section 4, First Route Convergence 1060 Time and Full Convergence Time, as benchmarked using the Rate-Derived 1061 Method, may be equal respectively to the minimum and
maximum of the Route- 1062 Specific Convergence Times. 1064 6.3.3. Measurement Accuracy 1066 The actual value falls within the accuracy interval [-(number of 1067 destinations/Offered Load), +(number of destinations/Offered Load)] 1068 around the value as measured using the Route-Specific Loss-Derived 1069 Method. 1071 7. Reporting Format 1073 For each test case, it is RECOMMENDED that the reporting tables below 1074 be completed and all time values SHOULD be reported with a 1075 sufficiently high resolution. 1077 Parameter Units 1078 ------------------------------------- --------------------------- 1079 Test Case test case number 1080 Test Topology Test Topology Figure number 1081 IGP (IS-IS, OSPF, other) 1082 Interface Type (GigE, POS, ATM, other) 1083 Packet Size offered to DUT bytes 1084 Offered Load packets per second 1085 IGP Routes advertised to DUT number of IGP routes 1086 Nodes in emulated network number of nodes 1087 Number of Parallel or ECMP links number of links 1088 Number of Routes measured number of routes 1089 Packet Sampling Interval on Tester seconds 1090 Forwarding Delay Threshold seconds 1092 Timer Values configured on DUT: 1093 Interface failure indication delay seconds 1094 IGP Hello Timer seconds 1095 IGP Dead-Interval or hold-time seconds 1096 LSA/LSP Generation Delay seconds 1097 LSA/LSP Flood Packet Pacing seconds 1098 LSA/LSP Retransmission Packet Pacing seconds 1099 route calculation Delay seconds 1101 Test Details: 1103 Describe the IGP extensions and IGP security mechanisms that are 1104 configured on the DUT. 1106 Describe how the various fields on the IP and contained headers 1107 for the packets for the Offered Load are generated (Section 5.6). 1109 If the Offered Load matches a subset of routes, describe how this 1110 subset is selected. 1112 Describe how the Convergence Event is applied; does it cause 1113 instantaneous traffic loss or not? 1115 The table below should be completed for the initial Convergence Event 1116 and the reversion Convergence Event. 1118 Parameter Units 1119 ------------------------------------------- ---------------------- 1120 Convergence Event (initial or reversion) 1122 Traffic Forwarding Metrics: 1123 Total number of packets offered to DUT number of Packets 1124 Total number of packets forwarded by DUT number of Packets 1125 Connectivity Packet Loss number of Packets 1126 Convergence Packet Loss number of Packets 1127 Out-of-Order Packets number of Packets 1128 Duplicate Packets number of Packets 1129 excessive Forwarding Delay Packets number of Packets 1131 Convergence Benchmarks: 1132 Rate-Derived Method: 1133 First Route Convergence Time seconds 1134 Full Convergence Time seconds 1135 Loss-Derived Method: 1136 Loss-Derived Convergence Time seconds 1137 Route-Specific Loss-Derived Method: 1138 Route-Specific Convergence Time[n] array of seconds 1139 Minimum Route-Specific Convergence Time seconds 1140 Maximum Route-Specific Convergence Time seconds 1141 Median Route-Specific Convergence Time seconds 1142 Average Route-Specific Convergence Time seconds 1144 Loss of Connectivity Benchmarks: 1145 Loss-Derived Method: 1146 Loss-Derived Loss of Connectivity Period seconds 1147 Route-Specific Loss-Derived Method: 1148 Route Loss of Connectivity Period[n] array of seconds 1149 Minimum Route Loss of Connectivity Period seconds 1150 Maximum Route Loss of Connectivity Period seconds 1151 Median Route Loss of Connectivity Period seconds 1152 Average Route Loss of Connectivity Period seconds 1154 8. 
Test Cases 1156 It is RECOMMENDED that all applicable test cases be performed for 1157 best characterization of the DUT. The test cases follow a generic 1158 procedure tailored to the specific DUT configuration and Convergence 1159 Event [Po11t]. This generic procedure is as follows: 1161 1. Establish DUT and Tester configurations and advertise an IGP 1162 topology from Tester to DUT. 1164 2. Send Offered Load from Tester to DUT on ingress interface. 1166 3. Verify traffic is routed correctly. Verify if traffic is 1167 forwarded without Impaired Packets [Po06]. 1169 4. Introduce Convergence Event [Po11t]. 1171 5. Measure First Route Convergence Time [Po11t]. 1173 6. Measure Full Convergence Time [Po11t]. 1175 7. Stop Offered Load. 1177 8. Measure Route-Specific Convergence Times, Loss-Derived 1178 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1179 Derived Loss of Connectivity Period [Po11t]. At the same time 1180 measure number of Impaired Packets [Po11t]. 1182 9. Wait sufficient time for queues to drain. The duration of this 1183 time period MUST be larger than or equal to the Forwarding Delay 1184 Threshold. 1186 10. Restart Offered Load. 1188 11. Reverse Convergence Event. 1190 12. Measure First Route Convergence Time. 1192 13. Measure Full Convergence Time. 1194 14. Stop Offered Load. 1196 15. Measure Route-Specific Convergence Times, Loss-Derived 1197 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1198 Derived Loss of Connectivity Period. At the same time measure 1199 number of Impaired Packets [Po11t]. 1201 8.1. Interface Failure and Recovery 1203 8.1.1. Convergence Due to Local Interface Failure and Recovery 1205 Objective 1207 To obtain the IGP convergence measurements for Local Interface 1208 failure and recovery events. The Next-Best Egress Interface can be a 1209 single interface (Figure 1) or an ECMP set (Figure 2). The test with 1210 ECMP topology (Figure 2) is OPTIONAL. 1212 Procedure 1214 1. Advertise an IGP topology from Tester to DUT using the topology 1215 shown in Figure 1 or 2. 1217 2. Send Offered Load from Tester to DUT on ingress interface. 1219 3. Verify traffic is forwarded over Preferred Egress Interface. 1221 4. Remove link on the Preferred Egress Interface of the DUT. This 1222 is the Convergence Event. 1224 5. Measure First Route Convergence Time. 1226 6. Measure Full Convergence Time. 1228 7. Stop Offered Load. 1230 8. Measure Route-Specific Convergence Times and Loss-Derived 1231 Convergence Time. At the same time measure number of Impaired 1232 Packets. 1234 9. Wait sufficient time for queues to drain. 1236 10. Restart Offered Load. 1238 11. Restore link on the Preferred Egress Interface of the DUT. 1240 12. Measure First Route Convergence Time. 1242 13. Measure Full Convergence Time. 1244 14. Stop Offered Load. 1246 15. Measure Route-Specific Convergence Times, Loss-Derived 1247 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1248 Derived Loss of Connectivity Period.At the same time measure 1249 number of Impaired Packets. 1251 8.1.2. Convergence Due to Remote Interface Failure and Recovery 1253 Objective 1255 To obtain the IGP convergence measurements for Remote Interface 1256 failure and recovery events. The Next-Best Egress Interface can be a 1257 single interface (Figure 3) or an ECMP set (Figure 4). The test with 1258 ECMP topology (Figure 4) is OPTIONAL. 1260 Procedure 1262 1. Advertise an IGP topology from Tester to SUT using the topology 1263 shown in Figure 3 or 4. 1265 2. 
Send Offered Load from Tester to SUT on ingress interface. 1267 3. Verify traffic is forwarded over Preferred Egress Interface. 1269 4. Remove link on the interface of the Tester connected to the 1270 Preferred Egress Interface of the SUT. This is the Convergence 1271 Event. 1273 5. Measure First Route Convergence Time. 1275 6. Measure Full Convergence Time. 1277 7. Stop Offered Load. 1279 8. Measure Route-Specific Convergence Times and Loss-Derived 1280 Convergence Time. At the same time measure number of Impaired 1281 Packets. 1283 9. Wait sufficient time for queues to drain. 1285 10. Restart Offered Load. 1287 11. Restore link on the interface of the Tester connected to the 1288 Preferred Egress Interface of the SUT. 1290 12. Measure First Route Convergence Time. 1292 13. Measure Full Convergence Time. 1294 14. Stop Offered Load. 1296 15. Measure Route-Specific Convergence Times, Loss-Derived 1297 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1298 Derived Loss of Connectivity Period. At the same time measure 1299 number of Impaired Packets. 1301 Discussion 1303 In this test case there is a possibility of a packet forwarding loop 1304 that may occur transiently between DUT1 and DUT2 during convergence 1305 (micro-loop, see [Sh10]), which may increase the measured convergence 1306 times and loss of connectivity periods. 1308 8.1.3. Convergence Due to ECMP Member Local Interface Failure and 1309 Recovery 1311 Objective 1313 To obtain the IGP convergence measurements for Local Interface link 1314 failure and recovery events of an ECMP Member. 1316 Procedure 1318 1. Advertise an IGP topology from Tester to DUT using the test 1319 setup shown in Figure 5. 1321 2. Send Offered Load from Tester to DUT on ingress interface. 1323 3. Verify traffic is forwarded over the ECMP member interface of 1324 the DUT that will be failed in the next step. 1326 4. Remove link on one of the ECMP member interfaces of the DUT. 1327 This is the Convergence Event. 1329 5. Measure First Route Convergence Time. 1331 6. Measure Full Convergence Time. 1333 7. Stop Offered Load. 1335 8. Measure Route-Specific Convergence Times and Loss-Derived 1336 Convergence Time. At the same time measure number of Impaired 1337 Packets. 1339 9. Wait sufficient time for queues to drain. 1341 10. Restart Offered Load. 1343 11. Restore link on the ECMP member interface of the DUT. 1345 12. Measure First Route Convergence Time. 1347 13. Measure Full Convergence Time. 1349 14. Stop Offered Load. 1351 15. Measure Route-Specific Convergence Times, Loss-Derived 1352 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1353 Derived Loss of Connectivity Period. At the same time measure 1354 number of Impaired Packets. 1356 8.1.4. Convergence Due to ECMP Member Remote Interface Failure and 1357 Recovery 1359 Objective 1361 To obtain the IGP convergence measurements for Remote Interface link 1362 failure and recovery events for an ECMP Member. 1364 Procedure 1366 1. Advertise an IGP topology from Tester to DUT using the test 1367 setup shown in Figure 6. 1369 2. Send Offered Load from Tester to DUT on ingress interface. 1371 3. Verify traffic is forwarded over the ECMP member interface of 1372 the DUT that will be failed in the next step. 1374 4. Remove link on the interface of the Tester to R2. This is the 1375 Convergence Event Trigger. 1377 5. Measure First Route Convergence Time. 1379 6. Measure Full Convergence Time. 1381 7. Stop Offered Load. 1383 8. 
Measure Route-Specific Convergence Times and Loss-Derived 1384 Convergence Time. At the same time measure number of Impaired 1385 Packets. 1387 9. Wait sufficient time for queues to drain. 1389 10. Restart Offered Load. 1391 11. Restore link on the interface of the Tester to R2. 1393 12. Measure First Route Convergence Time. 1395 13. Measure Full Convergence Time. 1397 14. Stop Offered Load. 1399 15. Measure Route-Specific Convergence Times, Loss-Derived 1400 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1401 Derived Loss of Connectivity Period. At the same time measure 1402 number of Impaired Packets. 1404 Discussion 1406 In this test case there is a possibility of a packet forwarding loop 1407 that may occur temporarily between DUT1 and DUT2 during convergence 1408 (micro-loop, see [Sh10]), which may increase the measured convergence 1409 times and loss of connectivity periods. 1411 8.1.5. Convergence Due to Parallel Link Interface Failure and Recovery 1413 Objective 1415 To obtain the IGP convergence measurements for local link failure and 1416 recovery events for a member of a parallel link. The links can be 1417 used for data load balancing. 1419 Procedure 1421 1. Advertise an IGP topology from Tester to DUT using the test 1422 setup shown in Figure 7. 1424 2. Send Offered Load from Tester to DUT on ingress interface. 1426 3. Verify traffic is forwarded over the parallel link member that 1427 will be failed in the next step. 1429 4. Remove link on one of the parallel link member interfaces of the 1430 DUT. This is the Convergence Event. 1432 5. Measure First Route Convergence Time. 1434 6. Measure Full Convergence Time. 1436 7. Stop Offered Load. 1438 8. Measure Route-Specific Convergence Times and Loss-Derived 1439 Convergence Time. At the same time measure number of Impaired 1440 Packets. 1442 9. Wait sufficient time for queues to drain. 1444 10. Restart Offered Load. 1446 11. Restore link on the parallel link member interface of the DUT. 1448 12. Measure First Route Convergence Time. 1450 13. Measure Full Convergence Time. 1452 14. Stop Offered Load. 1454 15. Measure Route-Specific Convergence Times, Loss-Derived 1455 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1456 Derived Loss of Connectivity Period. At the same time measure 1457 number of Impaired Packets. 1459 8.2. Other Failures and Recoveries 1461 8.2.1. Convergence Due to Layer 2 Session Loss and Recovery 1463 Objective 1465 To obtain the IGP convergence measurements for a local layer 2 session loss 1466 and recovery. 1468 Procedure 1470 1. Advertise an IGP topology from Tester to DUT using the topology 1471 shown in Figure 1. 1473 2. Send Offered Load from Tester to DUT on ingress interface. 1475 3. Verify traffic is routed over Preferred Egress Interface. 1477 4. Remove Layer 2 session from Preferred Egress Interface of the 1478 DUT. This is the Convergence Event. 1480 5. Measure First Route Convergence Time. 1482 6. Measure Full Convergence Time. 1484 7. Stop Offered Load. 1486 8. Measure Route-Specific Convergence Times, Loss-Derived 1487 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1488 Derived Loss of Connectivity Period. At the same time measure 1489 number of Impaired Packets. 1491 9. Wait sufficient time for queues to drain. 1493 10. Restart Offered Load. 1495 11. Restore Layer 2 session on Preferred Egress Interface of the 1496 DUT. 1498 12. Measure First Route Convergence Time. 1500 13. Measure Full Convergence Time. 1502 14. Stop Offered Load. 1504 15.
Measure Route-Specific Convergence Times, Loss-Derived 1505 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1506 Derived Loss of Connectivity Period. At the same time measure 1507 number of Impaired Packets. 1509 Discussion 1511 When removing the layer 2 session, the physical layer must stay up. 1512 Configure IGP timers such that the IGP adjacency does not time out 1513 before layer 2 failure is detected. 1515 To measure convergence time, traffic SHOULD start dropping on the 1516 Preferred Egress Interface at the instant the layer 2 session is 1517 removed. Alternatively, the Tester SHOULD record the instant the 1518 layer 2 session is removed and traffic loss SHOULD only be measured 1519 on the Next-Best Egress Interface. For loss-derived benchmarks the 1520 time of the Start Traffic Instant SHOULD be recorded as well. See 1521 Section 4.1. 1523 8.2.2. Convergence Due to Loss and Recovery of IGP Adjacency 1525 Objective 1527 To obtain the IGP convergence measurements for loss and recovery of 1528 an IGP Adjacency. The IGP adjacency is removed by 1529 disabling processing of IGP routing protocol packets on the Tester. 1531 Procedure 1533 1. Advertise an IGP topology from Tester to DUT using the topology 1534 shown in Figure 1. 1536 2. Send Offered Load from Tester to DUT on ingress interface. 1538 3. Verify traffic is routed over Preferred Egress Interface. 1540 4. Remove the IGP adjacency from the Preferred Egress Interface. 1541 The layer 2 session MUST be maintained. This is the Convergence 1542 Event. 1544 5. Measure First Route Convergence Time. 1546 6. Measure Full Convergence Time. 1548 7. Stop Offered Load. 1550 8. Measure Route-Specific Convergence Times, Loss-Derived 1551 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1552 Derived Loss of Connectivity Period. At the same time measure 1553 number of Impaired Packets. 1555 9. Wait sufficient time for queues to drain. 1557 10. Restart Offered Load. 1559 11. Restore the IGP adjacency on the Preferred Egress Interface of the DUT. 1561 12. Measure First Route Convergence Time. 1563 13. Measure Full Convergence Time. 1565 14. Stop Offered Load. 1567 15. Measure Route-Specific Convergence Times, Loss-Derived 1568 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1569 Derived Loss of Connectivity Period. At the same time measure 1570 number of Impaired Packets. 1572 Discussion 1574 Configure layer 2 such that the layer 2 session does not time out before IGP 1575 adjacency failure is detected. 1577 To measure convergence time, traffic SHOULD start dropping on the 1578 Preferred Egress Interface at the instant the IGP adjacency is 1579 removed. Alternatively, the Tester SHOULD record the instant 1580 the IGP adjacency is removed and traffic loss SHOULD only be measured 1581 on the Next-Best Egress Interface. For loss-derived benchmarks the 1582 time of the Start Traffic Instant SHOULD be recorded as well. See 1583 Section 4.1. 1585 8.2.3. Convergence Due to Route Withdrawal and Re-advertisement 1587 Objective 1589 To obtain the IGP convergence measurements for route withdrawal and 1590 re-advertisement. 1592 Procedure 1593 1. Advertise an IGP topology from Tester to DUT using the topology 1594 shown in Figure 1. The routes that will be withdrawn MUST be a 1595 set of leaf routes advertised by at least two nodes in the 1596 emulated topology.
The topology SHOULD be such that before the 1597 withdrawal the DUT prefers the leaf routes advertised by a node 1598 "nodeA" via the Preferred Egress Interface, and after the 1599 withdrawal the DUT prefers the leaf routes advertised by a node 1600 "nodeB" via the Next-Best Egress Interface. 1602 2. Send Offered Load from Tester to DUT on Ingress Interface. 1604 3. Verify traffic is routed over Preferred Egress Interface. 1606 4. The Tester withdraws the set of IGP leaf routes from nodeA. 1607 This is the Convergence Event. The withdrawal update message 1608 SHOULD be a single unfragmented packet. If the routes cannot be 1609 withdrawn by a single packet, the messages SHOULD be sent using 1610 the same pacing characteristics as the DUT. The Tester MAY 1611 record the time it sends the withdrawal message(s). 1613 5. Measure First Route Convergence Time. 1615 6. Measure Full Convergence Time. 1617 7. Stop Offered Load. 1619 8. Measure Route-Specific Convergence Times, Loss-Derived 1620 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1621 Derived Loss of Connectivity Period. At the same time measure 1622 number of Impaired Packets. 1624 9. Wait sufficient time for queues to drain. 1626 10. Restart Offered Load. 1628 11. Re-advertise the set of withdrawn IGP leaf routes from nodeA 1629 emulated by the Tester. The update message SHOULD be a single 1630 unfragmented packet. If the routes cannot be advertised by a 1631 single packet, the messages SHOULD be sent using the same pacing 1632 characteristics as the DUT. The Tester MAY record the time it 1633 sends the update message(s). 1635 12. Measure First Route Convergence Time. 1637 13. Measure Full Convergence Time. 1639 14. Stop Offered Load. 1641 15. Measure Route-Specific Convergence Times, Loss-Derived 1642 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1643 Derived Loss of Connectivity Period. At the same time measure 1644 number of Impaired Packets. 1646 Discussion 1648 To measure convergence time, traffic SHOULD start dropping on the 1649 Preferred Egress Interface at the instant the routes are withdrawn by 1650 the Tester. Alternatively, the Tester SHOULD record the 1651 instant the routes are withdrawn and traffic loss SHOULD only be 1652 measured on the Next-Best Egress Interface. For loss-derived 1653 benchmarks the time of the Start Traffic Instant SHOULD be recorded 1654 as well. See Section 4.1.
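The Discussion above, and the similar Discussion text in Sections 8.2.1, 8.2.2, and 8.3.2, can be applied as in the informative sketch below. The sketch is not part of the methodology; it assumes a reading of Section 4.1 in which packet loss is counted only on the Next-Best Egress Interface and in which the recorded Convergence Event Instant and Start Traffic Instant are used to remove the pre-event period from the loss-derived time. All names and values are illustrative.

   # Informative sketch (Python): loss-derived convergence time for a
   # Convergence Event that does not cause instant traffic loss (for
   # example, a route withdrawal sent by the Tester), with packet loss
   # counted only on the Next-Best Egress Interface.
   def loss_derived_convergence_time(loss_on_next_best_egress,
                                     offered_load_pps,
                                     start_traffic_instant,
                                     convergence_event_instant):
       # Before the Convergence Event, traffic is (correctly) forwarded
       # on the Preferred Egress Interface, so every packet sent between
       # the Start Traffic Instant and the instant traffic appears on
       # the Next-Best Egress Interface is counted as lost there.
       time_since_traffic_start = loss_on_next_best_egress / offered_load_pps
       pre_event_period = convergence_event_instant - start_traffic_instant
       return time_since_traffic_start - pre_event_period

   # Example: 3,000,000 packets missing on the Next-Best Egress
   # Interface at 1,000,000 packets per second, with the withdrawal
   # sent 2.5 seconds after the Start Traffic Instant, yields a
   # convergence time of 0.5 seconds.
   print(loss_derived_convergence_time(3000000, 1000000, 0.0, 2.5))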
1656 8.3. Administrative Changes 1658 8.3.1. Convergence Due to Local Interface Administrative Changes 1660 Objective 1662 To obtain the IGP convergence measurements for administratively 1663 disabling and enabling a Local Interface. 1665 Procedure 1667 1. Advertise an IGP topology from Tester to DUT using the topology 1668 shown in Figure 1. 1670 2. Send Offered Load from Tester to DUT on ingress interface. 1672 3. Verify traffic is routed over Preferred Egress Interface. 1674 4. Administratively disable the Preferred Egress Interface of the 1675 DUT. This is the Convergence Event. 1677 5. Measure First Route Convergence Time. 1679 6. Measure Full Convergence Time. 1681 7. Stop Offered Load. 1683 8. Measure Route-Specific Convergence Times, Loss-Derived 1684 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1685 Derived Loss of Connectivity Period. At the same time measure 1686 number of Impaired Packets. 1688 9. Wait sufficient time for queues to drain. 1690 10. Restart Offered Load. 1692 11. Administratively enable the Preferred Egress Interface of the 1693 DUT. 1695 12. Measure First Route Convergence Time. 1697 13. Measure Full Convergence Time. 1699 14. Stop Offered Load. 1701 15. Measure Route-Specific Convergence Times, Loss-Derived 1702 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1703 Derived Loss of Connectivity Period. At the same time measure 1704 number of Impaired Packets. 1706 8.3.2. Convergence Due to Cost Change 1708 Objective 1710 To obtain the IGP convergence measurements for route cost change. 1712 Procedure 1714 1. Advertise an IGP topology from Tester to DUT using the topology 1715 shown in Figure 1. 1717 2. Send Offered Load from Tester to DUT on ingress interface. 1719 3. Verify traffic is routed over Preferred Egress Interface. 1721 4. The Tester, emulating the neighbor node, increases the cost for 1722 all IGP routes at the Preferred Egress Interface of the DUT so that 1723 the Next-Best Egress Interface becomes the preferred path. The 1724 update message advertising the higher cost MUST be a single 1725 unfragmented packet. This is the Convergence Event. The Tester 1726 MAY record the time it sends the update message advertising the 1727 higher cost on the Preferred Egress Interface. 1729 5. Measure First Route Convergence Time. 1731 6. Measure Full Convergence Time. 1733 7. Stop Offered Load. 1735 8. Measure Route-Specific Convergence Times, Loss-Derived 1736 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1737 Derived Loss of Connectivity Period. At the same time measure 1738 number of Impaired Packets. 1740 9. Wait sufficient time for queues to drain. 1742 10. Restart Offered Load. 1744 11. The Tester, emulating the neighbor node, decreases the cost for 1745 all IGP routes at the Preferred Egress Interface of the DUT so that 1746 the Preferred Egress Interface becomes the preferred path. The 1747 update message advertising the lower cost MUST be a single 1748 unfragmented packet. 1750 12. Measure First Route Convergence Time. 1752 13. Measure Full Convergence Time. 1754 14. Stop Offered Load. 1756 15. Measure Route-Specific Convergence Times, Loss-Derived 1757 Convergence Time, Route Loss of Connectivity Periods, and Loss- 1758 Derived Loss of Connectivity Period. At the same time measure 1759 number of Impaired Packets. 1761 Discussion 1763 To measure convergence time, traffic SHOULD start dropping on the 1764 Preferred Egress Interface at the instant the cost is changed by the 1765 Tester. Alternatively, the Tester SHOULD record the instant 1766 the cost is changed and traffic loss SHOULD only be measured on the 1767 Next-Best Egress Interface. For loss-derived benchmarks the time of 1768 the Start Traffic Instant SHOULD be recorded as well. See Section 1769 4.1. 1771 9. Security Considerations 1773 Benchmarking activities as described in this memo are limited to 1774 technology characterization using controlled stimuli in a laboratory 1775 environment, with dedicated address space and the constraints 1776 specified in the sections above. 1778 The benchmarking network topology will be an independent test setup 1779 and MUST NOT be connected to devices that may forward the test 1780 traffic into a production network, or misroute traffic to the test 1781 management network. 1783 Further, benchmarking is performed on a "black-box" basis, relying 1784 solely on measurements observable external to the DUT/SUT. 1786 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 1787 benchmarking purposes.
Any implications for network security arising 1788 from the DUT/SUT SHOULD be identical in the lab and in production 1789 networks. 1791 10. IANA Considerations 1793 This document requires no IANA considerations. 1795 11. Acknowledgements 1797 Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward, 1798 Peter De Vriendt, Anuj Dewagan, Julien Meuric, Adrian Farrel, Stewart 1799 Bryant, and the Benchmarking Methodology Working Group for their 1800 contributions to this work. 1802 12. References 1804 12.1. Normative References 1806 [Br91] Bradner, S., "Benchmarking terminology for network 1807 interconnection devices", RFC 1242, July 1991. 1809 [Br97] Bradner, S., "Key words for use in RFCs to Indicate 1810 Requirement Levels", BCP 14, RFC 2119, March 1997. 1812 [Br99] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 1813 Network Interconnect Devices", RFC 2544, March 1999. 1815 [Ca90] Callon, R., "Use of OSI IS-IS for routing in TCP/IP and dual 1816 environments", RFC 1195, December 1990. 1818 [Co08] Coltun, R., Ferguson, D., Moy, J., and A. Lindem, "OSPF for 1819 IPv6", RFC 5340, July 2008. 1821 [De02] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 1822 Metric for IP Performance Metrics (IPPM)", RFC 3393, 1823 November 2002. 1825 [Ho08] Hopps, C., "Routing IPv6 with IS-IS", RFC 5308, 1826 October 2008. 1828 [Ko02] Koodli, R. and R. Ravikanth, "One-way Loss Pattern Sample 1829 Metrics", RFC 3357, August 2002. 1831 [Ma05] Manral, V., White, R., and A. Shaikh, "Benchmarking Basic 1832 OSPF Single Router Control Plane Convergence", RFC 4061, 1833 April 2005. 1835 [Ma05c] Manral, V., White, R., and A. Shaikh, "Considerations When 1836 Using Basic OSPF Convergence Benchmarks", RFC 4063, 1837 April 2005. 1839 [Ma05t] Manral, V., White, R., and A. Shaikh, "OSPF Benchmarking 1840 Terminology and Concepts", RFC 4062, April 2005. 1842 [Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching 1843 Devices", RFC 2285, February 1998. 1845 [Mo98] Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998. 1847 [Ne07] Newman, D. and T. Player, "Hash and Stuffing: Overlooked 1848 Factors in Network Device Benchmarking", RFC 4814, 1849 March 2007. 1851 [Pa05] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute Extensions 1852 to RSVP-TE for LSP Tunnels", RFC 4090, May 2005. 1854 [Po06] Poretsky, S., Perser, J., Erramilli, S., and S. Khurana, 1855 "Terminology for Benchmarking Network-layer Traffic Control 1856 Mechanisms", RFC 4689, October 2006. 1858 [Po11t] Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology 1859 for Benchmarking Link-State IGP Data Plane Route 1860 Convergence", draft-ietf-bmwg-igp-dataplane-conv-term-23 1861 (work in progress), January 2011. 1863 [Sh10] Shand, M. and S. Bryant, "A Framework for Loop-Free 1864 Convergence", RFC 5715, January 2010. 1866 [Sh10i] Shand, M. and S. Bryant, "IP Fast Reroute Framework", 1867 RFC 5714, January 2010. 1869 [Th00] Thaler, D. and C. Hopps, "Multipath Issues in Unicast and 1870 Multicast Next-Hop Selection", RFC 2991, November 2000. 1872 12.2. Informative References 1874 [Al00] Alaettinoglu, C., Jacobson, V., and H. Yu, "Towards 1875 Millisecond IGP Convergence", NANOG 20, October 2000. 1877 [Al02] Alaettinoglu, C. and S. Casner, "ISIS Routing on the Qwest 1878 Backbone: a Recipe for Subsecond ISIS Convergence", 1879 NANOG 24, February 2002. 
1881 [Fi02] Filsfils, C., "Tutorial: Deploying Tight-SLA Services on an 1882 Internet Backbone: ISIS Fast Convergence and Differentiated 1883 Services Design", NANOG 25, June 2002. 1885 [Fr05] Francois, P., Filsfils, C., Evans, J., and O. Bonaventure, 1886 "Achieving SubSecond IGP Convergence in Large IP Networks", 1887 ACM SIGCOMM Computer Communication Review v.35 n.3, 1888 July 2005. 1890 [Ka02] Katz, D., "Why are we scared of SPF? IGP Scaling and 1891 Stability", NANOG 25, June 2002. 1893 [Vi02] Villamizar, C., "Convergence and Restoration Techniques for 1894 ISP Interior Routing", NANOG 25, June 2002. 1896 Authors' Addresses 1898 Scott Poretsky 1899 Allot Communications 1900 67 South Bedford Street, Suite 400 1901 Burlington, MA 01803 1902 USA 1904 Phone: + 1 508 309 2179 1905 Email: sporetsky@allot.com 1907 Brent Imhoff 1908 Juniper Networks 1909 1194 North Mathilda Ave 1910 Sunnyvale, CA 94089 1911 USA 1913 Phone: + 1 314 378 2571 1914 Email: bimhoff@planetspork.com 1915 Kris Michielsen 1916 Cisco Systems 1917 6A De Kleetlaan 1918 Diegem, BRABANT 1831 1919 Belgium 1921 Email: kmichiel@cisco.com