Internet Engineering Task Force                            I. Varlashkin
Internet-Draft                                   Easynet Global Services
Intended status: Informational                                R. Papneja
Expires: April 22, 2012                        Huawei Technologies (USA)
                                                               B. Parise
                                                                   Cisco
                                                             T. Van Unen
                                                                    Ixia
                                                        October 20, 2011

            Convergence benchmarking on contemporary routers
                   draft-varlashkin-router-conv-bench-00

Abstract

This document specifies a methodology for benchmarking the convergence
of routers without making assumptions about the relations and
dependencies between the data and control planes. The methodology is
primarily intended for testing routers running BGP and some form of
link-state IGP, with or without MPLS.
It may also be applicable to environments using MPLS-TE or GRE;
however, these are beyond the scope of this document and such
applications are left for further study.

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF). Note that other groups may also distribute working
documents as Internet-Drafts. The list of current Internet-Drafts is
at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

This Internet-Draft will expire on April 22, 2012.

Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents carefully,
as they describe your rights and restrictions with respect to this
document. Code Components extracted from this document must include
Simplified BSD License text as described in Section 4.e of the Trust
Legal Provisions and are provided without warranty as described in the
Simplified BSD License.

This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November 10,
2008. The person(s) controlling the copyright in some of this material
may not have granted the IETF Trust the right to allow modifications
of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may not
be created outside the IETF Standards Process, except to format it for
publication as an RFC or to translate it into languages other than
English.

Table of Contents

   1.  Introduction
   2.  Test topology
   3.  TEST PARAMETERS
     3.1.  Packing ratios
     3.2.  Test traffic
     3.3.  IGP metrics
     3.4.  Internal routers matrix
     3.5.  Number of next-hops
     3.6.  'e' - Failure and Restoration start entropy
   4.  TEST PROCEDURES
     4.1.  Initialisation time
     4.2.  Generic data-plane failure test
     4.3.  Generic control-plane test procedure
   5.  Failure and restoration scenarios
     5.1.  Loss of Signal on the link attached to DUT
     5.2.  Link failure without LoS
     5.3.  Non-direct link failure
     5.4.  Best route withdrawal
     5.5.  iBGP next-hop failure
   6.  Test report
   7.  Link bundling and Equal Cost Multi-Path
   8.  Graceful Restart and Non-Stop Forwarding
   9.  Security considerations
   10. IANA Considerations
   11. Acknowledgments
   12. Normative References
   Authors' Addresses

1. Introduction

The ability of a network to restore traffic flow when the primary path
fails has always been an important subject for network engineers,
researchers and equipment manufacturers. The time to recover from a
link or node failure has often been linked to routeing protocol
convergence, and benchmarking of routeing protocol convergence has
often been considered sufficient for quantifying recovery performance.
As long as routers could obtain a new best path only after the
relevant routeing protocols performed their calculations, such a
methodology was reasonable. However, continuous improvements in
hardware and software mean that more and more routers are able to
restore traffic flow even before the routeing protocols converge. The
methodology described in this document takes this fact into account.

When a failure occurs on the network, a router needs to:

   1. select a new best path, so that packets that have already
      arrived at the router can be forwarded;

   2. let other routers know about the new network state, so that they
      can find a new best path from their perspective.

How fast a router performs these two functions characterises the
router's performance with regard to convergence. Note that in the
general case each of these characteristics may or may not be related
to the other.
For example, one platform may need to perform calculations to find the
new best path and only then update the local FIB and send the relevant
protocol updates to other routers; another platform may update the
local FIB without waiting for the calculations to complete, but still
wait for them before sending routeing protocol updates; a third
platform may use different optimisations for both FIB changes and
routeing protocol updates, without waiting for completion of the
calculations. Other variations are also possible. This document makes
no assumptions about whether local FIB changes and routeing protocol
updates depend on each other or on routeing protocol calculations.

Since it is not known whether the local FIB is updated before or after
the routeing protocol calculations, a forwarding-plane method is
proposed to benchmark local convergence. And because it is not known
whether routeing protocol updates are linked to FIB modification, a
control-plane approach is used to benchmark how fast updates are
propagated. Both characteristics are, however, benchmarked using very
similar test topologies and procedures. Also, an attempt is made to
minimise the dependency on the performance of non-DUT elements
involved in the tests.

At the time of writing it is not known whether existing network
testers and protocol emulators can execute the described tests out of
the box. Nevertheless, the authors believe that the required
functionality can be added with reasonable effort. Alternatively, the
tests can be performed with the help of physical routers to create the
necessary test topology; this may affect the time required to perform
the test, but is expected to provide the same degree of accuracy. This
also means that tests performed using a protocol simulator can be
repeated using physical routers, with comparable results.

This document complements draft-papneja-bgp-basic-dp-convergence.

2. Test topology

Unless specified otherwise, all tests use the same basic test topology
outlined below:

            [R1]-----1----[R3]
            /  \          /  \
           1    9       C2    \
          /      \      /      \
     [S]---[DUT]  [M1]        [NetA]
          \      / ||  \      /
           3   C1  ||   2    /
            \  /  /  \   \  /
            [R2] /    \  [R4]
                /      \
               /        \
            [ER1]  ...  [ERn]
              |           |
          [NetB-1]    [NetB-N]

S is the source of test traffic for the data-plane tests; for the
control-plane tests S is an emulated or physical router with
packet-capturing (sniffing) capability.

Unidirectional test traffic goes from the Source to NetA.

An IGP runs between the DUT and R1-R4; BGP runs between the DUT and
R3, R4; there is no BGP between R3 and R4 (important). If tunnelling
(e.g. MPLS or GRE) is used then R1 and R2 do not need to run BGP,
otherwise they MUST run BGP. The Source has a static default route to
the DUT; R3 and R4 have static routes to NetA. NetA is in BGP but not
in the IGP. M1 is an N*K matrix of internal routers. Metric C1 is used
to control whether R2 is an LFA for the DUT towards NetA. Metric C2 is
used to control whether R3 or R4 is the best exit towards NetA. All
other metrics are fixed for all tests and MUST be set to the exact
values provided in the above diagram. The IGP metrics from M1 to ER1
through ERn can be set arbitrarily; their exact values are irrelevant
to this test as long as they are valid for the given IGP.

Routers ER1 through ERn, together with prefixes NetB-1 through NetB-N,
are present to create a realistic environment but are not used
directly in measurements. NetB-1 through NetB-N are distinct
single-prefix sets.

Traffic restoration depends on the ability of R2 and M1 to forward
traffic after the failure. To eliminate this dependency, R2 is set to
always forward traffic to R3 and NetA via M1, which in turn always
forwards traffic directly via R3 or R2 depending on the test. One
possibility to achieve this is to use static routes.
Another alternative is to run a different IGP between R2 and R3 from
the one used by the DUT, and to make routes learned via this IGP
preferred on R2. For example, if the DUT uses OSPF, then in addition
R2 and R3 also run IS-IS and prefer IS-IS routes over OSPF ones. A
protocol simulator may have an internal mechanism to provide the
required behaviour. There are no other dependencies on non-DUT devices
in these tests.

For evaluating eBGP performance the following topology is used:

               [R1]
              /    \
             /      \
            /        \
     [S]----[DUT]   [NetA]
            \        /
             \      /
              \    /
               [R2]

          Test topology for eBGP

In the "Link failure without LoS" test the direct cable between the
DUT and R1 is replaced with a connection over an L2 switch as follows:

     [DUT]---[SW1]---[R1]

3. TEST PARAMETERS

3.1. Packing ratios

Routes with different prefixes but the same attributes can potentially
be packed into a single update message. Since both the number of
update messages and the number of prefixes per update can affect
convergence time, the tests SHOULD be performed with various prefix
packing ratios. This document does not specify the values of
individual BGP attributes used to control the packing ratio.

3.2. Test traffic

Traffic is sent from a single source address located at the Source
port of the tester to one address in each prefix in the NetA set.
Packets are sent at a rate of 1000 per second, which provides 1 ms
resolution of the convergence time as measured by the tests in this
document. All packets SHOULD be 64 bytes at the IP layer, that is, IP
header plus IP payload.

3.3. IGP metrics

The basic test topology specifies fixed IGP metrics for some links.
These metrics SHOULD be used verbatim. There are also two variable
metrics - C1 and C2 - intended for controlling whether R2 is a
Loop-Free Alternate (LFA) for the DUT towards NetA, and whether R3
remains the best exit towards NetA after the path failure between the
DUT and R3.

The following values SHOULD be used for C1 and C2, depending on the
required behaviour:

     +------------+----------+----+----+
     | R2 is LFA? | R3 best? | C1 | C2 |
     +------------+----------+----+----+
     | yes        | yes      |  1 |  1 |
     | yes        | no       |  1 |  3 |
     | no         | yes      |  5 |  1 |
     | no         | no       |  5 |  3 |
     +------------+----------+----+----+

3.4. Internal routers matrix

The basic test topology has an N*K grid of internal routers denoted as
M1. When N>1 or K>1, the cost of all links within the grid MUST be set
to 1 (one). This matrix is intended for controlling the topology size,
which has an effect particularly on SPF run-time.

If traffic is forwarded using a tunnelling mechanism, such as MPLS or
GRE, the internal routers only need to have reachability information
for the tunnel end-points. However, if traditional hop-by-hop
forwarding is used, then the internal routers MUST have routes to each
and every prefix within the NetA set.

This document does not specify how the internal routers should obtain
the necessary reachability information. The only requirement is that
after the primary DUT-NetA path failure the internal routers are able
to forward traffic to NetA instantly. Using the values of the IGP
metrics described earlier addresses this requirement. Also, a protocol
simulator may have a built-in mechanism to achieve the desired
behaviour.

3.5. Number of next-hops

The basic test topology has a set of N edge routers ER1 through ERn,
each advertising a unique prefix. Some BGP implementations may exhibit
different performance depending on the number of next-hops for which
the IGP cost has changed after a failure. By varying the overall
number of next-hops such a dependency can be detected.

Note that prefixes NetB-1 through NetB-n are not used as destinations
for test traffic; they are only present to create a "background
environment".

3.6. 'e' - Failure and Restoration start entropy

The tests described in this document use a fixed time T2 and a
variable offset 'e' as the starting point for simulating a failure or
restoration event.

Fixing the time T2 is necessary as a reference point to which the
variable offset e is added for each iteration of the test. The
introduction of such a variable offset allows better analysis of the
test results. For example, the DUT may run FIB changes at certain
intervals. If the failure is introduced close to the end of such an
interval, a shorter outage will be observed; if it is introduced close
to the beginning of the interval, a longer outage will be observed.
Running the test multiple times, each time using a different offset,
helps to profile the DUT better.

The test report must contain the value of T2 (the same for all
iterations) and the values of e for each iteration. This document
recommends using T2=T1+8s and e from 0 to 1s in 0.01s (10 ms)
increments.

4. TEST PROCEDURES

This section provides the generic steps that are used in all tests.

4.1. Initialisation time

The objective of this test is to measure the time that must elapse
between starting the protocols and the ability of the test topology to
forward traffic. This test is not intended to reflect DUT performance;
it is used only as a way to find the time T1 that is used in all
subsequent tests.

To execute the test, perform the following steps:

   1. Configure the DUT and the protocol simulator (or auxiliary
      nodes)

   2. At T0 start the traffic and then immediately start the routeing
      protocols

   3. When traffic starts arriving at Sink Port 1, stop the test.

The time of arrival of the first packet is T1.

4.2. Generic data-plane failure test

The purpose of the failure test is to measure the time required by the
DUT to resume traffic flow after the best path to the destination
fails. The following steps are common for all failure tests:

   1. Start the protocols and mark the time as T0

   2. At time T1 start traffic to each prefix in set NetA

   3. At T2+e simulate the failure or restoration event (see
      Section 5)

   4. From T2+e until T3 packets do not arrive at NetA

   5. After packets are seen again at NetA (at time T3), wait until
      time T4

   6. Stop the traffic

   7. Measure the total number of lost packets and calculate the
      outage from the known packets-per-second rate

4.3. Generic control-plane test procedure

   1. At T0 bring up all interfaces and protocols, and start capturing
      BGP packets at RS1

   2. At T1+e simulate the failure/restoration event (see Section 5)

   3. At T2-d1 the first UPDATE message is sent by the DUT, and at T2
      it is observed at RS1

   4. At T3-d2 the last UPDATE message is sent by the DUT, and at T3
      it is observed at RS1

d1 and d2 represent serialisation and propagation delay and can be
disregarded unless the DUT-RS1 link has a large delay. With this in
mind, T2-(T1+e) and T3-(T1+e) represent the convergence time for the
first and last prefix respectively.

5. Failure and restoration scenarios

This section defines a set of failure and restoration scenarios used
in step 3 of the generic test procedures described in the previous
section. Unless otherwise specified, all scenarios are applicable to
both the data-plane and control-plane test procedures.

5.1. Loss of Signal on the link attached to DUT

This scenario simulates the situation where a link attached to the DUT
fails and Loss of Signal (LoS) can be observed by the DUT. In other
words, the link fails and the interface on the DUT goes down.

To simulate the LoS failure, at the time defined by the test procedure
shut down the R1 side of the link to the DUT.

To simulate the LoS restoration, at the time defined by the test
procedure re-activate the R1 side of the link to the DUT.

5.2. Link failure without LoS

This scenario simulates the situation where the link between the DUT
and an adjacent node fails but the DUT does not observe LoS.
In practice such a failure can occur when, for example, the link
between the DUT and the adjacent node is implemented via carrier
equipment that does not shut the link down when the remote side of the
link fails.

The DUT can use various methods to detect such failures, including but
not limited to protocol HELLO or keep-alive packets, BFD, or OAM. This
document does not restrict the methods the DUT can use, but requires
the particular method used to be recorded in the test report.

The basic network topology is modified for the purpose of this test
only, as follows: rather than using direct cabling between the DUT and
R1, the link is implemented via an intermediate L2 switch that
supports the concept of VLANs. Initially the switch ports connected to
the DUT and R1 are placed into the same VLAN (the same L2 broadcast
domain).

To simulate the failure, at the time defined by the test procedure
move the switch port connected to R1 to a VLAN different from the one
used for the switch port connected to the DUT.

To simulate the restoration, at the time defined by the test procedure
move the switch port connected to R1 back to the same VLAN as the one
used for the switch port connected to the DUT.

5.3. Non-direct link failure

This scenario simulates the situation where a link that is not
directly connected to the DUT, but is located on the primary path to
the destination, fails. The unmodified basic network topology is used.

Depending on the technologies used in the setup, different failure
detection techniques can be employed by the DUT. This document assumes
that the DUT relies exclusively on IGP information to learn about the
failure, and that the nodes adjacent to the failed link flood this
information within D seconds of the event. If required, the exact
value of D can be obtained through a simple additional test, but in
this document D is assumed to be 0 (zero).

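As a minimal sketch of such an additional test, D could be estimated
from a packet capture taken on the DUT side of the topology. The
function and capture format below are illustrative assumptions, not
part of this document: the capture is modelled as (timestamp,
description) records, and D is the gap between the simulated link-down
trigger and the first IGP update reporting it.

```python
# Illustrative sketch (names and capture format are assumptions):
# estimate the flooding delay D from a capture of control-plane packets.

def estimate_flooding_delay(trigger_time, capture, igp_keyword="LSA"):
    """Return D in seconds, or None if no matching IGP update was seen."""
    for timestamp, description in sorted(capture):
        # First IGP update at or after the trigger defines D.
        if timestamp >= trigger_time and igp_keyword in description:
            return timestamp - trigger_time
    return None

# Example: failure triggered at t=100.000 s; R1 floods an LSA 35 ms later.
capture = [(99.500, "IGP hello"),
           (100.035, "LSA: link R3-R1 down"),
           (100.250, "BGP UPDATE")]
print(estimate_flooding_delay(100.000, capture))  # -> approximately 0.035
```
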
It is possible, though undesirable, that some traffic and protocol
simulators may continue accepting packets coming through the port that
leads to the simulated failed link. It is essential to check for such
behaviour prior to the tests and, if it is confirmed, to exclude the
packets received after the failure from the calculations in step 7 of
the test.

The failure event is triggered by simulating a shutdown of the R3 side
of the link to R1 at the time defined by the test procedure. R1 MUST
send an IGP update (depending on which protocol is used) to the DUT
within D seconds.

The restoration event is triggered by simulating recovery of the R3
side of the link to R1 at the time defined by the test procedure. R1
MUST send an IGP update (depending on which protocol is used) to the
DUT within D seconds.

5.4. Best route withdrawal

This scenario simulates the situation where the best AS exit path to a
destination is no longer valid and the ASBR sends a BGP UPDATE to its
iBGP peers. The unmodified basic network topology is used.

Disconnecting R3 from NetA implies that R3 will send a BGP WITHDRAW
for these prefixes in its update to the DUT. It is possible, though
undesirable, that some protocol simulators and traffic generators will
still count packets received at sink port 1 even after the prefixes
were withdrawn. To correctly execute this test it is mandatory that
traffic received at sink port 1 after the prefixes are withdrawn is
ignored and not counted as delivered. If the traffic generator cannot
assure such functionality (this should be checked prior to the test),
then packets received at sink port 1 after the withdrawal MUST be
excluded from the calculation in step 7 of the test.

The failure event is triggered by simulating a failure of the link
between R3 and NetA and immediate withdrawal of all corresponding
prefixes by R3.

The restoration event is triggered by simulating recovery of the link
between R3 and NetA and an immediate BGP UPDATE for all corresponding
prefixes by R3.

5.5. iBGP next-hop failure

This scenario simulates the situation where the ASBR used as the best
exit to a destination unexpectedly fails at both the control and
forwarding planes. Both R1 and the router within M1 connected to R3
MUST send an appropriate IGP update message to the rest of the network
within D seconds. To detect the failure the DUT MAY rely on IGP
information provided by the rest of the network, or it MAY employ
additional techniques. This document does not restrict which detection
mechanism the DUT uses, but requires that the particular mechanism is
recorded in the test report.

The failure event is triggered by simulating removal of R3 from the
test topology at the time defined by the test procedure, followed by
IGP updates as described in the previous paragraph.

The recovery event is triggered by re-introducing R3 into the test
topology, followed by IGP updates as described in the first paragraph
of this section and immediate re-activation of the BGP session between
R3 and the DUT. Note that the recovery time calculated by this method
depends on DUT performance with respect to bringing up a new BGP
session. This is intentional. Control-plane convergence benchmarking
can be performed separately, by a method that is outside the scope of
this document, and the two results can be correlated with the
data-plane convergence value should that be necessary.

6. Test report

TODO: Report format is to be discussed.

The test report MUST contain the following data for each test:

   1. T1 and 'e'

   2. Number of prefixes in NetA and NetB

   3. Size of M1 (recorded as N*K)

   4. Traffic rate, in packets per second, and packet size at the IP
      layer in octets

   5. Number of lost packets during failure, and number of lost
      packets during restoration

7. Link bundling and Equal Cost Multi-Path

Scenarios where the DUT can balance traffic to NetA across multiple
best paths are explicitly excluded from the scope of this document.
There are two reasons.

First, two different DUTs may choose different paths (out of all equal
ones) to forward a given packet, which makes it unreasonably difficult
to define generic traffic that would produce comparable results when
testing different platforms.

Second, the mechanisms used to handle failures in an ECMP (but not
necessarily a link-bundling) environment are similar to those handling
single-path failures. Therefore it is expected that convergence in an
ECMP scenario will be of the same order as in the single-path
scenario.

8. Graceful Restart and Non-Stop Forwarding

While Graceful Restart and Non-Stop Forwarding mechanisms are related
to the DUT's ability to forward traffic under certain failure
conditions, testing of the DUT's own ability to restore or preserve
traffic flow is already covered in RFC 6201.

9. Security considerations

The tests described in this document are intended to be performed in
an isolated lab environment, which inherently has no security
implications for the live network of the organisation or the Internet
as a whole.

The authors foresee that some people or organisations might be
interested in benchmarking the performance of live networks. The tests
described in this document are disruptive by nature and will have an
impact at least on the network where they are executed; depending on
the role of that network, the effect can extend to other parts of the
Internet. Such tests MUST NOT be attempted in a live environment
without careful consideration.

The publication of this document does not increase the potential
negative consequences of tests executed in a live environment, because
the information provided here is merely a recording of widely known
and used techniques.

10. IANA Considerations

None.

11. Acknowledgments

The authors would like to thank Gregory Cauchie, Rob Shakir, David
Freedman, Anton Elita, Saku Ytti and Andrew Yourtchenko for their
valuable contributions and peer review of this work.

12. Normative References

   [RFC4760]  Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
              "Multiprotocol Extensions for BGP-4", RFC 4760,
              January 2007.

Authors' Addresses

   Ilya Varlashkin
   Easynet Global Services

   Email: ilya.varlashkin@easynet.com

   Rajiv Papneja
   Huawei Technologies (USA)

   Email: rajiv.papneja@huawei.com

   Bhavani Parise
   Cisco

   Email: bhavani@cisco.com

   Tara Van Unen
   Ixia

   Email: TVanUnen@ixiacom.com