Benchmarking Working Group                                    R. Papneja
Internet-Draft                                       Huawei Technologies
Intended status: Informational                                 B. Parise
Expires: April 17, 2015                                    Cisco Systems
                                                                S. Hares
                                                          Adara Networks
                                                                  D. Lee
                                                                    IXIA
                                                           I. Varlashkin
                                                 Easynet Global Services
                                                        October 14, 2014

     Basic BGP Convergence Benchmarking Methodology for Data Plane
                              Convergence
              draft-ietf-bmwg-bgp-basic-convergence-03.txt

Abstract

   BGP is widely deployed and used by service providers as the default
   inter-AS routing protocol.  It is of utmost importance to ensure
   that when a BGP peer or a downstream link of a BGP peer fails, the
   alternate paths are rapidly used and routes via these alternate
   paths are installed.  This document provides a basic BGP
   benchmarking methodology using the existing BGP Convergence
   Terminology, RFC 4098.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 17, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1. Introduction
      1.1. Benchmarking Definitions
      1.2. Purpose of BGP FIB (Data Plane) Convergence
      1.3. Control Plane Convergence
      1.4. Benchmarking Testing
   2. Existing Definitions and Requirements
   3. Test Topologies
      3.1. General Reference Topologies
   4. Test Considerations
      4.1. Number of Peers
      4.2. Number of Routes per Peer
      4.3. Policy Processing/Reconfiguration
      4.4. Configured Parameters (Timers, etc.)
      4.5. Interface Types
      4.6. Measurement Accuracy
      4.7. Measurement Statistics
      4.8. Authentication
      4.9. Convergence Events
      4.10. High Availability
   5. Test Cases
      5.1. Basic Convergence Tests
         5.1.1. RIB-IN Convergence
         5.1.2. RIB-OUT Convergence
         5.1.3. eBGP Convergence
         5.1.4. iBGP Convergence
         5.1.5. eBGP Multihop Convergence
      5.2. BGP Failure/Convergence Events
         5.2.1. Physical Link Failure on DUT End
         5.2.2. Physical Link Failure on Remote/Emulator End
         5.2.3. ECMP Link Failure on DUT End
      5.3. BGP Adjacency Failure (Non-Physical Link Failure) on
           Emulator
      5.4. BGP Hard Reset Test Cases
         5.4.1. BGP Non-Recovering Hard Reset Event on DUT
      5.5. BGP Soft Reset
      5.6. BGP Route Withdrawal Convergence Time
      5.7. BGP Path Attribute Change Convergence Time
      5.8. BGP Graceful Restart Convergence Time
   6. Reporting Format
   7. IANA Considerations
   8. Security Considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses

1. Introduction

   This document defines the methodology for benchmarking data plane
   FIB convergence performance of BGP in routers and switches using
   topologies of 3 or 4 nodes.  The methodology proposed in this
   document applies to both IPv4 and IPv6; if a particular test is
   unique to one version, it is marked accordingly.  For IPv6
   benchmarking, the device under test requires support of Multi-
   Protocol BGP (MP-BGP) [RFC4760, RFC2545].  Similarly, both iBGP and
   eBGP are covered in the tests as applicable.

   The scope of this document is to provide the methodology for BGP
   FIB convergence measurements, with BGP functionality limited to
   IPv4 and IPv6 as defined in RFC 4271 and Multi-Protocol BGP
   (MP-BGP) [RFC4760, RFC2545].  Other BGP extensions to support
   layer-2 and layer-3 virtual private networks (VPNs) are outside the
   scope of this document.  Interaction with IGPs (IGP interworking)
   is outside the scope of this document.

1.1. Benchmarking Definitions

   The terminology used in this document is defined in [RFC4098].  One
   additional term is defined in this draft: FIB (Data Plane) BGP
   Convergence.

   FIB (data plane) convergence is defined as the completion of all
   FIB changes so that all forwarded traffic now takes the newly
   proposed route.  RFC 4098 defines the terms BGP device, FIB, and
   forwarded traffic.  Data plane convergence is different from
   control plane convergence within a node.

   This document defines the methodology to test:

   - data plane convergence on a single BGP device that supports the
     BGP functionality with the scope outlined above,

   - using a test topology of 3 or 4 nodes, which is sufficient to
     recreate the convergence events used in the various tests of this
     draft.

1.2. Purpose of BGP FIB (Data Plane) Convergence

   In the current Internet architecture, inter-Autonomous System
   (inter-AS) transit is primarily available through BGP.  To maintain
   reliable connectivity within a domain or across domains, fast
   recovery from failures is critical.  To ensure minimal traffic
   losses, many service providers require BGP implementations to
   converge the entire Internet routing table within sub-seconds at
   the FIB level.

   Furthermore, to compare these numbers amongst various devices,
   service providers are also looking at ways to standardize the
   convergence measurement methods.  This document offers test methods
   for simple topologies.  These simple tests will provide a quick
   high-level check of BGP data plane convergence across multiple
   implementations from different vendors.

1.3. Control Plane Convergence

   The convergence of BGP occurs at two levels: RIB and FIB
   convergence.  RFC 4098 defines terms for BGP control plane
   convergence.  Methodologies that test control plane convergence are
   out of scope for this draft.

1.4. Benchmarking Testing

   In order to ensure that the results obtained in tests are
   repeatable, careful setup of initial conditions and exact test
   steps are required.

   This document proposes these initial conditions, test steps, and
   result checking.  To ensure uniformity of the results, all optional
   parameters SHOULD be disabled and all settings SHOULD be changed to
   default; these may include BGP timers as well.

2. Existing Definitions and Requirements

   RFC 1242, "Benchmarking Terminology for Network Interconnect
   Devices" [RFC1242] and RFC 2285, "Benchmarking Terminology for LAN
   Switching Devices" [RFC2285] SHOULD be reviewed in conjunction with
   this document.  Commonly used terms may also be found in RFC 1983
   [RFC1983].

   For the sake of clarity and continuity, this document adopts the
   general template for benchmarking terminology set out in Section 2
   of RFC 1242.  The following terms are assumed to be taken as
   defined in RFC 1242 [RFC1242]: Throughput, Latency, Constant Load,
   Frame Loss Rate, and Overhead Behavior.  In addition, the following
   terms are taken as defined in [RFC2285]: Forwarding Rates, Maximum
   Forwarding Rate, Loads, Device Under Test (DUT), and System Under
   Test (SUT).

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

3. Test Topologies

   This section describes the test setups for use in BGP benchmarking
   tests measuring convergence of the FIB (data plane) after the BGP
   updates have been received.

   These test setups have 3 or 4 nodes with the following
   configurations:

   1. Basic test setup

   2. Three-node setup for iBGP or eBGP convergence

   3. Setup for the eBGP multihop test scenario

   4. Four-node setup for iBGP or eBGP convergence

   Individual tests refer to these topologies.

   Figures 1-4 use the following conventions:

   o  AS-X: Autonomous System X

   o  Loopback Int: Loopback interface on the BGP-enabled device

   o  HLP, HLP1, HLP2: Helper routers running the same version of BGP
      as the DUT

   o  Enable NTP, or use any external clock source, to synchronize the
      nodes

3.1. General Reference Topologies

   The Emulator acts as one or more BGP peers for the different test
   cases.

   +----------+                             +------------+
   |          |     traffic interfaces     |            |
   |          |------------------------1----| tx         |
   |          |------------------------2----| tr1        |
   |          |------------------------3----| tr2        |
   |   DUT    |                             |  Emulator  |
   |          |     routing interfaces     |            |
   |   Dp1    |-----------------------------| Emp1       |
   |          |        BGP Peering          |            |
   |   Dp2    |-----------------------------| Emp2       |
   |          |        BGP Peering          |            |
   +----------+                             +------------+

                     Figure 1  Basic Test Setup

   +------------+        +-----------+           +-----------+
   |            |        |           |           |           |
   |            |        |           |           |           |
   |    HLP     |        |    DUT    |           | Emulator  |
   |   (AS-X)   |--------|   (AS-Y)  |-----------|  (AS-Z)   |
   |            |        |           |           |           |
   |            |        |           |           |           |
   |            |        |           |           |           |
   +------------+        +-----------+           +-----------+
         |                                             |
         |                                             |
         +---------------------------------------------+

      Figure 2  Three-Node Setup for eBGP and iBGP Convergence

   +-----------------------------------------------+
   |                                               |
   |                                               |
   +------------+        +-----------+      +-----------+
   |            |        |           |      |           |
   |            |        |           |      |           |
   |    HLP     |        |    DUT    |      | Emulator  |
   |   (AS-X)   |--------|   (AS-Y)  |------|  (AS-Z)   |
   |            |        |           |      |           |
   |            |        |           |      |           |
   |            |        |           |      |           |
   +------------+        +-----------+      +-----------+
       |Loopback-Int                         |Loopback-Int
       |                                     |
       +                                     +

      Figure 3  BGP Convergence for eBGP Multihop Scenario

   +---------+     +--------+     +--------+     +---------+
   |         |     |        |     |        |     |         |
   |         |     |        |     |        |     |         |
   |  HLP1   |     |  DUT   |     |  HLP2  |     |Emulator |
   | (AS-X)  |-----| (AS-X) |-----| (AS-Y) |-----| (AS-Z)  |
   |         |     |        |     |        |     |         |
   |         |     |        |     |        |     |         |
   |         |     |        |     |        |     |         |
   +---------+     +--------+     +--------+     +---------+
        |                                             |
        |                                             |
        +---------------------------------------------+

      Figure 4  Four-Node Setup for eBGP and iBGP Convergence

4. Test Considerations

   The test cases for measuring convergence for iBGP and eBGP are
   different, because iBGP and eBGP use different mechanisms to
   advertise, install, and learn routes.  Typically, an iBGP route on
   the DUT is installed and exported when the next hop is valid.  For
   eBGP, the route is installed on the DUT with the remote interface
   address as the next hop, with the exception of the multihop test
   case (as specified in the test).

4.1. Number of Peers

   The number of peers is defined as the number of BGP neighbors or
   sessions the DUT has at the beginning of the test.  The peers are
   established before the tests begin.  The relationship could be
   either iBGP or eBGP peering, depending upon the test case
   requirement.

   The DUT establishes one or more BGP sessions with one or more
   emulated routers or helper nodes.  Additional peers can be added
   based on the testing requirements.  The number of peers enabled
   during the testing should be well documented in the report matrix.

4.2. Number of Routes per Peer

   The number of routes per peer is defined as the number of routes
   advertised or learned by the DUT per session or through a neighbor
   relationship with an emulator or helper node.  The tester,
   emulating a neighbor, MUST advertise at least one route per peer.

   Each test run must identify the route stream in terms of route
   packing, route mixture, and number of routes.  This route stream
   must be well documented in the report.  RFC 4098 defines these
   terms.
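   The documentation requirement above can be captured as a small
   report record.  A minimal sketch in Python follows; the RouteStream
   type and its field names are illustrative assumptions of this
   editor, not structures defined by RFC 4098 or this document.

```python
from dataclasses import dataclass, asdict


@dataclass
class RouteStream:
    """Route stream description recorded with each test run."""
    num_routes: int     # number of routes advertised by the emulated peer
    route_packing: int  # NLRI entries per UPDATE message (RFC 4098 term)
    route_mixture: str  # description of the mix, e.g. an Internet mix

# Example entry for one peering session in a test-report matrix:
stream = RouteStream(num_routes=500_000,
                     route_packing=40,
                     route_mixture="unique-internet-mix")
print(asdict(stream))
```

   One such record per peering session, together with the timing
   sequence between peers, would satisfy the documentation requirement
   stated above.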
   It is RECOMMENDED that the user consider advertising the entire
   current Internet routing table per peering session, using an
   Internet route mixture with unique or non-unique routes.  If
   multiple peers are used, it is important to precisely document the
   timing sequence between the peers sending routes (as defined in
   RFC 4098).

4.3. Policy Processing/Reconfiguration

   The DUT MUST run one baseline test where the policy is the Minimal
   policy defined in RFC 4098.  Additional runs may be done with
   policies set up before the tests begin.  Exact policy settings MUST
   be documented as part of the test.

4.4. Configured Parameters (Timers, etc.)

   There are configured parameters and timers that may impact the
   measured BGP convergence times.

   The benchmark metrics MAY be measured at any fixed values for these
   configured parameters.

   It is RECOMMENDED that these configured parameters have one of the
   following settings: a) the default values specified by the
   respective RFC, b) platform-specific default parameters, or c)
   values as expected in the operational network.  All optional BGP
   settings MUST be kept consistent across iterations of any specific
   test.

   Examples of the configured parameters that may impact measured BGP
   convergence time include, but are not limited to:

   1.  Interface failure detection timer

   2.  BGP Keepalive timer

   3.  BGP Holdtime

   4.  BGP update delay timer

   5.  ConnectRetry timer

   6.  TCP Segment Size

   7.  Minimum Route Advertisement Interval (MRAI)

   8.  MinASOriginationInterval (MAOI)

   9.  Route Flap Dampening parameters

   10. TCP MD5

   11. Maximum TCP Window Size

   12. MTU

   The basic-test settings for these parameters should be:

   1.  Interface failure detection timer (0 ms)

   2.  BGP Keepalive timer (1 min)

   3.  BGP Holdtime (3 min)

   4.  BGP update delay timer (0 s)

   5.  ConnectRetry timer (1 s)

   6.  TCP Segment Size (4096)

   7.  Minimum Route Advertisement Interval (MRAI) (0 s)

   8.  MinASOriginationInterval (MAOI) (0 s)

   9.  Route Flap Dampening parameters (off)

   10. TCP MD5 (off)

4.5. Interface Types

   The type of media dictates which test cases may be executed; each
   interface type has a unique mechanism for detecting link failures,
   and the speed at which that mechanism operates will influence the
   measurement results.  All interfaces MUST be of the same media and
   throughput for all iterations of each test case.

4.6. Measurement Accuracy

   Since observed packet loss is used to measure the route convergence
   time, the time between two successive packets offered to each
   individual route is the highest possible accuracy of any
   packet-loss-based measurement.  When packet jitter is much less
   than the convergence time, it is a negligible source of error and
   hence will be treated as within tolerance.

   Other options to measure convergence are the Time-Based Loss Method
   (TBLM) and the Timestamp-Based Method (TBM) [MPLSProt].

   An exterior measurement on the input media (such as Ethernet) is
   defined by this specification.

4.7. Measurement Statistics

   The benchmark measurements may vary for each trial, due to the
   statistical nature of timer expirations, CPU scheduling, etc.  It
   is recommended to repeat the test multiple times.  Evaluation of
   the test data must be done with an understanding of generally
   accepted testing practices regarding repeatability, variance, and
   statistical significance of a small number of trials.

   For any repeated tests that are averaged to remove variance, all
   parameters MUST remain the same.

4.8. Authentication

   Authentication in BGP is done using the TCP MD5 Signature Option
   [RFC5925].  The processing of the MD5 hash, particularly in devices
   with a large number of BGP peers and a large amount of update
   traffic, can have an impact on the control plane of the device.
   If authentication is enabled, it MUST be documented correctly in
   the reporting format.

4.9. Convergence Events

   Convergence events or triggers are defined as abnormal occurrences
   in the network that initiate route flapping and hence force the
   re-convergence of a steady-state network.  In a real network, a
   series of convergence events may cause the convergence latency that
   operators desire to test.

   These convergence events must be defined in terms of the sequences
   defined in RFC 4098.  This document begins all tests with an
   initial router set-up.  Additional documents will define BGP data
   plane convergence based on peer initialization.

   The convergence events may or may not be tied to an actual failure.
   A Soft Reset (RFC 4098) does not clear the RIB or FIB tables.  A
   Hard Reset clears the BGP peer sessions, the RIB tables, and the
   FIB tables.

4.10. High Availability

   Due to the different Non-Stop-Routing (sometimes referred to as
   High-Availability) solutions available from different vendors, it
   is RECOMMENDED that any redundancy available in the routing
   processors be disabled during the convergence measurements.  For
   cases where the redundancy cannot be disabled, the results are no
   longer comparable, and the level of impact on the measurements is
   out of scope of this document.

5. Test Cases

   All tests defined under this section assume the following:

   a. BGP peers are in the Established state.

   b. BGP state should be cleared from the Established state to Idle
      prior to each test.  This is recommended to ensure that all
      tests start with the BGP peers forced back to the Idle state and
      their databases flushed.

   c. Furthermore, traffic generation and routing should be verified
      in the topology to ensure there is no packet loss observed on
      any advertised routes.

   d. The arrival timestamp of advertised routes can be measured by
      installing an inline monitoring device between the emulator and
      the DUT, or by a span port of the DUT connected to an external
      analyzer.  The time base of such an inline monitor or external
      analyzer needs to be synchronized with the protocol and traffic
      emulator.  Some modern emulators may have the capability to
      capture and timestamp every NLRI packet leaving and arriving at
      the emulator ports.  The timestamps of these NLRI packets will
      be almost identical to the arrival time at the DUT if the cable
      distance between the emulator and the DUT is relatively short.

5.1. Basic Convergence Tests

   These test cases measure characteristics of a BGP implementation in
   non-failure scenarios:

   1. RIB-IN Convergence

   2. RIB-OUT Convergence

   3. eBGP Convergence

   4. iBGP Convergence

5.1.1. RIB-IN Convergence

   Objective:

      This test measures the convergence time taken to receive and
      install a route in the RIB using BGP.

   Reference Test Setup:

      This test uses the setup shown in figure 1.

   Procedure:

      A. All variables affecting Convergence should be set to a
         basic-test state (as defined in Section 4.4).

      B. Establish a BGP adjacency between the DUT and one peer of the
         Emulator, Emp1.

      C. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT, or a configurable delay, before proceeding with the
         rest of the test.

      D. Start the traffic from the Emulator tx towards the DUT,
         targeted at a route specified in the route mixture (e.g.,
         routeA).  Initially, no traffic SHOULD be observed on the
         egress interface, as routeA is not installed in the
         forwarding database of the DUT.

      E. Advertise routeA from the peer (Emp1) to the DUT and record
         the time.

         This is Tup(EMp1,Rt-A), also named 'XMT-Rt-time(Rt-A)'.

      F. Record the time when routeA from Emp1 is received at the DUT.
         This is Tup(DUT,Rt-A), also named 'RCV-Rt-time(Rt-A)'.

      G. Record the time when the traffic targeted towards routeA is
         received by the Emulator on the appropriate traffic egress
         interface.

         This is TR(TDr,Rt-A), also named DUT-XMT-Data-Time(Rt-A).

      H. The difference between Tup(DUT,Rt-A) and the traffic received
         time TR(TDr,Rt-A) is the FIB Convergence Time for routeA in
         the route mixture.  A full convergence for the route update
         is the measurement between the 1st route (Rt-A) and the last
         route (Rt-last).

         Route update convergence is

            TR(TDr,Rt-last) - Tup(DUT,Rt-A), or

            DUT-XMT-Data-Time(Rt-last) - RCV-Rt-Time(Rt-A)

   Note: It is recommended that a single test with the same route
   mixture be repeated several times.  A report should provide the
   Standard Deviation of all tests and the Average.

   Running tests with a varying number of routes and route mixtures is
   important to get a full characterization of a single peer.

5.1.2. RIB-OUT Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install, and advertise a route using
      BGP.

   Reference Test Setup:

      This test uses the setup shown in figure 2.

   Procedure:

      A. The Helper node (HLP) MUST run the same version of BGP as the
         DUT.

      B. All devices MUST be synchronized using NTP or some local
         reference clock.

      C. All configuration variables for the HLP, DUT, and Emulator
         SHOULD be set to the same values.  These values MAY be
         basic-test values or a unique set completely described in the
         test set-up.

      D. Establish a BGP adjacency between the DUT and the Emulator.

      E. Establish a BGP adjacency between the DUT and the Helper
         node.

      F. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT, or a configurable delay, before proceeding with the
         rest of the test.

      G. Start the traffic from the Emulator towards the Helper node,
         targeted at a specific route (e.g., routeA).
         Initially, no traffic SHOULD be observed on the egress
         interface, as routeA is not installed in the forwarding
         database of the DUT.

      H. Advertise routeA from the Emulator to the DUT and note the
         time.

         This is Tup(EMx,Rt-A), also named EM-XMT-Data-Time(Rt-A).

      I. Record when routeA is received by the DUT.

         This is Tup(DUTr,Rt-A), also named DUT-RCV-Rt-Time(Rt-A).

      J. Record the time when routeA is forwarded by the DUT towards
         the Helper node.

         This is Tup(DUTx,Rt-A), also named DUT-XMT-Rt-Time(Rt-A).

      K. Record the time when the traffic targeted towards routeA is
         received on the Route Egress Interface.  This is TR(EMr,
         Rt-A), also named DUT-XMT-Data-Time(Rt-A).

         FIB convergence = (DUT-XMT-Data-Time -
                            DUT-RCV-Rt-Time)(Rt-A)

         RIB convergence = (DUT-XMT-Rt-Time - DUT-RCV-Rt-Time)(Rt-A)

         Convergence for a route stream is characterized by

         a) Individual route convergence for FIB, RIB

         b) All-route convergence:

            FIB-convergence = DUT-XMT-Data-Time(last) -
                              DUT-RCV-Rt-Time(first)

            RIB-convergence = DUT-XMT-Rt-Time(last) -
                              DUT-RCV-Rt-Time(first)

5.1.3. eBGP Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install, and advertise a route in an
      eBGP scenario.

   Reference Test Setup:

      This test uses the setup shown in figure 2; the scenarios
      described in RIB-IN and RIB-OUT are applicable to this test
      case.

5.1.4. iBGP Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install, and advertise a route in an
      iBGP scenario.

   Reference Test Setup:

      This test uses the setup shown in figure 2; the scenarios
      described in RIB-IN and RIB-OUT are applicable to this test
      case.

5.1.5. eBGP Multihop Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install, and advertise a route in an
      eBGP multihop scenario.

   Reference Test Setup:

      This test uses the setup shown in figure 3.  The DUT is used
      along with a Helper node.

   Procedure:

      A. The Helper node (HLP) MUST run the same version of BGP as the
         DUT.

      B. All devices MUST be synchronized using NTP or some local
         reference clock.

      C. All variables affecting Convergence, such as authentication,
         policies, and timers, SHOULD be set to basic-test settings.

      D. All 3 devices (DUT, Emulator, and Helper node) are configured
         with different Autonomous Systems.

      E. Loopback interfaces are configured on the DUT and Helper
         node, and connectivity is established between them using any
         configuration options available on the DUT.

      F. Establish a BGP adjacency between the DUT and the Emulator.

      G. Establish a BGP adjacency between the DUT and the Helper
         node.

      H. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT, or a configurable delay, before proceeding with the
         rest of the test.

      I. Start the traffic from the Emulator towards the DUT, targeted
         at a specific route (e.g., routeA).

      J. Initially, no traffic SHOULD be observed on the egress
         interface, as routeA is not installed in the forwarding
         database of the DUT.

      K. Advertise routeA from the Emulator to the DUT and note the
         time.  This is Tup(EMx,Rt-A), also named Route-Tx-time(Rt-A).

      L. Record the time when the route is received by the DUT.  This
         is Tup(EMr,DUT), also named Route-Rcv-time(Rt-A).

      M. Record the time when the traffic targeted towards routeA is
         received from the Egress Interface of the DUT on the
         Emulator.  This is Tup(EMd,DUT), also named
         Data-Rcv-time(Rt-A).

      N. Record the time when routeA is forwarded by the DUT towards
         the Helper node.
This is Tup(EMf,DUT) also named Route-Fwd- 748 time(Rt-A) 750 FIB Convergence = (Data-Rcv-time - Route-Rcv-time)(Rt-A) 752 RIB Convergence = (Route-Fwd-time - Route-Rcv-time)(Rt-A) 754 Note: It is recommended that the test be repeated with varying number 755 of routes and route mixtures. With each set route mixture, the test 756 should be repeated multiple times. The results should record 757 average, mean, Standard Deviation 759 5.2. BGP Failure/Convergence Events 761 5.2.1. Physical Link Failure on DUT End 763 Objective: 765 This test measures the route convergence time due to local link 766 failure event at DUT's Local Interface. 768 Reference Test Setup: 770 This test uses the setup as shown in figure 1. Shutdown event is 771 defined as an administrative shutdown event on the DUT. 773 Procedure: 775 A. All variables affecting Convergence like authentication, 776 policies, timers should be set to basic-test policy. 778 B. Establish 2 BGP adjacencies from DUT to Emulator, one over the 779 peer interface and the other using a second peer interface. 781 C. Advertise the same route, routeA over both the adjacencies and 782 (Emp1) Interface to be the preferred next hop. 784 D. To ensure adjacency establishment, wait for 3 KeepAlives from 785 the DUT or a configurable delay before proceeding with the 786 rest of the test. 788 E. Start the traffic from the Emulator towards the DUT targeted 789 at a specific route (e.g. routeA). Initially traffic would be 790 observed on the best egress route (Emp1) instead of Emp2. 792 F. Trigger the shutdown event of Best Egress Interface on DUT 793 (Dp1). 795 G. Measure the Convergence Time for the event to be detected and 796 traffic to be forwarded to Next-Best Egress Interface (Dp2) 798 Time = Data-detect(Emp2) - Shutdown time 800 H. Stop the offered load and wait for the queues to drain and 801 Restart. 803 I. Bring up the link on DUT Best Egress Interface. 805 J. 
         Measure the convergence time taken for the traffic to be
         rerouted from (Dp2) to the Best Interface (Dp1):

         Time = Data-detect(Emp1) - Bring Up time

      K. It is recommended that the test be repeated with a varying
         number of routes and route mixtures, or with a number of
         routes and route mixtures closer to what is deployed in
         operational networks.

5.2.2. Physical Link Failure on Remote/Emulator End

   Objective:

      This test measures the route convergence time due to a local
      link failure event at the Tester's local interface.

   Reference Test Setup:

      This test uses the setup shown in Figure 1.  The shutdown event
      is defined as a shutdown of the local interface of the Tester
      via a logical shutdown event.  The procedure used in
      Section 5.2.1 is used for the termination.

5.2.3. ECMP Link Failure on DUT End

   Objective:

      This test measures the route convergence time due to a local
      link failure event at an ECMP member.  The FIB configuration and
      BGP are set to allow two ECMP routes to be installed; however,
      policy directs the routes to be sent only over one of the paths.

   Reference Test Setup:

      This test uses the setup shown in Figure 1, and the procedure
      follows Section 5.2.1.

5.3. BGP Adjacency Failure (Non-Physical Link Failure) on Emulator

   Objective:

      This test measures the route convergence time due to a BGP
      adjacency failure on the Emulator.

   Reference Test Setup:

      This test uses the setup shown in Figure 1.

   Procedure:

      A. All variables affecting convergence, such as authentication,
         policies, and timers, should be set to the basic policy.

      B. Establish two BGP adjacencies from the DUT to the Emulator,
         one over the Best Egress Interface and the other using the
         Next-Best Egress Interface.

      C. Advertise the same route, routeA, over both adjacencies, and
         make the Best Egress Interface the preferred next hop.

      D.
         To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      E. Start the traffic from the Emulator towards the DUT targeted
         at a specific route (e.g., routeA).  Initially, traffic would
         be observed on the Best Egress Interface.

      F. Remove the BGP adjacency via a software adjacency down on the
         Emulator on the Best Egress Interface.  This time is called
         BGPadj-down-time, also termed BGPpeer-down.

      G. Measure the convergence time for the event to be detected and
         traffic to be forwarded to the Next-Best Egress Interface.
         This time is Tr-rr2, also called TR2-traffic-on.

         Convergence = TR2-traffic-on - BGPpeer-down

      H. Stop the offered load, wait for the queues to drain, and
         restart.

      I. Bring up the BGP adjacency on the Emulator over the Best
         Egress Interface.  This time is BGP-adj-up, also called
         BGPpeer-up.

      J. Measure the convergence time taken for the traffic to be
         rerouted to the Best Interface, measured from the BGPpeer-up
         event.

5.4. BGP Hard Reset Test Cases

5.4.1. BGP Non-Recovering Hard Reset Event on DUT

   Objective:

      This test measures the route convergence time due to a Hard
      Reset on the DUT.

   Reference Test Setup:

      This test uses the setup shown in Figure 1.

   Procedure:

      A. The requirement for this test case is that the Hard Reset
         event should be non-recovering and should affect only the
         adjacency between the DUT and the Emulator on the Best Egress
         Interface.

      B. All variables affecting convergence SHOULD be set to
         basic-test values.

      C. Establish two BGP adjacencies from the DUT to the Emulator,
         one over the Best Egress Interface and the other using the
         Next-Best Egress Interface.

      D. Advertise the same route, routeA, over both adjacencies, and
         make the Best Egress Interface the preferred next hop.

      E.
         To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      F. Start the traffic from the Emulator towards the DUT targeted
         at a specific route (e.g., routeA).  Initially, traffic would
         be observed on the Best Egress Interface.

      G. Trigger the Hard Reset event of the Best Egress Interface on
         the DUT.

      H. Measure the convergence time for the event to be detected and
         traffic to be forwarded to the Next-Best Egress Interface:

         Time of convergence = time-traffic-flow - time-reset

      I. Stop the offered load, wait for the queues to drain, and
         restart.

      J. It is recommended that the test be repeated with a varying
         number of routes and route mixtures, or with a number of
         routes and route mixtures closer to what is deployed in
         operational networks.

      K. When a varying number of routes is used, the convergence time
         is measured using the Loss-Derived method [IGPData].

      L. The convergence time in this scenario is influenced by the
         failure detection time on the Tester, the BGP KeepAlive time,
         and the routing and forwarding table update time.

5.5. BGP Soft Reset

   Objective:

      This test measures the route convergence time taken by an
      implementation to service a BGP Route Refresh message and
      advertise a route.

   Reference Test Setup:

      This test uses the setup shown in Figure 2.

   Procedure:

      A. The BGP implementation on the DUT and Helper Node needs to
         support the BGP Route Refresh capability [RFC2918].

      B. All devices MUST be synchronized using NTP or some local
         reference clock.

      C. All variables affecting convergence, such as authentication,
         policies, and timers, should be set to basic-test defaults.

      D. The DUT and Helper Node are configured in the same Autonomous
         System, whereas the Emulator is configured under a different
         Autonomous System.

      E. Establish BGP adjacency between the DUT and the Emulator.

      F.
         Establish BGP adjacency between the DUT and the Helper Node.

      G. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      H. Configure a policy under BGP on the Helper Node to deny
         routes received from the DUT.

      I. Advertise routeA from the Emulator to the DUT.

      J. The DUT will try to advertise the route to the Helper Node,
         where it will be denied.

      K. Wait for 3 KeepAlives.

      L. Start the traffic from the Emulator towards the Helper Node
         targeted at a specific route, say routeA.  Initially, no
         traffic would be observed on the egress interface, as routeA
         is not present.

      M. Remove the policy on the Helper Node and issue a Route
         Refresh request towards the DUT.  Note the timestamp of this
         event.  This is the RefreshTime.

      N. Record the time when the traffic targeted towards routeA is
         received on the egress interface.  This is RecTime.

      O. The following equation represents the Route Refresh
         convergence time per route:

         Route Refresh Convergence Time = (RecTime - RefreshTime)

5.6. BGP Route Withdrawal Convergence Time

   Objective:

      This test measures the route convergence time taken by an
      implementation to service a BGP Withdraw message and advertise
      the withdraw.

   Reference Test Setup:

      This test uses the setup shown in Figure 2.

   Procedure:

      A. This test consists of two steps to determine the Total
         Withdraw Processing Time.

      B. Step 1:

         (1) All devices MUST be synchronized using NTP or some local
             reference clock.

         (2) All variables should be set to basic-test parameters.

         (3) The DUT and Helper Node are configured in the same
             Autonomous System, whereas the Emulator is configured
             under a different Autonomous System.

         (4) Establish BGP adjacency between the DUT and the Emulator.
         (5) To ensure adjacency establishment, wait for 3 KeepAlives
             from the DUT or a configurable delay before proceeding
             with the rest of the test.

         (6) Start the traffic from the Emulator towards the DUT
             targeted at a specific route (e.g., routeA).  Initially,
             no traffic would be observed on the egress interface, as
             routeA is not present on the DUT.

         (7) Advertise routeA from the Emulator to the DUT.

         (8) The traffic targeted towards routeA is received on the
             egress interface.

         (9) Now the Tester sends a request to withdraw routeA to the
             DUT at time TRx(Awith), also called WdrawTime1(Rt-A).

         (10) Record the time when no traffic is observed on the
              egress interface.  This is RouteRemoveTime1(Rt-A).

         (11) The difference between RouteRemoveTime1 and WdrawTime1
              is WdrawConvTime1:

              WdrawConvTime1(Rt-A) = RouteRemoveTime1(Rt-A) -
              WdrawTime1(Rt-A)

      C. Step 2:

         (1) Continuing from Step 1, re-advertise routeA back to the
             DUT from the Tester.

         (2) The DUT will try to advertise routeA to the Helper Node
             (this assumes there exists a session between the DUT and
             the Helper Node).

         (3) Start the traffic from the Emulator towards the Helper
             Node targeted at a specific route (e.g., routeA).
             Traffic would be observed on the egress interface after
             routeA is received by the Helper Node:

             WATime = time traffic first flows

         (4) Now the Tester sends a request to withdraw routeA to the
             DUT.  This is WdrawTime2(Rt-A):

             WAWtime-TRx(Rt-A) = WdrawTime2(Rt-A)

         (5) The DUT processes the withdraw and sends it to the Helper
             Node.

         (6) Record the time when no traffic is observed on the egress
             interface of the Helper Node.  This is

             TR-WAW(DUT,RouteA) = RouteRemoveTime2(Rt-A)

         (7) The total withdraw processing time is

             TotalWdrawTime(Rt-A) = ((RouteRemoveTime2(Rt-A) -
             WdrawTime2(Rt-A)) - WdrawConvTime1(Rt-A))

5.7.
BGP Path Attribute Change Convergence Time

   Objective:

      This test measures the convergence time taken by an
      implementation to service a BGP path attribute change.

   Reference Test Setup:

      This test uses the setup shown in Figure 1.

   Procedure:

      A. This test only applies to the well-known mandatory
         attributes, such as Origin, AS Path, and Next Hop.

      B. In each iteration of the test, only one of these mandatory
         attributes needs to be varied, whereas the others remain the
         same.

      C. All devices MUST be synchronized using NTP or some local
         reference clock.

      D. All variables should be set to basic-test parameters.

      E. Advertise the route, routeA, over the Best Egress Interface
         only, making it the preferred path.  We call this
         advertisement Tbest.

      F. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      G. Start the traffic from the Emulator towards the DUT targeted
         at the specific route (e.g., routeA).  Initially, traffic
         would be observed on the Best Egress Interface.

      H. Now advertise the same route, routeA, on the Next-Best Egress
         Interface, but vary one of the well-known mandatory
         attributes so that it has a preferred value over that
         interface.  We call this advertisement Tbetter.  The other
         values need to be the same as what was advertised on the
         Best Egress adjacency:

         TRx(Path-Change(Rt-A)) = Path Change Event Time(Rt-A)

      I. Measure the convergence time for the event to be detected and
         traffic to be forwarded to the Next-Best Egress Interface:

         DUT(Path-Change, Rt-A) = Path-switch time(Rt-A)

         Convergence = Path-switch time(Rt-A) - Path Change Event
         Time(Rt-A)

      J. Stop the offered load, wait for the queues to drain, and
         restart.

      K. Repeat the test for the various attributes.

5.8.
BGP Graceful Restart Convergence Time

   Objective:

      This test measures the route convergence time taken by an
      implementation during a Graceful Restart event, as detailed in
      the terminology document [RFC4098].

   Reference Test Setup:

      This test uses the setup shown in Figure 4.

   Procedure:

      A. This test measures the time taken by an implementation to
         service a BGP Graceful Restart event and advertise a route.

      B. The Helper Nodes are the same model as the DUT and run the
         same BGP implementation as the DUT.

      C. The BGP implementation on the DUT and Helper Nodes needs to
         support the BGP Graceful Restart mechanism [RFC4724].

      D. All devices MUST be synchronized using NTP or some local
         reference clock.

      E. All variables are set to basic-test values.

      F. The DUT and Helper Node-1 (HLP1) are configured in the same
         Autonomous System, whereas the Emulator and Helper Node-2
         (HLP2) are configured under different Autonomous Systems.

      G. Establish BGP adjacencies between the DUT and the Helper
         Nodes.

      H. Establish BGP adjacency between Helper Node-2 and the
         Emulator.

      I. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      J. Configure a policy under BGP on Helper Node-1 to deny routes
         received from the DUT.

      K. Advertise routeA from the Emulator to Helper Node-2.

      L. Helper Node-2 advertises the route to the DUT, and the DUT
         will try to advertise the route to Helper Node-1, where it
         will be denied.

      M. Wait for 3 KeepAlives.

      N. Start the traffic from the Emulator towards Helper Node-1
         targeted at the specific route (e.g., routeA).  Initially, no
         traffic would be observed on the egress interface, as routeA
         is not present.

      O. Perform a Graceful Restart trigger event on the DUT and note
         the time.  This is the GREventTime.

      P. Remove the policy on Helper Node-1.

      Q.
         Record the time when the traffic targeted towards routeA is
         received on the egress interface.  This is TRr(DUT, routeA),
         also called RecTime(Rt-A).

      R. The following equation represents the Graceful Restart
         convergence time:

         Graceful Restart Convergence Time(Rt-A) = ((RecTime(Rt-A) -
         GREventTime) - RIB-IN)

      S. It is assumed in this test case that after a switchover is
         triggered on the DUT, it will not have any cycles to process
         BGP Route Refresh messages.  The reason for this assumption
         is that there is a narrow window of time after the switchover
         when, upon removal of the policy from Helper Node-1,
         implementations might generate a Route Refresh automatically,
         and this request might be serviced before the DUT actually
         switches over and reestablishes BGP adjacencies with its
         peers.

6. Reporting Format

   For each test case, it is recommended that the reporting tables
   below be completed, and all time values SHOULD be reported with the
   resolution specified in [RFC4098].
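   The timed benchmarks defined in Section 5 all reduce to simple
   differences of recorded event timestamps.  The sketch below is
   purely illustrative and not part of the methodology: the function
   names are hypothetical, the variable names mirror the quantities
   defined above, and all inputs are assumed to be NTP-synchronized
   times in seconds.

```python
# Illustrative timestamp arithmetic for the Section 5 benchmarks.
# Helper names are hypothetical; inputs are event times in seconds.

def fib_convergence(data_rcv_time, route_rcv_time):
    """Section 5.1: route reception to first forwarded data packet."""
    return data_rcv_time - route_rcv_time

def rib_convergence(route_fwd_time, route_rcv_time):
    """Section 5.1: route reception to re-advertisement."""
    return route_fwd_time - route_rcv_time

def total_withdraw_time(route_remove_time2, wdraw_time2, wdraw_conv_time1):
    """Section 5.6: withdraw propagation time, with the local
    component (WdrawConvTime1, measured in Step 1) subtracted."""
    return (route_remove_time2 - wdraw_time2) - wdraw_conv_time1

def graceful_restart_convergence(rec_time, gr_event_time, rib_in_time):
    """Section 5.8: restart-to-traffic time minus the RIB-IN term."""
    return (rec_time - gr_event_time) - rib_in_time

print(fib_convergence(12.0, 10.0))  # 2.0 seconds
```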
      Parameter                         Units
      Test case                         Test case number
      Test topology                     1, 2, 3, or 4
      Parallel links                    Number of parallel links
      Interface type                    GigE, POS, ATM, other
      Convergence Event                 Hard reset, Soft reset, link
                                        failure, or other (defined)
      eBGP sessions                     Number of eBGP sessions
      iBGP sessions                     Number of iBGP sessions
      eBGP neighbors                    Number of eBGP neighbors
      iBGP neighbors                    Number of iBGP neighbors
      Routes per peer                   Number of routes
      Total unique routes               Number of routes
      Total non-unique routes           Number of routes
      IGP configured                    ISIS, OSPF, static, or other
      Route Mixture                     Description of route mixture
      Route Packing                     Number of routes in an update
      Policy configured                 Yes, No
      Packet size offered to the DUT    Bytes
      Offered load                      Packets per second
      Packet sampling interval on       Seconds
        tester
      Forwarding delay threshold        Seconds
      Timer values configured on DUT:
        Interface failure indication    Seconds
          delay
        Hold time                       Seconds
        MinRouteAdvertisementInterval   Seconds
          (MRAI)
        MinASOriginationInterval        Seconds
          (MAOI)
        KeepAlive time                  Seconds
        ConnectRetry                    Seconds
      TCP parameters for DUT and tester:
        MSS                             Bytes
        Slow start threshold            Bytes
        Maximum window size             Bytes

   Test Details:

   a. If the Offered Load matches a subset of routes, describe how
      this subset is selected.

   b. Describe how the Convergence Event is applied, and whether or
      not it causes instantaneous traffic loss.

   c. If any policy is configured, describe the configured policy.
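   The Loss-Derived and Route-Specific figures reported in this
   section reduce to arithmetic over the recorded packet counters and
   per-route convergence times.  A minimal sketch, assuming a constant
   offered load in packets per second (the helper names are
   illustrative, not part of this methodology):

```python
from statistics import median

def loss_derived_convergence_time(convergence_packet_loss, offered_load_pps):
    """Loss-Derived convergence time: packets lost during the
    Convergence Event divided by the constant offered-load rate.
    Assumes duplicates and reordering do not inflate the loss count."""
    return convergence_packet_loss / offered_load_pps

def route_specific_summary(per_route_times):
    """Summarize per-route convergence times (seconds) into the
    Minimum/Maximum/Median/Average rows of the reporting table."""
    return {
        "min": min(per_route_times),
        "max": max(per_route_times),
        "median": median(per_route_times),
        "average": sum(per_route_times) / len(per_route_times),
    }

# Example: 125,000 packets lost at an offered load of 50,000 pps
print(loss_derived_convergence_time(125_000, 50_000))   # 2.5 seconds
print(route_specific_summary([1.2, 2.5, 2.0, 1.8]))
```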
   Complete the table below for the initial Convergence Event and the
   reversion Convergence Event.

      Parameter                         Unit
      Convergence Event                 Initial or reversion
      Traffic Forwarding Metrics:
        Total number of packets         Number of packets
          offered to DUT
        Total number of packets         Number of packets
          forwarded by DUT
        Connectivity Packet Loss        Number of packets
        Convergence Packet Loss         Number of packets
        Out-of-order packets            Number of packets
        Duplicate packets               Number of packets
      Convergence Benchmarks:
        Rate-Derived Method [IGPData]:
          First route convergence       Seconds
            time
          Full convergence time         Seconds
        Loss-Derived Method [IGPData]:
          Loss-derived convergence      Seconds
            time
        Route-Specific Loss-Derived
        Method:
          Minimum R-S convergence       Seconds
            time
          Maximum R-S convergence       Seconds
            time
          Median R-S convergence        Seconds
            time
          Average R-S convergence       Seconds
            time
      Loss of Connectivity Benchmarks:
        Loss-Derived Method:
          Loss-derived loss of          Seconds
            connectivity period
        Route-Specific Loss-Derived
        Method:
          Minimum LoC period [n]        Array of seconds
          Minimum Route LoC period      Seconds
          Maximum Route LoC period      Seconds
          Median Route LoC period      Seconds
          Average Route LoC period      Seconds

7. IANA Considerations

   This document does not require any new allocations by IANA.

8. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the
   constraints specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.
   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

9. Acknowledgements

   We would like to thank Anil Tandon, Arvind Pandey, Mohan Nanduri,
   Jay Karthik, and Eric Brendel for their input and discussions on
   various sections of the document.  We would also like to
   acknowledge Will Liu, Semion Lisyansky, and Faisal Shah for their
   review of and feedback on the document.

10. References

10.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2918]  Chen, E., "Route Refresh Capability for BGP-4",
              RFC 2918, September 2000.

   [RFC4098]  Berkowitz, H., Davies, E., Hares, S., Krishnaswamy, P.,
              and M. Lepp, "Terminology for Benchmarking BGP Device
              Convergence in the Control Plane", RFC 4098, June 2005.

   [RFC4271]  Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
              Protocol 4 (BGP-4)", RFC 4271, January 2006.

   [RFC6412]  Poretsky, S., Imhoff, B., and K. Michielsen,
              "Terminology for Benchmarking Link-State IGP Data-Plane
              Route Convergence", RFC 6412, November 2011.

10.2. Informative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC1983]  Malkin, G., "Internet Users' Glossary", RFC 1983,
              August 1996.

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2545]  Marques, P. and F. Dupont, "Use of BGP-4 Multiprotocol
              Extensions for IPv6 Inter-Domain Routing", RFC 2545,
              March 1999.
   [RFC4724]  Sangli, S., Chen, E., Fernando, R., Scudder, J., and Y.
              Rekhter, "Graceful Restart Mechanism for BGP", RFC 4724,
              January 2007.

   [RFC4760]  Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
              "Multiprotocol Extensions for BGP-4", RFC 4760,
              January 2007.

   [RFC5925]  Touch, J., Mankin, A., and R. Bonica, "The TCP
              Authentication Option", RFC 5925, June 2010.

Authors' Addresses

   Rajiv Papneja
   Huawei Technologies

   Email: rajiv.papneja@huawei.com

   Bhavani Parise
   Cisco Systems

   Email: bhavani@cisco.com

   Susan Hares
   Adara Networks

   Email: shares@ndzh.com

   Dean Lee
   IXIA

   Email: dlee@ixiacom.com

   Ilya Varlashkin
   Easynet Global Services

   Email: ilya.varlashkin@easynet.com