idnits 2.17.1 draft-papneja-bgp-basic-dp-convergence-02.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- == The page length should not exceed 58 lines per page, but there was 1 longer page, the longest (page 32) being 69 lines Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == The document seems to contain a disclaimer for pre-RFC5378 work, but was first submitted on or after 10 November 2008. The disclaimer is usually necessary only for documents that revise or obsolete older RFCs, and that take significant amounts of text from those RFCs. If you can contact all authors of the source material and they are willing to grant the BCP78 rights to the IETF Trust, you can and should remove the disclaimer. Otherwise, the disclaimer is needed and you can ignore this comment. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (October 20, 2011) is 4543 days in the past. Is this intentional? Checking references for intended status: Proposed Standard ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) == Missing Reference: 'IGPData' is mentioned on line 910, but not defined == Unused Reference: 'I-D.ietf-bmwg-igp-dataplane-conv-term' is defined on line 1328, but no explicit reference was found in the text ** Downref: Normative reference to an Informational draft: draft-ietf-bmwg-igp-dataplane-conv-term (ref. 'I-D.ietf-bmwg-igp-dataplane-conv-term') Summary: 1 error (**), 0 flaws (~~), 5 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Benchmarking Working Group R. Papneja 3 Internet-Draft Huawei Technologies 4 Intended status: Standards Track B. Parise 5 Expires: April 22, 2012 Cisco Systems 6 S. Hares 7 Huawei Technologies 8 I. Varlashkin 9 Easynet Global Services 10 October 20, 2011 12 Basic BGP Convergence Benchmarking Methodology for Data Plane 13 Convergence 14 draft-papneja-bgp-basic-dp-convergence-02.txt 16 Abstract 18 BGP is widely deployed and used by several service providers as the 19 default Inter AS routing protocol. It is of utmost importance to 20 ensure that when a BGP peer or a downstream link of a BGP peer fails, 21 the alternate paths are rapidly used and routes via these alternate 22 paths are installed. This document provides the basic BGP 23 Benchmarking Methodology using existing BGP Convergence Terminology, 24 RFC 4098. 26 Status of this Memo 28 This Internet-Draft is submitted in full conformance with the 29 provisions of BCP 78 and BCP 79. 31 Internet-Drafts are working documents of the Internet Engineering 32 Task Force (IETF). 
Note that other groups may also distribute 33 working documents as Internet-Drafts. The list of current Internet- 34 Drafts is at http://datatracker.ietf.org/drafts/current/. 36 Internet-Drafts are draft documents valid for a maximum of six months 37 and may be updated, replaced, or obsoleted by other documents at any 38 time. It is inappropriate to use Internet-Drafts as reference 39 material or to cite them other than as "work in progress." 41 This Internet-Draft will expire on April 22, 2012. 43 Copyright Notice 45 Copyright (c) 2011 IETF Trust and the persons identified as the 46 document authors. All rights reserved. 48 This document is subject to BCP 78 and the IETF Trust's Legal 49 Provisions Relating to IETF Documents 50 (http://trustee.ietf.org/license-info) in effect on the date of 51 publication of this document. Please review these documents 52 carefully, as they describe your rights and restrictions with respect 53 to this document. Code Components extracted from this document must 54 include Simplified BSD License text as described in Section 4.e of 55 the Trust Legal Provisions and are provided without warranty as 56 described in the Simplified BSD License. 58 This document may contain material from IETF Documents or IETF 59 Contributions published or made publicly available before November 60 10, 2008. The person(s) controlling the copyright in some of this 61 material may not have granted the IETF Trust the right to allow 62 modifications of such material outside the IETF Standards Process. 63 Without obtaining an adequate license from the person(s) controlling 64 the copyright in such materials, this document may not be modified 65 outside the IETF Standards Process, and derivative works of it may 66 not be created outside the IETF Standards Process, except to format 67 it for publication as an RFC or to translate it into languages other 68 than English. 70 Table of Contents 72 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 73 1.1. Precise Benchmarking Definition . . . . . . . . . . . . . 4 74 1.2. Purpose of BGP FIB (Data Plane) Convergence . . . . . . . 5 75 1.3. Control Plane Convergence . . . . . . . . . . . . . . . . 5 76 1.4. Benchmarking Testing . . . . . . . . . . . . . . . . . . . 5 77 2. Existing Definitions and Requirements . . . . . . . . . . . . 5 78 3. Test Topologies . . . . . . . . . . . . . . . . . . . . . . . 6 79 3.1. General Reference Topologies . . . . . . . . . . . . . . . 6 80 4. Test Considerations . . . . . . . . . . . . . . . . . . . . . 8 81 4.1. Number of Peers . . . . . . . . . . . . . . . . . . . . . 8 82 4.2. Number of Routes per Peer . . . . . . . . . . . . . . . . 8 83 4.3. Policy Processing/Reconfiguration . . . . . . . . . . . . 9 84 4.4. Configured Parameters (Timers, etc..) . . . . . . . . . . 9 85 4.5. Interface Types . . . . . . . . . . . . . . . . . . . . . 10 86 4.6. Measurement Accuracy . . . . . . . . . . . . . . . . . . . 10 87 4.7. Measurement Statistics . . . . . . . . . . . . . . . . . . 11 88 4.8. Authentication . . . . . . . . . . . . . . . . . . . . . . 11 89 4.9. Convergence Events . . . . . . . . . . . . . . . . . . . . 11 90 4.10. High Availability . . . . . . . . . . . . . . . . . . . . 11 91 5. Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 12 92 5.1. Basic Convergence Tests . . . . . . . . . . . . . . . . . 12 93 5.1.1. RIB-IN Convergence . . . . . . . . . . . . . . . . . . 12 94 5.1.2. RIB-OUT Convergence . . . . . . . . . . . . . . . . . 13 95 5.1.3. eBGP Convergence . . . . . . 
. . . . . . . . . . . . .  15
     5.1.4. iBGP Convergence . . . . . . . . . . . . . . . . . . .  15
     5.1.5. eBGP Multihop Convergence . . . . . . . . . . . . . .   16
   5.2. BGP Failure/Convergence Events . . . . . . . . . . . . . .  17
     5.2.1. Physical Link Failure on DUT End . . . . . . . . . . .  17
     5.2.2. Physical Link Failure on Remote/Emulator End . . . . .  18
     5.2.3. ECMP Link Failure on DUT End . . . . . . . . . . . . .  18
   5.3. BGP Adjacency Failure (Non-Physical Link Failure) on
        Emulator . . . . . . . . . . . . . . . . . . . . . . . . .  19
   5.4. BGP Hard Reset Test Cases . . . . . . . . . . . . . . . .   20
     5.4.1. BGP Non-Recovering Hard Reset Event on DUT . . . . . .  20
   5.5. BGP Soft Reset . . . . . . . . . . . . . . . . . . . . . .  21
   5.6. BGP Route Withdrawal Convergence Time . . . . . . . . . .   22
   5.7. BGP Path Attribute Change Convergence Time . . . . . . . .  24
   5.8. BGP Graceful Restart Convergence Time . . . . . . . . . .   26
   6. Reporting Format . . . . . . . . . . . . . . . . . . . . . .  27
   7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 30
   8. Security Considerations . . . . . . . . . . . . . . . . . . . 30
   9. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 30
   10. References . . . . . . . . . . . . . . . . . . . . . . . . . 30
     10.1. Normative References . . . . . . . . . . . . . . . . . . 30
     10.2. Informative References . . . . . . . . . . . . . . . . . 31
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 31

1. Introduction

   This document defines the methodology for benchmarking data plane
   FIB convergence performance of BGP in routers and switches for
   simple topologies of 3 or 4 nodes.  The methodology proposed in this
   document applies to both IPv4 and IPv6; if a particular test is
   unique to one version, it is marked accordingly.  For IPv6
   benchmarking, the device under test requires support of Multi-
   Protocol BGP (MP-BGP) [RFC4760, RFC2545].

   The scope of this companion document is limited to basic BGP
   protocol FIB convergence measurements.  BGP extensions beyond
   carrying IPv6 in MP-BGP [RFC4760, RFC2545] are outside the scope of
   this document.  Interaction with IGPs (IGP interworking) is also
   outside the scope of this document.

   This document assumes that the ability of a router to forward
   packets implies that BGP has converged.  Some modern routers may
   have an optimisation that restores the forwarding ability before BGP
   converges.  Such an optimisation MUST be disabled before applying
   the benchmarking methodology described here.  If that is not
   possible, or if it is not known whether the router has such an
   optimisation, the alternative approach described in
   [draft-varlashkin-router-conv-bench] should be used instead.

1.1. Precise Benchmarking Definition

   Since benchmarking is a science of precision, let us restate the
   purpose of this document in benchmarking terms.  This document
   defines a methodology to test

   - data plane convergence on a single BGP device that supports the
     BGP [RFC4271] functionality

   - in a test topology of 3 or 4 nodes

   - using Basic BGP

   Data plane convergence is defined as the completion of all FIB
   changes so that all forwarded traffic now takes the new proposed
   route.  RFC 4098 defines the terms BGP device, FIB, and forwarded
   traffic.  Data plane convergence is different from control plane
   convergence within a node.
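   As an illustration only, the following Python sketch shows how a
   tester might infer the data plane convergence instant from the
   forwarded traffic, under the assumption that per-interval packet
   counts are collected on the egress interface corresponding to the
   new route.  The names used are examples for this sketch and are not
   defined by this methodology.

      # Illustrative only (not part of the methodology): infer the data
      # plane convergence time from per-interval packet counts observed
      # on the egress interface that corresponds to the newly installed
      # route.  Accuracy is bounded by the tester's sampling interval
      # (see Section 4.6).

      def data_plane_convergence(samples, event_time):
          """samples: list of (timestamp, packets_seen) pairs from the
          new egress interface; event_time: when the route change was
          applied.  All timestamps are assumed to share one clock."""
          for timestamp, packets_seen in samples:
              if packets_seen > 0:
                  # First interval in which forwarded traffic takes the
                  # new route; the FIB change completed no later than
                  # this observation.
                  return timestamp - event_time
          return None  # traffic never switched to the new route

      # Hypothetical run: change applied at t=10.0 s, traffic observed
      # from t=12.5 s onwards.
      samples = [(10.0 + 0.5 * i, 0 if 10.0 + 0.5 * i < 12.5 else 100)
                 for i in range(10)]
      print(data_plane_convergence(samples, 10.0))   # -> 2.5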
   Basic BGP is defined as the RFC 4271 functionality, with Multi-
   Protocol BGP (MP-BGP) [RFC4760, RFC2545] for IPv6.  The use of other
   BGP extensions to support layer-2 or layer-3 virtual private
   networks (VPNs) is out of scope of this document.

   The terminology used in this document is defined in [RFC4098].  One
   additional term is defined in this draft: FIB (Data plane) BGP
   Convergence.

1.2. Purpose of BGP FIB (Data Plane) Convergence

   In the current Internet architecture, Inter-Autonomous System
   (inter-AS) transit is primarily provided by BGP.  To maintain
   reliable connectivity within a domain or across domains, fast
   recovery from failures remains most critical.  To ensure minimal
   traffic loss, many service providers require BGP implementations to
   converge the entire Internet routing table within sub-seconds at the
   FIB level.

   Furthermore, to compare these numbers amongst various devices,
   service providers are also looking at ways to standardize the
   convergence measurement methods.  This document offers test methods
   for simple topologies.  These simple tests provide a quick high-
   level check of BGP data plane convergence across multiple
   implementations.

1.3. Control Plane Convergence

   The convergence of BGP occurs at two levels: RIB and FIB
   convergence.  RFC 4098 defines terms for BGP control plane
   convergence.  Methodologies that test control plane convergence are
   out of scope for this draft.

1.4. Benchmarking Testing

   In order to ensure that the results obtained in tests are
   repeatable, careful setup of initial conditions and exact steps are
   required.

   This document proposes these initial conditions, test steps, and
   result checking.  To ensure uniformity of the results, all optional
   parameters SHOULD be disabled and all settings SHOULD be set to
   their defaults; this includes the BGP timers.

2. Existing Definitions and Requirements

   RFC 1242, "Benchmarking Terminology for Network Interconnect
   Devices" [RFC1242], and RFC 2285, "Benchmarking Terminology for LAN
   Switching Devices" [RFC2285], SHOULD be reviewed in conjunction with
   this document.  Commonly used terms may also be found in RFC 1983
   [RFC1983].

   For the sake of clarity and continuity, this document adopts the
   general template for benchmarking terminology set out in Section 2
   of RFC 1242.  The following terms are taken as defined in RFC 1242
   [RFC1242]: Throughput, Latency, Constant Load, Frame Loss Rate, and
   Overhead Behavior.  In addition, the following terms are taken as
   defined in [RFC2285]: Forwarding Rates, Maximum Forwarding Rate,
   Loads, Device Under Test (DUT), and System Under Test (SUT).

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3. Test Topologies

   This section describes simple test setups for use in BGP
   benchmarking tests measuring convergence of the FIB (data plane)
   after the BGP updates have been received.
235 These simple test nodes have 3 or 4 nodes with the following 236 configuration: 238 1. Basic Test Setup 240 2. Three node setup for iBGP or eBGP convergence 242 3. Setup for eBGP multihop test scenario 244 4. Four node setup for iBGP or eBGP convergence 246 Individual tests refer to these topologies. 248 Figures 1-4 use the following conventions 250 o AS-X: Autonomous System X 252 o Loopback Int: Loopback interface on the BGP enabled device 254 o R2: Helper router 256 3.1. General Reference Topologies 258 Emulator acts as 1 or more BGP peers for different testcases. 260 +----------+ +------------+ 261 | | traffic interfaces | | 262 | |-----------------------1---- | tx | 263 | |-----------------------2---- | tr1 | 264 | |-----------------------3-----| tr2 | 265 | DUT | routing interfaces | Emulator | 266 | | | | 267 | Drr1|--------------------------- |Err1 | 268 | | BGP Peering | | 269 | Drr2|---------------------------- |Err2 | 270 | | BGP Peering | | 271 +----------+ +------------+ 273 Figure 1 Basic Test Setup 275 +------------+ +-----------+ +-----------+ 276 | | | | | | 277 | | | | | | 278 | HLP | | DUT | | Emulator | 279 | (AS-X) |--------| (AS-Y) |-----------| (AS-Z) | 280 | | | | | | 281 | | | | | | 282 | | | | | | 283 +------------+ +-----------+ +-----------+ 285 Figure 2 Three Node Setup for eBGP and iBGP Convergence 287 +------------+ +-----------+ +-----------+ 288 | | | | | | 289 | | | | | | 290 | R1 | | DUT | | Emulator | 291 | (AS-X) |--------| (AS-Y) |-----------| (AS-Z) | 292 | | | | | | 293 | | | | | | 294 | | | | | | 295 +------------+ +-----------+ +-----------+ 296 |Loopback-Int |Loopback-Int 297 | | 298 + + 300 Figure 3 BGP Convergence for eBGP Multihop Scenario 302 +---------+ +--------+ +--------+ +---------+ 303 | | | | | | | | 304 | | | | | | | | 305 | R1 | | DUT2 | | DUT1 | |Emulator | 306 | (AS-X) |-----| (AS-X) |-----| (AS-Y) |-----| (AS-Z) | 307 | | | | | | | | 308 | | | | | | | | 309 | | | | | | | | 310 +---------+ +--------+ +--------+ +---------+ 312 Figure 4 Four Node Setup for EBGP and IBGP Convergence 314 4. Test Considerations 316 The test cases for measuring convergence for iBGP and eBGP are 317 different. Both iBGP and eBGP use different mechanisms to advertise, 318 install and learn the routes. Typically, an iBGP route on the DUT is 319 installed and exported only when the next-hop is reachable. For eBGP 320 the route is installed on the DUT with the remote interface address 321 as the next-hop with the exception of the multihop case. 323 4.1. Number of Peers 325 Number of Peers is defined as the number of BGP neighbors or sessions 326 the DUT has at the beginning of the test. The peers are established 327 before the tests begin. The relationship could be either, iBGP or 328 eBGP peering depending upon the test case requirement. 330 The DUT establishes one or more BGP sessions with one more emulated 331 routers or helper nodes. Additional peers can be added based on the 332 testing requirements. The number of peers enabled during the testing 333 should be well documented in the report matrix. 335 4.2. Number of Routes per Peer 337 It Number of Routes per Peer is defined as the number of routes 338 advertized or learnt by the DUT per session or through neighbor 339 relationship with an emulator or helper node. The tester, emulating 340 as neighbor MUST advertise at least one route per peer. 342 Each test must run must identify the route stream in terms of route 343 packing, route mixture, and number of routes. 
This route stream must 344 be well documented in the reporting stream. RFC 4098 defines these 345 terms. 347 It is RECOMMENDED that the user may consider advertizing the entire 348 current Internet routing table per peering session using an Internet 349 route mixture with unique or non-unique routes. If multiple peers 350 are used, it is important to precisely document the timing sequence 351 between the peer sending routes (as defined in RFC 4098). 353 4.3. Policy Processing/Reconfiguration 355 The DUT MUST run one baseline test where policy is Minimum policy as 356 defined in RFC 4098. Additional runs may be done with policy set-up 357 before the tests begin. Exact policy settings should be documented 358 as part of the test. 360 4.4. Configured Parameters (Timers, etc..) 362 There are configured parameters and timers that may impact the 363 measured BGP convergence times. 365 The benchmark metrics MAY be measured at any fixed values for these 366 configured parameters. 368 It is RECOMMENDED these configure parameters have two settings: a) 369 basic-test, and b)values as expected in the operational network. All 370 optional BGP settings MUST be kept consistent across iterations of 371 any specific tests 373 Examples of the configured parameters that may impact measured BGP 374 convergence time include, but are not limited to: 376 1. Interface failure detection timer 378 2. BGP Keepalive timer 380 3. BGP Holdtime 382 4. BGP update delay timer 384 5. ConnectRetry timer 386 6. TCP Segment Size 387 7. Minimum Route Advertisement Interval (MRAI) 389 8. MinASOriginationInterval (MAOI) 391 9. Route Flap Dampening parameters 393 10. TCP MD5 395 The basic-test settings for the parameters should be: 397 1. Interface failure detection timer (0 ms) 399 2. BGP Keepalive timer (1 min) 401 3. BGP Holdtime (3 min) 403 4. BGP update delay timer (0 s) 405 5. ConnectRetry timer (1 s) 407 6. TCP Segment Size (4096) 409 7. Minimum Route Advertisement Interval (MRAI) (0 s) 411 8. MinASOriginationInterval (MAOI)(0 s) 413 9. Route Flap Dampening parameters (off) 415 10. TCP MD5 (off) 417 4.5. Interface Types 419 The type of media dictate which test cases may be executed, each 420 interface type has unique mechanism for detecting link failures and 421 the speed at which that mechanism operates will influence the 422 measurement results. All interfaces MUST be of the same media and 423 throughput for each test case. 425 4.6. Measurement Accuracy 427 Since observed packet loss is used to measure the route convergence 428 time, the time between two successive packets offered to each 429 individual route is the highest possible accuracy of any packet-loss 430 based measurement. When packet jitter is much less than the 431 convergence time, it is a negligible source of error and hence it 432 will be treated as within tolerance. 434 An exterior measurement on the input media (such Ethernet)is defined 435 by this specification. 437 4.7. Measurement Statistics 439 The benchmark measurements may vary for each trial, due to the 440 statistical nature of timer expirations, CPU scheduling, etc. It is 441 recommended to repeat the test multiple times. Evaluation of the 442 test data must be done with an understanding of generally accepted 443 testing practices regarding repeatability, variance and statistical 444 significance of a small number of trials. 446 For any repeated tests that are averaged to remove variance, all 447 parameters MUST remain the same. 449 4.8. 
Authentication 451 Authentication in BGP is done using the TCP MD5 Signature Option 452 [RFC5925]. The processing of the MD5 hash, particularly in devices 453 with a large number of BGP peers and a large amount of update 454 traffic, can have an impact on the control plane of the device. If 455 authentication is enabled, it SHOULD be documented correctly in the 456 reporting format 458 4.9. Convergence Events 460 Convergence events or triggers are defined as abnormal occurrences in 461 the network, which initiate route flapping in the network, and hence 462 forces the re-convergence of a steady state network. In a real 463 network, a series of convergence events may cause convergence latency 464 operators desire to test. 466 These convergence events must be defined in terms of the sequences 467 defined in RFC 4098. This basic document begins all tests with a 468 router initial set-up. Additional documents will define BGP data 469 plane convergence based on peer initialization. 471 The convergence events may or may not be tied to the actual failure A 472 Soft Reset (RFC 4098) does not clear the RIB or FIB tables. A Hard 473 reset clears the BGP peer sessions, the RIB tables, and FIB tables. 475 4.10. High Availability 477 Due to the different Non-Stop-Routing (sometimes referred to High- 478 Availability) solutions available from different vendors, it is 479 RECOMMENDED that any redundancy available in the routing processors 480 should be disabled during the convergence measurements. 482 5. Test Cases 484 All tests defined under this section assume the following: 486 a. BGP peers should be brought to BGP Peer established state 488 b. Furthermore the traffic generation and routing should be verified 489 in the topology 491 5.1. Basic Convergence Tests 493 These test cases measure characteristics of a BGP implementation in 494 non-failure scenarios like: 496 1. RIB-IN Convergence 498 2. RIB-OUT Convergence 500 3. eBGP Convergence 502 4. iBGP Convergence 504 5.1.1. RIB-IN Convergence 506 Objective: 508 This test measures the convergence time taken to receive and 509 install a route in RIB using BGP 511 Reference Test Setup: 513 This test uses the setup as shown in figure 1 515 Procedure: 517 A. All variables affecting Convergence should be set to a basic 518 test state (as defined in section 4-4). 520 B. Establish BGP adjacency between DUT and peer x of Emulator. 522 C. To ensure adjacency establishment, wait for 3 KeepAlives from 523 the DUT or a configurable delay before proceeding with the 524 rest of the test. 526 D. Start the traffic from the Emulator peer-x towards the DUT 527 targeted at a routes specified in route mixture (ex. route A) 528 Initially no traffic SHOULD be observed on the egress 529 interface as the route A is not installed in the forwarding 530 database of the DUT. 532 E. Advertise route A from the Peer-x to the DUT and record the 533 time. 535 This is Tup(EMx,Rt-A) also named 'XMT-Rt-time'. 537 F. Record the time when the route-A from Peer-x is received at 538 the DUT. 540 This Tup(DUT,Rt-A) also named 'RCV-Rt-time'. 542 G. Record the time when the traffic targeted towards route A is 543 received by Emulator on appropriate traffic egress interface. 545 This is TR(TDx,Rt-A). This is also named DUT-XMT-Data- 546 Time. 548 H. 
The difference between the time route-A is received at the
         DUT (Tup(DUT,Rt-A)) and the time the traffic for that route is
         received by the Emulator (TR(TDx,Rt-A)) is the FIB Convergence
         Time for route-A in the route mixture.  Full convergence for
         the route update is the measurement between the first route
         (Route-A) and the last route (Rt-last).

            Route update convergence is

               TR(TDx, Rt-last) - Tup(DUT, Rt-A) or

               (DUT-XMT-Data-Time - RCV-Rt-Time)(rt-A)

   Note: It is recommended that a single test with the same route
   mixture be repeated several times.  A report should provide the
   Average and the Standard Deviation of all tests.

   Running tests with a varying number of routes and route mixtures is
   important to get a full characterization of a single peer.

5.1.2. RIB-OUT Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install and advertise a route using
      BGP.

   Reference Test Setup:

      This test uses the setup as shown in figure 2.

   Procedure:

      A. The Helper Node (HLP) runs the same version of BGP as the DUT.

      B. All devices MUST be synchronized using NTP or some local
         reference clock.

      C. All configuration variables for HLP, DUT, and Emulator SHOULD
         be set to the same values.  These values MAY be basic-test or
         a unique set completely described in the test set-up.

      D. Establish BGP adjacency between DUT and Emulator.

      E. Establish BGP adjacency between DUT and Helper Node.

      F. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      G. Start the traffic from the Emulator towards the Helper Node
         targeted at a specific route, say route A.  Initially no
         traffic SHOULD be observed on the egress interface, as
         route A is not installed in the forwarding database of the
         DUT.

      H. Advertise route A from the Emulator to the DUT and note the
         time.

            This is Tup(EMx, Route-A), also named EM-XMT-Rt-Time.

      I. Record the time when Route-A is received by the DUT.

            This is Tup(DUTr, Route-A), also named DUT-RCV-Rt-Time.

      J. Record the time when the route is forwarded by the DUT toward
         the Helper Node.

            This is Tup(DUTx, Rt-A), also named DUT-XMT-Rt-Time.

      K. Record the time when the traffic targeted towards route A is
         received on the Route Egress Interface toward peer-X.  This
         is TR(EMr, Route-A), also named DUT-XMT-Data-Time.

            FIB convergence = (DUT-XMT-Data-Time - DUT-RCV-Rt-Time)

            RIB convergence = (DUT-XMT-Rt-Time - DUT-RCV-Rt-Time)

      Convergence for a route stream is characterized by

      a) Individual route convergence for FIB, RIB

      b) All-route convergence:

         FIB-convergence = DUT-XMT-Data-Time(last) - DUT-RCV-Rt-Time(A)

         RIB-convergence = DUT-XMT-Rt-Time(last) - DUT-RCV-Rt-Time(A)

5.1.3. eBGP Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install and advertise a route in an
      eBGP scenario.

   Reference Test Setup:

      This test uses the setup as shown in figure 2, and the scenarios
      described in RIB-IN and RIB-OUT are applicable to this test case.

5.1.4. iBGP Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install and advertise a route in an
      iBGP scenario.

   Reference Test Setup:

      This test uses the setup as shown in figure 2, and the scenarios
      described in RIB-IN and RIB-OUT are applicable to this test case.
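   Before moving to the multihop case, the following Python sketch
   illustrates how the convergence values defined in Sections 5.1.1 and
   5.1.2 could be computed from the timestamps recorded by the tester.
   It is purely illustrative: the function and variable names are
   assumptions of this example and are not defined by this methodology.

      # Illustrative only: derive the benchmarks of Sections 5.1.1 and
      # 5.1.2 from tester-recorded timestamps (seconds, common clock).
      from statistics import mean, stdev

      def rib_in_fib_convergence(rcv_rt_time, dut_xmt_data_time):
          # Section 5.1.1: time from the route being received at the
          # DUT to traffic for that route arriving on the egress
          # interface of the Emulator.
          return dut_xmt_data_time - rcv_rt_time

      def rib_out_convergence(dut_rcv_rt_time, dut_xmt_rt_time,
                              dut_xmt_data_time):
          # Section 5.1.2: FIB convergence uses the data arrival time;
          # RIB convergence uses the time the route is re-advertised.
          fib = dut_xmt_data_time - dut_rcv_rt_time
          rib = dut_xmt_rt_time - dut_rcv_rt_time
          return fib, rib

      # Hypothetical single-route measurement.
      print(rib_in_fib_convergence(100.000, 100.240))       # -> 0.24
      print(rib_out_convergence(100.000, 100.120, 100.240)) # -> (0.24, 0.12)

      # Repeated trials with the same route mixture: report the average
      # and the standard deviation, as recommended in Section 5.1.1.
      trials = [0.84, 0.79, 0.91, 0.88, 0.82]   # seconds, hypothetical
      print(mean(trials), stdev(trials))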
5.1.5. eBGP Multihop Convergence

   Objective:

      This test measures the convergence time taken by an
      implementation to receive, install and advertise a route in an
      eBGP multihop scenario.

   Reference Test Setup:

      This test uses the setup as shown in figure 3.  Two DUTs are used
      along with a Helper Node.

   Procedure:

      A. DUT2 is the same model as the DUT and runs the same BGP
         implementation as the DUT.

      B. All devices are to be synchronized using NTP.

      C. All variables affecting convergence, such as authentication,
         policies, and timers, should be set to the basic-test
         settings.

      D. All 3 devices (DUT, Emulator, and Helper Node) are configured
         as different Autonomous Systems.

      E. Loopback interfaces are configured on the DUT and Helper Node,
         and connectivity is established between them using any
         configuration options available on the DUT.

      F. Establish BGP adjacency between DUT1 and Emulator.

      G. Establish BGP adjacency between DUT2 and Helper Node.

      H. Establish BGP adjacency between DUT1 and DUT2.

      I. To ensure adjacency establishment, wait for 3 KeepAlives from
         DUT1 and DUT2 or a configurable delay before proceeding with
         the rest of the test.

      J. Start the traffic from the Emulator towards the Helper Node
         targeted at a specific route, say route A.

      K. Initially no traffic SHOULD be observed on the egress
         interface, as route A is not installed in the forwarding
         database of the DUT.

      L. Advertise route A from the Emulator to the DUT and note the
         time.  This is Tup(EMx,RouteA), also named Rt-RecTime.

      M. Record the time when the traffic targeted towards route A is
         received from the egress interface of the DUT on the Emulator.
         This is TR(EMr,DUT), also named Data-RcvTime.

      N. The following equation represents the multi-node FIB
         convergence:

            eBGP Multihop Convergence Time = (Data-RcvTime - Rt-RecTime)

   Note: It is recommended that the test be repeated with a varying
   number of routes and route mixtures.  For each route mixture, the
   test should be repeated multiple times.  The results should record
   the average (mean) and the standard deviation.

5.2. BGP Failure/Convergence Events

5.2.1. Physical Link Failure on DUT End

   Objective:

      This test measures the route convergence time due to a local link
      failure event at the DUT's local interface.

   Reference Test Setup:

      This test uses the setup as shown in figure 1.  The shutdown
      event is defined as an administrative shutdown event on the DUT.

   Procedure:

      A. All variables affecting convergence, such as authentication,
         policies, and timers, should be set to the basic-test policy.

      B. Establish 2 BGP adjacencies from the DUT to the Emulator, one
         over the peer interface and the other using a second peer
         interface.

      C. Advertise the same route, route A, over both adjacencies and
         make the first interface (Drr1, the Best Egress Interface) the
         preferred next hop.

      D. To ensure adjacency establishment, wait for 3 KeepAlives from
         the DUT or a configurable delay before proceeding with the
         rest of the test.

      E. Start the traffic from the Emulator towards the DUT targeted
         at a specific route, say route A.  Initially traffic would be
         observed on the Best Egress Interface (Err1) instead of Err2.

      F. Trigger the shutdown event of the Best Egress Interface on the
         DUT (Drr1).

      G. Measure the Convergence Time for the event to be detected and
         traffic to be forwarded to the Next-Best Egress Interface
         (rr2).

            Time = Data-detect(rr2) - Shutdown time

      H.
Stop the offered load and wait for the queues to drain and 765 Restart 767 I. Bring up the link on DUT Best Egress Interface 769 J. Measure the convergence time taken for the traffic to be 770 rerouted from (rr2) to Best Interface (rr1) 772 Time = Data-detect(rr1) - Shutdown time 774 K. It is recommended that the test be repeated with varying 775 number of routes and route mixtures or with number of routes 776 and route mixtures closer to what is deployed in operational 777 networks 779 5.2.2. Physical Link Failure on Remote/Emulator End 781 Objective: 783 This test measures the route convergence time due to local link 784 failure event at Tester's Local Interface 786 Reference Test Setup: 788 This test uses the setup as shown in figure 1. Shutdown event is 789 defined as shutdown of the local interface of Tester via logical 790 shutdown event. The procedure used in 5.2.1 is used for the 791 termination 793 5.2.3. ECMP Link Failure on DUT End 795 Objective: 797 This test measures the route convergence time due to local link 798 failure event at ECMP Member. The FIB configuration and BGP is 799 set to allow two ECMP routes to be installed. However, policy 800 directs the routes to be sent only over one of the paths 802 Reference Test Setup: 804 This test uses the setup as shown in figure 1 and the procedure 805 uses 5.2.1 807 5.3. BGP Adjacency Failure (Non-Physical Link Failure) on Emulator 809 Objective: 811 This test measures the route convergence time due to BGP Adjacency 812 Failure on Emulator 814 Reference Test Setup: 816 This test uses the setup as shown in figure 1 818 Procedure: 820 A. All variables affecting Convergence like authentication, 821 policies, timers should be basic-policy set 823 B. Establish 2 BGP adjacencies from DUT to Emulator, one over the 824 Best Egress Interface and the other using the Next-Best Egress 825 Interface 827 C. Advertise the same route, routeA over both the adjacencies and 828 make Best Egress Interface to be the preferred next hop 830 D. To ensure adjacency establishment, wait for 3 KeepAlives from 831 the DUT or a configurable delay before proceeding with the 832 rest of the test 834 E. Start the traffic from the Emulator towards the DUT targeted 835 at a specific route say routeA. Initially traffic would be 836 observed on the Best Egress interface 838 F. Remove BGP adjacency via a software adjacency down on the 839 Emulator on the Best Egress Interface. This time is called 840 BGPadj-down-time also termed BGPpeer-down 842 G. Measure the Convergence Time for the event to be detected and 843 traffic to be forwarded to Next-Best Egress Interface.This 844 time is Tr-rr2 also called TR2-traffic-on 845 Convergence = TR2-traffic-on - BGPpeer-down 847 H. Stop the offered load and wait for the queues to drain and 848 Restart 850 I. Bring up BGP adjacency on the Emulator over the Best Egress 851 Interface. This time is BGP-adj-up also called BGPpeer-up 853 J. Measure the convergence time taken for the traffic to be 854 rerouted to Best Interface. This time is BGP-adj-up also 855 called BGPpeer-up 857 5.4. BGP Hard Reset Test Cases 859 5.4.1. BGP Non-Recovering Hard Reset Event on DUT 861 Objective: 863 This test measures the route convergence time due to Hard Reset on 864 the DUT 866 Reference Test Setup: 868 This test uses the setup as shown in figure 1 870 Procedure: 872 A. 
The requirement for this test case is that the Hard Reset 873 Event should be non-recovering and should affect only the 874 adjacency between DUT and Emulator on the Best Egress 875 Interface 877 B. All variables affecting SHOULD be set to basic-test values 879 C. Establish 2 BGP adjacencies from DUT to Emulator, one over the 880 Best Egress Interface and the other using the Next-Best Egress 881 Interface 883 D. Advertise the same route, routeA over both the adjacencies and 884 make Best Egress Interface to be the preferred next hop 886 E. To ensure adjacency establishment, wait for 3 KeepAlives from 887 the DUT or a configurable delay before proceeding with the 888 rest of the test 890 F. Start the traffic from the Emulator towards the DUT targeted 891 at a specific route say routeA. Initially traffic would be 892 observed on the Best Egress interface 894 G. Trigger the Hard Reset event of Best Egress Interface on DUT 896 H. Measure the Convergence Time for the event to be detected and 897 traffic to be forwarded to Next-Best Egress Interface 899 Time of convergence = time-traffic flow - time-reset 901 I. Stop the offered load and wait for the queues to drain and 902 Restart 904 J. It is recommended that the test be repeated with varying 905 number of routes and route mixtures or with number of routes 906 and route mixtures closer to what is deployed in operational 907 networks 909 K. When varying number of routes are used, convergence Time is 910 measured using the Loss Derived method [IGPData] 912 L. Convergence Time in this scenario is influenced by Failure 913 detection time on Tester, BGP Keep Alive Time and routing, 914 forwarding table update time 916 5.5. BGP Soft Reset 918 Objective: 920 This test measures the route convergence time taken by an 921 implementation to service a BGP Route Refresh message and 922 advertise a route 924 Reference Test Setup: 926 This test uses the setup as shown in figure 2 928 Procedure: 930 A. The BGP implementation on DUT and Helper Node needs to support 931 BGP Route Refresh Capability [RFC2918] 933 B. All devices to be synchronized using NTP 934 C. All variables affecting Convergence like authentication, 935 policies, timers should be set to basic-test defaults 937 D. DUT and Helper Node are configured in the same Autonomous 938 System whereas Emulator is configured under a different 939 Autonomous System 941 E. Establish BGP adjacency between DUT and Emulator 943 F. Establish BGP adjacency between DUT and Helper Node 945 G. To ensure adjacency establishment, wait for 3 KeepAlives from 946 the DUT or a configurable delay before proceeding with the 947 rest of the test 949 H. Configure a policy under BGP on Helper Node to deny routes 950 received from DUT 952 I. Advertise routeA from the Emulator to the DUT 954 J. The DUT will try to advertise the route to Helper Node will be 955 denied 957 K. Wait for 3 KeepAlives 959 L. Start the traffic from the Emulator towards the Helper Node 960 targeted at a specific route say routeA. Initially no traffic 961 would be observed on the Egress interface, as routeA is not 962 present 964 M. Remove the policy on Helper Node and issue a Route Refresh 965 request towards DUT. Note the timestamp of this event. This 966 is the RefreshTime 968 N. Record the time when the traffic targeted towards routeA is 969 received on the Egress Interface. This is RecTime 971 O. The following equation represents the Route Refresh 972 Convergence Time per route 974 Route Refresh Convergence Time = (RecTime - RefreshTime) 976 5.6. 
BGP Route Withdrawal Convergence Time 978 Objective: 980 This test measures the route convergence time taken by an 981 implementation to service a BGP Withdraw message and advertise the 982 withdraw 984 Reference Test Setup: 986 This test uses the setup as shown in figure 2 988 Procedure: 990 A. This test consists of 2 steps to determine the Total Withdraw 991 Processing Time 993 B. Step 1: 995 (1) All devices to be synchronized using NTP 997 (2) All variables should be set to basic-test parameters 999 (3) DUT and Helper Node are configured in the same 1000 Autonomous System whereas Emulator is configured under a 1001 different Autonomous System 1003 (4) Establish BGP adjacency between DUT and Emulator 1005 (5) To ensure adjacency establishment, wait for 3 KeepAlives 1006 from the DUT or a configurable delay before proceeding 1007 with the rest of the test 1009 (6) Start the traffic from the Emulator towards the DUT 1010 targeted at a specific route say routeA. Initially no 1011 traffic would be observed on the Egress interface as the 1012 routeA is not present on DUT 1014 (7) Advertise routeA from the Emulator to the DUT 1016 (8) The traffic targeted towards routeA is received on the 1017 Egress Interface 1019 (9) Now the Tester sends request to withdraw routeA to DUT, 1020 TRx(Awith) also called WdrawTime1 1022 (10) Record the time when no traffic is observed on the 1023 Egress Interface. This is the RouteRemoveTime1(A) 1024 WdrawConvTime1 = RouteRemoveTime1(A) 1026 (11) The difference between the RouteRemoveTime1 and 1027 WdrawTime1 is the WdrawConvTime1 1029 C. Step 2: 1031 (1) Continuing from Step 1, re-advertise routeA back to DUT 1032 from Tester 1034 (2) The DUT will try to advertise the routeA to Helper Node 1035 (assumption there exists a session between DUT and helper 1036 node 1038 (3) Start the traffic from the Emulator towards the Helper 1039 Node targeted at a specific route say routeA. Traffic 1040 would be observed on the Egress interface after routeA is 1041 received by the Helper Node 1043 WATime=time traffic first flows 1045 (4) Now the Tester sends a request to withdraw routeA to DUT. 1046 This is the WdrawTime2 1048 WAWtime-TRx(RouteA) = WdrawTime2 1050 (5) DUT processes the withdraw and sends it to Helper Node 1052 (6) Record the time when no traffic is observed on the Egress 1053 Interface of Helper Node. This is 1055 TR-WAW(DUT,RouteA) = RouteRemoveTime2 1057 (7) Total withdraw processing time is 1059 TotalWdrawTime = ((RouteRemoveTime2 - WdrawTime2) - 1060 WdrawConvTime1) 1062 5.7. BGP Path Attribute Change Convergence Time 1064 Objective: 1066 This test measures the convergence time taken by an implementation 1067 to service a BGP Path Attribute Change 1069 Reference Test Setup: 1071 This test uses the setup as shown in figure 1 1073 Procedure: 1075 A. This test only applies to Well-Known Mandatory Attributes like 1076 Origin, AS Path, Next Hop 1078 B. In each iteration of test only one of these mandatory 1079 attributes need to be varied whereas the others remain the 1080 same 1082 C. All devices to be synchronized using NTP 1084 D. All variables should be set to basic-test parameters 1086 E. Advertise the route, routeA over the Best Egress Interface 1087 only, making it the preferred next hop 1089 F. To ensure adjacency establishment, wait for 3 KeepAlives from 1090 the DUT or a configurable delay before proceeding with the 1091 rest of the test 1093 G. Start the traffic from the Emulator towards the DUT targeted 1094 at the specific route say routeA. 
Initially traffic would be 1095 observed on the Best Egress interface 1097 H. Now advertise the same route routeA on the Next-Best Egress 1098 Interface but by varying one of the well-known mandatory 1099 attributes to have a preferred value over that interface. The 1100 other values need to be same as what was advertised on the 1101 Best-Egress adjacency 1103 TRx(Path-Change) = Path Change Event Time 1105 I. Measure the Convergence Time for the event to be detected and 1106 traffic to be forwarded to Next-Best Egress Interface 1108 DUT(Path-Change, RouteA) = Path-switch time 1110 Convergence = Path-switch time - Path Change Event Time 1112 J. Stop the offered load and wait for the queues to drain and 1113 Restart 1115 5.8. BGP Graceful Restart Convergence Time 1117 Objective: 1119 This test measures the route convergence time taken by an 1120 implementation during a Graceful Restart Event 1122 Reference Test Setup: 1124 This test uses the setup as shown in figure 4 1126 Procedure: 1128 A. It measures the time taken by an implementation to service a 1129 BGP Graceful Restart Event and advertise a route 1131 B. The Helper Nodes are the same model as DUT and run the same 1132 BGP implementation as DUT 1134 C. The BGP implementation on DUT and Helper Node needs to support 1135 BGP Graceful Restart Mechanism [RFC4724] 1137 D. All devices to be synchronized using NTP 1139 E. All variables are set to basic-test values 1141 F. DUT and Helper Node-1 are configured in the same Autonomous 1142 System whereas Emulator and Helper Node-2 are configured under 1143 different Autonomous Systems 1145 G. Establish BGP adjacency between DUT and Helper Nodes 1147 H. Establish BGP adjacency between Helper Node-2 and Emulator 1149 I. To ensure adjacency establishment, wait for 3 KeepAlives from 1150 the DUT or a configurable delay before proceeding with the 1151 rest of the test 1153 J. Configure a policy under BGP on Helper Node-1 to deny routes 1154 received from DUT 1156 K. Advertise routeA from the Emulator to Helper Node-2 1158 L. Helper Node-2 advertises the route to DUT and DUT will try to 1159 advertise the route to Helper Node-1 which will be denied 1161 M. Wait for 3 KeepAlives 1163 N. Start the traffic from the Emulator towards the Helper Node-1 1164 targeted at the specific route say routeA. Initially no 1165 traffic would be observed on the Egress interface as the 1166 routeA is not present 1168 O. Perform a Graceful Restart Trigger Event on DUT and note the 1169 time. This is the GREventTime 1171 P. Remove the policy on Helper Node-1 1173 Q. Record the time when the traffic targeted towards routeA is 1174 received on the Egress Interface 1176 TRr(DUT, routeA). This is also called RecTime 1178 R. The following equation represents the Graceful Restart 1179 Convergence Time 1181 Graceful Restart Convergence Time = ((GREventTime - 1182 RecTime) - RIB-IN) 1184 S. It is assumed in this test case that after a Switchover is 1185 triggered on the DUT, it will not have any cycles to process 1186 BGP Refresh messages. The reason for this assumption is that 1187 there is a narrow window of time where after switchover when 1188 we remove the policy from Helper Node -1, implementations 1189 might generate Route-Refresh automatically and this request 1190 might be serviced before the DUT actually switches over and 1191 reestablishes BGP adjacencies with the peers 1193 6. 
Reporting Format 1195 For each test case, it is recommended that the reporting tables below 1196 are completed and all time values SHOULD be reported with resolution 1197 as specified in [RFC4098] 1198 Parameter Units 1199 Test case Test case number 1200 Test topology 1,2,3 or 4 1201 Parallel links Number of parallel links 1202 Interface type GigE, POS, ATM, other 1203 Convergence Event Hard reset, Soft reset, link 1204 failure, or other defined 1205 eBGP sessions Number of eBGP sessions 1206 iBGP sessions Number of iBGP sessions 1207 eBGP neighbor Number of eBGP neighbors 1208 iBGP neighbor Number of iBGP neighbors 1209 Routes per peer Number of routes 1210 Total unique routes Number of routes 1211 Total non-unique routes Number of routes 1212 IGP configured ISIS, OSPF, static, or other 1213 Route Mixture Description of Route mixture 1214 Route Packing Number of routes in an update 1215 Policy configured Yes, No 1216 Packet size offered to the DUT Bytes 1217 Offered load Packets per second 1218 Packet sampling interval on Seconds 1219 tester 1220 Forwarding delay threshold Seconds 1221 Timer Values configured on DUT 1222 Interface failure indication Seconds 1223 delay 1224 Hold time Seconds 1225 MinRouteAdvertisementInterval Seconds 1226 (MRAI) 1227 MinASOriginationInterval Seconds 1228 (MAOI) 1229 Keepalive Time Seconds 1230 ConnectRetry Seconds 1231 TCP Parameters for DUT and tester 1232 MSS Bytes 1233 Slow start threshold Bytes 1234 Maximum window size Bytes 1236 Test Details: 1238 a. If the Offered Load matches a subset of routes, describe how this 1239 subset is selected 1241 b. Describe how the Convergence Event is applied; does it cause 1242 instantaneous traffic loss or not 1244 c. If there is any policy configured, describe the configured policy 1246 Complete the table below for the initial Convergence Event and the 1247 reversion Convergence Event 1249 Parameter Unit 1250 Convergence Event Initial or reversion 1251 Traffic Forwarding Metrics 1252 Total number of packets Number of packets 1253 offered to DUT 1254 Total number of packets Number of packets 1255 forwarded by DUT 1256 Connectivity Packet Loss Number of packets 1257 Convergence Packet Loss Number of packets 1258 Out-of-order packets Number of packets 1259 Duplicate packets Number of packets 1260 Convergence Benchmarks 1261 Rate-derived Method[IGP- 1262 Data]: 1263 First route convergence Seconds 1264 time 1265 Full convergence time Seconds 1266 Loss-derived Method [IGP- 1267 Data]: 1268 Loss-derived convergence Seconds 1269 time 1270 Route-Specific Loss-Derived 1271 Method: 1272 Minimum R-S convergence Seconds 1273 time 1274 Maximum R-S convergence Seconds 1275 time 1276 Median R-S convergence Seconds 1277 time 1278 Average R-S convergence Seconds 1279 time 1281 Loss of Connectivity Benchmarks 1282 Loss-derived Method: 1283 Loss-derived loss of Seconds 1284 connectivity period 1285 Route-Specific loss-derived 1286 Method: 1287 Minimum LoC period [n] Array of seconds 1288 Minimum Route LoC period Seconds 1289 Maximum Route LoC period Seconds 1290 Median Route LoC period Seconds 1291 Average Route LoC period Seconds 1293 7. IANA Considerations 1295 This draft does not require any new allocations by IANA. 1297 8. Security Considerations 1299 Benchmarking activities as described in this memo are limited to 1300 technology characterization using controlled stimuli in a laboratory 1301 environment, with dedicated address space and the constraints 1302 specified in the sections above. 
1304 The benchmarking network topology will be an independent test setup 1305 and MUST NOT be connected to devices that may forward the test 1306 traffic into a production network, or misroute traffic to the test 1307 management network. 1309 Further, benchmarking is performed on a "black-box" basis, relying 1310 solely on measurements observable external to the DUT/SUT. 1312 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 1313 benchmarking purposes. Any implications for network security arising 1314 from the DUT/SUT SHOULD be identical in the lab and in production 1315 networks. 1317 9. Acknowledgments 1319 The following people made textual contributions to this document: Jay 1320 Karthik (Cisco Systems), Eric Brendel (AT&T), and Mohan Nanduri 1321 (Microsoft) The authors would like to thank them for the helpful 1322 discussions and their contributions to the document. 1324 10. References 1326 10.1. Normative References 1328 [I-D.ietf-bmwg-igp-dataplane-conv-term] 1329 Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology 1330 for Benchmarking Link-State IGP Data Plane Route 1331 Convergence", draft-ietf-bmwg-igp-dataplane-conv-term-23 1332 (work in progress), February 2011. 1334 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1335 Requirement Levels", BCP 14, RFC 2119, March 1997. 1337 [RFC2918] Chen, E., "Route Refresh Capability for BGP-4", RFC 2918, 1338 September 2000. 1340 [RFC4271] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway 1341 Protocol 4 (BGP-4)", RFC 4271, January 2006. 1343 10.2. Informative References 1345 [RFC1242] Bradner, S., "Benchmarking terminology for network 1346 interconnection devices", RFC 1242, July 1991. 1348 [RFC1983] Malkin, G., "Internet Users' Glossary", RFC 1983, 1349 August 1996. 1351 [RFC2285] Mandeville, R., "Benchmarking Terminology for LAN 1352 Switching Devices", RFC 2285, February 1998. 1354 [RFC2545] Marques, P. and F. Dupont, "Use of BGP-4 Multiprotocol 1355 Extensions for IPv6 Inter-Domain Routing", RFC 2545, 1356 March 1999. 1358 [RFC4098] Berkowitz, H., Davies, E., Hares, S., Krishnaswamy, P., 1359 and M. Lepp, "Terminology for Benchmarking BGP Device 1360 Convergence in the Control Plane", RFC 4098, June 2005. 1362 [RFC4724] Sangli, S., Chen, E., Fernando, R., Scudder, J., and Y. 1363 Rekhter, "Graceful Restart Mechanism for BGP", RFC 4724, 1364 January 2007. 1366 [RFC4760] Bates, T., Chandra, R., Katz, D., and Y. Rekhter, 1367 "Multiprotocol Extensions for BGP-4", RFC 4760, 1368 January 2007. 1370 [RFC5925] Touch, J., Mankin, A., and R. Bonica, "The TCP 1371 Authentication Option", RFC 5925, June 2010. 1373 Authors' Addresses 1375 Rajiv Papneja 1376 Huawei Technologies 1378 Email: rajiv.papneja@huawei.com 1379 Bhavani Parise 1380 Cisco Systems 1382 Email: bhavani@cisco.com 1384 Susan Hares 1385 Huawei Technologies 1387 Email: shares@huawei.com 1389 Ilya Varlashkin 1390 Easynet Global Services 1392 Email: ilya.varlashkin@easynet.com 1394 Eric Brendel 1395 Independent Consultant 1397 Email: brendel@pektel.com 1399 Mohan Nanduri 1400 Microsoft 1402 Email: mnanduri@microsoft.com 1404 Jay Karthik 1405 Cisco Systems 1407 Email: jkarthik@cisco.com