idnits 2.17.1

draft-ietf-bmwg-acc-bench-meth-05.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

  ** It looks like you're using RFC 3978 boilerplate.  You should update this
     to the boilerplate described in the IETF Trust License Policy document
     (see https://trustee.ietf.org/license-info), which is required now.

  -- Found old boilerplate from RFC 3978, Section 5.1 on line 20.

  -- Found old boilerplate from RFC 3978, Section 5.5 on line 525.

  -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 536.

  -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 543.

  -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 549.

  ** This document has an original RFC 3978 Section 5.4 Copyright Line,
     instead of the newer IETF Trust Copyright according to RFC 4748.

  ** This document has an original RFC 3978 Section 5.5 Disclaimer, instead
     of the newer disclaimer which includes the IETF Trust according to
     RFC 4748.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

  == No 'Intended status' indicated for this document; assuming Proposed
     Standard.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  ** There are 22 instances of lines with control characters in the document.

  ** The abstract seems to contain references ([4]), which it shouldn't.
     Please replace those with straight textual mentions of the documents in
     question.
  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the RFC 3978 Section 5.4 Copyright Line does not
     match the current year.

  -- The document seems to lack a disclaimer for pre-RFC5378 work, but may
     have content which was first submitted before 10 November 2008.  If you
     have contacted all the original authors and they are all willing to
     grant the BCP78 rights to the IETF Trust, then this is fine, and you can
     ignore this comment.  If not, you may need to add the pre-RFC5378
     disclaimer.  (See the Legal Provisions document at
     https://trustee.ietf.org/license-info for more information.)

  -- The document date (December 2006) is 6341 days in the past.  Is this
     intentional?

  Checking references for intended status: Proposed Standard
  ----------------------------------------------------------------------------

     (See RFCs 3967 and 4897 for information about using normative references
     to lower-maturity documents in RFCs)

  == Unused Reference: '2' is defined on line 454, but no explicit reference
     was found in the text.

  == Unused Reference: '3' is defined on line 457, but no explicit reference
     was found in the text.

  == Unused Reference: '5' is defined on line 466, but no explicit reference
     was found in the text.

  == Unused Reference: 'RFC3871' is defined on line 476, but no explicit
     reference was found in the text.

  == Unused Reference: 'NANOG25' is defined on line 480, but no explicit
     reference was found in the text.

  == Unused Reference: 'IEEECQR' is defined on line 483, but no explicit
     reference was found in the text.

  == Unused Reference: 'CONVMETH' is defined on line 486, but no explicit
     reference was found in the text.

  ** Downref: Normative reference to an Informational RFC: RFC 1242
     (ref. '1')

  ** Downref: Normative reference to an Informational RFC: RFC 2285
     (ref. '2')

  ** Downref: Normative reference to an Informational RFC: RFC 2544
     (ref. '3')

  == Outdated reference: A later version (-13) exists of
     draft-ietf-bmwg-acc-bench-term-09

  ** Downref: Normative reference to an Informational draft:
     draft-ietf-bmwg-acc-bench-term (ref. '4')

  == Outdated reference: A later version (-23) exists of
     draft-ietf-bmwg-igp-dataplane-conv-term-11

  ** Downref: Normative reference to an Informational draft:
     draft-ietf-bmwg-igp-dataplane-conv-term (ref. '5')

  == Outdated reference: A later version (-23) exists of
     draft-ietf-bmwg-igp-dataplane-conv-meth-11

     Summary: 10 errors (**), 0 flaws (~~), 12 warnings (==), 7 comments (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------

1    Network Working Group
2    INTERNET-DRAFT
3    Expires in: December 2006
4                                                         Scott Poretsky
5                                                     Reef Point Systems

7                                                            Shankar Rao
8                                                   Qwest Communications

10                                                             June 2006

12                       Methodology Guidelines for
13                    Accelerated Stress Benchmarking

16   Intellectual Property Rights (IPR) statement:
17   By submitting this Internet-Draft, each author represents that any
18   applicable patent or other IPR claims of which he or she is aware
19   have been or will be disclosed, and any of which he or she becomes
20   aware will be disclosed, in accordance with Section 6 of BCP 79.

22   Status of this Memo

24   Internet-Drafts are working documents of the Internet Engineering
25   Task Force (IETF), its areas, and its working groups.  Note that
26   other groups may also distribute working documents as
27   Internet-Drafts.

29   Internet-Drafts are draft documents valid for a maximum of six months
30   and may be updated, replaced, or obsoleted by other documents at any
31   time.  It is inappropriate to use Internet-Drafts as reference
32   material or to cite them other than as "work in progress."

34   The list of current Internet-Drafts can be accessed at
35   http://www.ietf.org/ietf/1id-abstracts.txt.
37   The list of Internet-Draft Shadow Directories can be accessed at
38   http://www.ietf.org/shadow.html.

40   Copyright Notice
41   Copyright (C) The Internet Society (2006).

43   ABSTRACT
44   Routers in an operational network are simultaneously configured
45   with multiple protocols and security policies while forwarding
46   traffic and being managed.  To accurately benchmark a router for
47   deployment it is necessary that the router be tested under these
48   simultaneous operational conditions, which is known as Stress
49   Testing.  This document provides the Methodology Guidelines for
50   performing Accelerated Stress Benchmarking of networking devices.
51   Descriptions of Test Topology, Benchmarks, and Reporting Format
52   are provided in addition to procedures for conducting various
53   test cases.  The methodology is to be used with the companion
54   terminology document [4].  These guidelines can be used as the
55   basis for additional methodology documents that benchmark
56   stress conditions for specific network technologies.

58                          Stress Benchmarking

59   Table of Contents
60   1. Introduction ............................................... 2
61   2. Existing definitions ....................................... 3
62   3. Test Setup.................................................. 3
63   3.1 Test Topologies............................................ 3
64   3.2 Test Considerations........................................ 3
65   3.3 Reporting Format........................................... 4
66   3.3.1 Configuration Sets....................................... 5
67   3.3.2 Startup Conditions....................................... 6
68   3.3.3 Instability Conditions................................... 6
69   3.3.4 Benchmarks............................................... 7
70   4. Example Test Case Procedure................................. 7
71   5. IANA Considerations......................................... 8
72   6. Security Considerations..................................... 9
73   7. Normative References........................................ 9
74   8. Informative References......................................10
75   9. Authors' Addresses..........................................10

77   1. Introduction
78   Router testing benchmarks have consistently been made in a monolithic
79   fashion wherein a single protocol or behavior is measured in an
80   isolated environment.  It is important to know the limits of a
81   networking device's behavior for each protocol in isolation; however,
82   this does not produce a reliable benchmark of the device's behavior
83   in an operational network.

85   Routers in an operational network are simultaneously configured with
86   multiple protocols and security policies while forwarding traffic
87   and being managed.  To accurately benchmark a router for deployment
88   it is necessary to test that router in operational conditions by
89   simultaneously configuring and scaling network protocols and security
90   policies, forwarding traffic, and managing the device.  It is helpful
91   to accelerate these network operational conditions with Instability
92   Conditions [4] so that the networking devices are stress tested.

94   This document provides the Methodology for performing Stress
95   Benchmarking of networking devices.  Descriptions of Test Topology,
96   Benchmarks, and Reporting Format are provided in addition to
97   procedures for conducting various test cases.  The methodology is
98   to be used with the companion terminology document [4].

100  Stress Testing of networking devices provides the following benefits:
101  1. Evaluation of multiple protocols enabled simultaneously as
102     configured in deployed networks
103  2. Evaluation of System and Software Stability
104  3. Evaluation of Manageability under stressful conditions
105  4. Identification of Buffer Overflow conditions
106  5. Identification of Software Coding bugs such as:
107     a. Memory Leaks
108     b. Suboptimal CPU Utilization
109     c. Coding Logic

112  These benefits produce significant advantages for network operations:
113  1. Increased stability of routers and protocols
114  2. Routers hardened against DoS attacks
115  3. Verified manageability under stress
116  4. Planning of router resources for growth and scale

118  2. Existing definitions
119  The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
120  "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
121  document are to be interpreted as described in BCP 14, RFC 2119
122  [6].  RFC 2119 defines the use of these key words to help make the
123  intent of standards track documents as clear as possible.  While this
124  document uses these keywords, this document is not a standards track
125  document.

127  Terms related to Accelerated Stress Benchmarking are defined in [4].

129  3. Test Setup
130  3.1 Test Topologies
131  Figure 1 shows the physical configuration to be used for the
132  methodologies provided in this document.  The number of interfaces
133  between the tester and DUT will scale depending upon the number of
134  control protocol sessions and traffic forwarding interfaces.  A
135  separate device may be required to externally manage the device in
136  the case that the test equipment does not support such
137  functionality.  Figure 2 shows the logical configuration for the
138  stress test methodologies.  Each plane may be emulated by a single
139  test device or by multiple test devices.

141  3.2 Test Considerations
142  The Accelerated Stress Benchmarking test can be applied in
143  service provider test environments to benchmark DUTs under
144  stress in an environment that is reflective of an operational
145  network.  A particular Configuration Set is defined and the
146  DUT is benchmarked using this configuration set and the
147  Instability Conditions.
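As a non-normative illustration, one such benchmarking pass over a Configuration Set and a set of Instability Conditions could be driven by a small software harness.  Every name and value in this sketch is hypothetical and is not defined by this methodology:

```python
# Hypothetical harness sketch (not part of this methodology): drive one
# benchmarking pass for a given Configuration Set and set of
# Instability Conditions, collecting benchmarks for each phase.

def run_pass(config_set, instability_conditions, measure):
    """Benchmark a DUT once.  `measure(phase)` stands in for real
    test-equipment control and per-second benchmark sampling."""
    results = {}
    # Startup phase: establish the Configuration Set with the DUT.
    results["startup"] = measure("startup")
    # Instability phase: apply the Instability Conditions.
    results["instability"] = measure("instability")
    # Recovery phase: stop applying all Instability Conditions.
    results["recovery"] = measure("recovery")
    return results

# Illustrative stub standing in for real measurements.
def fake_measure(phase):
    return {"aggregate_forwarding_rate_pps": 1_000_000, "phase": phase}

benchmarks = run_pass({"BGP": "Enabled"},
                      {"route_flap_per_min": 100},
                      fake_measure)
```

Repeating such a pass while varying its inputs corresponds to the iterative characterization described in this section.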
     Varying Configuration Sets and/or
148  Instability Conditions applied in an iterative fashion can
149  provide an accurate characterization of the DUT
150  to help determine future network deployments.

154                    ___________
155                   |    DUT    |
156                ___|Management |
157               |   |           |
158               |    -----------
159               \/
160                    ___________
161                   |           |
162                   |    DUT    |
163              |--->|           |<---|
164           xN |     -----------     | xN
165    interfaces |                   | interfaces
166              |     ___________     |
167              |    |           |    |
168              |--->|  Tester   |<---|
169                   |           |
170                    -----------

172                Figure 1. Physical Configuration

174    ___________                 ___________
175   | Control   |               | Management|
176   |  Plane    |___         ___|  Plane    |
177   |           |   |       |   |           |
178    -----------    |       |    -----------
179                   \/      \/           ___________
180                  ___________          | Security  |
181                 |           |<--------|  Plane    |
182                 |    DUT    |         |           |
183            |--->|           |<---|     -----------
184            |     -----------     |
185            |                     |
186            |     ___________     |
187            |    |   Data    |    |
188            |--->|   Plane   |<---|
189                 |           |
190                  -----------

192                Figure 2. Logical Configuration

194  3.3 Reporting Format

196  Each methodology requires reporting of information for test
197  repeatability when benchmarking the same or different devices.
198  The information to be reported comprises the Configuration Sets,
199  Instability Conditions, and Benchmarks, as defined in [4].  Example
200  reporting formats for each are provided below.

204  3.3.1 Configuration Sets

206  Configuration Sets may include, but are not limited to, the following
207  examples.
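As a non-normative aside, the PARAMETER/UNITS tables in the examples that follow can be generated from a structured record, which also makes the reported Configuration Set machine-readable.  This sketch is illustrative only; the record format and helper names are not defined by this document:

```python
# Illustrative sketch only: capture a Configuration Set as structured
# data and render it in the PARAMETER/UNITS table layout used by the
# examples in this section.  The parameter names are drawn from those
# examples; the record format itself is hypothetical.
routing_config_set = [
    ("BGP",                       "Enabled"),
    ("Number of EBGP Peers",      "10 Peers"),
    ("Number of IBGP Peers",      "30 Peers"),
    ("IGP",                       "Enabled"),
    ("Number of IGP Adjacencies", "30 Adjacencies"),
]

def render(config_set):
    """Return the two-column PARAMETER/UNITS text for a report."""
    width = max(len(name) for name, _ in config_set) + 4
    rows = ["PARAMETER".ljust(width) + "UNITS"]
    rows += [name.ljust(width) + value for name, value in config_set]
    return "\n".join(rows)

report = render(routing_config_set)
```

Recording the set this way supports the test-repeatability goal of the reporting format, since the same record can be replayed for a later benchmarking iteration.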
209  Example Routing Protocol Configuration Set -
210  PARAMETER                            UNITS
211  BGP                                  Enabled/Disabled
212  Number of EBGP Peers                 Peers
213  Number of IBGP Peers                 Peers
214  Number of BGP Route Instances        Routes
215  Number of BGP Installed Routes       Routes
216  MBGP                                 Enabled/Disabled
217  Number of MBGP Route Instances       Routes
218  Number of MBGP Installed Routes      Routes
219  IGP                                  Enabled/Disabled
220  IGP-TE                               Enabled/Disabled
221  Number of IGP Adjacencies            Adjacencies
222  Number of IGP Routes                 Routes
223  Number of Nodes per Area             Nodes

225  Example MPLS Protocol Configuration Set -
226  PARAMETER                            UNITS
227  MPLS-TE                              Enabled/Disabled
228  Number of Tunnels as Ingress         Tunnels
229  Number of Tunnels as Mid-Point       Tunnels
230  Number of Tunnels as Egress          Tunnels
231  LDP                                  Enabled/Disabled
232  Number of Sessions                   Sessions
233  Number of FECs                       FECs

235  Example Multicast Protocol Configuration Set -
236  PARAMETER                            UNITS
237  PIM-SM                               Enabled/Disabled
238  RP                                   Enabled/Disabled
239  Number of Multicast Groups           Groups
240  MSDP                                 Enabled/Disabled

242  Example Data Plane Configuration Set -
243  PARAMETER                            UNITS
244  Traffic Forwarding                   Enabled/Disabled
245  Aggregate Offered Load               bps (or pps)
246  Number of Ingress Interfaces         number
247  Number of Egress Interfaces          number

249  TRAFFIC PROFILE
250  Packet Size(s)                       bytes
251  Offered Load (interface)             array of bps
252  Number of Flows                      number
253  Encapsulation(flow)                  array of encapsulation type

256  Management Configuration Set -
257  PARAMETER                            UNITS
258  SNMP GET Rate                        SNMP GETs/minute
259  Logging                              Enabled/Disabled
260  Protocol Debug                       Enabled/Disabled
261  Telnet Rate                          Sessions/Hour
262  FTP Rate                             Sessions/Hour
263  Concurrent Telnet Sessions           Sessions
264  Concurrent FTP Sessions              Sessions
265  Packet Statistics Collector          Enabled/Disabled
266  Statistics Sampling Rate             X:1 packets

268  Security Configuration Set -
269  PARAMETER                            UNITS
270  Packet Filters                       Enabled/Disabled
271  Number of For-Me Filters             number
272  Number of For-Me Filter Rules        number
273  Number of Traffic Filters            number
274  Number of Traffic Filter Rules       number
275  IPsec Tunnels                        number
276  SSH                                  Enabled/Disabled
277  Number of Simultaneous SSH Sessions  number
278  RADIUS                               Enabled/Disabled
279  TACACS                               Enabled/Disabled

281  3.3.2 Startup Conditions
282  Startup Conditions may include, but are not limited to, the following
283  examples:
284  PARAMETER                            UNITS
285  EBGP peering sessions negotiated     Total EBGP Sessions
286  IBGP peering sessions negotiated     Total IBGP Sessions
287  ISIS adjacencies established         Total ISIS Adjacencies
288  ISIS route learning rate             ISIS routes per second
289  IPsec tunnels negotiated             Total IPsec Tunnels
290  IPsec tunnel establishment rate      IPsec tunnels per second

292  3.3.3 Instability Conditions
293  Instability Conditions may include, but are not limited to, the
294  following examples:
295  PARAMETER                            UNITS
296  Interface Shutdown Cycling Rate      interfaces per minute
297  ISIS Route Flap Rate                 routes per minute
298  LSP Reroute Rate                     LSPs per minute
299  Overloaded Links                     number
300  Amount Links Overloaded              % of bandwidth
301  FTP Rate                             Mb/minute
302  IPsec Tunnel Flap Rate               tunnels per minute
303  Filter Policy Changes                policies per hour
304  SSH Session Rate                     SSH sessions per hour
305  Telnet Session Rate                  Telnet sessions per hour
306  Command Entry Rate                   commands per hour
307  Message Flood Rate                   messages per second

310  3.3.4 Benchmarks

312  Benchmarks are as defined in [4] and are listed as follows:
313  PARAMETER                                UNITS     PHASE
314  Stable Aggregate Forwarding Rate         pps       Startup
315  Stable Latency                           seconds   Startup
316  Stable Session Count                     sessions  Startup
317  Unstable Aggregate Forwarding Rate       pps       Instability
318  Degraded Aggregate Forwarding Rate       pps       Instability
319  Ave. Degraded Aggregate Forwarding Rate  pps       Instability
320  Unstable Latency                         seconds   Instability
321  Unstable Uncontrolled Sessions Lost      sessions  Instability
322  Recovered Aggregate Forwarding Rate      pps       Recovery
323  Recovered Latency                        seconds   Recovery
324  Recovery Time                            seconds   Recovery
325  Recovered Uncontrolled Sessions          sessions  Recovery

327  4. Example Test Case Procedure
328  1. Report Configuration Set

330     BGP Enabled
331     10 EBGP Peers
332     30 IBGP Peers
333     500K BGP Route Instances
334     160K BGP FIB Routes

336     ISIS Enabled
337     ISIS-TE Disabled
338     30 ISIS Adjacencies
339     10K ISIS Level-1 Routes
340     250 ISIS Nodes per Area

342     MPLS Disabled
343     IP Multicast Disabled

345     IPsec Enabled
346     10K IPsec Tunnels
347     640 Firewall Policies
348     100 Firewall Rules per Policy

350     Traffic Forwarding Enabled
351     Aggregate Offered Load 10Gbps
352     30 Ingress Interfaces
353     30 Egress Interfaces
354     Packet Size(s) = 64, 128, 256, 512, 1024, 1280, 1518 bytes
355     Forwarding Rate[1..30] = 1Gbps
356     10000 Flows
357     Encapsulation[1..5000] = IPv4
358     Encapsulation[5001..10000] = IPsec

361     Logging Enabled
362     Protocol Debug Disabled
363     SNMP Enabled
364     SSH Enabled
365     10 Concurrent SSH Sessions
366     FTP Enabled
367     RADIUS Enabled
368     TACACS Disabled
369     Packet Statistics Collector Enabled

371  2. Begin Startup Conditions with the DUT

373     10 EBGP peering sessions negotiated
374     30 IBGP peering sessions negotiated
375     1K BGP routes learned per second
376     30 ISIS adjacencies established
377     1K ISIS routes learned per second
378     10K IPsec tunnels negotiated

380  3. Establish Configuration Sets with the DUT

382  4. Report Stability Benchmarks as follows:

384     Stable Aggregate Forwarding Rate
385     Stable Latency
386     Stable Session Count

388     It is RECOMMENDED that the benchmarks be measured and
389     recorded at one-second intervals.

391  5. Apply Instability Conditions

393     Interface Shutdown Cycling Rate = 1 interface every 5 minutes
394     BGP Session Flap Rate = 1 session every 10 minutes
395     BGP Route Flap Rate = 100 routes per minute
396     ISIS Route Flap Rate = 100 routes per minute
397     IPsec Tunnel Flap Rate = 1 tunnel per minute
398     Overloaded Links = 5 of 30
399     Amount Links Overloaded = 20%
400     SNMP GETs = 1 per second
401     SSH Session Rate = 6 sessions per hour
402     SSH Session Duration = 10 minutes
403     Command Rate via SSH = 20 commands per minute
404     FTP Restart Rate = 10 continuous transfers (Puts/Gets)
405        per hour
406     FTP Transfer Rate = 100 Mbps
407     Statistics Sampling Rate = 1:1 packets
408     RADIUS Server Loss Rate = 1 per hour
409     RADIUS Server Loss Duration = 3 seconds

411  6. Apply the Instability Conditions specific to the test case.

415  7. Report Instability Benchmarks as follows:

416     Unstable Aggregate Forwarding Rate
417     Degraded Aggregate Forwarding Rate
418     Ave. Degraded Aggregate Forwarding Rate
419     Unstable Latency
420     Unstable Uncontrolled Sessions Lost

422     It is RECOMMENDED that the benchmarks be measured and
423     recorded at one-second intervals.

425  8. Stop applying all Instability Conditions

427  9. Report Recovery Benchmarks as follows:

429     Recovered Aggregate Forwarding Rate
430     Recovered Latency
431     Recovery Time
432     Recovered Uncontrolled Sessions

434     It is RECOMMENDED that the benchmarks be measured and
435     recorded at one-second intervals.

437  10. Optional - Change the Configuration Set and/or Instability
438      Conditions for the next iteration

440  5. IANA Considerations
441  This document requires no IANA considerations.

443  6. Security Considerations
444  Documents of this type do not directly affect the security of
445  the Internet or of corporate networks as long as benchmarking
446  is not performed on devices or systems connected to operating
447  networks.

449  7. Normative References

451  [1] Bradner, S., Editor, "Benchmarking Terminology for Network
452      Interconnection Devices", RFC 1242, October 1991.

454  [2] Mandeville, R., "Benchmarking Terminology for LAN Switching
455      Devices", RFC 2285, June 1998.

457  [3] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
458      Network Interconnect Devices", RFC 2544, March 1999.

460  [4] Poretsky, S. and Rao, S., "Terminology for Accelerated
461      Stress Benchmarking", draft-ietf-bmwg-acc-bench-term-09,
462      work in progress, June 2006.

466  [5] Poretsky, S., "Benchmarking Terminology for IGP Data Plane
467      Route Convergence",
468      draft-ietf-bmwg-igp-dataplane-conv-term-11, work in
469      progress, June 2006.

471  [6] Bradner, S., "Key words for use in RFCs to Indicate
472      Requirement Levels", BCP 14, RFC 2119, March 1997.

474  8. Informative References

476  [RFC3871]  Jones, G., Ed., "Operational Security Requirements
477      for Large Internet Service Provider (ISP) IP Network
478      Infrastructure", RFC 3871, September 2004.

480  [NANOG25]  Poretsky, S., "Core Router Evaluation for Higher
481      Availability", NANOG 25, June 8, 2002, Toronto, CA.

483  [IEEECQR]  Poretsky, S., "Router Stress Testing to Validate
484      Readiness for Network Deployment", IEEE CQR 2003.

486  [CONVMETH] Poretsky, S., "Benchmarking Methodology for IGP Data
487      Plane Route Convergence",
488      draft-ietf-bmwg-igp-dataplane-conv-meth-11, work in
489      progress, June 2006.

491  9. Authors' Addresses

493  Scott Poretsky
494  Reef Point Systems
495  8 New England Executive Park
496  Burlington, MA 01803
497  USA
498  Phone: + 1 781 395 5090
499  EMail: sporetsky@reefpoint.com

501  Shankar Rao
502  Qwest Communications
503  1801 California Street
504  8th Floor
505  Denver, CO 80202
506  USA
507  Phone: + 1 303 437 6643
508  Email: shankar.rao@qwest.com

511  Full Copyright Statement

513  Copyright (C) The Internet Society (2006).
515  This document is subject to the rights, licenses and restrictions
516  contained in BCP 78, and except as set forth therein, the authors
517  retain all their rights.

519  This document and the information contained herein are provided on an
520  "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
521  OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
522  ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
523  INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
524  INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
525  WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

527  Intellectual Property

529  The IETF takes no position regarding the validity or scope of any
530  Intellectual Property Rights or other rights that might be claimed to
531  pertain to the implementation or use of the technology described in
532  this document or the extent to which any license under such rights
533  might or might not be available; nor does it represent that it has
534  made any independent effort to identify any such rights.  Information
535  on the procedures with respect to rights in RFC documents can be
536  found in BCP 78 and BCP 79.

538  Copies of IPR disclosures made to the IETF Secretariat and any
539  assurances of licenses to be made available, or the result of an
540  attempt made to obtain a general license or permission for the use of
541  such proprietary rights by implementers or users of this
542  specification can be obtained from the IETF on-line IPR repository at
543  http://www.ietf.org/ipr.

545  The IETF invites any interested party to bring to its attention any
546  copyrights, patents or patent applications, or other proprietary
547  rights that may cover technology that may be required to implement
548  this standard.  Please address the information to the IETF at
549  ietf-ipr@ietf.org.
551  Acknowledgement

553  Funding for the RFC Editor function is currently provided by the
554  Internet Society.