Network Working Group                                        S. Poretsky
Internet Draft                                         NextPoint Networks
Expires: August 2008
Intended Status: Informational                                Shankar Rao
                                                      Qwest Communications

                                                        February 25, 2008

            Terminology for Accelerated Stress Benchmarking

Intellectual Property Rights (IPR) statement:
   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
Note that 22 other groups may also distribute working documents as 23 Internet-Drafts. 25 Internet-Drafts are draft documents valid for a maximum of six months 26 and may be updated, replaced, or obsoleted by other documents at any 27 time. It is inappropriate to use Internet-Drafts as reference 28 material or to cite them other than as "work in progress." 30 The list of current Internet-Drafts can be accessed at 31 http://www.ietf.org/ietf/1id-abstracts.txt. 33 The list of Internet-Draft Shadow Directories can be accessed at 34 http://www.ietf.org/shadow.html. 36 Copyright Notice 37 Copyright (C) The IETF Trust (2008). 39 ABSTRACT 40 This document provides the Terminology for performing Accelerated 41 Stress Benchmarking of networking devices. The three phases of 42 the Stress Test: Startup, Instability and Recovery are defined 43 along with the benchmarks and configuration terms associated with 44 the each phase. Also defined are the Benchmark Planes fundamental 45 to stress testing configuration, setup and measurement. The 46 terminology is to be used with the companion framework and 47 methodology documents. 49 Table of Contents 50 1. Introduction ............................................... 3 51 2. Existing definitions ....................................... 3 52 3. Term definitions............................................ 4 53 Stress Benchmarking 55 3.1 General Terms............................................. 4 56 3.1.1 Benchmark Planes...................................... 4 57 3.1.2 Configuration Sets.................................... 5 58 3.1.3 Startup Conditions.................................... 5 59 3.1.4 Instability Conditions................................ 6 60 3.1.5 Aggregate Forwarding Rate............................. 6 61 3.1.6 Discontinued Sessions................................. 7 62 3.1.7 Uncontrolled Session Loss............................. 7 63 3.2 Benchmark Planes.......................................... 8 64 3.2.1 Control Plane......................................... 8 65 3.2.2 Data Plane............................................ 8 66 3.2.3 Management Plane...................................... 8 67 3.2.4 Security Plane........................................ 
9 68 3.3 Startup...................................................10 69 3.3.1 Startup Phase.........................................10 70 3.3.2 Benchmarks............................................10 71 3.3.2.1 Stable Aggregate Forwarding Rate..................10 72 3.3.2.2 Stable Latency....................................11 73 3.3.2.3 Stable Session Count..............................11 74 3.3.3 Control Plane.........................................12 75 3.3.3.1 Control Plane Configuration Set...................12 76 3.3.3.2 Control Plane Startup Conditions..................13 77 3.3.4 Data Plane............................................13 78 3.3.4.1 Data Plane Configuration Set......................13 79 3.3.4.2 Traffic Profile...................................14 80 3.3.5 Management Plane......................................14 81 3.3.5.1 Management Plane Configuration Set................14 82 3.3.6 Security Plane........................................15 83 3.3.6.1 Security Plane Configuration Set..................15 84 3.3.6.2 Security Plane Startup Conditions.................16 85 3.4 Instability...............................................17 86 3.4.1 Instability Phase.....................................17 87 3.4.2 Benchmarks............................................17 88 3.4.2.1 Unstable Aggregate Forwarding Rate................17 89 3.4.2.2 Aggregate Forwarding Rate Degradation.............18 90 3.4.2.3 Average Aggregate Forwarding Rate Degradation.....18 91 3.4.2.4 Unstable Latency..................................19 92 3.4.2.5 Unstable Uncontrolled Sessions Lost...............19 93 3.4.3 Instability Conditions................................20 94 3.4.3.1 Control Plane Instability Conditions..............20 95 3.4.3.2 Data Plane Instability Conditions.................20 96 3.4.3.3 Management Plane Instability Conditions...........21 97 3.4.3.4 Security Plane Instability Conditions.............21 98 3.5 Recovery..................................................22 99 3.5.1 Recovery Phase........................................22 100 3.5.2 Benchmarks............................................22 101 3.5.2.1 Recovered Aggregate Forwarding Rate...............22 102 3.5.2.2 Recovered Latency.................................23 103 3.5.2.3 Recovery Time.....................................23 104 3.5.2.4 Recovered Uncontrolled Sessions Lost..............24 105 3.5.2.5 Variability Benchmarks............................24 106 4. IANA Considerations.........................................25 107 Stress Benchmarking 109 5. Security Considerations.....................................25 110 6. Acknowledgements............................................25 111 7. References..................................................25 112 8. Author's Address............................................26 113 Appendix 1 - White Box Benchmarks..............................26 115 1. Introduction 117 Routers in an operational network are configured with multiple 118 protocols and security policies while simultaneously forwarding 119 traffic and being managed. To accurately benchmark a router for 120 deployment, it is necessary to test that router under operational 121 conditions by simultaneously configuring and scaling network 122 protocols and security policies, forwarding traffic, and managing 123 the device in a lab environment. It is useful to accelerate these 124 network operational conditions so that the router under test can 125 be benchmarked in a lab environment with a shorter test duration. 
   Testing a router in accelerated network conditions is known as
   Accelerated Stress Benchmarking.

   This document provides the Terminology for performing Stress
   Benchmarking of networking devices.  The three phases of the Stress
   Test (Startup, Instability, and Recovery) are defined along with the
   benchmark and configuration terms associated with each phase.
   Benchmarks for stress testing are defined using the Aggregate
   Forwarding Rate and control plane Session Count during each phase of
   the test.  For each plane, the Configuration Set, Startup Conditions,
   and Instability Conditions are defined.  Also defined are the
   Benchmark Planes fundamental to stress testing configuration, setup,
   and measurement.  These are the Control Plane, Data Plane, Management
   Plane, and Security Plane.  Multiple benchmarks are measured for each
   Benchmark Plane during each Phase.  Benchmarks can be compared across
   multiple planes for the same DUT or at the same plane for two or more
   DUTs.  Benchmarks of internal DUT characteristics such as memory and
   CPU utilization (also known as White Box benchmarks) are described in
   Appendix 1 to allow additional characterization of DUT behavior.  The
   terminology is to be used with the companion methodology document
   [4].  The sequence of phases, actions, and benchmarks is shown in
   Table 1.

2. Existing definitions

   RFC 1242 [1] and RFC 2285 [2] should be consulted before attempting
   to make use of this document.  For the sake of clarity and continuity
   this document adopts the template for definitions set out in Section
   2 of RFC 1242.  Definitions are indexed and grouped together in
   sections for ease of reference.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119 [5].
   RFC 2119 defines the use of these key words to help make the intent
   of standards track documents as clear as possible.  While this
   document uses these keywords, it is not a standards track document.

   Table 1. Phase Sequence and Benchmarks

   III. Recovery Phase      II. Instability Phase     I. Startup Phase
   <-----------------<---<-------------------<----<--------------<
   Remove Instability       Achieve Configuration     Apply Startup
   Conditions                Set and Apply             Conditions
                             Instability Conditions

   Benchmark:                Benchmark:                Benchmark:
   Recovered Aggregate       Unstable Aggregate        Stable Aggregate
   Forwarding Rate           Forwarding Rate           Forwarding Rate

                             Aggregate Forwarding
                             Rate Degradation

                             Average Aggregate
                             Forwarding Rate
                             Degradation

   Recovered Latency         Unstable Latency          Stable Latency

   Recovered Uncontrolled    Unstable Uncontrolled     Stable Session
   Sessions Lost             Sessions Lost             Count

   Recovery Time

3. Term definitions

3.1 General Terms

3.1.1 Benchmark Planes

   Definition:
      The features, conditions, and behavior for the Accelerated Stress
      Benchmarking.

   Discussion:
      There are four Benchmark Planes: Control Plane, Data Plane,
      Management Plane, and Security Plane, as shown in Figure 1.
      Configuration, Startup Conditions, Instability Conditions, and
      Failure Conditions used for each test are defined for each of
      these four Benchmark Planes.
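      As an illustration of how benchmarks measured per Benchmark Plane
      and per phase might be organized for comparison across planes or
      across DUTs, the following sketch (Python; all names and values
      are hypothetical and are not required by this terminology or by
      the methodology document [4]) keys each measurement by DUT,
      Benchmark Plane, and phase.

         # Illustrative result bookkeeping only; nothing here is mandated.
         PLANES = ("Control", "Data", "Management", "Security")
         PHASES = ("Startup", "Instability", "Recovery")

         results = {}  # (dut, plane, phase) -> {benchmark name: value}

         def record(dut, plane, phase, benchmark, value):
             # Store one benchmark value for a DUT, Benchmark Plane, and phase.
             assert plane in PLANES and phase in PHASES
             results.setdefault((dut, plane, phase), {})[benchmark] = value

         # Example: the same benchmark recorded for two DUTs can then be
         # compared at the same plane and phase.
         record("DUT-A", "Data", "Startup",
                "Stable Aggregate Forwarding Rate (pps)", 950000)
         record("DUT-B", "Data", "Startup",
                "Stable Aggregate Forwarding Rate (pps)", 910000)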
204 Measurement units: 205 N/A 207 Issues: 208 None 210 See Also: 211 Control Plane 212 Data Plane 213 Management Plane 214 Security Plane 215 Stress Benchmarking 217 ___________ ___________ 218 | Control | | Management| 219 | Plane |___ ___| Plane | 220 | | | | | | 221 ----------- | | ----------- 222 \/ \/ ___________ 223 ___________ | Security | 224 | |<-----------| Plane | 225 | DUT | | | 226 |--->| |<---| ----------- 227 | ----------- | 228 | | 229 | ___________ | 230 | | Data | | 231 |--->| Plane |<---| 232 | | 233 ----------- 235 Figure 1. Router Accelerated Stress Benchmarking Planes 237 3.1.2 Configuration Sets 239 Definition: 240 The offered load, features, and scaling limits used during the 241 Accelerated Stress Benchmarking. 243 Discussion: 244 There are four Configuration Sets: Control Plane Configuration 245 Set, Data Plane Configuration Set, Management Plane Configuration 246 Set, and Security Plane Configuration Set. The minimum 247 Configuration Set that MUST be used is discussed in the 248 Methodology document [4]. 250 Measurement units: 251 N/A 253 Issues: 254 None 256 See Also: 257 Control Plane Configuration Set 258 Data Plane Configuration Set 259 Management Plane Configuration Set 260 Security Plane Configuration Set 262 3.1.3 Startup Conditions 264 Definition: 265 Test conditions applied at the start of the Accelerated 266 Stress Benchmark to establish conditions for the remainder of 267 the test. 269 Stress Benchmarking 271 Discussion: 272 Startup Conditions may cause stress on the DUT and produce 273 failure. Startup Conditions are defined for the Control 274 Plane and Security Plane. 276 Measurement units: 277 N/A 279 Issues: 280 None 282 See Also: 283 Control Plane Startup Conditions 284 Data Plane Startup Conditions 285 Management Plane Startup Conditions 286 Security Plane Startup Conditions 288 3.1.4 Instability Conditions 290 Definition: 291 Test conditions applied during the Accelerated Stress 292 Benchmark to produce instability and stress the DUT. 294 Discussion: 295 Instability Conditions are applied to the DUT after the 296 Startup Conditions have completed. Instability Conditions 297 occur for the Control Plane, Data Plane, Management Plane, 298 and Security Plane. 300 Measurement units: 301 N/A 303 Issues: None 305 See Also: 306 Control Plane Instability Conditions 307 Data Plane Instability Conditions 308 Management Plane Instability Conditions 309 Security Plane Instability Conditions 311 3.1.5 Aggregate Forwarding Rate 313 Definition: 314 Sum of forwarding rates for all interfaces on the 315 DUT. 317 Discussion: 318 Each interface of the DUT forwards traffic at some 319 measured rate. The Aggregate Forwarding Rate is the 320 sum of forwarding rates for all interfaces on the DUT. 322 Stress Benchmarking 324 Measurement units: 325 pps 327 Issues: 328 None 330 See Also: 331 Startup Phase 333 3.1.6 Discontinued Sessions 335 Definition: 336 Control Plane sessions that are intentionally brought 337 down during the Stress test. 339 Discussion: 340 Discontinued Sessions is performed during the test in 341 order to stress the DUT by forcing it to tear down Control 342 Plane sessions while handling traffic. It is assumed that 343 the test equipment is able to control protocol session 344 state with the DUT and is therefore able to introduce 345 Discontinued Sessions. 
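      For illustration, the sketch below (Python; the session IDs and
      helper names are hypothetical assumptions, not part of this
      terminology) shows one way test equipment might track which
      Control Plane sessions it intentionally tore down, so that
      Discontinued Sessions can later be separated from Uncontrolled
      Session Loss (Section 3.1.7).

         # Illustrative bookkeeping on the test equipment; not mandated
         # by this document.
         configured_sessions = {"bgp-1", "bgp-2", "ospf-1", "ldp-1"}
         sessions_up = set(configured_sessions)   # currently established
         discontinued_sessions = set()            # torn down by the tester

         def discontinue(session_id):
             # Intentionally bring down one session (a Discontinued Session).
             discontinued_sessions.add(session_id)
             sessions_up.discard(session_id)

         def uncontrolled_session_loss():
             # Sessions down although the tester did not bring them down.
             return configured_sessions - sessions_up - discontinued_sessions

         discontinue("bgp-2")
         sessions_up.discard("ospf-1")   # e.g., the DUT drops a session itself
         assert uncontrolled_session_loss() == {"ospf-1"}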
347 Measurement units: 348 None 350 Issues: 351 None 353 See Also: 354 Uncontrolled Session Loss 356 3.1.7 Uncontrolled Session Loss 357 Definition: 358 Control Plane sessions that are in the down state 359 but were not intentionally brought down during the 360 Stress test. 362 Discussion: 363 The test equipment is able to control protocol 364 session state with the DUT. The test equipment 365 is also to monitor for sessions lost with the 366 DUT which the test equipment itself did not 367 intentionally bring down. 369 Measurement units: 370 N/A 372 Issues: 373 None 375 See Also: 376 Discontinued Sessions 377 Stress Benchmarking 379 3.2 Benchmark Planes 381 3.2.1 Control Plane 382 Definition: 383 The Description of the control protocols enabled for 384 the Accelerated Stress Benchmarking. 386 Discussion: 387 The Control Plane defines the Configuration, Startup 388 Conditions, and Instability Conditions of the control 389 protocols. Control Plane protocols MAY include routing 390 protocols, multicast protocols, and MPLS protocols. 391 These can be enabled or disabled for a benchmark test. 393 Measurement units: 394 N/A 396 Issues: 397 None 399 See Also: 400 Benchmark Planes 401 Control Plane Configuration Set 402 Control Plane Startup Conditions 403 Control Plane Instability Conditions 405 3.2.2 Data Plane 406 Definition: 407 The data traffic profile used for the Accelerated Stress 408 Benchmarking. 410 Discussion: 411 The Data Plane defines the Configuration, Startup 412 Conditions, and Instability Conditions of the data 413 traffic. The Data Plane includes the traffic and 414 interface profile. 416 Issues: 417 None 419 Measurement Units: 420 N/A 422 See Also: 423 Benchmark Planes 424 Data Plane Configuration Set 425 Data Plane Startup Conditions 426 Data Plane Instability Conditions 428 3.2.3 Management Plane 429 Definition: 430 The Management features and tools used for the 431 Accelerated Stress Benchmarking. 433 Stress Benchmarking 435 Discussion: 436 A key component of the Accelerated Stress Benchmarking is the 437 Management Plane to assess manageability of the router 438 under stress. The Management Plane defines the Configuration, 439 Startup Conditions, and Instability Conditions of the 440 management protocols and features. The Management Plane 441 includes SNMP, Logging/Debug, Statistics Collection, and 442 management configuration sessions such as telnet, SSH, and 443 serial console. 445 Measurement units: 446 N/A 448 Issues: 449 None 451 See Also: 452 Benchmark Planes 453 Management Plane Configuration Set 454 Management Plane Startup Conditions 455 Management Plane Instability Conditions 457 3.2.4 Security Plane 459 Definition: 460 The Security features used during the Accelerated Stress 461 Benchmarking. 463 Discussion: 464 The Security Plane defines the Configuration, Startup 465 Conditions, and Instability Conditions of the security 466 features and protocols. The Security Plane includes the 467 ACLs, Firewall, Secure Protocols, and User Login. 469 Measurement units: 470 N/A 472 Issues: 473 None 475 See Also: 476 Benchmark Planes 477 Security Plane Configuration Set 478 Security Plane Startup Conditions 479 Security Plane Instability Conditions 480 Stress Benchmarking 482 3.3 Startup 484 3.3.1 Startup Phase 486 Definition 487 The step of the benchmarking test in which the 488 Startup Conditions are generated with the DUT. This 489 begins with the attempt to establish the first session 490 and ends when the last Control Plane session is 491 established. 
   Discussion:
      The Startup Phase is the first Phase of the benchmarking test,
      preceding the Instability Phase and Recovery Phase.  It is
      specified by the Configuration Sets and Startup Conditions for
      each Benchmark Plane.  The Startup Phase ends and the Instability
      Phase MUST begin when the Configuration Sets are achieved with the
      DUT.  The DUT MUST be stable and without failure during the
      Startup Phase to continue to the Instability Phase.  If there is a
      failure during the Startup Phase, then the test MUST be restarted
      with new Startup Conditions.

   Measurement Units:
      None

   Issues:
      None

   See Also:
      Benchmark Planes
      Configuration Sets
      Startup Conditions
      Instability Phase
      Recovery Phase

3.3.2 Benchmarks

3.3.2.1 Stable Aggregate Forwarding Rate

   Definition:
      Sum of forwarding rates for all interfaces on the DUT during the
      Startup Phase.

   Discussion:
      The Stable Aggregate Forwarding Rate is calculated from
      measurement samples throughout the entire Startup Phase.  Stable
      Aggregate Forwarding Rate is the calculated average of the samples
      measured during the Startup Phase.  It is RECOMMENDED that the
      sample measurements be made on every DUT interface every 1 second.

   Measurement units:
      pps

   Issues:
      The act of the DUT establishing the Startup Conditions could
      influence the forwarding rate in certain implementations so that
      this "baseline" for the remainder of the test is lowered.  The
      alternative is to change the definition of Stable Aggregate
      Forwarding Rate so that it is measured after Startup Conditions
      are achieved.  The disadvantage of that definition would be that
      it loses measurement of any impact that establishing Startup
      Conditions would have on forwarding rate.  When comparing the
      Stable Aggregate Forwarding Rate benchmark of two devices, it is
      preferred to know the impact establishing Startup Conditions has
      on Forwarding Rate.

   See Also:
      Startup Phase
      Aggregate Forwarding Rate

3.3.2.2 Stable Latency

   Definition:
      Average measured latency of traffic forwarded by the DUT during
      the Startup Phase.

   Discussion:
      Stable Latency is the calculated average Latency during the
      Startup Phase.

   Measurement units:
      seconds

   Issues:
      None

   See Also:
      Startup Phase
      Stable Aggregate Forwarding Rate

3.3.2.3 Stable Session Count

   Definition:
      Total number of control plane sessions/adjacencies established
      and maintained by the DUT during the Startup Phase and prior to
      Instability Conditions being initiated.

   Discussion:
      This measurement SHOULD be made after the Control Plane Startup
      Conditions are applied to the DUT.

   Measurement units:
      sessions

   Issues:
      None

   See Also:
      Startup Phase

3.3.3 Control Plane

3.3.3.1 Control Plane Configuration Set

   Definition:
      The control protocols and scaling values used for the Accelerated
      Stress Benchmarking.

   Discussion:
      The Control Plane Configuration Set is represented in Figure 2 and
      specifies protocol configurations for protocols such as, but not
      limited to, Routing, Multicast, SIP, and MPLS.  Specific protocols
      can be enabled or disabled for a benchmark test.
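      The following sketch (Python; the protocol names and scaling
      values are purely illustrative assumptions and do not represent
      the minimum Configuration Set, which is specified in the
      methodology document [4]) shows how a Control Plane Configuration
      Set might be recorded as protocols with their enabled state and
      scaling values.

         # Hypothetical Control Plane Configuration Set; example values only.
         control_plane_configuration_set = {
             "BGP":       {"enabled": True,  "sessions": 100, "routes_per_session": 10000},
             "OSPF":      {"enabled": True,  "adjacencies": 50, "routes": 5000},
             "Multicast": {"enabled": True,  "groups": 500},
             "MPLS":      {"enabled": True,  "tunnels": 200},
             "SIP":       {"enabled": False, "sessions": 0},
         }

         # Specific protocols can be enabled or disabled for a given test.
         enabled_protocols = [name for name, cfg in
                              control_plane_configuration_set.items()
                              if cfg["enabled"]]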
612 Measurement units: 613 N/A 615 Issues: 616 None 618 See Also: 619 Data Plane Configuration Set 620 Management Configuration Set 621 Security Configuration Set 623 ____________ ____________ ____________ 624 | Routing | | Multicast | | MPLS | 625 | Protocols |___ | Protocols | __| Protocols | 626 | | | | | | | | 627 ------------ | ------------ | ------------ 628 | | | 629 | | | 630 | \/ | 631 | ___________ | 632 | | | | 633 ____________ +------->| DUT |<------+ 634 | SIP |----------->| | 635 | Sessions | ----------- 636 ------------ 638 Figure 2. Control Plane Configuration Module 639 Stress Benchmarking 641 3.3.3.2 Control Plane Startup Conditions 643 Definition: 644 Control Plane conditions that occur at the start 645 of the Accelerated Stress Benchmarking to establish conditions 646 for the remainder of the test. 648 Discussion: 649 Startup Conditions may cause stress on the DUT and produce 650 failure. Startup Conditions for the Control Plane include 651 session establishment rate, number of sessions established 652 and number of routes learned. 654 Measurement units: 655 N/A 657 Issues: 658 None 660 See Also: 661 Startup Conditions 662 Security Plane Startup Conditions 663 Control Plane Configuration Set 665 3.3.4 Data Plane 666 3.3.4.1 Data Plane Configuration Set 668 Definition: 669 The data traffic profile and interfaces that are enabled for 670 the Accelerated Stress Benchmarking. 672 Discussion: 673 Data Plane Configuration Set includes the Traffic Profile and 674 interfaces used for the Accelerated Stress Benchmarking. 675 The interface type(s) and number of interfaces for each 676 interface type MUST be reported. 678 Measurement Units: 679 N/A 681 Issues: None 683 See Also: 684 Traffic Profile 685 Stress Benchmarking 687 3.3.4.2 Traffic Profile 688 Definition 689 The characteristics of the Offered Load to the DUT on each 690 interface for the Accelerated Stress Benchmarking. 692 Discussion 693 The Traffic Profile specifies the number of packet size(s), 694 packet rate, number of flows, and encapsulation on a 695 per-interface basis used for the offered load to the DUT. 697 Measurement Units: 698 Traffic Profile is reported as follows: 700 Parameter Units 701 --------- ------ 702 Packet Size(s) bytes 703 Packet Rate(interface) array of packets per second 704 Aggregate Offered Load pps 705 Number of Flows number of flows 706 Traffic Type array of (RTP, UDP, TCP, other) 707 Encapsulation(flow) array of encapsulation type 708 Mirroring enabled/disabled 710 Issues: 711 None 713 See Also: 714 Data Plane Configuration Set 716 3.3.5 Management Plane 717 3.3.5.1 Management Plane Configuration Set 719 Definition: 720 The router management features enabled for the 721 Accelerated Stress Benchmark. 723 Discussion: 724 A key component of the Accelerated Stress Benchmark is the 725 Management Configuration Set to assess manageability of the 726 router under stress. The Management Configuration Set defines 727 the management configuration of the DUT. Features that are 728 part of the Management Configuration Set include access, SNMP, 729 Logging/Debug, and Statistics Collection, and services such as 730 FTP, as shown in Figure 3. These features SHOULD be enabled 731 throughout the Stress test. 
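      As with the other Configuration Sets, the Management Plane
      Configuration Set can be reported as a list of enabled features.
      The sketch below (Python; feature names and values are
      illustrative assumptions, not requirements of this document)
      reflects the features shown in Figure 3 and the expectation that
      they remain enabled throughout the Stress test.

         # Hypothetical Management Plane Configuration Set; example values only.
         management_plane_configuration_set = {
             "snmp":                  {"enabled": True, "polling_interval_s": 60},
             "logging_debug":         {"enabled": True},
             "statistics_collection": {"enabled": True, "export_interval_s": 60},
             "ftp_service":           {"enabled": True},
             "config_sessions":       {"telnet": 2, "ssh": 2, "serial_console": 1},
         }

         def disabled_features(config_set):
             # Management features SHOULD stay enabled throughout the
             # Stress test; list any feature explicitly marked disabled.
             return [name for name, cfg in config_set.items()
                     if isinstance(cfg, dict) and cfg.get("enabled") is False]

         assert disabled_features(management_plane_configuration_set) == []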
733 Stress Benchmarking 735 Measurement units: 736 N/A 738 Issues: 739 None 741 See Also: 742 Control Plane Configuration Set 743 Data Plane Configuration Set 744 Security Plane Configuration Set 746 ____________ ____________ 747 | | | Logging/ | 748 | SNMP | __| Debug | 749 | | | | | 750 ------------ | ------------ 751 | | 752 | | 753 \/ | 754 ___________ | 755 | | | 756 | DUT |<---| 757 | | 758 ----------- 759 | 760 | 761 \/ 762 ___________ 763 | Packet | 764 | Statistics| 765 | Collector | 766 | | 767 ----------- 769 Figure 3. Management Plane Configuration Set 771 3.3.6 Security Plane 772 3.3.6.1 Security Plane Configuration Set 774 Definition: 775 Security features and scaling enabled for the Accelerated Stress 776 Test. 778 Discussion: 779 The Security Plane Configuration Set includes the configuration 780 and scaling of ACLs, Firewall, IPsec, and User Access, as shown 781 in Figure 4. Tunnels SHOULD be established and policies 782 configured. Instability is introduced by flapping tunnels and 783 configuring and removing policies. 785 Stress Benchmarking 787 ____________ ____________ ____________ 788 | | | Secure | | User | 789 |ACL/Firewall| | Protocol | __| Access | 790 | | | | | | | 791 ------------ ------------ | ------------ 792 | | | 793 | | | 794 | \/ | 795 | ___________ | 796 | | | | 797 |------->| DUT |<--------| 798 | | 799 ----------- 800 Figure 4. Security Configuration Module 802 Measurement units: 803 N/A 805 Issues: 806 None 808 See Also: 809 ACL Configuration Set 810 Secure Protocol Configuration Set 811 Password Login Configuration Set 813 3.3.6.2 Security Plane Startup Conditions 814 Definition: 815 Security Plane conditions that occur at the start 816 of the Accelerated Stress Benchmarking to establish conditions 817 for the remainder of the test. 819 Discussion: 820 Startup Conditions may cause stress on the DUT and produce 821 failure. Startup Conditions for the Security Plane include 822 session establishment rate, number of sessions established 823 and number of policies learned, and number of user access 824 sessions opened. 826 Measurement units: 827 N/A 829 Issues: 830 None 832 See Also: 833 Startup Conditions 834 Data Plane Startup Conditions 835 Management Plane Startup Conditions 836 Security Plane Startup Conditions 837 Stress Benchmarking 839 3.4 Instability 841 3.4.1 Instability Phase 843 Definition: 844 The step of the benchmarking test in which the Instability 845 Conditions are offered to the DUT. 847 Discussion: 848 The Instability Phase is the middle phase of the benchmarking 849 test following the Startup Phase and preceding the Recovery 850 Phase. The Startup Phase MUST complete without failure to 851 begin the Instability Phase. 853 Measurement Units: 854 None 856 Issues: 857 None 859 See Also: 860 Instability Conditions 861 Startup Phase 862 Recovery Phase 864 3.4.2 Benchmarks 865 3.4.2.1 Unstable Aggregate Forwarding Rate 867 Definition: 868 Rate of traffic forwarded by the DUT during the 869 Instability Phase. 871 Discussion: 872 Unstable Aggregated Forwarding Rate is an instantaneous 873 measurement of the Aggregate Forwarding Rate during the 874 Instability Phase. 876 Measurement units: 877 pps 879 Issues: 880 None 882 See Also: 883 Instability Conditions 884 Aggregate Forwarding Rate 885 Stress Benchmarking 887 3.4.2.2 Aggregate Forwarding Rate Degradation 889 Definition: 890 The reduction in Aggregate Forwarding Rate during the 891 Instability Phase. 
893 Discussion: 894 The Aggregate Forwarding Rate Degradation is calculated 895 for each measurement of the Unstable Aggregate Forwarding 896 Rate. The Aggregate Forwarding Rate Degradation is 897 calculated by subtracting each measurement of the Unstable 898 Aggregate Forwarding Rate from the Stable Aggregate 899 Forwarding Rate, such that 901 Aggregate Forwarding Rate Degradation= 902 Stable Aggregate Forwarding Rate - 903 Unstable Aggregate Forwarding Rate 905 Ideally, the Aggregate Forwarding Rate Degradation is zero. 907 Measurement Units: 908 pps 910 Issues: 911 None 913 See Also: 914 Instability Phase 915 Unstable Aggregate Forwarding Rate 917 3.4.2.3 Average Aggregate Forwarding Rate Degradation 919 Definition 920 DUT Benchmark that is the calculated average of the 921 obtained Degraded Forwarding Rates. 923 Discussion: 924 Average Aggregate Forwarding Rate Degradation= 925 (Sum (Stable Aggregate Forwarding Rate) - 926 Sum (Unstable Aggregate Forwarding Rate)) / Number of Samples 928 Measurement Units: 929 pps 931 Issues: 932 None 934 See Also: 935 Aggregate Forwarding Rate Degradation 936 Stress Benchmarking 938 3.4.2.4 Unstable Latency 940 Definition: 941 The average increase in measured packet latency during 942 the Instability Phase compared to the Startup Phase. 944 Discussion: 945 Latency SHOULD be measured at a fixed interval during the 946 Instability Phase. Unstable Latency is the difference 947 between Stable Latency and the average Latency measured 948 during the Instability Phase. It is expected that there 949 be an increase in average latency from the Startup Phase 950 to the Instability phase, but it is possible that the 951 difference be zero. The Unstable Latency cannot be a 952 negative number. 954 Measurement units: 955 seconds 957 Issues: 958 None 960 See Also: 961 Instability Phase 962 Stable Latency 964 3.4.2.5 Unstable Uncontrolled Sessions Lost 966 Definition: 967 Control Plane sessions that are in the down state 968 but were not intentionally brought down during the 969 Instability Phase. 971 Discussion: 972 The test equipment is able to control protocol 973 session state with the DUT. The test equipment 974 is also to monitor for sessions lost with the 975 DUT which the test equipment itself did not 976 intentionally bring down. 978 Measurement units: 979 sessions 981 Issues: 982 None 984 See Also: 985 Discontinued Sessions 986 Uncontrolled Session Loss 987 Stress Benchmarking 989 3.4.3 Instability Conditions 991 3.4.3.1 Control Plane Instability Conditions 993 Definition: 994 Control Plane conditions that occur during the Accelerated 995 Stress Benchmark to produce instability and stress the DUT. 997 Discussion: 998 Control Plane Instability Conditions are experienced by the DUT 999 after the Startup Conditions have completed. Control Plane 1000 Instability Conditions experienced by the DUT include session 1001 loss, route withdrawal, and route cost changes. 1003 Measurement units: 1004 N/A 1006 Issues: 1007 None 1009 See Also: 1010 Instability Conditions 1011 Data Plane Instability Conditions 1012 Management Plane Instability Conditions 1013 Security Plane Instability Conditions 1015 3.4.3.2 Data Plane Instability Conditions 1016 Definition: 1017 Data Plane conditions that occur during the Accelerated Stress 1018 Benchmark to produce instability and stress the DUT. 1020 Discussion: 1021 Data Plane Instability Conditions are experienced by the DUT 1022 after the Startup Conditions have completed. 
Data Plane 1023 Instability Conditions experienced by the DUT include interface 1024 shutdown, link loss, and overloaded links. 1026 Measurement units: 1027 N/A 1029 Issues: 1030 None 1032 See Also: 1033 Instability Conditions 1034 Control Plane Instability Conditions 1035 Management Plane Instability Conditions 1036 Security Plane Instability Conditions 1037 Stress Benchmarking 1039 3.4.3.3 Management Plane Instability Conditions 1041 Definition: 1042 Management Plane conditions that occur during the Accelerated 1043 Stress Benchmark to produce instability and stress the DUT. 1045 Discussion: 1046 Management Plane Instability Conditions are experienced by the 1047 DUT after the Startup Conditions have completed. Management 1048 Plane Instability Conditions experienced by the DUT include 1049 repeated FTP of large files. 1051 Measurement units: 1052 N/A 1054 Issues: 1055 None 1057 See Also: 1058 Instability Conditions 1059 Control Plane Instability Conditions 1060 Data Plane Instability Conditions 1061 Security Plane Instability Conditions 1063 3.4.3.4 Security Plane Instability Conditions 1065 Definition: 1066 Security Plane conditions that occur during the Accelerated 1067 Stress Benchmark to produce instability and stress the DUT. 1069 Discussion: 1070 Security Plane Instability Conditions are experienced by the DUT 1071 after the Startup Conditions have completed. Security Plane 1072 Instability Conditions experienced by the DUT include session 1073 loss and uninitiated policy changes. 1075 Measurement units: 1076 N/A 1078 Issues: 1079 None 1081 See Also: 1082 Instability Conditions 1083 Control Plane Instability Conditions 1084 Data Plane Instability Conditions 1085 Management Plane Instability Conditions 1086 Stress Benchmarking 1088 3.5 Recovery 1089 3.5.1 Recovery Phase 1091 Definition: 1092 The step of the benchmarking test in which the 1093 Startup Conditions are generated with the DUT, but 1094 the Instability Conditions are no longer offered to 1095 the DUT. 1097 Discussion: 1098 The Recovery Phase is the final Phase of the 1099 benchmarking test following the Startup Phase and 1100 Instability Phase. Startup Conditions MUST NOT be 1101 Restarted. 1103 Measurement Units: None 1105 Issues: None 1107 See Also: 1108 Startup Conditions 1109 Startup Phase 1110 Instability Conditions 1111 Instability Phase 1113 3.5.2 Benchmarks 1114 3.5.2.1 Recovered Aggregate Forwarding Rate 1115 Definition 1116 Rate of traffic forwarded by the DUT during the Recovery 1117 Phase. 1119 Discussion: 1120 Recovered Aggregate Forwarding Rate is an instantaneous 1121 measurement of the Aggregate Forwarding Rate during the 1122 Recovery Phase. Ideally, each measurement of the Recovered 1123 Aggregate Forwarding Rate equals the Stable Aggregate 1124 Forwarding Rate because the Instability Conditions 1125 do not exist in both the Startup and Recovery Phases. 1127 Measurement Units: 1128 pps 1130 Issues: None 1132 See Also: 1133 Aggregate Forwarding Rate 1134 Recovery Phase 1135 Recovered Aggregate Forwarding Rate 1136 Startup Phase 1137 Stable Aggregate Forwarding Rate 1138 Stress Benchmarking 1140 3.5.2.2 Recovered Latency 1142 Definition: 1143 The average increase in measured packet latency during 1144 the Recovery Phase compared to the Startup Phase. 1146 Discussion: 1147 Latency SHOULD be measured at a fixed interval during the 1148 Recovery Phase. Unstable Latency is the difference 1149 between Stable Latency and the average Latency measured 1150 during the Recovery Phase. 
It is expected that there 1151 be no increase in average latency from the Startup Phase 1152 to the Recovery Phase. The Recovered Latency cannot be a 1153 negative number. 1155 Measurement units: 1156 seconds 1158 Issues: None 1160 See Also: 1161 Recovery Phase 1162 Stable Latency 1164 3.5.2.3 Recovery Time 1166 Definition 1167 The amount of time for the Recovered Aggregate Forwarding 1168 Rate to become equal to the Stable Aggregate Forwarding Rate. 1170 Discussion 1171 Recovery Time is measured beginning at the instant the 1172 Instability Phase ends until the Recovered Aggregate 1173 Forwarding Rate equals the Stable Aggregate Forwarding 1174 Rate for a minimum duration of 180 consecutive seconds. 1176 Measurement Units: 1177 milliseconds 1179 Issues: 1180 None 1182 See Also: 1183 Recovered Aggregate Forwarding Rate 1184 Stable Aggregate Forwarding Rate 1185 Stress Benchmarking 1187 3.5.2.4 Recovered Uncontrolled Control Plane Sessions Lost 1189 Definition: 1190 Control Plane sessions that are in the down state 1191 but were not intentionally brought down during the 1192 Recovery Phase. 1194 Discussion: 1195 The test equipment is able to control protocol 1196 session state with the DUT. The test equipment 1197 is also to monitor for sessions lost with the 1198 DUT which the test equipment itself did not 1199 intentionally bring down. 1201 Measurement units: 1202 sessions 1204 Issues: 1205 None 1207 See Also: 1208 Discontinued Sessions 1209 Uncontrolled Session Loss 1211 3.5.2.5 Variability Benchmarks 1213 Definition: 1214 The difference between the measured Benchmarks of the 1215 same DUT over multiple iterations. 1217 Discussion: 1218 Ideally, the measured benchmarks should be the same for multiple 1219 iterations with the same DUT. Configuration Sets and 1220 Instability Conditions MUST be held constant for this 1221 benchmark. Whether the DUT can exhibit such predictable and 1222 repeatable behavior is an important benchmark in itself. 1224 Measurement units: 1225 As applicable to each Benchmark. The results are to be 1226 presented in a table format for successive Iterations. 1227 Ideally, the differences should be zero. 1229 Issues: 1230 None 1232 See Also: 1233 Startup Period 1234 Instability Period 1235 Recovery Period 1236 Stress Benchmarking 1238 4. IANA Considerations 1239 This document requires no IANA considerations. 1241 5. Security Considerations 1242 Documents of this type do not directly affect the security of 1243 the Internet or of corporate networks as long as benchmarking 1244 is not performed on devices or systems connected to operating 1245 networks. 1247 6. Acknowledgements 1248 The authors would like to thank the BMWG and particularly 1249 Al Morton, Jay Karthik, and George Jones for their contributions. 1251 7. References 1253 7.1 Normative References 1254 [1] Bradner, S., Editor, "Benchmarking Terminology for Network 1255 Interconnection Devices", RFC 1242, March 1991. 1257 [2] Mandeville, R., "Benchmarking Terminology for LAN Switching 1258 Devices", RFC 2285, June 1998. 1260 [3] Bradner, S. and McQuaid, J., "Benchmarking Methodology for 1261 Network Interconnect Devices", RFC 2544, March 1999. 1263 [4] Poretsky, S. and Rao, S., "Methodology Guidelines for 1264 Accelerated Stress Benchmarking", 1265 draft-ietf-bmwg-acc-bench-meth-09, work in progress, 1266 February 2008. 1268 [5] Bradner, S., "Key words for use in RFCs to Indicate 1269 Requirement Levels", RFC 2119, March 1997. 
1271 7.2 Informative References 1272 [RFC3871] Jones, G., "Operational Security Requirements for Large 1273 Internet Service Provider (ISP) IP Network Infrastructure.", 1274 IETF RFC 3871 , September 2004. 1276 [NANOG25] Poretsky, S., "Core Router Evaluation for Higher 1277 Availability", NANOG 25, June 8, 2002, Toronto, CA. 1279 [IEEECQR] Poretsky, S., "Router Stress Testing to Validate 1280 Readiness for Network Deployment", IEEE CQR 2003. 1282 Stress Benchmarking 1284 8. Author's Address 1286 Scott Poretsky 1287 NextPoint Networks 1288 3 Federal Street 1289 Billerica, MA 01821 1290 USA 1291 Phone: + 1 508 439 9008 1292 EMail: sporetsky@nextpointnetworks.com 1294 Shankar Rao 1295 1801 California Street 1296 8th Floor 1297 Qwest Communications 1298 Denver, CO 80202 USA 1299 Phone: + 1 303 437 6643 1300 Email: shankar.rao@qwest.com 1302 Appendix 1. White Box Benchmarking Terminology 1303 Minimum Available Memory 1304 Definition: 1305 Minimum DUT Available Memory during the duration of the 1306 Accelerated Stress Benchmark. 1308 Discussion: 1309 This benchmark enables the assessment of resources in the DUT. 1310 It is necessary to monitor DUT memory to measure this benchmark. 1312 Measurement units: 1313 bytes 1315 Issues: None 1317 See Also: 1318 Maximum CPU Utilization 1320 Maximum CPU Utilization 1321 Definition: 1322 Maximum DUT CPU utilization during the duration of the 1323 Accelerated Stress Benchmark. 1325 Discussion: 1326 This benchmark enables the assessment of resources in the DUT. 1327 It is necessary to monitor DUT CPU Utilization to measure 1328 this benchmark. 1330 Measurement units: % 1332 Issues: None 1334 See Also: 1335 Minimum Available Memory 1336 Stress Benchmarking 1338 Full Copyright Statement 1340 Copyright (C) The IETF Trust (2008). 1342 This document is subject to the rights, licenses and restrictions 1343 contained in BCP 78, and except as set forth therein, the authors 1344 retain all their rights. 1346 This document and the information contained herein are provided 1347 on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE 1348 REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE 1349 IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL 1350 WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY 1351 WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE 1352 ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS 1353 FOR A PARTICULAR PURPOSE. 1355 Intellectual Property 1357 The IETF takes no position regarding the validity or scope of any 1358 Intellectual Property Rights or other rights that might be claimed to 1359 pertain to the implementation or use of the technology described in 1360 this document or the extent to which any license under such rights 1361 might or might not be available; nor does it represent that it has 1362 made any independent effort to identify any such rights. Information 1363 on the procedures with respect to rights in RFC documents can be 1364 found in BCP 78 and BCP 79. 1366 Copies of IPR disclosures made to the IETF Secretariat and any 1367 assurances of licenses to be made available, or the result of an 1368 attempt made to obtain a general license or permission for the use of 1369 such proprietary rights by implementers or users of this 1370 specification can be obtained from the IETF on-line IPR repository at 1371 http://www.ietf.org/ipr. 
1373 The IETF invites any interested party to bring to its attention any 1374 copyrights, patents or patent applications, or other proprietary 1375 rights that may cover technology that may be required to implement 1376 this standard. Please address the information to the IETF at ietf- 1377 ipr@ietf.org. 1379 Acknowledgement 1381 Funding for the RFC Editor function is currently provided by the 1382 Internet Society.