2 Benchmarking Methodology Working Group B. Balarajah 3 Internet-Draft 4 Obsoletes: 3511 (if approved) C. Rossenhoevel 5 Intended status: Informational EANTC AG 6 Expires: 20 May 2022 B. Monkman 7 NetSecOPEN 8 November 2021 10 Benchmarking Methodology for Network Security Device Performance 11 draft-ietf-bmwg-ngfw-performance-12 13 Abstract 15 This document provides benchmarking terminology and methodology for 16 next-generation network security devices including next-generation 17 firewalls (NGFW), next-generation intrusion prevention systems 18 (NGIPS), and unified threat management (UTM) implementations. The 19 main areas covered in this document are test terminology, test 20 configuration parameters, and benchmarking methodology for NGFW and 21 NGIPS. This document aims to improve the applicability, 22 reproducibility, and transparency of benchmarks and to align the test 23 methodology with today's increasingly complex layer 7 security- 24 centric network application use cases. As a result, this document 25 obsoletes RFC 3511. 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts.
The list of current Internet- 35 Drafts is at https://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on 5 May 2022. 44 Copyright Notice 46 Copyright (c) 2021 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 51 license-info) in effect on the date of publication of this document. 52 Please review these documents carefully, as they describe your rights 53 and restrictions with respect to this document. Code Components 54 extracted from this document must include Simplified BSD License text 55 as described in Section 4.e of the Trust Legal Provisions and are 56 provided without warranty as described in the Simplified BSD License. 58 Table of Contents 60 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 61 2. Requirements . . . . . . . . . . . . . . . . . . . . . . . . 4 62 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 63 4. Test Setup . . . . . . . . . . . . . . . . . . . . . . . . . 4 64 4.1. Testbed Configuration . . . . . . . . . . . . . . . . . . 5 65 4.2. DUT/SUT Configuration . . . . . . . . . . . . . . . . . . 6 66 4.2.1. Security Effectiveness Configuration . . . . . . . . 12 67 4.3. Test Equipment Configuration . . . . . . . . . . . . . . 12 68 4.3.1. Client Configuration . . . . . . . . . . . . . . . . 12 69 4.3.2. Backend Server Configuration . . . . . . . . . . . . 15 70 4.3.3. Traffic Flow Definition . . . . . . . . . . . . . . . 17 71 4.3.4. Traffic Load Profile . . . . . . . . . . . . . . . . 17 72 5. Testbed Considerations . . . . . . . . . . . . . . . . . 
. . 18 73 6. Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . 19 74 6.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 19 75 6.2. Detailed Test Results . . . . . . . . . . . . . . . . . . 21 76 6.3. Benchmarks and Key Performance Indicators . . . . . . . . 21 77 7. Benchmarking Tests . . . . . . . . . . . . . . . . . . . . . 23 78 7.1. Throughput Performance with Application Traffic Mix . . . 23 79 7.1.1. Objective . . . . . . . . . . . . . . . . . . . . . . 23 80 7.1.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 23 81 7.1.3. Test Parameters . . . . . . . . . . . . . . . . . . . 23 82 7.1.4. Test Procedures and Expected Results . . . . . . . . 25 83 7.2. TCP/HTTP Connections Per Second . . . . . . . . . . . . . 26 84 7.2.1. Objective . . . . . . . . . . . . . . . . . . . . . . 26 85 7.2.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 27 86 7.2.3. Test Parameters . . . . . . . . . . . . . . . . . . . 27 87 7.2.4. Test Procedures and Expected Results . . . . . . . . 28 88 7.3. HTTP Throughput . . . . . . . . . . . . . . . . . . . . . 30 89 7.3.1. Objective . . . . . . . . . . . . . . . . . . . . . . 30 90 7.3.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 30 91 7.3.3. Test Parameters . . . . . . . . . . . . . . . . . . . 30 92 7.3.4. Test Procedures and Expected Results . . . . . . . . 32 93 7.4. HTTP Transaction Latency . . . . . . . . . . . . . . . . 33 94 7.4.1. Objective . . . . . . . . . . . . . . . . . . . . . . 33 95 7.4.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 33 96 7.4.3. Test Parameters . . . . . . . . . . . . . . . . . . . 34 97 7.4.4. Test Procedures and Expected Results . . . . . . . . 35 98 7.5. Concurrent TCP/HTTP Connection Capacity . . . . . . . . . 36 99 7.5.1. Objective . . . . . . . . . . . . . . . . . . . . . . 36 100 7.5.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 36 101 7.5.3. Test Parameters . . . . . . . . . . . . . . . . . . . 37 102 7.5.4. 
Test Procedures and Expected Results . . . . . . . . 38 103 7.6. TCP/HTTPS Connections per Second . . . . . . . . . . . . 39 104 7.6.1. Objective . . . . . . . . . . . . . . . . . . . . . . 40 105 7.6.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 40 106 7.6.3. Test Parameters . . . . . . . . . . . . . . . . . . . 40 107 7.6.4. Test Procedures and Expected Results . . . . . . . . 42 108 7.7. HTTPS Throughput . . . . . . . . . . . . . . . . . . . . 43 109 7.7.1. Objective . . . . . . . . . . . . . . . . . . . . . . 43 110 7.7.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 43 111 7.7.3. Test Parameters . . . . . . . . . . . . . . . . . . . 43 112 7.7.4. Test Procedures and Expected Results . . . . . . . . 45 113 7.8. HTTPS Transaction Latency . . . . . . . . . . . . . . . . 46 114 7.8.1. Objective . . . . . . . . . . . . . . . . . . . . . . 46 115 7.8.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 46 116 7.8.3. Test Parameters . . . . . . . . . . . . . . . . . . . 46 117 7.8.4. Test Procedures and Expected Results . . . . . . . . 48 118 7.9. Concurrent TCP/HTTPS Connection Capacity . . . . . . . . 49 119 7.9.1. Objective . . . . . . . . . . . . . . . . . . . . . . 49 120 7.9.2. Test Setup . . . . . . . . . . . . . . . . . . . . . 49 121 7.9.3. Test Parameters . . . . . . . . . . . . . . . . . . . 49 122 7.9.4. Test Procedures and Expected Results . . . . . . . . 51 123 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 52 124 9. Security Considerations . . . . . . . . . . . . . . . . . . . 53 125 10. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 53 126 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 53 127 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 53 128 12.1. Normative References . . . . . . . . . . . . . . . . . . 53 129 12.2. Informative References . . . . . . . . . . . . . . . . . 53 130 Appendix A. Test Methodology - Security Effectiveness 131 Evaluation . . . . . . . 
. . . . . . . . . . . . . . . . 54 132 A.1. Test Objective . . . . . . . . . . . . . . . . . . . . . 55 133 A.2. Testbed Setup . . . . . . . . . . . . . . . . . . . . . . 55 134 A.3. Test Parameters . . . . . . . . . . . . . . . . . . . . . 55 135 A.3.1. DUT/SUT Configuration Parameters . . . . . . . . . . 55 136 A.3.2. Test Equipment Configuration Parameters . . . . . . . 55 137 A.4. Test Results Validation Criteria . . . . . . . . . . . . 56 138 A.5. Measurement . . . . . . . . . . . . . . . . . . . . . . . 56 139 A.6. Test Procedures and Expected Results . . . . . . . . . . 57 140 A.6.1. Step 1: Background Traffic . . . . . . . . . . . . . 57 141 A.6.2. Step 2: CVE Emulation . . . . . . . . . . . . . . . . 58 142 Appendix B. DUT/SUT Classification . . . . . . . . . . . . . . . 58 143 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 58 145 1. Introduction 147 Eighteen years have passed since the IETF initially recommended test 148 methodology and terminology for firewalls ([RFC3511]). The requirements 149 for network security element performance and effectiveness have 150 increased tremendously since then. Security function implementations 151 have evolved to more advanced areas and have diversified into 152 intrusion detection and prevention, threat management, analysis of 153 encrypted traffic, etc. In an industry of growing importance, well-defined 154 and reproducible key performance indicators (KPIs) are 155 increasingly needed, as they enable fair and reasonable comparison of 156 network security functions. All these reasons have led to the 157 creation of a new next-generation network security device 158 benchmarking document, which obsoletes [RFC3511]. 160 2.
Requirements 162 The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 163 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 164 "OPTIONAL" in this document are to be interpreted as described in BCP 165 14 [RFC2119], [RFC8174] when, and only when, they appear in all 166 capitals, as shown here. 168 3. Scope 170 This document provides testing terminology and testing methodology 171 for modern and next-generation network security devices that are 172 configured in Active ("Inline", see Figure 1 and Figure 2) mode. It 173 covers the validation of security effectiveness configurations of 174 network security devices, followed by performance benchmark testing. 175 This document focuses on advanced, realistic, and reproducible 176 testing methods. Additionally, it describes testbed environments, 177 test tool requirements, and test result formats. 179 4. Test Setup 181 Test setup defined in this document applies to all benchmarking tests 182 described in Section 7. The test setup MUST be contained within an 183 Isolated Test Environment (see Section 3 of [RFC6815]). 185 4.1. Testbed Configuration 187 Testbed configuration MUST ensure that any performance implications 188 that are discovered during the benchmark testing aren't due to the 189 inherent physical network limitations such as the number of physical 190 links and forwarding performance capabilities (throughput and 191 latency) of the network devices in the testbed. For this reason, 192 this document recommends avoiding external devices such as switches 193 and routers in the testbed wherever possible. 195 In some deployment scenarios, the network security devices (Device 196 Under Test/System Under Test) are connected to routers and switches, 197 which will reduce the number of entries in MAC or ARP tables of the 198 Device Under Test/System Under Test (DUT/SUT). 
If MAC or ARP tables 199 have many entries, this may impact the actual DUT/SUT performance due 200 to MAC and ARP/ND (Neighbor Discovery) table lookup processes. This 201 document also recommends using test equipment with the capability of 202 emulating layer 3 routing functionality instead of adding external 203 routers in the testbed. 205 The testbed setup Option 1 (Figure 1) is the RECOMMENDED testbed 206 setup for the benchmarking test. 208 +-----------------------+ +-----------------------+ 209 | +-------------------+ | +-----------+ | +-------------------+ | 210 | | Emulated Router(s)| | | | | | Emulated Router(s)| | 211 | | (Optional) | +----- DUT/SUT +-----+ (Optional) | | 212 | +-------------------+ | | | | +-------------------+ | 213 | +-------------------+ | +-----------+ | +-------------------+ | 214 | | Clients | | | | Servers | | 215 | +-------------------+ | | +-------------------+ | 216 | | | | 217 | Test Equipment | | Test Equipment | 218 +-----------------------+ +-----------------------+ 220 Figure 1: Testbed Setup - Option 1 222 If the test equipment used is not capable of emulating layer 3 223 routing functionality or if the number of used ports is mismatched 224 between test equipment and the DUT/SUT (need for test equipment port 225 aggregation), the test setup can be configured as shown in Figure 2. 
227 +-------------------+ +-----------+ +--------------------+ 228 |Aggregation Switch/| | | | Aggregation Switch/| 229 | Router +------+ DUT/SUT +------+ Router | 230 | | | | | | 231 +----------+--------+ +-----------+ +--------+-----------+ 232 | | 233 | | 234 +-----------+-----------+ +-----------+-----------+ 235 | | | | 236 | +-------------------+ | | +-------------------+ | 237 | | Emulated Router(s)| | | | Emulated Router(s)| | 238 | | (Optional) | | | | (Optional) | | 239 | +-------------------+ | | +-------------------+ | 240 | +-------------------+ | | +-------------------+ | 241 | | Clients | | | | Servers | | 242 | +-------------------+ | | +-------------------+ | 243 | | | | 244 | Test Equipment | | Test Equipment | 245 +-----------------------+ +-----------------------+ 247 Figure 2: Testbed Setup - Option 2 249 4.2. DUT/SUT Configuration 251 A unique DUT/SUT configuration MUST be used for all benchmarking 252 tests described in Section 7. Since each DUT/SUT will have its own 253 unique configuration, users SHOULD configure their device with the 254 same parameters and security features that would be used in the 255 actual deployment of the device or a typical deployment in order to 256 achieve maximum network security coverage. The DUT/SUT MUST be 257 configured in "Inline" mode so that the traffic is actively inspected 258 by the DUT/SUT. Also "Fail-Open" behavior MUST be disabled on the 259 DUT/SUT. 261 Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL sets 262 of network security feature list for NGFW and NGIPS respectively. 263 The selected security features SHOULD be consistently enabled on the 264 DUT/SUT for all benchmarking tests described in Section 7. 266 To improve repeatability, a summary of the DUT/SUT configuration 267 including a description of all enabled DUT/SUT features MUST be 268 published with the benchmarking results. 
270 +============================+=============+==========+ 271 | DUT/SUT (NGFW) Features | RECOMMENDED | OPTIONAL | 272 +============================+=============+==========+ 273 | SSL Inspection | x | | 274 +----------------------------+-------------+----------+ 275 | IDS/IPS | x | | 276 +----------------------------+-------------+----------+ 277 | Anti-Spyware | x | | 278 +----------------------------+-------------+----------+ 279 | Anti-Virus | x | | 280 +----------------------------+-------------+----------+ 281 | Anti-Botnet | x | | 282 +----------------------------+-------------+----------+ 283 | Web Filtering | | x | 284 +----------------------------+-------------+----------+ 285 | Data Loss Protection (DLP) | | x | 286 +----------------------------+-------------+----------+ 287 | DDoS | | x | 288 +----------------------------+-------------+----------+ 289 | Certificate Validation | | x | 290 +----------------------------+-------------+----------+ 291 | Logging and Reporting | x | | 292 +----------------------------+-------------+----------+ 293 | Application Identification | x | | 294 +----------------------------+-------------+----------+ 296 Table 1: NGFW Security Features 298 +============================+=============+==========+ 299 | DUT/SUT (NGIPS) Features | RECOMMENDED | OPTIONAL | 300 +============================+=============+==========+ 301 | SSL Inspection | x | | 302 +----------------------------+-------------+----------+ 303 | Anti-Malware | x | | 304 +----------------------------+-------------+----------+ 305 | Anti-Spyware | x | | 306 +----------------------------+-------------+----------+ 307 | Anti-Botnet | x | | 308 +----------------------------+-------------+----------+ 309 | Logging and Reporting | x | | 310 +----------------------------+-------------+----------+ 311 | Application Identification | x | | 312 +----------------------------+-------------+----------+ 313 | Deep Packet Inspection | x | | 314 
+----------------------------+-------------+----------+ 315 | Anti-Evasion | x | | 316 +----------------------------+-------------+----------+ 318 Table 2: NGIPS Security Features 320 The following table provides a brief description of the security 321 features. 323 +================+================================================+ 324 | DUT/SUT | Description | 325 | Features | | 326 +================+================================================+ 327 | SSL Inspection | DUT/SUT intercepts and decrypts inbound HTTPS | 328 | | traffic between servers and clients. Once the | 329 | | content inspection has been completed, DUT/SUT | 330 | | encrypts the HTTPS traffic with ciphers and | 331 | | keys used by the clients and servers. | 332 +----------------+------------------------------------------------+ 333 | IDS/IPS | DUT/SUT detects and blocks exploits targeting | 334 | | known and unknown vulnerabilities across the | 335 | | monitored network. | 336 +----------------+------------------------------------------------+ 337 | Anti-Malware | DUT/SUT detects and prevents the transmission | 338 | | of malicious executable code and any | 339 | | associated communications across the monitored | 340 | | network. This includes data exfiltration as | 341 | | well as command and control channels. | 342 +----------------+------------------------------------------------+ 343 | Anti-Spyware | Anti-Spyware is a subcategory of Anti-Malware. | 344 | | Spyware transmits information without the | 345 | | user's knowledge or permission. DUT/SUT | 346 | | detects and blocks the initial infection or | 347 | | the transmission of data. | 348 +----------------+------------------------------------------------+ 349 | Anti-Botnet | DUT/SUT detects traffic to or from botnets. | 350 +----------------+------------------------------------------------+ 351 | Anti-Evasion | DUT/SUT detects and mitigates attacks that | 352 | | have been obfuscated in some manner.
| 353 +----------------+------------------------------------------------+ 354 | Web Filtering | DUT/SUT detects and blocks malicious websites, | 355 | | including defined classifications of websites, | 356 | | across the monitored network. | 357 +----------------+------------------------------------------------+ 358 | DLP | DUT/SUT detects and prevents data breaches and | 359 | | data exfiltration, or it detects and blocks | 360 | | the transmission of sensitive data across the | 361 | | monitored network. | 362 +----------------+------------------------------------------------+ 363 | Certificate | DUT/SUT validates certificates used in | 364 | Validation | encrypted communications across the monitored | 365 | | network. | 366 +----------------+------------------------------------------------+ 367 | Logging and | DUT/SUT logs and reports all traffic at the | 368 | Reporting | flow level across the monitored network. | 369 +----------------+------------------------------------------------+ 370 | Application | DUT/SUT detects known applications as defined | 371 | Identification | within the traffic mix selected across the | 372 | | monitored network. | 373 +----------------+------------------------------------------------+ 375 Table 3: Security Feature Description 377 Below is a summary of the DUT/SUT configuration: 379 * DUT/SUT MUST be configured in "inline" mode. 381 * "Fail-Open" behavior MUST be disabled. 383 * All RECOMMENDED security features are enabled. 385 * Logging SHOULD be enabled. DUT/SUT SHOULD log all traffic at the 386 flow level; logging to an external device is permissible. 388 * Geographical location filtering and Application Identification 389 and Control SHOULD be configured to trigger based on a site or 390 application from the defined traffic mix. 392 In addition, a realistic number of access control rules (ACL) SHOULD 393 be configured on the DUT/SUT where ACLs are configurable and 394 reasonable based on the deployment scenario.
This document 395 determines the number of access policy rules for four different 396 classes of DUT/SUT: Extra Small (XS), Small (S), Medium (M), and 397 Large (L). A sample DUT/SUT classification is described in 398 Appendix B. 400 The Access Control Rules (ACL) defined in Figure 3 MUST be configured 401 from top to bottom in the correct order as shown in the table. This 402 is because the ACL types are listed in decreasing order of specificity, 403 with "block" rules first, followed by "allow" rules, representing a 404 typical ACL-based security policy. The ACL entries SHOULD be configured 405 with IP subnets that are routable by the DUT/SUT. (Note: There will be 406 differences between how security vendors implement ACL decision making.) 407 The configured ACL MUST NOT block the security and measurement traffic 408 used for the benchmarking tests. 410 +---------------+ 411 | DUT/SUT | 412 | Classification| 413 | # Rules | 414 +-----------+-----------+--------------------+------+---+---+---+---+ 415 | | Match | | | | | | | 416 | Rules Type| Criteria | Description |Action| XS| S | M | L | 417 +-------------------------------------------------------------------+ 418 |Application|Application| Any application | block| 5 | 10| 20| 50| 419 |layer | | not included in | | | | | | 420 | | | the measurement | | | | | | 421 | | | traffic | | | | | | 422 +-------------------------------------------------------------------+ 423 |Transport |SRC IP and | Any SRC IP subnet | block| 25| 50|100|250| 424 |layer |TCP/UDP | used and any DST | | | | | | 425 | |DST ports | ports not used in | | | | | | 426 | | | the measurement | | | | | | 427 | | | traffic | | | | | | 428 +-------------------------------------------------------------------+ 429 |IP layer |SRC/DST IP | Any SRC/DST IP | block| 25| 50|100|250| 430 | | | subnet not used | | | | | | 431 | | | in the measurement | | | | | | 432 | | | traffic | | | | | | 433 +-------------------------------------------------------------------+ 434 |Application|Application|
Half of the | allow| 10| 10| 10| 10| 435 |layer | | applications | | | | | | 436 | | | included in the | | | | | | 437 | | | measurement traffic| | | | | | 438 | | |(see the note below)| | | | | | 439 +-------------------------------------------------------------------+ 440 |Transport |SRC IP and | Half of the SRC | allow| >1| >1| >1| >1| 441 |layer |TCP/UDP | IPs used and any | | | | | | 442 | |DST ports | DST ports used in | | | | | | 443 | | | the measurement | | | | | | 444 | | | traffic | | | | | | 445 | | | (one rule per | | | | | | 446 | | | subnet) | | | | | | 447 +-------------------------------------------------------------------+ 448 |IP layer |SRC IP | The rest of the | allow| >1| >1| >1| >1| 449 | | | SRC IP subnet | | | | | | 450 | | | range used in the | | | | | | 451 | | | measurement | | | | | | 452 | | | traffic | | | | | | 453 | | | (one rule per | | | | | | 454 | | | subnet) | | | | | | 455 +-----------+-----------+--------------------+------+---+---+---+---+ 457 Figure 3: DUT/SUT Access List 459 Note: If half of the applications included in the measurement traffic 460 is fewer than 10, the missing ACL entries (dummy rules) can 461 be configured for any application traffic not included in the 462 measurement traffic. 464 4.2.1. Security Effectiveness Configuration 466 The security features (defined in Table 1 and Table 2) of the DUT/SUT 467 MUST be configured effectively to detect, prevent, and report the 468 defined security vulnerability sets. This section defines the 469 selection of the security vulnerability sets from the Common 470 Vulnerabilities and Exposures (CVE) list for the testing. The 471 vulnerability set SHOULD reflect a minimum of 500 CVEs from no older 472 than 10 calendar years to the current year. These CVEs SHOULD be 473 selected with a focus on in-use software commonly found in business 474 applications, with a Common Vulnerability Scoring System (CVSS) 475 Severity of High (7-10).
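The CVE selection criteria above (a minimum age window of 10 calendar years and a CVSS severity of High, i.e. a base score of 7-10) can be expressed as a simple filter. The sketch below is illustrative only and is not part of the methodology; the CVE identifiers, years, and scores in the catalog are synthetic placeholders, not real CVE data.

```python
# Illustrative sketch of the Section 4.2.1 selection criteria.
# The CVE IDs, years, and CVSS scores below are synthetic placeholders.
cve_catalog = [
    ("CVE-2013-1111", 2013, 9.8),
    ("CVE-2008-2222", 2008, 10.0),  # older than the 10-year window
    ("CVE-2019-3333", 2019, 7.5),
    ("CVE-2020-4444", 2020, 6.8),   # below the High severity band
    ("CVE-2021-5555", 2021, 8.1),
]

def select_vulnerability_set(cves, current_year, window_years=10,
                             min_score=7.0, max_score=10.0):
    """Keep CVEs no older than `window_years` calendar years whose
    CVSS base score falls in the High severity band (7-10)."""
    oldest = current_year - window_years
    return [cve_id for cve_id, year, score in cves
            if year >= oldest and min_score <= score <= max_score]

selected = select_vulnerability_set(cve_catalog, current_year=2021)
# A compliant vulnerability set SHOULD reflect a minimum of 500 such
# CVEs; this toy catalog only demonstrates the filter criteria.
```

A real test campaign would apply the same filter to a full CVE feed and then narrow the result to in-use business software, as the text above recommends.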
477 This document is primarily focused on performance benchmarking. 478 However, it is RECOMMENDED to validate the security features 479 configuration of the DUT/SUT by evaluating the security effectiveness 480 as a prerequisite for the performance benchmarking tests defined in 481 Section 7. If the benchmarking tests are performed without 482 evaluating security effectiveness, the test report MUST explain the 483 implications of this. The methodology for evaluating security 484 effectiveness is defined in Appendix A. 486 4.3. Test Equipment Configuration 488 In general, test equipment allows configuring parameters in different 489 protocol layers. These parameters thereby influence the traffic 490 flows that will be offered and impact performance measurements. 492 This section specifies common test equipment configuration parameters 493 applicable for all benchmarking tests defined in Section 7. Any 494 benchmarking test specific parameters are described under the test 495 setup section of each benchmarking test individually. 497 4.3.1. Client Configuration 499 This section specifies which parameters SHOULD be considered while 500 configuring clients using test equipment. Also, this section 501 specifies the RECOMMENDED values for certain parameters. The values 502 are the current defaults used in most client operating systems. 505 4.3.1.1. TCP Stack Attributes 507 The TCP stack SHOULD use a congestion control algorithm at client and 508 server endpoints. The IPv4 and IPv6 Maximum Segment Size (MSS) 509 SHOULD be set to 1460 bytes and 1440 bytes, respectively, and the TX and 510 RX initial receive windows SHOULD be set to 64 KByte. The client initial 511 congestion window SHOULD NOT exceed 10 times the MSS. Delayed ACKs are 512 permitted, and the maximum client delayed ACK SHOULD NOT exceed 10 513 times the MSS before a forced ACK. Up to three retries SHOULD be 514 allowed before a timeout event is declared. All traffic MUST set the 515 TCP PSH flag to high.
The source port range SHOULD be in the range 516 of 1024 - 65535. Internal timeout SHOULD be dynamically scalable per 517 RFC 793. The client SHOULD initiate and close TCP connections. The 518 TCP connection MUST be initiated via a TCP three-way handshake (SYN, 519 SYN/ACK, ACK), and it MUST be closed via either a TCP three-way close 520 (FIN, FIN/ACK, ACK), or a TCP four-way close (FIN, ACK, FIN, ACK). 522 4.3.1.2. Client IP Address Space 524 The sum of the client IP space SHOULD contain the following 525 attributes. 527 * The IP blocks SHOULD consist of multiple unique, discontinuous 528 static address blocks. 530 * A default gateway is permitted. 532 * The DSCP (differentiated services code point) marking is set to DF 533 (Default Forwarding) '000000' on IPv4 Type of Service (ToS) field 534 and IPv6 traffic class field. 536 The following equation can be used to define the total number of 537 client IP addresses that will be configured on the test equipment. 539 Desired total number of client IP = Target throughput [Mbit/s] / 540 Average throughput per IP address [Mbit/s] 542 As shown in the example list below, the value for "Average throughput 543 per IP address" can be varied depending on the deployment and use 544 case scenario. 546 (Option 1) DUT/SUT deployment scenario 1 : 6-7 Mbit/s per IP (e.g. 547 1,400-1,700 IPs per 10Gbit/s throughput) 549 (Option 2) DUT/SUT deployment scenario 2 : 0.1-0.2 Mbit/s per IP 550 (e.g. 50,000-100,000 IPs per 10Gbit/s throughput) 552 Based on deployment and use case scenario, client IP addresses SHOULD 553 be distributed between IPv4 and IPv6. The following options MAY be 554 considered for a selection of traffic mix ratio. 556 (Option 1) 100 % IPv4, no IPv6 558 (Option 2) 80 % IPv4, 20% IPv6 560 (Option 3) 50 % IPv4, 50% IPv6 562 (Option 4) 20 % IPv4, 80% IPv6 564 (Option 5) no IPv4, 100% IPv6 566 Note: The IANA has assigned IP address range for the testing purpose 567 as described in Section 8. 
If the test scenario requires more IP 568 addresses or subnets than the IANA assigned, this document recommends 569 using non-routable Private IPv4 address ranges or Unique Local 570 Address (ULA) IPv6 address ranges for the testing. 572 4.3.1.3. Emulated Web Browser Attributes 574 The client emulated web browser (emulated browser) contains 575 attributes that will materially affect how traffic is loaded. The 576 objective is to emulate modern, typical browser attributes to improve 577 the realism of the result set. 579 For HTTP traffic emulation, the emulated browser MUST negotiate HTTP 580 version 1.1 or higher. Depending on test scenarios and the chosen HTTP 581 version, the emulated browser MAY open multiple TCP connections per 582 server endpoint IP at any time, depending on how many sequential 583 transactions need to be processed. For HTTP/2 or HTTP/3, the 584 emulated browser MAY open multiple concurrent streams per connection 585 (multiplexing). If HTTP/3 is used, the emulated browser MUST open 586 QUIC (Quick UDP Internet Connections) connections. HTTP settings such as the 587 number of connections per server IP, the number of requests per connection, and 588 the number of streams per connection MUST be documented. This document 589 refers to [RFC7540] for HTTP/2 and [RFC9000] for QUIC. The emulated 590 browser SHOULD advertise a User-Agent header. The emulated browser 591 SHOULD enforce content length validation. Depending on test 592 scenarios and the selected HTTP version, HTTP header compression MAY be 593 enabled or disabled. This setting (compression enabled or 594 disabled) MUST be documented in the report. 596 For encrypted traffic, the following attributes SHALL define the 597 negotiated encryption parameters. The test clients MUST use TLS 598 version 1.2 or higher. The TLS record size MAY be optimized for the 599 HTTPS response object size, up to a record size of 16 KByte.
If Server Name Indication (SNI) is required in the traffic mix profile, the client endpoint MUST send the TLS extension Server Name Indication (SNI) when opening a security tunnel.  Each client connection MUST perform a full handshake with the server certificate and MUST NOT use session ID reuse or session resumption.

The following TLS 1.2 cipher suites and keys are RECOMMENDED for the HTTPS-based benchmarking tests defined in Section 7.

1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)

2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash Algorithm: ecdsa_secp384r1_sha384 and Supported group: secp521r1)

4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

Note: The above ciphers and keys are cipher suites commonly used in enterprise-grade TLS 1.2 deployments.  It is recognized that these will evolve over time.  Individual certification bodies SHOULD use ciphers and keys that reflect evolving use cases.  These choices MUST be documented in the resulting test reports with detailed information on the ciphers and keys used, along with reasons for the choices.

[RFC8446] defines the following cipher suites for use with TLS 1.3.

1.  TLS_AES_128_GCM_SHA256

2.  TLS_AES_256_GCM_SHA384

3.  TLS_CHACHA20_POLY1305_SHA256

4.  TLS_AES_128_CCM_SHA256

5.  TLS_AES_128_CCM_8_SHA256

4.3.2.  Backend Server Configuration

This section specifies which parameters should be considered while configuring emulated backend servers using test equipment.

4.3.2.1.
TCP Stack Attributes

The TCP stack on the server side SHOULD be configured similarly to the client side configuration described in Section 4.3.1.1.  In addition, the server initial congestion window MUST NOT exceed 10 times the MSS.  Delayed ACKs are permitted, and the maximum server delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

The sum of the server IP space SHOULD contain the following attributes.

*  The server IP blocks SHOULD consist of unique, discontinuous static address blocks with one IP per server Fully Qualified Domain Name (FQDN) endpoint per test port.

*  A default gateway is permitted.  The DSCP (Differentiated Services Code Point) marking is set to DF (Default Forwarding) '000000' in the IPv4 Type of Service (ToS) field and the IPv6 Traffic Class field.

*  The server IP addresses SHOULD be distributed between IPv4 and IPv6 with a ratio identical to the client's distribution ratio.

Note: IANA has assigned IP address ranges for testing purposes, as described in Section 8.  If the test scenario requires more IP addresses or subnets than those assigned by IANA, this document recommends using non-routable private IPv4 address ranges or Unique Local Address (ULA) IPv6 address ranges for the testing.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

The server pool for HTTP SHOULD listen on TCP port 80 and emulate the same HTTP version and settings chosen by the client (emulated web browser).  The server MUST advertise the server type in the Server response header [RFC7230].  For the HTTPS server, TLS 1.2 or higher MUST be used with a maximum record size of 16 KByte, and ticket resumption or session ID reuse MUST NOT be used.  The server SHOULD listen on TCP port 443.  The server SHALL serve a certificate to the client.  The HTTPS server MUST check the host SNI information against the FQDN if SNI is in use.
Cipher suites and key sizes on the server side MUST be configured similarly to the client side configuration described in Section 4.3.1.3.

4.3.3.  Traffic Flow Definition

This section describes the traffic pattern between client and server endpoints.  At the beginning of the test, the server endpoint initializes and will be ready to accept connection states, including initialization of the TCP stack as well as bound HTTP and HTTPS servers.  When a client endpoint is needed, it will initialize and be given attributes such as a MAC and IP address.  The behavior of the client is to sweep through the given server IP space, generating traffic that is recognizable as a service by the DUT.  Sequential and pseudorandom sweep methods are acceptable.  The method used MUST be stated in the final report.  Thus, a balanced mesh between client endpoints and server endpoints will be generated in client IP and port to server IP and port combinations.  Each client endpoint performs the same actions as other endpoints, with the difference being the source IP of the client endpoint and the target server IP pool.  The client MUST use the server IP address or FQDN in the Host header [RFC7230].

4.3.3.1.  Description of Intra-Client Behavior

Client endpoints are independent of other clients that are concurrently executing.  This section describes how a client endpoint steps through different services when it initiates traffic.  Once the test is initialized, the client endpoints randomly hold (perform no operation) for a few milliseconds to better randomize the start of client traffic.  Each client will either open a new TCP connection or reuse a persistent TCP connection still open to that specific server.  At any point that the traffic profile may require encryption, a TLS encryption tunnel will form, presenting the URL or IP address request to the server.
If SNI is used, the server MUST then perform an SNI name check, comparing the proposed FQDN with the domain embedded in the certificate.  Only if the check succeeds will the server process the HTTPS response object.  The initial response object from the server is based on the benchmarking tests described in Section 7.  Multiple additional sub-URLs (response objects on the service page) MAY be requested simultaneously.  These MAY be to the same server IP as the initial URL.  Each sub-object will also use a canonical FQDN and URL path, as observed in the traffic mix used.

4.3.4.  Traffic Load Profile

The loading of traffic is described in this section.  A traffic load profile has five phases: init, ramp up, sustain, ramp down, and collection.

1.  Init phase: Testbed devices, including the client and server endpoints, should negotiate layer 2-3 connectivity such as MAC learning and ARP.  Only after successful MAC learning or ARP/ND resolution SHALL the test iteration move to the next phase.  No measurements are made in this phase.  The minimum RECOMMENDED time for the init phase is 5 seconds.  During this phase, the emulated clients SHOULD NOT initiate any sessions with the DUT/SUT; in contrast, the emulated servers should be ready to accept requests from the DUT/SUT or from emulated clients.

2.  Ramp up phase: The test equipment SHOULD start to generate the test traffic.  It SHOULD use a set of the approximate number of unique client IP addresses to generate traffic.  The traffic SHOULD ramp up from zero to the desired target objective.  The target objective is defined for each benchmarking test.
The duration of the ramp up phase MUST be configured to be long enough that the test equipment does not overwhelm the DUT/SUT's stated performance metrics defined in Section 6.3, namely TCP Connections Per Second, Inspected Throughput, Concurrent TCP Connections, and Application Transactions Per Second.  No measurements are made in this phase.

3.  Sustain phase: Starts when all required clients are active and operating at their desired load condition.  In the sustain phase, the test equipment SHOULD continue generating traffic at a constant target value for a constant number of active clients.  The minimum RECOMMENDED time duration for the sustain phase is 300 seconds.  This is the phase where measurements occur.  The test equipment SHOULD measure and record statistics continuously.  The sampling interval for collecting the raw results and calculating the statistics SHOULD be less than 2 seconds.

4.  Ramp down phase: No new connections are established, and no measurements are made.  The time duration for the ramp up and ramp down phases SHOULD be the same.

5.  Collection phase: The last phase is administrative and will occur when the test equipment merges and collates the report data.

5.  Testbed Considerations

This section describes steps for a reference test (pre-test) that qualifies the test environment, including the test equipment, with a focus on both physical and virtualized environments.  Below are the RECOMMENDED steps for the reference test.

1.  Perform the reference test either by configuring the DUT/SUT in the most trivial setup (fast forwarding) or without the presence of the DUT/SUT.

2.  Generate traffic with the traffic generator.  Choose a traffic profile used for the HTTP or HTTPS throughput performance test with the smallest object size.

3.
Ensure that any ancillary switching or routing functions added in the test equipment do not limit the performance by introducing impairments such as packet loss and latency.  This is specifically important for virtualized components (e.g., vSwitches, vRouters).

4.  Verify that the generated traffic (performance) of the test equipment matches and reasonably exceeds the expected maximum performance of the DUT/SUT.

5.  Record the network performance metrics (packet loss and latency) introduced by the test environment (without the DUT/SUT).

6.  Assert that the testbed characteristics are stable during the entire test session.  Several factors might influence stability, specifically for virtualized testbeds, for example, additional workloads in a virtualized system, load balancing, and movement of virtual machines during the test, or simple issues such as additional heat created by high workloads leading to an emergency CPU performance reduction.

The reference test SHOULD be performed before the benchmarking tests (described in Section 7) start.

6.  Reporting

This section describes how the benchmarking test report should be formatted and presented.  It is RECOMMENDED to include two main sections in the report, namely the introduction and the detailed test results sections.

6.1.  Introduction

The following attributes SHOULD be present in the introduction section of the test report.

1.  The time and date of the execution of the tests

2.  Summary of testbed software and hardware details

    a.
DUT/SUT hardware/virtual configuration

        *  This section SHOULD clearly identify the make and model of the DUT/SUT

        *  The port interfaces, including speed and link information

        *  If the DUT/SUT is a Virtual Network Function (VNF): host (server) hardware and software details, interface acceleration type such as DPDK and SR-IOV, used CPU cores, used RAM, resource sharing (e.g., pinning details and NUMA node) configuration details, hypervisor version, and virtual switch version

        *  Details of any additional hardware relevant to the DUT/SUT, such as controllers

    b.  DUT/SUT software

        *  Operating system name

        *  Version

        *  Specific configuration details (if any)

    c.  DUT/SUT enabled features

        *  Configured DUT/SUT features (see Table 1 and Table 2)

        *  Attributes of the above-mentioned features

        *  Any additional relevant information about the features

    d.  Test equipment hardware and software

        *  Test equipment vendor name

        *  Hardware details, including model number and interface type

        *  Test equipment firmware and test application software version

    e.  Key test parameters

        *  Used cipher suites and keys

        *  IPv4 and IPv6 traffic distribution

        *  Number of configured ACLs

    f.  Details of the application traffic mix used in the benchmarking test "Throughput Performance with Application Traffic Mix" (Section 7.1)

        *  Name of applications and layer 7 protocols

        *  Percentage of emulated traffic for each application and layer 7 protocol

        *  Percentage of encrypted traffic and used cipher suites and keys (the RECOMMENDED ciphers and keys are defined in Section 4.3.1.3)

        *  Used object sizes for each application and layer 7 protocol

3.  Results Summary / Executive Summary

    a.  Results SHOULD resemble a pyramid in how they are reported, with the introduction section documenting the summary of results in a prominent, easy to read block.

6.2.
Detailed Test Results

In the results section of the test report, the following attributes SHOULD be present for each benchmarking test.

a.  KPIs MUST be documented separately for each benchmarking test.  The format of the KPI metrics SHOULD be presented as described in Section 6.3.

b.  The next level of detail SHOULD be graphs showing each of these metrics over the duration (sustain phase) of the test.  This allows the user to see how the measured performance stability changes over time.

6.3.  Benchmarks and Key Performance Indicators

This section lists key performance indicators (KPIs) for overall benchmarking tests.  All KPIs MUST be measured during the sustain phase of the traffic load profile described in Section 4.3.4.  All KPIs MUST be measured from the result output of the test equipment.

*  Concurrent TCP Connections

   The aggregate number of simultaneous connections between hosts across the DUT/SUT, or between hosts and the DUT/SUT (defined in [RFC2647]).

*  TCP Connections Per Second

   The average number of successfully established TCP connections per second between hosts across the DUT/SUT, or between hosts and the DUT/SUT.  The TCP connection MUST be initiated via a TCP three-way handshake (SYN, SYN/ACK, ACK).  Then the TCP session data is sent.  The TCP session MUST be closed via either a TCP three-way close (FIN, FIN/ACK, ACK) or a TCP four-way close (FIN, ACK, FIN, ACK), and MUST NOT be closed by RST.

*  Application Transactions Per Second

   The average number of successfully completed transactions per second.  For a particular transaction to be considered successful, all data MUST have been transferred in its entirety.  In the case of HTTP(S) transactions, the transaction MUST have a valid status code (200 OK), and the appropriate FIN, FIN/ACK sequence MUST have been completed.
*  TLS Handshake Rate

   The average number of successfully established TLS connections per second between hosts across the DUT/SUT, or between hosts and the DUT/SUT.

*  Inspected Throughput

   The number of bits per second of examined and allowed traffic a network security device is able to transmit to the correct destination interface(s) in response to a specified offered load.  The throughput benchmarking tests defined in Section 7 SHOULD measure the average layer 2 throughput value when the DUT/SUT is "inspecting" traffic.  This document recommends presenting the inspected throughput value in Gbit/s, rounded to two places of precision, with a more specific Kbit/s value in parentheses.

*  Time to First Byte (TTFB)

   TTFB is the elapsed time between the client starting to send the TCP SYN packet and the client receiving the first packet of application data from the server or DUT/SUT.  The benchmarking tests HTTP Transaction Latency (Section 7.4) and HTTPS Transaction Latency (Section 7.8) measure the minimum, average, and maximum TTFB.  The value SHOULD be expressed in milliseconds.

*  URL Response Time / Time to Last Byte (TTLB)

   URL response time / TTLB is the elapsed time between the client starting to send the TCP SYN packet and the client receiving the last packet of application data from the server or DUT/SUT.  The benchmarking tests HTTP Transaction Latency (Section 7.4) and HTTPS Transaction Latency (Section 7.8) measure the minimum, average, and maximum TTLB.  The value SHOULD be expressed in milliseconds.

7.  Benchmarking Tests

7.1.  Throughput Performance with Application Traffic Mix

7.1.1.  Objective

Using a relevant application traffic mix, determine the sustainable inspected throughput supported by the DUT/SUT.
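The inspected throughput benchmark reported by this test follows the presentation convention of Section 6.3 (Gbit/s rounded to two places of precision, with the more specific Kbit/s value in parentheses).  A minimal, non-normative formatting sketch; the function name and example value are illustrative only:

```python
def format_inspected_throughput(bits_per_second):
    """Render a measured inspected throughput per the Section 6.3
    convention: Gbit/s to two decimal places, Kbit/s in parentheses."""
    gbit = bits_per_second / 1e9
    kbit = bits_per_second / 1e3
    return f"{gbit:.2f} Gbit/s ({kbit:.0f} Kbit/s)"

# Illustrative value only; not a measured result.
print(format_inspected_throughput(9_876_543_210))
```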
Based on the test customer's specific use case, testers can choose the relevant application traffic mix for this test.  The details of the traffic mix MUST be documented in the report.  At least the following traffic mix details MUST be documented and reported together with the test results:

   Name of applications and layer 7 protocols

   Percentage of emulated traffic for each application and layer 7 protocol

   Percentage of encrypted traffic and used cipher suites and keys (the RECOMMENDED ciphers and keys are defined in Section 4.3.1.3)

   Used object sizes for each application and layer 7 protocol

7.1.2.  Test Setup

The testbed setup MUST be configured as defined in Section 4.  Any benchmarking test specific testbed configuration changes MUST be documented.

7.1.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be defined.

7.1.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific benchmarking test MUST be documented.  In case the DUT/SUT is configured without SSL inspection, the test report MUST explain the implications of this for the encrypted traffic in the relevant application traffic mix.

7.1.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

   Target inspected throughput: Aggregated line rate of the interface(s) used in the DUT/SUT, or the value defined based on the requirement for a specific deployment scenario

   Initial throughput: 10% of the "Target inspected throughput" (Note: The initial throughput is not a KPI to report.  This value is configured on the traffic generator and used to perform Step 1, "Test Initialization and Qualification", described in Section 7.1.4.)

One of the ciphers and keys defined in Section 4.3.1.3 is RECOMMENDED for use in this benchmarking test.

7.1.3.3.  Traffic Profile

Traffic profile: This test MUST be run with a relevant application traffic mix profile.

7.1.3.4.  Test Results Validation Criteria

The following criteria are the test results validation criteria.  The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

7.1.3.5.  Measurement

The following KPI metrics MUST be reported for this benchmarking test:

   Mandatory KPIs (benchmarks): Inspected Throughput, TTFB (minimum, average, and maximum), TTLB (minimum, average, and maximum), and Application Transactions Per Second

   Note: TTLB MUST be reported along with the object size used in the traffic profile.
Optional KPIs: TCP Connections Per Second and TLS Handshake Rate

7.1.4.  Test Procedures and Expected Results

The test procedure is designed to measure the inspected throughput performance of the DUT/SUT during the sustain phase of the traffic load profile.  The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (initial throughput) and meets the test results validation criteria while it is minimally utilized.  Step 2 determines whether the DUT/SUT is able to reach the target performance value within the test results validation criteria.  Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IP types: IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic distribution.

7.1.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to generate test traffic at the "Initial throughput" rate as described in Section 7.1.3.2.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial throughput" during the sustain phase.  Measure all KPIs as defined in Section 7.1.3.5.  The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.1.3.4.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT continue to Step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to generate traffic at the "Target inspected throughput" rate defined in Section 7.1.3.2.
The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.  The test equipment SHOULD start to measure and record all specified KPIs.  Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective ("Target inspected throughput") in the sustain phase.  Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.1.4.3.  Step 3: Test Iteration

Determine the achievable average inspected throughput within the test results validation criteria.  The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

Using HTTP traffic, determine the sustainable TCP connection establishment rate supported by the DUT/SUT under different throughput load conditions.

To measure connections per second, test iterations MUST use different fixed HTTP response object sizes (the different load conditions) defined in Section 7.2.3.2.

7.2.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any specific testbed configuration changes (number of interfaces, interface type, etc.) MUST be documented.

7.2.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.2.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific benchmarking test MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

   Target connections per second: Initial value from the product datasheet, or the value defined based on the requirement for a specific deployment scenario

   Initial connections per second: 10% of "Target connections per second" (Note: The initial connections per second is not a KPI to report.  This value is configured on the traffic generator and used to perform Step 1, "Test Initialization and Qualification", described in Section 7.2.4.)

The client SHOULD negotiate HTTP and close the connection with FIN immediately after completion of one transaction.  In each test iteration, the client MUST send a GET request requesting a fixed HTTP response object size.

The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64 KByte.

7.2.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.  The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered a constant rate if any deviation of the traffic forwarding rate is less than 5%).

d.
Concurrent TCP connections MUST be constant during steady state, and any deviation of concurrent TCP connections SHOULD be less than 10%.  This confirms that the DUT opens and closes TCP connections at approximately the same rate.

7.2.3.4.  Measurement

TCP Connections Per Second MUST be reported for each test iteration (for each object size).

7.2.4.  Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per second rate of the DUT/SUT during the sustain phase of the traffic load profile.  The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (initial connections per second) and meets the test results validation criteria while it is minimally utilized.  Step 2 determines whether the DUT/SUT is able to reach the target performance value within the test results validation criteria.  Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IP types: IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial connections per second" as defined in Section 7.2.3.2.  The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before the sustain phase.  The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.2.3.3.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT continue to Step 2.
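The qualification gate applied here, and again in the following steps, reduces to simple ratio checks against the criteria of Section 7.2.3.3.  The sketch below is a non-normative illustration with hypothetical counter and function names:

```python
def meets_validation_criteria(failed_transactions, total_transactions,
                              rst_terminated, total_connections):
    """Criteria a and b of Section 7.2.3.3: each failure ratio MUST
    stay below 0.001% (1 out of 100,000)."""
    budget = 0.00001  # 0.001% expressed as a fraction
    return (failed_transactions < total_transactions * budget
            and rst_terminated < total_connections * budget)

def rate_is_constant(samples, max_deviation):
    """Criteria c and d: a sustain-phase counter is considered constant
    if every sample deviates from the mean by less than the allowed
    fraction (5% for forwarding rate, 10% for concurrent TCP
    connections)."""
    mean = sum(samples) / len(samples)
    return all(abs(s - mean) < mean * max_deviation for s in samples)
```

For example, one RST-terminated connection out of 200,000 stays within the budget, while two failed transactions out of 100,000 do not.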
7.2.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective ("Target connections per second") defined in Section 7.2.3.2.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phases of each test iteration, other KPIs such as inspected throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support.  The test results for specific test iterations SHOULD NOT be reported if the above-mentioned KPIs (especially inspected throughput) reach the maximum value.  (Example: If the test iteration with a 64 KByte HTTP response object size reaches the maximum inspected throughput limitation of the DUT/SUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs.  Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective ("Target connections per second") in the sustain phase.  Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.2.4.3.  Step 3: Test Iteration

Determine the achievable TCP connections per second within the test results validation criteria.

7.3.  HTTP Throughput

7.3.1.  Objective

Determine the sustainable inspected throughput of the DUT/SUT for HTTP transactions while varying the HTTP response object size.

7.3.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any specific testbed configuration changes (number of interfaces, interface type, etc.) MUST be documented.

7.3.3.
Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.3.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific benchmarking test MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.  The following parameters MUST be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

   Target inspected throughput: Aggregated line rate of the interface(s) used in the DUT/SUT, or the value defined based on the requirement for a specific deployment scenario

   Initial throughput: 10% of "Target inspected throughput" (Note: The initial throughput is not a KPI to report.  This value is configured on the traffic generator and used to perform Step 1, "Test Initialization and Qualification", described in Section 7.3.4.)

   Number of HTTP response object requests (transactions) per connection: 10

   RECOMMENDED HTTP response object sizes: 1, 16, 64, and 256 KByte, and the mixed objects defined in Table 4.
          +=====================+=============================+
          | Object size (KByte) | Number of requests / Weight |
          +=====================+=============================+
          | 0.2                 | 1                           |
          +---------------------+-----------------------------+
          | 6                   | 1                           |
          +---------------------+-----------------------------+
          | 8                   | 1                           |
          +---------------------+-----------------------------+
          | 9                   | 1                           |
          +---------------------+-----------------------------+
          | 10                  | 1                           |
          +---------------------+-----------------------------+
          | 25                  | 1                           |
          +---------------------+-----------------------------+
          | 26                  | 1                           |
          +---------------------+-----------------------------+
          | 35                  | 1                           |
          +---------------------+-----------------------------+
          | 59                  | 1                           |
          +---------------------+-----------------------------+
          | 347                 | 1                           |
          +---------------------+-----------------------------+

                       Table 4: Mixed Objects

7.3.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.  The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate (considered a constant rate if any deviation of the traffic forwarding rate is less than 5%).

c.  Concurrent TCP connections MUST be constant during steady state, and any deviation of concurrent TCP connections SHOULD be less than 10%.  This confirms that the DUT opens and closes TCP connections at approximately the same rate.

7.3.3.4.  Measurement

Inspected Throughput and HTTP Transactions Per Second MUST be reported for each object size.

7.3.4.
Test Procedures and Expected Results 1395 The test procedure is designed to measure HTTP throughput of the DUT/ 1396 SUT. The test procedure consists of three major steps: Step 1 1397 ensures the DUT/SUT is able to reach the performance value (Initial 1398 throughput) and meets the test results validation criteria when it 1399 was very minimally utilized. Step 2 determines the DUT/SUT is able to 1400 reach the target performance value within the test results validation 1401 criteria. Step 3 determines the maximum achievable performance value 1402 within the test results validation criteria. 1404 This test procedure MAY be repeated multiple times with different 1405 IPv4 and IPv6 traffic distribution and HTTP response object sizes. 1407 7.3.4.1. Step 1: Test Initialization and Qualification 1409 Verify the link status of all connected physical interfaces. All 1410 interfaces are expected to be in "UP" status. 1412 Configure traffic load profile of the test equipment to establish 1413 "Initial throughput" as defined in Section 7.3.3.2. 1415 The traffic load profile SHOULD be defined as described in 1416 Section 4.3.4. The DUT/SUT SHOULD reach the "Initial 1417 throughput" during the sustain phase. Measure all KPIs as defined in 1418 Section 7.3.3.4. 1420 The measured KPIs during the sustain phase MUST meet the test results 1421 validation criteria "a" defined in Section 7.3.3.3. The test results 1422 validation criteria "b" and "c" are OPTIONAL for step 1. 1424 If the KPI metrics do not meet the test results validation criteria, 1425 the test procedure MUST NOT be continued to "Step 2". 1427 7.3.4.2. Step 2: Test Run with Target Objective 1429 Configure test equipment to establish the target objective ("Target 1430 inspected throughput") defined in Section 7.3.3.2. The test 1431 equipment SHOULD start to measure and record all specified KPIs. 1432 Continue the test until all traffic profile phases are completed.
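The sustain-phase checks in Section 7.3.3.3 lend themselves to mechanical evaluation. The sketch below is illustrative only; the function name and the sampled-counter inputs are assumptions, not anything this document prescribes:

```python
def validate_sustain_phase(rates, conns, attempted, failed):
    """Illustrative check of the Section 7.3.3.3 criteria.

    rates: per-interval forwarding-rate samples from the sustain phase
    conns: per-interval concurrent TCP connection samples
    attempted/failed: transaction counters over the sustain phase
    """
    # a. Failed transactions MUST stay below 0.001% of attempted ones.
    ok_a = failed < attempted * 0.00001
    # b. Forwarding rate SHOULD stay within 5% of its mean ("constant rate").
    mean_r = sum(rates) / len(rates)
    ok_b = all(abs(r - mean_r) / mean_r < 0.05 for r in rates)
    # c. Concurrent connections SHOULD stay within 10% of their mean.
    mean_c = sum(conns) / len(conns)
    ok_c = all(abs(c - mean_c) / mean_c < 0.10 for c in conns)
    return ok_a and ok_b and ok_c
```

Since only criterion "a" is mandatory during step 1, a test runner would typically evaluate the three results separately rather than as a single conjunction.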
1434 Within the test results validation criteria, the DUT/SUT is expected 1435 to reach the desired value of the target objective in the sustain 1436 phase. Follow step 3, if the measured value does not meet the target 1437 value or does not fulfill the test results validation criteria. 1439 7.3.4.3. Step 3: Test Iteration 1441 Determine the achievable inspected throughput within the test results 1442 validation criteria and measure the KPI metric Transactions per 1443 Second. Final test iteration MUST be performed for the test duration 1444 defined in Section 4.3.4. 1446 7.4. HTTP Transaction Latency 1448 7.4.1. Objective 1450 Using HTTP traffic, determine the HTTP transaction latency when the DUT/SUT 1451 is running with sustainable HTTP transactions per second supported by 1452 the DUT/SUT under different HTTP response object sizes. 1454 Test iterations MUST be performed with different HTTP response object 1455 sizes in two different scenarios. One with a single transaction and 1456 the other with multiple transactions within a single TCP connection. 1457 For consistency, both the single and multiple transaction tests MUST be 1458 configured with the same HTTP version. 1460 Scenario 1: The client MUST negotiate HTTP and close the connection 1461 with FIN immediately after completion of a single transaction (GET 1462 and RESPONSE). 1464 Scenario 2: The client MUST negotiate HTTP and close the connection with 1465 FIN immediately after completion of 10 transactions (GET and 1466 RESPONSE) within a single TCP connection. 1468 7.4.2. Test Setup 1470 Testbed setup SHOULD be configured as defined in Section 4. Any 1471 specific testbed configuration changes (number of interfaces and 1472 interface type, etc.) MUST be documented. 1474 7.4.3. Test Parameters 1476 In this section, benchmarking test specific parameters SHOULD be 1477 defined. 1479 7.4.3.1. DUT/SUT Configuration Parameters 1481 DUT/SUT parameters MUST conform to the requirements defined in 1482 Section 4.2.
Any configuration changes for this specific 1483 benchmarking test MUST be documented. 1485 7.4.3.2. Test Equipment Configuration Parameters 1487 Test equipment configuration parameters MUST conform to the 1488 requirements defined in Section 4.3. The following parameters MUST 1489 be documented for this benchmarking test: 1491 Client IP address range defined in Section 4.3.1.2 1493 Server IP address range defined in Section 4.3.2.2 1495 Traffic distribution ratio between IPv4 and IPv6 defined in 1496 Section 4.3.1.2 1498 Target objective for scenario 1: 50% of the connections per second 1499 measured in benchmarking test TCP/HTTP Connections Per Second 1500 (Section 7.2) 1502 Target objective for scenario 2: 50% of the inspected throughput 1503 measured in benchmarking test HTTP Throughput (Section 7.3) 1505 Initial objective for scenario 1: 10% of "Target objective for 1506 scenario 1" 1508 Initial objective for scenario 2: 10% of "Target objective for 1509 scenario 2" 1511 Note: The Initial objectives are not KPIs to report. These values 1512 are configured on the traffic generator and used to perform 1513 Step 1: "Test Initialization and Qualification" described under 1514 Section 7.4.4. 1516 HTTP transactions per TCP connection: Test scenario 1 with a single 1517 transaction and test scenario 2 with 10 transactions. 1519 HTTP with GET request requesting a single object. The RECOMMENDED 1520 object sizes are 1, 16, and 64 KByte. For each test iteration, the 1521 client MUST request a single HTTP response object size. 1523 7.4.3.3. Test Results Validation Criteria 1525 The following criteria are the test results validation criteria. The 1526 test results validation criteria MUST be monitored during the whole 1527 sustain phase of the traffic load profile. 1529 a. Number of failed application transactions (receiving any HTTP 1530 response code other than 200 OK) MUST be less than 0.001% (1 out 1531 of 100,000 transactions) of attempted transactions. 1533 b.
Number of terminated TCP connections due to unexpected TCP RST 1534 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 1535 connections) of total initiated TCP connections. 1537 c. During the sustain phase, traffic SHOULD be forwarded at a 1538 constant rate (considered as a constant rate if any deviation of 1539 traffic forwarding rate is less than 5%). 1541 d. Concurrent TCP connections MUST be constant during steady state 1542 and any deviation of concurrent TCP connections SHOULD be less 1543 than 10%. This confirms the DUT opens and closes TCP connections 1544 at approximately the same rate. 1546 e. After ramp up, the DUT/SUT MUST achieve the "Target objective" defined 1547 in Section 7.4.3.2 and remain in that state for the entire test 1548 duration (sustain phase). 1550 7.4.3.4. Measurement 1552 TTFB (minimum, average, and maximum) and TTLB (minimum, average, and 1553 maximum) MUST be reported for each object size. 1555 7.4.4. Test Procedures and Expected Results 1557 The test procedure is designed to measure TTFB or TTLB when the DUT/ 1558 SUT is operating close to 50% of its maximum achievable connections 1559 per second or inspected throughput. The test procedure consists of 1560 two major steps: Step 1 ensures the DUT/SUT is able to reach the 1561 initial performance values and meets the test results validation 1562 criteria when it was very minimally utilized. Step 2 measures the 1563 latency values within the test results validation criteria. 1565 This test procedure MAY be repeated multiple times with different IP 1566 types (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic 1567 distribution), HTTP response object sizes, and single and multiple 1568 transactions per connection scenarios. 1570 7.4.4.1. Step 1: Test Initialization and Qualification 1572 Verify the link status of all connected physical interfaces. All 1573 interfaces are expected to be in "UP" status.
1575 Configure traffic load profile of the test equipment to establish 1576 "Initial objective" as defined in Section 7.4.3.2. The traffic load 1577 profile SHOULD be defined as described in Section 4.3.4. 1579 The DUT/SUT SHOULD reach the "Initial objective" before the sustain 1580 phase. The measured KPIs during the sustain phase MUST meet all the 1581 test results validation criteria defined in Section 7.4.3.3. 1583 If the KPI metrics do not meet the test results validation criteria, 1584 the test procedure MUST NOT be continued to "Step 2". 1586 7.4.4.2. Step 2: Test Run with Target Objective 1588 Configure test equipment to establish "Target objective" defined in 1589 Section 7.4.3.2. The test equipment SHOULD follow the traffic load 1590 profile definition as described in Section 4.3.4. 1592 The test equipment SHOULD start to measure and record all specified 1593 KPIs. Continue the test until all traffic profile phases are 1594 completed. 1596 Within the test results validation criteria, the DUT/SUT MUST reach 1597 the desired value of the target objective in the sustain phase. 1599 Measure the minimum, average, and maximum values of TTFB and TTLB. 1601 7.5. Concurrent TCP/HTTP Connection Capacity 1603 7.5.1. Objective 1605 Determine the number of concurrent TCP connections that the DUT/SUT 1606 sustains when using HTTP traffic. 1608 7.5.2. Test Setup 1610 Testbed setup SHOULD be configured as defined in Section 4. Any 1611 specific testbed configuration changes (number of interfaces and 1612 interface type, etc.) MUST be documented. 1614 7.5.3. Test Parameters 1616 In this section, benchmarking test specific parameters SHOULD be 1617 defined. 1619 7.5.3.1. DUT/SUT Configuration Parameters 1621 DUT/SUT parameters MUST conform to the requirements defined in 1622 Section 4.2. Any configuration changes for this specific 1623 benchmarking test MUST be documented. 1625 7.5.3.2.
Test Equipment Configuration Parameters 1627 Test equipment configuration parameters MUST conform to the 1628 requirements defined in Section 4.3. The following parameters MUST 1629 be documented for this benchmarking test: 1631 Client IP address range defined in Section 4.3.1.2 1633 Server IP address range defined in Section 4.3.2.2 1635 Traffic distribution ratio between IPv4 and IPv6 defined in 1636 Section 4.3.1.2 1638 Target concurrent connection: Initial value from product datasheet 1639 or the value defined based on requirement for a specific 1640 deployment scenario. 1642 Initial concurrent connection: 10% of "Target concurrent 1643 connection". Note: Initial concurrent connection is not a KPI to 1644 report. This value is configured on the traffic generator and 1645 used to perform Step 1: "Test Initialization and Qualification" 1646 described under Section 7.5.4. 1648 Maximum connections per second during ramp up phase: 50% of 1649 maximum connections per second measured in benchmarking test TCP/ 1650 HTTP Connections per second (Section 7.2) 1652 Ramp up time (in traffic load profile for "Target concurrent 1653 connection"): "Target concurrent connection" / "Maximum 1654 connections per second during ramp up phase" 1656 Ramp up time (in traffic load profile for "Initial concurrent 1657 connection"): "Initial concurrent connection" / "Maximum 1658 connections per second during ramp up phase" 1660 The client MUST negotiate HTTP and each client MAY open multiple 1661 concurrent TCP connections per server endpoint IP. 1663 Each client sends 10 GET requests requesting 1 KByte HTTP response 1664 object in the same TCP connection (10 transactions/TCP connection) 1665 and the delay (think time) between each transaction MUST be X 1666 seconds. 1668 X = ("Ramp up time" + "steady state time") / 10 1670 The established connections SHOULD remain open until the ramp down 1671 phase of the test.
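The two formulas above can be sanity-checked with a short worked example; the target of 1,000,000 connections, the ramp rate of 50,000 connections per second, and the 300 second steady state are illustrative numbers, not values this document mandates:

```python
# Illustrative inputs (assumptions, not prescribed values)
target_concurrent = 1_000_000  # "Target concurrent connection"
ramp_cps = 50_000              # "Maximum connections per second during ramp up phase"
steady_state_time = 300        # steady state duration in seconds

# Ramp up time = "Target concurrent connection" / ramp-phase connection rate
ramp_up_time = target_concurrent / ramp_cps           # 20.0 seconds

# Think time between the 10 transactions of one connection:
# X = ("Ramp up time" + "steady state time") / 10
think_time = (ramp_up_time + steady_state_time) / 10  # 32.0 seconds
```

Spacing the 10 requests X seconds apart keeps each connection open across the whole sustain phase, which is what allows the concurrent-connection count to accumulate to the target.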
During the ramp down phase, all connections 1672 SHOULD be successfully closed with FIN. 1674 7.5.3.3. Test Results Validation Criteria 1676 The following criteria are the test results validation criteria. The 1677 test results validation criteria MUST be monitored during the whole 1678 sustain phase of the traffic load profile. 1680 a. Number of failed application transactions (receiving any HTTP 1681 response code other than 200 OK) MUST be less than 0.001% (1 out 1682 of 100,000 transactions) of total attempted transactions. 1684 b. Number of terminated TCP connections due to unexpected TCP RST 1685 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 1686 connections) of total initiated TCP connections. 1688 c. During the sustain phase, traffic SHOULD be forwarded at a 1689 constant rate (considered as a constant rate if any deviation of 1690 traffic forwarding rate is less than 5%). 1692 7.5.3.4. Measurement 1694 Average Concurrent TCP Connections MUST be reported for this 1695 benchmarking test. 1697 7.5.4. Test Procedures and Expected Results 1699 The test procedure is designed to measure the concurrent TCP 1700 connection capacity of the DUT/SUT during the sustain phase of 1701 the traffic load profile. The test procedure consists of three major 1702 steps: Step 1 ensures the DUT/SUT is able to reach the performance 1703 value (Initial concurrent connection) and meets the test results 1704 validation criteria when it was very minimally utilized. Step 2 1705 determines the DUT/SUT is able to reach the target performance value 1706 within the test results validation criteria. Step 3 determines the 1707 maximum achievable performance value within the test results 1708 validation criteria. 1710 This test procedure MAY be repeated multiple times with different 1711 IPv4 and IPv6 traffic distribution. 1713 7.5.4.1. Step 1: Test Initialization and Qualification 1715 Verify the link status of all connected physical interfaces.
All 1716 interfaces are expected to be in "UP" status. 1718 Configure test equipment to establish "Initial concurrent TCP 1719 connections" defined in Section 7.5.3.2. Except for the ramp up time, the 1720 traffic load profile SHOULD be defined as described in Section 4.3.4. 1722 During the sustain phase, the DUT/SUT SHOULD reach the "Initial 1723 concurrent TCP connections". The measured KPIs during the sustain 1724 phase MUST meet all the test results validation criteria defined in 1725 Section 7.5.3.3. 1727 If the KPI metrics do not meet the test results validation criteria, 1728 the test procedure MUST NOT be continued to "Step 2". 1730 7.5.4.2. Step 2: Test Run with Target Objective 1732 Configure test equipment to establish the target objective ("Target 1733 concurrent TCP connections"). The test equipment SHOULD follow the 1734 traffic load profile definition (except ramp up time) as described in 1735 Section 4.3.4. 1737 During the ramp up and sustain phase, the other KPIs such as 1738 inspected throughput, TCP connections per second, and application 1739 transactions per second MUST NOT reach the maximum value the DUT/SUT 1740 can support. 1742 The test equipment SHOULD start to measure and record KPIs defined in 1743 Section 7.5.3.4. Continue the test until all traffic profile phases 1744 are completed. 1746 Within the test results validation criteria, the DUT/SUT is expected 1747 to reach the desired value of the target objective in the sustain 1748 phase. Follow step 3, if the measured value does not meet the target 1749 value or does not fulfill the test results validation criteria. 1751 7.5.4.3. Step 3: Test Iteration 1753 Determine the achievable concurrent TCP connection capacity within 1754 the test results validation criteria. 1756 7.6. TCP/HTTPS Connections per Second 1757 7.6.1. Objective 1759 Using HTTPS traffic, determine the sustainable SSL/TLS session 1760 establishment rate supported by the DUT/SUT under different 1761 throughput load conditions.
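Here the session establishment rate is the number of successfully completed TLS handshakes per second observed during the sustain phase. One hypothetical way to aggregate it from a list of handshake completion timestamps (the function name and input format are illustrative, not part of the methodology):

```python
def handshake_rate(completions, sustain_start, sustain_end):
    """Average TLS handshakes per second over the sustain window.

    completions: timestamps (in seconds) at which the traffic generator
    recorded a successful TLS handshake completion.
    """
    # Count only handshakes that completed inside the sustain phase.
    in_window = [t for t in completions if sustain_start <= t < sustain_end]
    return len(in_window) / (sustain_end - sustain_start)
```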
1763 Test iterations MUST include common cipher suites and key strengths 1764 as well as forward-looking stronger keys. Specific test iterations 1765 MUST include ciphers and keys defined in Section 7.6.3.2. 1767 For each cipher suite and key strength, test iterations MUST use a 1768 single HTTPS response object size defined in Section 7.6.3.2 to 1769 measure connections per second performance under a variety of DUT/SUT 1770 security inspection load conditions. 1772 7.6.2. Test Setup 1774 Testbed setup SHOULD be configured as defined in Section 4. Any 1775 specific testbed configuration changes (number of interfaces and 1776 interface type, etc.) MUST be documented. 1778 7.6.3. Test Parameters 1780 In this section, benchmarking test specific parameters SHOULD be 1781 defined. 1783 7.6.3.1. DUT/SUT Configuration Parameters 1785 DUT/SUT parameters MUST conform to the requirements defined in 1786 Section 4.2. Any configuration changes for this specific 1787 benchmarking test MUST be documented. 1789 7.6.3.2. Test Equipment Configuration Parameters 1791 Test equipment configuration parameters MUST conform to the 1792 requirements defined in Section 4.3. The following parameters MUST 1793 be documented for this benchmarking test: 1795 Client IP address range defined in Section 4.3.1.2 1797 Server IP address range defined in Section 4.3.2.2 1799 Traffic distribution ratio between IPv4 and IPv6 defined in 1800 Section 4.3.1.2 1802 Target connections per second: Initial value from product datasheet 1803 or the value defined based on requirement for a specific deployment 1804 scenario. 1806 Initial connections per second: 10% of "Target connections per 1807 second". Note: Initial connections per second is not a KPI to report. 1808 This value is configured on the traffic generator and used to perform 1809 Step 1: "Test Initialization and Qualification" described under 1810 Section 7.6.4.
1812 RECOMMENDED ciphers and keys defined in Section 4.3.1.3 1814 The client MUST negotiate HTTPS and close the connection with FIN 1815 immediately after completion of one transaction. In each test 1816 iteration, the client MUST send a GET request requesting a fixed HTTPS 1817 response object size. The RECOMMENDED object sizes are 1, 2, 4, 16, 1818 and 64 KByte. 1820 7.6.3.3. Test Results Validation Criteria 1822 The following criteria are the test results validation criteria. The 1823 test results validation criteria MUST be monitored during the whole 1824 test duration. 1826 a. Number of failed application transactions (receiving any HTTP 1827 response code other than 200 OK) MUST be less than 0.001% (1 out 1828 of 100,000 transactions) of attempted transactions. 1830 b. Number of terminated TCP connections due to unexpected TCP RST 1831 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 1832 connections) of total initiated TCP connections. 1834 c. During the sustain phase, traffic SHOULD be forwarded at a 1835 constant rate (considered as a constant rate if any deviation of 1836 traffic forwarding rate is less than 5%). 1838 d. Concurrent TCP connections MUST be constant during steady state 1839 and any deviation of concurrent TCP connections SHOULD be less 1840 than 10%. This confirms the DUT opens and closes TCP connections 1841 at approximately the same rate. 1843 7.6.3.4. Measurement 1845 TCP connections per second MUST be reported for each test iteration 1846 (for each object size). 1848 The KPI metric TLS Handshake Rate can be measured in the test using the 1 1849 KByte object size. 1851 7.6.4. Test Procedures and Expected Results 1853 The test procedure is designed to measure the TCP connections per 1854 second rate of the DUT/SUT during the sustain phase of the traffic load 1855 profile.
The test procedure consists of three major steps: Step 1 1856 ensures the DUT/SUT is able to reach the performance value (Initial 1857 connections per second) and meets the test results validation 1858 criteria when it was very minimally utilized. Step 2 determines the 1859 DUT/SUT is able to reach the target performance value within the test 1860 results validation criteria. Step 3 determines the maximum 1861 achievable performance value within the test results validation 1862 criteria. 1864 This test procedure MAY be repeated multiple times with different 1865 IPv4 and IPv6 traffic distribution. 1867 7.6.4.1. Step 1: Test Initialization and Qualification 1869 Verify the link status of all connected physical interfaces. All 1870 interfaces are expected to be in "UP" status. 1872 Configure traffic load profile of the test equipment to establish 1873 "Initial connections per second" as defined in Section 7.6.3.2. The 1874 traffic load profile SHOULD be defined as described in Section 4.3.4. 1876 The DUT/SUT SHOULD reach the "Initial connections per second" before 1877 the sustain phase. The measured KPIs during the sustain phase MUST 1878 meet all the test results validation criteria defined in 1879 Section 7.6.3.3. 1881 If the KPI metrics do not meet the test results validation criteria, 1882 the test procedure MUST NOT be continued to "Step 2". 1884 7.6.4.2. Step 2: Test Run with Target Objective 1886 Configure test equipment to establish "Target connections per second" 1887 defined in Section 7.6.3.2. The test equipment SHOULD follow the 1888 traffic load profile definition as described in Section 4.3.4. 1890 During the ramp up and sustain phase, other KPIs such as inspected 1891 throughput, concurrent TCP connections, and application transactions 1892 per second MUST NOT reach the maximum value the DUT/SUT can support. 
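Whether an unrelated KPI saturated, and the iteration therefore should not be reported, can be decided with a simple threshold check. The 95% margin, the function name, and the dictionary layout below are assumptions for illustration; this document defines no specific margin:

```python
def iteration_reportable(measured, platform_max, margin=0.95):
    """Return False if any non-target KPI reached its platform maximum.

    measured / platform_max: per-KPI values keyed by name, e.g. inspected
    throughput, concurrent TCP connections, application transactions per
    second.  margin: how close to the maximum counts as "reached"
    (an assumed value, not one this document prescribes).
    """
    for kpi, value in measured.items():
        if value >= platform_max[kpi] * margin:
            return False  # KPI saturated; discard this iteration's result
    return True
```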
1893 The test results for a specific test iteration SHOULD NOT be reported 1894 if any of the above-mentioned KPIs (especially inspected throughput) reaches 1895 the maximum value. (Example: If the test iteration with 64 KByte of 1896 HTTPS response object size reached the maximum inspected throughput 1897 limitation of the DUT, the test iteration MAY be interrupted and the 1898 result for 64 KByte SHOULD NOT be reported). 1900 The test equipment SHOULD start to measure and record all specified 1901 KPIs. Continue the test until all traffic profile phases are 1902 completed. 1904 Within the test results validation criteria, the DUT/SUT is expected 1905 to reach the desired value of the target objective ("Target 1906 connections per second") in the sustain phase. Follow step 3, if the 1907 measured value does not meet the target value or does not fulfill the 1908 test results validation criteria. 1910 7.6.4.3. Step 3: Test Iteration 1912 Determine the achievable connections per second within the test 1913 results validation criteria. 1915 7.7. HTTPS Throughput 1917 7.7.1. Objective 1919 Determine the sustainable inspected throughput of the DUT/SUT for 1920 HTTPS transactions while varying the HTTPS response object size. 1922 Test iterations MUST include common cipher suites and key strengths 1923 as well as forward-looking stronger keys. Specific test iterations 1924 MUST include the ciphers and keys defined in Section 7.7.3.2. 1926 7.7.2. Test Setup 1928 Testbed setup SHOULD be configured as defined in Section 4. Any 1929 specific testbed configuration changes (number of interfaces and 1930 interface type, etc.) MUST be documented. 1932 7.7.3. Test Parameters 1934 In this section, benchmarking test specific parameters SHOULD be 1935 defined. 1937 7.7.3.1. DUT/SUT Configuration Parameters 1939 DUT/SUT parameters MUST conform to the requirements defined in 1940 Section 4.2. Any configuration changes for this specific 1941 benchmarking test MUST be documented. 1943 7.7.3.2.
Test Equipment Configuration Parameters 1945 Test equipment configuration parameters MUST conform to the 1946 requirements defined in Section 4.3. The following parameters MUST 1947 be documented for this benchmarking test: 1949 Client IP address range defined in Section 4.3.1.2 1951 Server IP address range defined in Section 4.3.2.2 1953 Traffic distribution ratio between IPv4 and IPv6 defined in 1954 Section 4.3.1.2 1956 Target inspected throughput: Aggregated line rate of interface(s) 1957 used in the DUT/SUT or the value defined based on requirement for a 1958 specific deployment scenario. 1960 Initial throughput: 10% of "Target inspected throughput". Note: 1961 Initial throughput is not a KPI to report. This value is configured 1962 on the traffic generator and used to perform Step 1: "Test 1963 Initialization and Qualification" described under Section 7.7.4. 1965 Number of HTTPS response object requests (transactions) per 1966 connection: 10 1968 RECOMMENDED ciphers and keys defined in Section 4.3.1.3 1970 RECOMMENDED HTTPS response object size: 1, 16, 64, 256 KByte, and 1971 mixed objects defined in Table 4 under Section 7.3.3.2. 1973 7.7.3.3. Test Results Validation Criteria 1975 The following criteria are the test results validation criteria. The 1976 test results validation criteria MUST be monitored during the whole 1977 sustain phase of the traffic load profile. 1979 a. Number of failed application transactions (receiving any HTTP 1980 response code other than 200 OK) MUST be less than 0.001% (1 out 1981 of 100,000 transactions) of attempted transactions. 1983 b. Traffic SHOULD be forwarded at a constant rate (considered as a 1984 constant rate if any deviation of traffic forwarding rate is less 1985 than 5%). 1987 c. Concurrent TCP connections MUST be constant during steady state 1988 and any deviation of concurrent TCP connections SHOULD be less 1989 than 10%.
This confirms the DUT opens and closes TCP connections 1990 at approximately the same rate. 1992 7.7.3.4. Measurement 1994 Inspected Throughput and HTTP Transactions per Second MUST be 1995 reported for each object size. 1997 7.7.4. Test Procedures and Expected Results 1999 The test procedure consists of three major steps: Step 1 ensures the 2000 DUT/SUT is able to reach the performance value (Initial throughput) 2001 and meets the test results validation criteria when it was very 2002 minimally utilized. Step 2 determines the DUT/SUT is able to reach 2003 the target performance value within the test results validation 2004 criteria. Step 3 determines the maximum achievable performance value 2005 within the test results validation criteria. 2007 This test procedure MAY be repeated multiple times with different 2008 IPv4 and IPv6 traffic distribution and HTTPS response object sizes. 2010 7.7.4.1. Step 1: Test Initialization and Qualification 2012 Verify the link status of all connected physical interfaces. All 2013 interfaces are expected to be in "UP" status. 2015 Configure traffic load profile of the test equipment to establish 2016 "Initial throughput" as defined in Section 7.7.3.2. 2018 The traffic load profile SHOULD be defined as described in 2019 Section 4.3.4. The DUT/SUT SHOULD reach the "Initial throughput" 2020 during the sustain phase. Measure all KPIs as defined in 2021 Section 7.7.3.4. 2023 The measured KPIs during the sustain phase MUST meet the test results 2024 validation criteria "a" defined in Section 7.7.3.3. The test results 2025 validation criteria "b" and "c" are OPTIONAL for step 1. 2027 If the KPI metrics do not meet the test results validation criteria, 2028 the test procedure MUST NOT be continued to "Step 2". 2030 7.7.4.2. Step 2: Test Run with Target Objective 2032 Configure test equipment to establish the target objective ("Target 2033 inspected throughput") defined in Section 7.7.3.2.
The test 2034 equipment SHOULD start to measure and record all specified KPIs. 2035 Continue the test until all traffic profile phases are completed. 2037 Within the test results validation criteria, the DUT/SUT is expected 2038 to reach the desired value of the target objective in the sustain 2039 phase. Follow step 3, if the measured value does not meet the target 2040 value or does not fulfill the test results validation criteria. 2042 7.7.4.3. Step 3: Test Iteration 2044 Determine the achievable average inspected throughput within the test 2045 results validation criteria. Final test iteration MUST be performed 2046 for the test duration defined in Section 4.3.4. 2048 7.8. HTTPS Transaction Latency 2050 7.8.1. Objective 2052 Using HTTPS traffic, determine the HTTPS transaction latency when the 2053 DUT/SUT is running with sustainable HTTPS transactions per second 2054 supported by the DUT/SUT under different HTTPS response object sizes. 2056 Scenario 1: The client MUST negotiate HTTPS and close the connection 2057 with FIN immediately after completion of a single transaction (GET 2058 and RESPONSE). 2060 Scenario 2: The client MUST negotiate HTTPS and close the connection 2061 with FIN immediately after completion of 10 transactions (GET and 2062 RESPONSE) within a single TCP connection. 2064 7.8.2. Test Setup 2066 Testbed setup SHOULD be configured as defined in Section 4. Any 2067 specific testbed configuration changes (number of interfaces and 2068 interface type, etc.) MUST be documented. 2070 7.8.3. Test Parameters 2072 In this section, benchmarking test specific parameters SHOULD be 2073 defined. 2075 7.8.3.1. DUT/SUT Configuration Parameters 2077 DUT/SUT parameters MUST conform to the requirements defined in 2078 Section 4.2. Any configuration changes for this specific 2079 benchmarking test MUST be documented. 2081 7.8.3.2.
Test Equipment Configuration Parameters 2083 Test equipment configuration parameters MUST conform to the 2084 requirements defined in Section 4.3. The following parameters MUST 2085 be documented for this benchmarking test: 2087 Client IP address range defined in Section 4.3.1.2 2089 Server IP address range defined in Section 4.3.2.2 2090 Traffic distribution ratio between IPv4 and IPv6 defined in 2091 Section 4.3.1.2 2093 RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3 2095 Target objective for scenario 1: 50% of the connections per second 2096 measured in benchmarking test TCP/HTTPS Connections per second 2097 (Section 7.6) 2099 Target objective for scenario 2: 50% of the inspected throughput 2100 measured in benchmarking test HTTPS Throughput (Section 7.7) 2102 Initial objective for scenario 1: 10% of "Target objective for 2103 scenario 1" 2105 Initial objective for scenario 2: 10% of "Target objective for 2106 scenario 2" 2108 Note: The Initial objectives are not KPIs to report. These values 2109 are configured on the traffic generator and used to perform 2110 Step 1: "Test Initialization and Qualification" described under 2111 Section 7.8.4. 2113 HTTPS transactions per TCP connection: Test scenario 1 with a single 2114 transaction and scenario 2 with 10 transactions 2116 HTTPS with GET request requesting a single object. The RECOMMENDED 2117 object sizes are 1, 16, and 64 KByte. For each test iteration, the 2118 client MUST request a single HTTPS response object size. 2120 7.8.3.3. Test Results Validation Criteria 2122 The following criteria are the test results validation criteria. The 2123 test results validation criteria MUST be monitored during the whole 2124 sustain phase of the traffic load profile. 2126 a. Number of failed application transactions (receiving any HTTP 2127 response code other than 200 OK) MUST be less than 0.001% (1 out 2128 of 100,000 transactions) of attempted transactions. 2130 b.
Number of terminated TCP connections due to unexpected TCP RST 2131 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 2132 connections) of total initiated TCP connections. 2134 c. During the sustain phase, traffic SHOULD be forwarded at a 2135 constant rate (considered as a constant rate if any deviation of 2136 traffic forwarding rate is less than 5%). 2138 d. Concurrent TCP connections MUST be constant during steady state 2139 and any deviation of concurrent TCP connections SHOULD be less 2140 than 10%. This confirms the DUT opens and closes TCP connections 2141 at approximately the same rate. 2143 e. After ramp up, the DUT/SUT MUST achieve the "Target objective" 2144 defined in Section 7.8.3.2 and remain in that state 2145 for the entire test duration (sustain phase). 2147 7.8.3.4. Measurement 2149 TTFB (minimum, average, and maximum) and TTLB (minimum, average, and 2150 maximum) MUST be reported for each object size. 2152 7.8.4. Test Procedures and Expected Results 2154 The test procedure is designed to measure TTFB or TTLB when the DUT/ 2155 SUT is operating close to 50% of its maximum achievable connections 2156 per second or inspected throughput. The test procedure consists of 2157 two major steps: Step 1 ensures the DUT/SUT is able to reach the 2158 initial performance values and meets the test results validation 2159 criteria when it was very minimally utilized. Step 2 measures the 2160 latency values within the test results validation criteria. 2162 This test procedure MAY be repeated multiple times with different IP 2163 types (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic 2164 distribution), HTTPS response object sizes, and single and multiple 2165 transactions per connection scenarios. 2167 7.8.4.1. Step 1: Test Initialization and Qualification 2169 Verify the link status of all connected physical interfaces. All 2170 interfaces are expected to be in "UP" status.
2172 Configure the traffic load profile of the test equipment to establish 2173 the "Initial objective" as defined in Section 7.8.3.2. The traffic 2174 load profile SHOULD be defined as described in Section 4.3.4. 2176 The DUT/SUT SHOULD reach the "Initial objective" before the sustain 2177 phase. The measured KPIs during the sustain phase MUST meet all the 2178 test results validation criteria defined in Section 7.8.3.3. 2180 If the KPI metrics do not meet the test results validation criteria, 2181 the test procedure MUST NOT be continued to "Step 2". 2183 7.8.4.2. Step 2: Test Run with Target Objective 2185 Configure the test equipment to establish the "Target objective" defined in 2186 Section 7.8.3.2. The test equipment SHOULD follow the traffic load 2187 profile definition as described in Section 4.3.4. 2189 The test equipment SHOULD start to measure and record all specified 2190 KPIs. Continue the test until all traffic profile phases are 2191 completed. 2193 Within the test results validation criteria, the DUT/SUT MUST reach 2194 the desired value of the target objective in the sustain phase. 2196 Measure the minimum, average, and maximum values of TTFB and TTLB. 2198 7.9. Concurrent TCP/HTTPS Connection Capacity 2200 7.9.1. Objective 2202 Determine the number of concurrent TCP connections that the DUT/SUT 2203 sustains when using HTTPS traffic. 2205 7.9.2. Test Setup 2207 The testbed setup SHOULD be configured as defined in Section 4. Any 2208 specific testbed configuration changes (number of interfaces and 2209 interface type, etc.) MUST be documented. 2211 7.9.3. Test Parameters 2213 In this section, benchmarking test-specific parameters SHOULD be 2214 defined. 2216 7.9.3.1. DUT/SUT Configuration Parameters 2218 DUT/SUT parameters MUST conform to the requirements defined in 2219 Section 4.2. Any configuration changes for this specific 2220 benchmarking test MUST be documented. 2222 7.9.3.2.
Test Equipment Configuration Parameters 2224 Test equipment configuration parameters MUST conform to the 2225 requirements defined in Section 4.3. The following parameters MUST 2226 be documented for this benchmarking test: 2228 Client IP address range defined in Section 4.3.1.2 2230 Server IP address range defined in Section 4.3.2.2 2231 Traffic distribution ratio between IPv4 and IPv6 defined in 2232 Section 4.3.1.2 2234 RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3 2236 Target concurrent connections: Initial value from the product 2237 datasheet or a value defined based on the requirements of a specific 2238 deployment scenario. 2240 Initial concurrent connections: 10% of "Target concurrent 2241 connections". Note: The initial concurrent connections are not a KPI to 2242 report. This value is configured on the traffic generator and 2243 used to perform Step 1: "Test Initialization and Qualification" 2244 described in Section 7.9.4. 2246 Connections per second during ramp up phase: 50% of the maximum 2247 connections per second measured in the benchmarking test TCP/HTTPS 2248 Connections per Second (Section 7.6) 2250 Ramp up time (in traffic load profile for "Target concurrent 2251 connections"): "Target concurrent connections" / "Maximum 2252 connections per second during ramp up phase" 2254 Ramp up time (in traffic load profile for "Initial concurrent 2255 connections"): "Initial concurrent connections" / "Maximum 2256 connections per second during ramp up phase" 2258 The client MUST perform HTTPS transactions with persistence, and each 2259 client can open multiple concurrent TCP connections per server 2260 endpoint IP. 2262 Each client sends 10 GET requests requesting 1 KByte HTTPS response 2263 objects in the same TCP connection (10 transactions/TCP connection), 2264 and the delay (think time) between each transaction MUST be X 2265 seconds.
2267 X = ("Ramp up time" + "steady state time") / 10 2269 The established connections SHOULD remain open until the ramp down 2270 phase of the test. During the ramp down phase, all connections 2271 SHOULD be successfully closed with FIN. 2273 7.9.3.3. Test Results Validation Criteria 2275 The following test results validation criteria MUST be monitored 2276 during the whole 2277 sustain phase of the traffic load profile. 2279 a. Number of failed application transactions (receiving any HTTP 2280 response code other than 200 OK) MUST be less than 0.001% (1 out 2281 of 100,000 transactions) of total attempted transactions. 2283 b. Number of terminated TCP connections due to unexpected TCP RST 2284 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 2285 connections) of total initiated TCP connections. 2287 c. During the sustain phase, traffic SHOULD be forwarded at a 2288 constant rate (considered as a constant rate if any deviation of 2289 the traffic forwarding rate is less than 5%). 2291 7.9.3.4. Measurement 2293 Average Concurrent TCP Connections MUST be reported for this 2294 benchmarking test. 2296 7.9.4. Test Procedures and Expected Results 2298 The test procedure is designed to measure the concurrent TCP 2299 connection capacity of the DUT/SUT during the sustain phase of the 2300 traffic load profile. The test procedure consists of three major 2301 steps: Step 1 ensures that the DUT/SUT is able to reach the performance 2302 value ("Initial concurrent connections") and meets the test results 2303 validation criteria when it is only minimally utilized. Step 2 2304 determines whether the DUT/SUT is able to reach the target performance value 2305 within the test results validation criteria. Step 3 determines the 2306 maximum achievable performance value within the test results 2307 validation criteria. 2309 This test procedure MAY be repeated multiple times with different 2310 IPv4 and IPv6 traffic distributions. 2312 7.9.4.1.
Step 1: Test Initialization and Qualification 2314 Verify the link status of all connected physical interfaces. All 2315 interfaces are expected to be in "UP" status. 2317 Configure the test equipment to establish the "Initial concurrent TCP 2318 connections" defined in Section 7.9.3.2. Except for the ramp up time, the 2319 traffic load profile SHOULD be defined as described in Section 4.3.4. 2321 During the sustain phase, the DUT/SUT SHOULD reach the "Initial 2322 concurrent TCP connections". The measured KPIs during the sustain 2323 phase MUST meet the test results validation criteria "a" and "b" 2324 defined in Section 7.9.3.3. 2326 If the KPI metrics do not meet the test results validation criteria, 2327 the test procedure MUST NOT be continued to "Step 2". 2329 7.9.4.2. Step 2: Test Run with Target Objective 2331 Configure the test equipment to establish the target objective ("Target 2332 concurrent TCP connections"). The test equipment SHOULD follow the 2333 traffic load profile definition (except ramp up time) as described in 2334 Section 4.3.4. 2336 During the ramp up and sustain phases, the other KPIs, such as 2337 inspected throughput, TCP connections per second, and application 2338 transactions per second, MUST NOT reach the maximum value that the 2339 DUT/SUT can support. 2341 The test equipment SHOULD start to measure and record the KPIs defined in 2342 Section 7.9.3.4. Continue the test until all traffic profile phases 2343 are completed. 2345 Within the test results validation criteria, the DUT/SUT is expected 2346 to reach the desired value of the target objective in the sustain 2347 phase. Follow Step 3 if the measured value does not meet the target 2348 value or does not fulfill the test results validation criteria. 2350 7.9.4.3. Step 3: Test Iteration 2352 Determine the achievable concurrent TCP connections within the test 2353 results validation criteria. 2355 8. IANA Considerations 2357 This document makes no specific request of IANA.
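As a practical aid (not a requirement of this document), test tooling can verify that configured client and server addresses fall inside the special-purpose benchmarking address blocks, 198.18.0.0/15 for IPv4 [RFC2544] and 2001:2::/48 for IPv6 [RFC5180]; a minimal sketch using Python's standard ipaddress module:

```python
import ipaddress

# Benchmarking blocks registered for test use (RFC 2544 / RFC 5180).
V4_BENCH = ipaddress.ip_network("198.18.0.0/15")
V6_BENCH = ipaddress.ip_network("2001:2::/48")

def is_benchmarking_address(addr):
    """Return True if addr falls inside a benchmarking block."""
    ip = ipaddress.ip_address(addr)
    net = V4_BENCH if ip.version == 4 else V6_BENCH
    return ip in net

print(is_benchmarking_address("198.18.1.10"))  # True
print(is_benchmarking_address("192.0.2.1"))    # False (documentation block)
```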
2359 The IANA has assigned IPv4 and IPv6 address blocks in [RFC6890] that 2360 have been registered for special purposes. The IPv6 address block 2361 2001:2::/48 has been allocated for the purpose of IPv6 Benchmarking 2362 [RFC5180], and the IPv4 address block 198.18.0.0/15 has been allocated 2363 for the purpose of IPv4 Benchmarking [RFC2544]. This assignment was 2364 made to minimize the chance of conflict in case a testing device were 2365 to be accidentally connected to part of the Internet. 2367 9. Security Considerations 2369 The primary goal of this document is to provide benchmarking 2370 terminology and methodology for next-generation network security 2371 devices for use in an isolated laboratory test environment. However, 2372 readers should be aware that there is some overlap between 2373 performance and security issues. Specifically, the optimal 2374 configuration for network security device performance may not be the 2375 most secure, and vice versa. The cipher suites recommended in this 2376 document are for test purposes only. The cipher suite recommendation 2377 for a real deployment is outside the scope of this document. 2379 10. Contributors 2381 The following individuals contributed significantly to the creation 2382 of this document: 2384 Alex Samonte, Amritam Putatunda, Aria Eslambolchizadeh, Chao Guo, 2385 Chris Brown, Cory Ford, David DeSanto, Jurrie Van Den Breekel, 2386 Michelle Rhines, Mike Jack, Ryan Liles, Samaresh Nair, Stephen 2387 Goudreault, Tim Carlin, and Tim Otto. 2389 11. Acknowledgements 2391 The authors wish to acknowledge the members of NetSecOPEN for their 2392 participation in the creation of this document. Additionally, the 2393 following members need to be acknowledged: 2395 Anand Vijayan, Chris Marshall, Jay Lindenauer, Michael Shannon, Mike 2396 Deichman, Ryan Riese, and Toulnay Orkun. 2398 12. References 2400 12.1.
Normative References 2402 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 2403 Requirement Levels", BCP 14, RFC 2119, 2404 DOI 10.17487/RFC2119, March 1997, 2405 <https://www.rfc-editor.org/info/rfc2119>. 2407 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2408 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 2409 May 2017, <https://www.rfc-editor.org/info/rfc8174>. 2411 12.2. Informative References 2413 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 2414 Network Interconnect Devices", RFC 2544, 2415 DOI 10.17487/RFC2544, March 1999, 2416 <https://www.rfc-editor.org/info/rfc2544>. 2418 [RFC2647] Newman, D., "Benchmarking Terminology for Firewall 2419 Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999, 2420 <https://www.rfc-editor.org/info/rfc2647>. 2422 [RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin, 2423 "Benchmarking Methodology for Firewall Performance", 2424 RFC 3511, DOI 10.17487/RFC3511, April 2003, 2425 <https://www.rfc-editor.org/info/rfc3511>. 2427 [RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D. 2428 Dugatkin, "IPv6 Benchmarking Methodology for Network 2429 Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May 2430 2008, <https://www.rfc-editor.org/info/rfc5180>. 2432 [RFC6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton, 2433 "Applicability Statement for RFC 2544: Use on Production 2434 Networks Considered Harmful", RFC 6815, 2435 DOI 10.17487/RFC6815, November 2012, 2436 <https://www.rfc-editor.org/info/rfc6815>. 2438 [RFC6890] Cotton, M., Vegoda, L., Bonica, R., Ed., and B. Haberman, 2439 "Special-Purpose IP Address Registries", BCP 153, 2440 RFC 6890, DOI 10.17487/RFC6890, April 2013, 2441 <https://www.rfc-editor.org/info/rfc6890>. 2443 [RFC7230] Fielding, R., Ed. and J. Reschke, Ed., "Hypertext Transfer 2444 Protocol (HTTP/1.1): Message Syntax and Routing", 2445 RFC 7230, DOI 10.17487/RFC7230, June 2014, 2446 <https://www.rfc-editor.org/info/rfc7230>. 2448 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 2449 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 2450 <https://www.rfc-editor.org/info/rfc8446>. 2452 [RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based 2453 Multiplexed and Secure Transport", RFC 9000, 2454 DOI 10.17487/RFC9000, May 2021, 2455 <https://www.rfc-editor.org/info/rfc9000>. 2457 Appendix A.
Test Methodology - Security Effectiveness Evaluation 2458 A.1. Test Objective 2460 This test methodology verifies that the DUT/SUT is able to detect, 2461 prevent, and report vulnerabilities. 2463 In this test, background test traffic will be generated to utilize 2464 the DUT/SUT. In parallel, the CVEs will be sent to the DUT/SUT in 2465 both encrypted and clear text payload formats using a traffic 2466 generator. The selection of the CVEs is described in Section 4.2.1. The following KPIs are measured in this test: 2468 * Number of blocked CVEs 2470 * Number of bypassed (nonblocked) CVEs 2472 * Background traffic performance (verify if the background traffic 2473 is impacted while sending CVEs toward the DUT/SUT) 2475 * Accuracy of DUT/SUT statistics in terms of vulnerability 2476 reporting 2478 A.2. Testbed Setup 2480 The same testbed MUST be used for the security effectiveness test as 2481 well as for the benchmarking test cases defined in Section 7. 2483 A.3. Test Parameters 2485 In this section, the benchmarking test-specific parameters SHOULD be 2486 defined. 2488 A.3.1. DUT/SUT Configuration Parameters 2490 DUT/SUT configuration parameters MUST conform to the requirements 2491 defined in Section 4.2. The same DUT configuration MUST be used for 2492 the security effectiveness test as well as for the benchmarking test 2493 cases defined in Section 7. The DUT/SUT MUST be configured in inline 2494 mode, all detected attack traffic MUST be dropped, and the session 2495 SHOULD be reset. 2497 A.3.2. Test Equipment Configuration Parameters 2499 Test equipment configuration parameters MUST conform to the 2500 requirements defined in Section 4.3. The same client and server IP 2501 ranges MUST be configured as used in the benchmarking test cases.
In 2502 addition, the following parameters MUST be documented for this 2503 benchmarking test: 2505 * Background Traffic: 45% of the maximum HTTP throughput and 45% of 2506 the maximum HTTPS throughput supported by the DUT/SUT (measured with 2507 object size 64 KByte in the benchmarking tests "HTTP(S) 2508 Throughput" defined in Section 7.3 and Section 7.7). 2510 * RECOMMENDED CVE traffic transmission rate: 10 CVEs per second 2512 * It is RECOMMENDED to generate each CVE multiple times 2513 (sequentially) at 10 CVEs per second 2515 * Ciphers and keys for the encrypted CVE traffic MUST use the same 2516 cipher configured for the HTTPS traffic related benchmarking tests 2517 (Section 7.6 - Section 7.9) 2519 A.4. Test Results Validation Criteria 2521 The following test results validation criteria MUST be monitored 2522 during the whole 2523 test duration. 2525 a. Number of failed application transactions in the background 2526 traffic MUST be less than 0.01% of attempted transactions. 2528 b. Number of terminated TCP connections of the background traffic 2529 (due to unexpected TCP RST sent by DUT/SUT) MUST be less than 2530 0.01% of total initiated TCP connections in the background 2531 traffic. 2533 c. During the sustain phase, traffic SHOULD be forwarded at a 2534 constant rate (considered as a constant rate if any deviation of 2535 the traffic forwarding rate is less than 5%). 2537 d. False positives MUST NOT occur in the background traffic. 2539 A.5.
Measurement 2541 The following KPI metrics MUST be reported for this test scenario: 2543 Mandatory KPIs: 2545 * Blocked CVEs: SHOULD be represented in the following ways: 2547 - Number of blocked CVEs out of total CVEs 2549 - Percentage of blocked CVEs 2551 * Unblocked CVEs: SHOULD be represented in the following ways: 2553 - Number of unblocked CVEs out of total CVEs 2555 - Percentage of unblocked CVEs 2557 * Background traffic behavior: SHOULD be represented in one of the 2558 following ways: 2560 - No impact: Considered as "no impact" if any deviation of the 2561 traffic forwarding rate is less than or equal to 5% (constant 2562 rate) 2564 - Minor impact: Considered as "minor impact" if any deviation of 2565 the traffic forwarding rate is greater than 5% and less than or 2566 equal to 10% (i.e., small spikes) 2568 - Heavily impacted: Considered as "heavily impacted" if any 2569 deviation of the traffic forwarding rate is greater than 10% (i.e., 2570 large spikes) or if the background HTTP(S) throughput is reduced by 2571 more than 10% 2573 * DUT/SUT reporting accuracy: The DUT/SUT MUST report all detected 2574 vulnerabilities. 2576 Optional KPIs: 2578 * List of unblocked CVEs 2580 A.6. Test Procedures and Expected Results 2582 The test procedure is designed to measure the security effectiveness 2583 of the DUT/SUT during the sustain phase of the traffic load profile. 2584 The test procedure consists of two major steps. This test procedure 2585 MAY be repeated multiple times with different IPv4 and IPv6 traffic 2586 distributions. 2588 A.6.1. Step 1: Background Traffic 2590 Generate background traffic at the transmission rate defined in 2591 Appendix A.3.2. 2593 The DUT/SUT MUST reach the target objective (HTTP(S) throughput) in the 2594 sustain phase. The measured KPIs during the sustain phase MUST meet 2595 all the test results validation criteria defined in Appendix A.4.
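The background traffic behavior classification defined in Appendix A.5 reduces to simple threshold checks. The following sketch is illustrative only; the function and parameter names are hypothetical:

```python
def classify_impact(rate_deviation_pct, tput_reduction_pct=0.0):
    """Classify background traffic behavior per Appendix A.5.

    rate_deviation_pct: worst-case deviation of the traffic
        forwarding rate during the test, in percent.
    tput_reduction_pct: reduction of the background HTTP(S)
        throughput, in percent.
    """
    if rate_deviation_pct > 10.0 or tput_reduction_pct > 10.0:
        return "heavily impacted"  # large spikes or >10% throughput loss
    if rate_deviation_pct > 5.0:
        return "minor impact"      # small spikes (5% < deviation <= 10%)
    return "no impact"             # deviation <= 5% (constant rate)

print(classify_impact(3.0))   # no impact
print(classify_impact(7.5))   # minor impact
print(classify_impact(12.0))  # heavily impacted
```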
2597 If the KPI metrics do not meet the acceptance criteria, the test 2598 procedure MUST NOT be continued to "Step 2". 2600 A.6.2. Step 2: CVE Emulation 2602 While generating background traffic (in the sustain phase), send the CVE 2603 traffic as defined in the parameters section (Appendix A.3.2). 2605 The test equipment SHOULD start to measure and record all specified 2606 KPIs. Continue the test until all CVEs are sent. 2608 The measured KPIs MUST meet all the test results validation criteria 2609 defined in Appendix A.4. 2611 In addition, the DUT/SUT SHOULD report the vulnerabilities correctly. 2613 Appendix B. DUT/SUT Classification 2615 This document aims to classify the DUT/SUT into four different 2616 categories based on its maximum supported firewall throughput 2617 performance number defined in the vendor datasheet. This 2618 classification MAY help users determine a specific configuration 2619 scale (e.g., number of ACL entries), traffic profiles, and attack 2620 traffic profiles, scaling those proportionally to the DUT/SUT sizing 2621 category. 2623 The four different categories are Extra Small (XS), Small (S), Medium 2624 (M), and Large (L). The RECOMMENDED throughput values for these 2625 categories are: 2627 Extra Small (XS) - Supported throughput less than or equal to 1 Gbit/s 2629 Small (S) - Supported throughput greater than 1 Gbit/s and less than 2630 or equal to 5 Gbit/s 2632 Medium (M) - Supported throughput greater than 5 Gbit/s and less than 2633 or equal to 10 Gbit/s 2635 Large (L) - Supported throughput greater than 10 Gbit/s 2637 Authors' Addresses 2639 Balamuhunthan Balarajah 2640 Berlin 2641 Germany 2643 Email: bm.balarajah@gmail.com 2644 Carsten Rossenhoevel 2645 EANTC AG 2646 Salzufer 14 2647 10587 Berlin 2648 Germany 2650 Email: cross@eantc.de 2652 Brian Monkman 2653 NetSecOPEN 2654 417 Independence Court 2655 Mechanicsburg, PA 17050 2656 United States of America 2658 Email: bmonkman@netsecopen.org