Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Obsoletes: 3511 (if approved)                            C. Rossenhoevel
Intended status: Informational                                  EANTC AG
Expires: 5 March 2022                                         B. Monkman
                                                              NetSecOPEN
                                                          September 2021


    Benchmarking Methodology for Network Security Device Performance
                  draft-ietf-bmwg-ngfw-performance-10

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), next-generation intrusion prevention systems
   (NGIPS), and unified threat management (UTM) implementations.  This
   document aims to improve the applicability, reproducibility, and
   transparency of benchmarks and to align the test methodology with
   today's increasingly complex layer 7 security-centric network
   application use cases.  The main areas covered in this document are
   test terminology, test configuration parameters, and benchmarking
   methodology for NGFW and NGIPS.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 5 March 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents (https://trustee.ietf.org/
   license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
     4.1.  Testbed Configuration
     4.2.  DUT/SUT Configuration
       4.2.1.  Security Effectiveness Configuration
     4.3.  Test Equipment Configuration
       4.3.1.  Client Configuration
       4.3.2.  Backend Server Configuration
       4.3.3.  Traffic Flow Definition
       4.3.4.  Traffic Load Profile
   5.  Testbed Considerations
   6.  Reporting
     6.1.  Introduction
     6.2.  Detailed Test Results
     6.3.  Benchmarks and Key Performance Indicators
   7.  Benchmarking Tests
     7.1.  Throughput Performance with Application Traffic Mix
       7.1.1.  Objective
       7.1.2.  Test Setup
       7.1.3.  Test Parameters
       7.1.4.  Test Procedures and Expected Results
     7.2.  TCP/HTTP Connections Per Second
       7.2.1.  Objective
       7.2.2.  Test Setup
       7.2.3.  Test Parameters
       7.2.4.  Test Procedures and Expected Results
     7.3.  HTTP Throughput
       7.3.1.  Objective
       7.3.2.  Test Setup
       7.3.3.  Test Parameters
       7.3.4.  Test Procedures and Expected Results
     7.4.  HTTP Transaction Latency
       7.4.1.  Objective
       7.4.2.  Test Setup
       7.4.3.  Test Parameters
       7.4.4.  Test Procedures and Expected Results
     7.5.  Concurrent TCP/HTTP Connection Capacity
       7.5.1.  Objective
       7.5.2.  Test Setup
       7.5.3.  Test Parameters
       7.5.4.  Test Procedures and Expected Results
     7.6.  TCP/HTTPS Connections per Second
       7.6.1.  Objective
       7.6.2.  Test Setup
       7.6.3.  Test Parameters
       7.6.4.  Test Procedures and Expected Results
     7.7.  HTTPS Throughput
       7.7.1.  Objective
       7.7.2.  Test Setup
       7.7.3.  Test Parameters
       7.7.4.  Test Procedures and Expected Results
     7.8.  HTTPS Transaction Latency
       7.8.1.  Objective
       7.8.2.  Test Setup
       7.8.3.  Test Parameters
       7.8.4.  Test Procedures and Expected Results
     7.9.  Concurrent TCP/HTTPS Connection Capacity
       7.9.1.  Objective
       7.9.2.  Test Setup
       7.9.3.  Test Parameters
       7.9.4.  Test Procedures and Expected Results
   8.  IANA Considerations
   9.  Security Considerations
   10. Contributors
   11. Acknowledgements
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix A.  Test Methodology - Security Effectiveness Evaluation
     A.1.  Test Objective
     A.2.  Testbed Setup
     A.3.  Test Parameters
       A.3.1.  DUT/SUT Configuration Parameters
       A.3.2.  Test Equipment Configuration Parameters
     A.4.  Test Results Validation Criteria
     A.5.  Measurement
     A.6.  Test Procedures and Expected Results
       A.6.1.  Step 1: Background Traffic
       A.6.2.  Step 2: CVE Emulation
   Appendix B.  DUT/SUT Classification
   Authors' Addresses

1.  Introduction

   Eighteen years have passed since the IETF initially recommended test
   methodology and terminology for firewalls ([RFC3511]).  The
   requirements for network security element performance and
   effectiveness have increased tremendously since then.
   Security function implementations have evolved to more advanced
   areas and have diversified into intrusion detection and prevention,
   threat management, analysis of encrypted traffic, etc.  In an
   industry of growing importance, well-defined and reproducible key
   performance indicators (KPIs) are increasingly needed as they enable
   fair and reasonable comparison of network security functions.  All
   these reasons have led to the creation of a new next-generation
   network security device benchmarking document, and this document
   obsoletes [RFC3511].

2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for modern and next-generation network security devices that are
   configured in Active ("Inline") mode.  It covers the validation of
   security effectiveness configurations of network security devices,
   followed by performance benchmark testing.  This document focuses on
   advanced, realistic, and reproducible testing methods.
   Additionally, it describes testbed environments, test tool
   requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document applies to all benchmarking
   tests described in Section 7.  The test setup MUST be contained
   within an Isolated Test Environment (see Section 3 of [RFC6815]).

4.1.  Testbed Configuration

   Testbed configuration MUST ensure that any performance implications
   that are discovered during the benchmark testing are not due to the
   inherent physical network limitations such as the number of physical
   links and forwarding performance capabilities (throughput and
   latency) of the network devices in the testbed.  For this reason,
   this document recommends avoiding external devices such as switches
   and routers in the testbed wherever possible.

   In some deployment scenarios, the network security devices (Device
   Under Test/System Under Test) are connected to routers and switches,
   which will reduce the number of entries in MAC or ARP tables of the
   Device Under Test/System Under Test (DUT/SUT).  If MAC or ARP tables
   have many entries, this may impact the actual DUT/SUT performance
   due to MAC and ARP/ND (Neighbor Discovery) table lookup processes.
   This document also recommends using test equipment with the
   capability of emulating layer 3 routing functionality instead of
   adding external routers in the testbed.

   The testbed setup Option 1 (Figure 1) is the RECOMMENDED testbed
   setup for the benchmarking test.
   +-----------------------+                   +-----------------------+
   | +-------------------+ |   +-----------+   | +-------------------+ |
   | | Emulated Router(s)| |   |           |   | | Emulated Router(s)| |
   | |    (Optional)     | +---+  DUT/SUT  +---+ |    (Optional)     | |
   | +-------------------+ |   |           |   | +-------------------+ |
   | +-------------------+ |   +-----------+   | +-------------------+ |
   | |      Clients      | |                   | |      Servers      | |
   | +-------------------+ |                   | +-------------------+ |
   |                       |                   |                       |
   |    Test Equipment     |                   |    Test Equipment     |
   +-----------------------+                   +-----------------------+

                    Figure 1: Testbed Setup - Option 1

   If the test equipment used is not capable of emulating layer 3
   routing functionality or if the number of used ports is mismatched
   between test equipment and the DUT/SUT (need for test equipment port
   aggregation), the test setup can be configured as shown in Figure 2.

    +-------------------+      +-----------+      +--------------------+
    |Aggregation Switch/|      |           |      | Aggregation Switch/|
    |      Router       +------+  DUT/SUT  +------+       Router       |
    |                   |      |           |      |                    |
    +----------+--------+      +-----------+      +--------+-----------+
               |                                           |
               |                                           |
   +-----------+-----------+                  +-----------+-----------+
   |                       |                  |                       |
   | +-------------------+ |                  | +-------------------+ |
   | | Emulated Router(s)| |                  | | Emulated Router(s)| |
   | |    (Optional)     | |                  | |    (Optional)     | |
   | +-------------------+ |                  | +-------------------+ |
   | +-------------------+ |                  | +-------------------+ |
   | |      Clients      | |                  | |      Servers      | |
   | +-------------------+ |                  | +-------------------+ |
   |                       |                  |                       |
   |    Test Equipment     |                  |    Test Equipment     |
   +-----------------------+                  +-----------------------+

                    Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.  Since each DUT/SUT will have its own
   unique configuration, users SHOULD configure their device with the
   same parameters and security features that would be used in the
   actual deployment of the device or a typical deployment in order to
   achieve maximum network security coverage.  The DUT/SUT MUST be
   configured in "Inline" mode so that the traffic is actively
   inspected by the DUT/SUT.  Also, "Fail-Open" behavior MUST be
   disabled on the DUT/SUT.

   Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL
   network security features for NGFW and NGIPS, respectively.  The
   selected security features SHOULD be consistently enabled on the
   DUT/SUT for all benchmarking tests described in Section 7.

   To improve repeatability, a summary of the DUT/SUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.
            +------------------------+
            |          NGFW          |
   +----------------+-------------+----------+
   |                |             |          |
   |DUT/SUT Features| RECOMMENDED | OPTIONAL |
   |                |             |          |
   +----------------+-------------+----------+
   |SSL Inspection  |      x      |          |
   +----------------+-------------+----------+
   |IDS/IPS         |      x      |          |
   +----------------+-------------+----------+
   |Anti-Spyware    |      x      |          |
   +----------------+-------------+----------+
   |Anti-Virus      |      x      |          |
   +----------------+-------------+----------+
   |Anti-Botnet     |      x      |          |
   +----------------+-------------+----------+
   |Web Filtering   |             |    x     |
   +----------------+-------------+----------+
   |Data Loss       |             |          |
   |Protection (DLP)|             |    x     |
   +----------------+-------------+----------+
   |DDoS            |             |    x     |
   +----------------+-------------+----------+
   |Certificate     |             |    x     |
   |Validation      |             |          |
   +----------------+-------------+----------+
   |Logging and     |      x      |          |
   |Reporting       |             |          |
   +----------------+-------------+----------+
   |Application     |      x      |          |
   |Identification  |             |          |
   +----------------+-------------+----------+

               Figure 3: Table 1: NGFW Security Features

            +------------------------+
            |         NGIPS          |
   +----------------+-------------+----------+
   |                |             |          |
   |DUT/SUT Features| RECOMMENDED | OPTIONAL |
   |                |             |          |
   +----------------+-------------+----------+
   |SSL Inspection  |      x      |          |
   +----------------+-------------+----------+
   |Anti-Malware    |      x      |          |
   +----------------+-------------+----------+
   |Anti-Spyware    |      x      |          |
   +----------------+-------------+----------+
   |Anti-Botnet     |      x      |          |
   +----------------+-------------+----------+
   |Logging and     |      x      |          |
   |Reporting       |             |          |
   +----------------+-------------+----------+
   |Application     |      x      |          |
   |Identification  |             |          |
   +----------------+-------------+----------+
   |Deep Packet     |      x      |          |
   |Inspection      |             |          |
   +----------------+-------------+----------+
   |Anti-Evasion    |      x      |          |
   +----------------+-------------+----------+

              Figure 4: Table 2: NGIPS Security Features

   The following table provides a brief description of the security
   features.

   +------------------+------------------------------------------------+
   | DUT/SUT Features | Description                                    |
   +------------------+------------------------------------------------+
   | SSL Inspection   | DUT/SUT intercepts and decrypts inbound HTTPS  |
   |                  | traffic between servers and clients.  Once the |
   |                  | content inspection has been completed, DUT/SUT |
   |                  | encrypts the HTTPS traffic with ciphers and    |
   |                  | keys used by the clients and servers.          |
   +------------------+------------------------------------------------+
   | IDS/IPS          | DUT/SUT detects and blocks exploits targeting  |
   |                  | known and unknown vulnerabilities across the   |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+
   | Anti-Malware     | DUT/SUT detects and prevents the transmission  |
   |                  | of malicious executable code and any           |
   |                  | associated communications across the monitored |
   |                  | network.  This includes data exfiltration as   |
   |                  | well as command and control channels.          |
   +------------------+------------------------------------------------+
   | Anti-Spyware     | Anti-Spyware is a subcategory of Anti-Malware. |
   |                  | Spyware transmits information without the      |
   |                  | user's knowledge or permission.  DUT/SUT       |
   |                  | detects and blocks initial infection or        |
   |                  | transmission of data.                          |
   +------------------+------------------------------------------------+
   | Anti-Botnet      | DUT/SUT detects traffic to or from botnets.    |
   +------------------+------------------------------------------------+
   | Anti-Evasion     | DUT/SUT detects and mitigates attacks that     |
   |                  | have been obfuscated in some manner.           |
   +------------------+------------------------------------------------+
   | Web Filtering    | DUT/SUT detects and blocks malicious websites, |
   |                  | including defined classifications of websites, |
   |                  | across the monitored network.                  |
   +------------------+------------------------------------------------+
   | DLP              | DUT/SUT detects and prevents data breaches and |
   |                  | data exfiltration, or it detects and blocks    |
   |                  | the transmission of sensitive data across the  |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+
   | Certificate      | DUT/SUT validates certificates used in         |
   | Validation       | encrypted communications across the monitored  |
   |                  | network.                                       |
   +------------------+------------------------------------------------+
   | Logging and      | DUT/SUT logs and reports all traffic at the    |
   | Reporting        | flow level across the monitored network.       |
   +------------------+------------------------------------------------+
   | Application      | DUT/SUT detects known applications as defined  |
   | Identification   | within the traffic mix selected across the     |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+

               Figure 5: Table 3: Security Feature Description

   Below is a summary of the DUT/SUT configuration:

   *  DUT/SUT MUST be configured in "inline" mode.

   *  "Fail-Open" behavior MUST be disabled.

   *  All RECOMMENDED security features are enabled.

   *  Logging SHOULD be enabled.  The DUT/SUT SHOULD log all traffic at
      the flow level.  Logging to an external device is permissible.

   *  Geographical location filtering and Application Identification
      and Control SHOULD be configured to trigger based on a site or
      application from the defined traffic mix.

   In addition, a realistic number of access control rules (ACL) SHOULD
   be configured on the DUT/SUT where ACLs are configurable and
   reasonable based on the deployment scenario.  This document
   determines the number of access policy rules for four different
   classes of DUT/SUT: Extra Small (XS), Small (S), Medium (M), and
   Large (L).  A sample DUT/SUT classification is described in
   Appendix B.

   The Access Control Rules (ACL) defined in Table 4 MUST be configured
   from top to bottom in the correct order as shown in the table.  This
   is because the ACL types are listed in decreasing order of
   specificity, with "block" rules first, followed by "allow" rules,
   representing a typical ACL-based security policy.  The ACL entries
   SHOULD be configured with IP subnets routable by the DUT/SUT.
   (Note: There will be differences between how security vendors
   implement ACL decision making.)  The configured ACL MUST NOT block
   the security and measurement traffic used for the benchmarking
   tests.
                                                       +---------------+
                                                       |    DUT/SUT    |
                                                       | Classification|
                                                       |    # Rules    |
   +-----------+-----------+--------------------+------+---+---+---+---+
   |           |   Match   |                    |      |   |   |   |   |
   | Rules Type| Criteria  |    Description     |Action| XS| S | M | L |
   +-------------------------------------------------------------------+
   |Application|Application| Any application    | block| 5 | 10| 20| 50|
   |layer      |           | not included in    |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Transport  |SRC IP and | Any SRC IP subnet  | block| 25| 50|100|250|
   |layer      |TCP/UDP    | used and any DST   |      |   |   |   |   |
   |           |DST ports  | ports not used in  |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |IP layer   |SRC/DST IP | Any SRC/DST IP     | block| 25| 50|100|250|
   |           |           | subnet not used    |      |   |   |   |   |
   |           |           | in the measurement |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Application|Application| Half of the        | allow| 10| 10| 10| 10|
   |layer      |           | applications       |      |   |   |   |   |
   |           |           | included in the    |      |   |   |   |   |
   |           |           | measurement traffic|      |   |   |   |   |
   |           |           |(see the note below)|      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Transport  |SRC IP and | Half of the SRC    | allow| >1| >1| >1| >1|
   |layer      |TCP/UDP    | IPs used and any   |      |   |   |   |   |
   |           |DST ports  | DST ports used in  |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   |           |           | (one rule per      |      |   |   |   |   |
   |           |           | subnet)            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |IP layer   |SRC IP     | The rest of the    | allow| >1| >1| >1| >1|
   |           |           | SRC IP subnet      |      |   |   |   |   |
   |           |           | range used in the  |      |   |   |   |   |
   |           |           | measurement        |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   |           |           | (one rule per      |      |   |   |   |   |
   |           |           | subnet)            |      |   |   |   |   |
   +-----------+-----------+--------------------+------+---+---+---+---+

                   Figure 6: Table 4: DUT/SUT Access List

   Note: If half of the applications included in the measurement
   traffic amounts to fewer than 10, the missing number of ACL entries
   (dummy rules) can be configured for any application traffic not
   included in the measurement traffic.

4.2.1.  Security Effectiveness Configuration

   The security features (defined in Table 1 and Table 2) of the
   DUT/SUT MUST be configured effectively to detect, prevent, and
   report the defined security vulnerability sets.  This section
   defines the selection of the security vulnerability sets from the
   Common Vulnerabilities and Exposures (CVE) list for the testing.
   The vulnerability set SHOULD reflect a minimum of 500 CVEs from no
   older than 10 calendar years to the current year.  These CVEs SHOULD
   be selected with a focus on in-use software commonly found in
   business applications, with a Common Vulnerability Scoring System
   (CVSS) Severity of High (7-10).

   This document is primarily focused on performance benchmarking.
   However, it is RECOMMENDED to validate the security features
   configuration of the DUT/SUT by evaluating the security
   effectiveness as a prerequisite for the performance benchmarking
   tests defined in Section 7.
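   As a non-normative illustration, the CVE-set selection described
   above could be automated along the following lines.  This is a
   minimal sketch under stated assumptions: the feed file name and its
   field names are hypothetical, not defined by this document.

      import datetime
      import json

      MIN_CVSS = 7.0       # CVSS Severity of High (7-10)
      YEARS_BACK = 10      # no older than 10 calendar years
      MIN_SET_SIZE = 500   # minimum RECOMMENDED vulnerability set size

      def select_vulnerability_set(path="cve_feed.json"):
          # The feed is assumed to be a JSON list of objects carrying
          # "id", "year", and "cvss" fields (hypothetical format).
          cutoff_year = datetime.date.today().year - YEARS_BACK
          with open(path) as f:
              cves = json.load(f)
          selected = [c for c in cves
                      if c["year"] >= cutoff_year
                      and c["cvss"] >= MIN_CVSS]
          if len(selected) < MIN_SET_SIZE:
              raise ValueError("fewer than 500 qualifying CVEs")
          return selected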
   In case the benchmarking tests are performed without evaluating
   security effectiveness, the test report MUST explain the
   implications of this.  The methodology for evaluating security
   effectiveness is defined in Appendix A.

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters thereby influence the
   traffic flows which will be offered and impact performance
   measurements.

   This section specifies common test equipment configuration
   parameters applicable for all benchmarking tests defined in
   Section 7.  Any benchmarking test specific parameters are described
   under the test setup section of each benchmarking test individually.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the RECOMMENDED values for certain parameters.  The values
   are the defaults currently used in most client operating systems.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a congestion control algorithm at client
   and server endpoints.  The IPv4 and IPv6 Maximum Segment Size (MSS)
   SHOULD be set to 1460 bytes and 1440 bytes, respectively, and the TX
   and RX initial receive windows SHOULD be set to 64 KByte.  The
   client initial congestion window SHOULD NOT exceed 10 times the MSS.
   Delayed ACKs are permitted, and the maximum client delayed ACK
   SHOULD NOT exceed 10 times the MSS before a forced ACK.  Up to three
   retries SHOULD be allowed before a timeout event is declared.  All
   traffic MUST set the TCP PSH flag to high.  The source ports SHOULD
   be in the range of 1024 - 65535.  Internal timeout SHOULD be
   dynamically scalable per RFC 793.  The client SHOULD initiate and
   close TCP connections.  The TCP connection MUST be initiated via a
   TCP three-way handshake (SYN, SYN/ACK, ACK), and it MUST be closed
   via either a TCP three-way close (FIN, FIN/ACK, ACK) or a TCP
   four-way close (FIN, ACK, FIN, ACK).

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.

   *  The IP blocks SHOULD consist of multiple unique, discontinuous
      static address blocks.

   *  A default gateway is permitted.

   *  The DSCP (differentiated services code point) marking is set to
      DF (Default Forwarding) '000000' on the IPv4 Type of Service
      (ToS) field and the IPv6 traffic class field.

   The following equation can be used to define the total number of
   client IP addresses that will be configured on the test equipment.

      Desired total number of client IPs =
         Target throughput [Mbit/s] /
         Average throughput per IP address [Mbit/s]

   As shown in the example list below, the value for "Average
   throughput per IP address" can be varied depending on the deployment
   and use case scenario.

   (Option 1)  DUT/SUT deployment scenario 1: 6-7 Mbit/s per IP (e.g.,
               1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2)  DUT/SUT deployment scenario 2: 0.1-0.2 Mbit/s per IP
               (e.g., 50,000-100,000 IPs per 10 Gbit/s throughput)
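   For illustration only, the equation and the two deployment scenarios
   above can be expressed as a small calculation (a sketch; the example
   figures are taken from the options above):

      import math

      def client_ip_count(target_mbps, avg_mbps_per_ip):
          # Desired total number of client IPs = target throughput /
          # average throughput per IP address, rounded up.
          return math.ceil(target_mbps / avg_mbps_per_ip)

      # Scenario 1: ~6 Mbit/s per IP at a 10 Gbit/s target
      print(client_ip_count(10_000, 6))     # -> 1667 IPs
      # Scenario 2: ~0.1 Mbit/s per IP at a 10 Gbit/s target
      print(client_ip_count(10_000, 0.1))   # -> 100000 IPs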
   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6.  The following options
   MAY be considered for a selection of the traffic mix ratio.

   (Option 1)  100% IPv4, no IPv6

   (Option 2)  80% IPv4, 20% IPv6

   (Option 3)  50% IPv4, 50% IPv6

   (Option 4)  20% IPv4, 80% IPv6

   (Option 5)  no IPv4, 100% IPv6

   Note: IANA has assigned IP address ranges for testing purposes, as
   described in Section 8.  If the test scenario requires more IP
   addresses or subnets than IANA has assigned, this document
   recommends using non-routable private IPv4 address ranges or Unique
   Local Address (ULA) IPv6 address ranges for the testing.

4.3.1.3.  Emulated Web Browser Attributes

   The client emulated web browser (emulated browser) contains
   attributes that will materially affect how traffic is loaded.  The
   objective is to emulate modern, typical browser attributes to
   improve realism of the result set.

   For HTTP traffic emulation, the emulated browser MUST negotiate HTTP
   version 1.1 or higher.  Depending on test scenarios and the chosen
   HTTP version, the emulated browser MAY open multiple TCP connections
   per server endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  For HTTP/2 or HTTP/3, the
   emulated browser MAY open multiple concurrent streams per connection
   (multiplexing).  If HTTP/3 is used, the emulated browser MUST open
   Quick UDP Internet Connections (QUIC).  HTTP settings such as the
   number of connections per server IP, the number of requests per
   connection, and the number of streams per connection MUST be
   documented.  This document refers to [RFC7540] for HTTP/2 and
   [RFC9000] for QUIC.  The emulated browser SHOULD advertise a
   User-Agent header.  The emulated browser SHOULD enforce content
   length validation.  Depending on test scenarios and the selected
   HTTP version, HTTP header compression MAY be set to enabled or
   disabled.  This setting (compression enabled or disabled) MUST be
   documented in the report.

   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use TLS
   version 1.2 or higher.  TLS record size MAY be optimized for the
   HTTPS response object size up to a record size of 16 KByte.  If
   Server Name Indication (SNI) is required in the traffic mix profile,
   the client endpoint MUST send the TLS SNI extension when opening a
   security tunnel.  Each client connection MUST perform a full
   handshake with the server certificate and MUST NOT use session reuse
   or resumption.

   The following TLS 1.2 supported ciphers and keys are RECOMMENDED for
   use in the HTTPS based benchmarking tests defined in Section 7.

   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with secp521r1 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

   Note: The ciphers and keys listed above were commonly used,
   enterprise-grade encryption cipher suites for TLS 1.2 at the time of
   writing.  It is recognized that these will evolve over time.
   Individual certification bodies SHOULD use ciphers and keys that
   reflect evolving use cases.
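   As a non-normative illustration, a test client could pin the second
   RECOMMENDED TLS 1.2 suite as follows, using Python's ssl module (the
   server name and port are placeholders):

      import socket
      import ssl

      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2
      ctx.maximum_version = ssl.TLSVersion.TLSv1_2
      ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")  # suite 2 above
      ctx.load_default_certs()

      # Using a fresh SSLContext per connection avoids TLS session
      # reuse, so each connection performs a full handshake.
      with socket.create_connection(("server.example.com", 443)) as raw:
          # server_hostname also carries the SNI extension.
          with ctx.wrap_socket(
                  raw, server_hostname="server.example.com") as tls:
              print(tls.version(), tls.cipher())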
These choices 636 MUST be documented in the resulting test reports with detailed 637 information on the ciphers and keys used along with reasons for the 638 choices. 640 [RFC8446] defines the following cipher suites for use with TLS 1.3. 642 1. TLS_AES_128_GCM_SHA256 644 2. TLS_AES_256_GCM_SHA384 646 3. TLS_CHACHA20_POLY1305_SHA256 648 4. TLS_AES_128_CCM_SHA256 650 5. TLS_AES_128_CCM_8_SHA256 652 4.3.2. Backend Server Configuration 654 This section specifies which parameters should be considered while 655 configuring emulated backend servers using test equipment. 657 4.3.2.1. TCP Stack Attributes 659 The TCP stack on the server side SHOULD be configured similar to the 660 client side configuration described in Section 4.3.1.1. In addition, 661 server initial congestion window MUST NOT exceed 10 times the MSS. 662 Delayed ACKs are permitted and the maximum server delayed ACK MUST 663 NOT exceed 10 times the MSS before a forced ACK. 665 4.3.2.2. Server Endpoint IP Addressing 667 The sum of the server IP space SHOULD contain the following 668 attributes. 670 * The server IP blocks SHOULD consist of unique, discontinuous 671 static address blocks with one IP per server Fully Qualified 672 Domain Name (FQDN) endpoint per test port. 674 * A default gateway is permitted. The DSCP (differentiated services 675 code point) marking is set to DF (Default Forwarding) '000000' on 676 IPv4 Type of Service (ToS) field and IPv6 traffic class field. 678 * The server IP addresses SHOULD be distributed between IPv4 and 679 IPv6 with a ratio identical to the clients distribution ratio. 681 Note: The IANA has assigned IP address range for the testing purpose 682 as described in Section 8. If the test scenario requires more IP 683 addresses or subnets than the IANA assigned, this document recommends 684 using non routable Private IPv4 address ranges or Unique Local 685 Address (ULA) IPv6 address ranges for the testing. 687 4.3.2.3. HTTP / HTTPS Server Pool Endpoint Attributes 689 The server pool for HTTP SHOULD listen on TCP port 80 and emulate the 690 same HTTP version and settings chosen by the client (emulated web 691 browser). The Server MUST advertise server type in the Server 692 response header [RFC2616]. For HTTPS server, TLS 1.2 or higher MUST 693 be used with a maximum record size of 16 KByte and MUST NOT use 694 ticket resumption or session ID reuse. The server SHOULD listen on 695 TCP port 443. The server SHALL serve a certificate to the client. 696 The HTTPS server MUST check host SNI information with the FQDN if SNI 697 is in use. Cipher suite and key size on the server side MUST be 698 configured similar to the client side configuration described in 699 Section 4.3.1.3. 701 4.3.3. Traffic Flow Definition 703 This section describes the traffic pattern between client and server 704 endpoints. At the beginning of the test, the server endpoint 705 initializes and will be ready to accept connection states including 706 initialization of the TCP stack as well as bound HTTP and HTTPS 707 servers. When a client endpoint is needed, it will initialize and be 708 given attributes such as a MAC and IP address. The behavior of the 709 client is to sweep through the given server IP space, generating a 710 recognizable service by the DUT. Sequential and pseudorandom sweep 711 methods are acceptable. The method used MUST be stated in the final 712 report. Thus, a balanced\ mesh between client endpoints and server 713 endpoints will be generated in a client IP and port to server IP and 714 port combination. 
   Each client endpoint performs the same actions as other endpoints,
   with the difference being the source IP of the client endpoint and
   the target server IP pool.  The client MUST use the server IP
   address or FQDN in the host header [RFC2616].

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  When a client endpoint initiates traffic,
   this section describes how the client steps through different
   services.  Once the test is initialized, the client endpoints
   randomly hold (perform no operation) for a few milliseconds for
   better randomization of the start of client traffic.  Each client
   will either open a new TCP connection or connect to a TCP
   persistence stack still open to that specific server.  At any point
   that the traffic profile may require encryption, a TLS encryption
   tunnel will form, presenting the URL or IP address request to the
   server.  If using SNI, the server MUST then perform an SNI name
   check with the proposed FQDN compared to the domain embedded in the
   certificate.  Only when correct will the server process the HTTPS
   response object.  The initial response object to the server is based
   on the benchmarking tests described in Section 7.  Multiple
   additional sub-URLs (response objects on the service page) MAY be
   requested simultaneously.  This MAY be to the same server IP as the
   initial URL.  Each sub-object will also use a canonical FQDN and URL
   path, as observed in the traffic mix used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  The loading of
   a traffic load profile has five phases: Init, ramp up, sustain, ramp
   down, and collection.

   1.  Init phase: Testbed devices including the client and server
       endpoints should negotiate layer 2-3 connectivity such as MAC
       learning and ARP.  Only after successful MAC learning or ARP/ND
       resolution SHALL the test iteration move to the next phase.  No
       measurements are made in this phase.  The minimum RECOMMENDED
       time for the Init phase is 5 seconds.  During this phase, the
       emulated clients SHOULD NOT initiate any sessions with the
       DUT/SUT; in contrast, the emulated servers should be ready to
       accept requests from the DUT/SUT or from emulated clients.

   2.  Ramp up phase: The test equipment SHOULD start to generate the
       test traffic.  It SHOULD use a set of the approximate number of
       unique client IP addresses to generate traffic.  The traffic
       SHOULD ramp up from zero to the desired target objective.  The
       target objective is defined for each benchmarking test.  The
       duration for the ramp up phase MUST be configured long enough
       that the test equipment does not overwhelm the DUT/SUT's stated
       performance metrics defined in Section 6.3, namely TCP
       Connections Per Second, Inspected Throughput, Concurrent TCP
       Connections, and Application Transactions Per Second.  No
       measurements are made in this phase.

   3.  Sustain phase: Starts when all required clients are active and
       operating at their desired load condition.  In the sustain
       phase, the test equipment SHOULD continue generating traffic at
       a constant target value for a constant number of active clients.
       The minimum RECOMMENDED time duration for the sustain phase is
       300 seconds.  This is the phase where measurements occur.  The
       test equipment SHOULD measure and record statistics
       continuously.
       The sampling interval for collecting the raw results and
       calculating the statistics SHOULD be less than 2 seconds.

   4.  Ramp down phase: No new connections are established, and no
       measurements are made.  The time duration for the ramp up and
       ramp down phases SHOULD be the same.

   5.  Collection phase: The last phase is administrative and will
       occur when the test equipment merges and collates the report
       data.

5.  Testbed Considerations

   This section describes steps for a reference test (pre-test) that
   controls the test environment, focusing on physical and virtualized
   environments as well as the test equipment.  Below are the
   RECOMMENDED steps for the reference test.

   1.  Perform the reference test either by configuring the DUT/SUT in
       the most trivial setup (fast forwarding) or without presence of
       the DUT/SUT.

   2.  Generate traffic from the traffic generator.  Choose a traffic
       profile used for the HTTP or HTTPS throughput performance test
       with the smallest object size.

   3.  Ensure that any ancillary switching or routing functions added
       in the test equipment do not limit the performance by
       introducing network metrics such as packet loss and latency.
       This is specifically important for virtualized components
       (e.g., vSwitches, vRouters).

   4.  Verify that the generated traffic (performance) of the test
       equipment matches and reasonably exceeds the expected maximum
       performance of the DUT/SUT.

   5.  Record the network performance metrics (packet loss and latency)
       introduced by the test environment (without DUT/SUT).

   6.  Assert that the testbed characteristics are stable during the
       entire test session.  Several factors might influence stability,
       specifically for virtualized testbeds; for example, additional
       workloads in a virtualized system, load balancing, and movement
       of virtual machines during the test, or simple issues such as
       additional heat created by high workloads leading to an
       emergency CPU performance reduction.

   The reference test SHOULD be performed before the benchmarking tests
   (described in Section 7) start.

6.  Reporting

   This section describes how the benchmarking test report should be
   formatted and presented.  It is RECOMMENDED to include two main
   sections in the report, namely the introduction and the detailed
   test results sections.

6.1.  Introduction

   The following attributes SHOULD be present in the introduction
   section of the test report.

   1.  The time and date of the execution of the tests

   2.  Summary of testbed software and hardware details

       a.  DUT/SUT hardware/virtual configuration

           *  This section SHOULD clearly identify the make and model
              of the DUT/SUT

           *  The port interfaces, including speed and link information

           *  If the DUT/SUT is a Virtual Network Function (VNF), host
              (server) hardware and software details, interface
              acceleration type such as DPDK and SR-IOV, used CPU
              cores, used RAM, resource sharing (e.g., pinning details
              and NUMA node) configuration details, hypervisor version,
              virtual switch version

           *  Details of any additional hardware relevant to the
              DUT/SUT such as controllers

       b.  DUT/SUT software

           *  Operating system name

           *  Version

           *  Specific configuration details (if any)
       c.  DUT/SUT enabled features

           *  Configured DUT/SUT features (see Table 1 and Table 2)

           *  Attributes of the above-mentioned features

           *  Any additional relevant information about the features

       d.  Test equipment hardware and software

           *  Test equipment vendor name

           *  Hardware details including model number, interface type

           *  Test equipment firmware and test application software
              version

       e.  Key test parameters

           *  Used cipher suites and keys

           *  IPv4 and IPv6 traffic distribution

           *  Number of configured ACL rules

       f.  Details of the application traffic mix used in the
           benchmarking test "Throughput Performance with Application
           Traffic Mix" (Section 7.1)

           *  Name of applications and layer 7 protocols

           *  Percentage of emulated traffic for each application and
              layer 7 protocol

           *  Percentage of encrypted traffic and used cipher suites
              and keys (the RECOMMENDED ciphers and keys are defined in
              Section 4.3.1.3)

           *  Used object sizes for each application and layer 7
              protocol

   3.  Results Summary / Executive Summary

       a.  Results SHOULD resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy to read block.

6.2.  Detailed Test Results

   In the result section of the test report, the following attributes
   SHOULD be present for each benchmarking test.

   a.  KPIs MUST be documented separately for each benchmarking test.
       The format of the KPI metrics SHOULD be presented as described
       in Section 6.3.

   b.  The next level of details SHOULD be graphs showing each of these
       metrics over the duration (sustain phase) of the test.  This
       allows the user to see the stability of the measured performance
       over time.

6.3.  Benchmarks and Key Performance Indicators

   This section lists key performance indicators (KPIs) for overall
   benchmarking tests.  All KPIs MUST be measured during the sustain
   phase of the traffic load profile described in Section 4.3.4.  All
   KPIs MUST be measured from the result output of the test equipment.

   *  Concurrent TCP Connections

      The aggregate number of simultaneous connections between hosts
      across the DUT/SUT, or between hosts and the DUT/SUT (defined in
      [RFC2647]).

   *  TCP Connections Per Second

      The average number of successfully established TCP connections
      per second between hosts across the DUT/SUT, or between hosts and
      the DUT/SUT.  The TCP connection MUST be initiated via a TCP
      three-way handshake (SYN, SYN/ACK, ACK).  Then the TCP session
      data is sent.  The TCP session MUST be closed via either a TCP
      three-way close (FIN, FIN/ACK, ACK) or a TCP four-way close (FIN,
      ACK, FIN, ACK), and MUST NOT be closed by RST.

   *  Application Transactions Per Second

      The average number of successfully completed transactions per
      second.  For a particular transaction to be considered
      successful, all data MUST have been transferred in its entirety.
      In case of HTTP(S) transactions, it MUST have a valid status code
      (200 OK), and the appropriate FIN, FIN/ACK sequence MUST have
      been completed.

   *  TLS Handshake Rate

      The average number of successfully established TLS connections
      per second between hosts across the DUT/SUT, or between hosts and
      the DUT/SUT.
   *  Inspected Throughput

      The number of bits per second of examined and allowed traffic a
      network security device is able to transmit to the correct
      destination interface(s) in response to a specified offered load.
      The throughput benchmarking tests defined in Section 7 SHOULD
      measure the average Layer 2 throughput value when the DUT/SUT is
      "inspecting" traffic.  This document recommends presenting the
      inspected throughput value in Gbit/s rounded to two places of
      precision with a more specific Kbit/s in parentheses.

   *  Time to First Byte (TTFB)

      TTFB is the elapsed time between the start of sending the TCP SYN
      packet from the client and the client receiving the first packet
      of application data from the server or DUT/SUT.  The benchmarking
      tests HTTP Transaction Latency (Section 7.4) and HTTPS
      Transaction Latency (Section 7.8) measure the minimum, average,
      and maximum TTFB.  The value SHOULD be expressed in milliseconds.

   *  URL Response time / Time to Last Byte (TTLB)

      URL Response time / TTLB is the elapsed time between the start of
      sending the TCP SYN packet from the client and the client
      receiving the last packet of application data from the server or
      DUT/SUT.  The benchmarking tests HTTP Transaction Latency
      (Section 7.4) and HTTPS Transaction Latency (Section 7.8) measure
      the minimum, average, and maximum TTLB.  The value SHOULD be
      expressed in milliseconds.

7.  Benchmarking Tests

7.1.  Throughput Performance with Application Traffic Mix

7.1.1.  Objective

   Using a relevant application traffic mix, determine the sustainable
   inspected throughput supported by the DUT/SUT.

   Based on the customer use case, users can choose the relevant
   application traffic mix for this test.  The details about the
   traffic mix MUST be documented in the report.  At least the
   following traffic mix details MUST be documented and reported
   together with the test results:

      Name of applications and layer 7 protocols

      Percentage of emulated traffic for each application and layer 7
      protocol

      Percentage of encrypted traffic and used cipher suites and keys
      (the RECOMMENDED ciphers and keys are defined in Section 4.3.1.3)

      Used object sizes for each application and layer 7 protocol

7.1.2.  Test Setup

   Testbed setup MUST be configured as defined in Section 4.  Any
   benchmarking test specific testbed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, the benchmarking test specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.  In case the DUT/SUT is
   configured without SSL inspection, the test report MUST explain the
   implications of this for the encrypted traffic in the relevant
   application traffic mix.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
   The following parameters MUST be documented for this benchmarking
   test:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target inspected throughput: Aggregated line rate of interface(s)
      used in the DUT/SUT or the value defined based on the requirement
      for a specific deployment scenario

      Initial throughput: 10% of the "Target inspected throughput"
      (Note: Initial throughput is not a KPI to report.  This value is
      configured on the traffic generator and used to perform Step 1:
      "Test Initialization and Qualification" described under
      Section 7.1.4.)

      One of the ciphers and keys defined in Section 4.3.1.3 is
      RECOMMENDED for use in this benchmarking test.

7.1.3.3.  Traffic Profile

   Traffic profile: This test MUST be run with a relevant application
   traffic mix profile.

7.1.3.4.  Test Results Validation Criteria

   The following criteria are the test results validation criteria.
   The test results validation criteria MUST be monitored during the
   whole sustain phase of the traffic load profile.

   a.  Number of failed application transactions (receiving any HTTP
       response code other than 200 OK) MUST be less than 0.001% (1 out
       of 100,000 transactions) of total attempted transactions.

   b.  Number of terminated TCP connections due to unexpected TCP RST
       sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
       connections) of total initiated TCP connections.

7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this benchmarking
   test:

      Mandatory KPIs (benchmarks): Inspected Throughput, TTFB (minimum,
      average, and maximum), TTLB (minimum, average, and maximum), and
      Application Transactions Per Second

      Note: TTLB MUST be reported along with the object size used in
      the traffic profile.

      Optional KPIs: TCP Connections Per Second and TLS Handshake Rate

7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the inspected throughput
   performance of the DUT/SUT at the sustaining period of the traffic
   load profile.  The test procedure consists of three major steps:
   Step 1 ensures the DUT/SUT is able to reach the performance value
   (initial throughput) and meets the test results validation criteria
   when it is only minimally utilized.  Step 2 determines whether the
   DUT/SUT is able to reach the target performance value within the
   test results validation criteria.  Step 3 determines the maximum
   achievable performance value within the test results validation
   criteria.

   This test procedure MAY be repeated multiple times with different IP
   types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to generate
   test traffic at the "Initial throughput" rate as described in
   Section 7.1.3.2.  The test equipment SHOULD follow the traffic load
   profile definition as described in Section 4.3.4.  The DUT/SUT
   SHOULD reach the "Initial throughput" during the sustain phase.
   Measure all KPIs as defined in Section 7.1.3.5.
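   For instance, the two thresholds from Section 7.1.3.4 could be
   asserted over the sustain-phase totals like this (a sketch only; the
   counter names are hypothetical and not drawn from any test equipment
   API):

      MAX_FAILURE_RATIO = 0.00001   # 0.001% = 1 out of 100,000

      def validation_criteria_met(counters):
          # counters: sustain-phase totals from the test equipment
          failed_ok = (counters["failed_transactions"]
                       < MAX_FAILURE_RATIO
                       * counters["attempted_transactions"])
          rst_ok = (counters["rst_terminated_connections"]
                    < MAX_FAILURE_RATIO
                    * counters["initiated_connections"])
          return failed_ok and rst_ok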
   The measured KPIs during the sustain phase MUST meet all the test
   results validation criteria defined in Section 7.1.3.4.

   If the KPI metrics do not meet the test results validation criteria,
   the test procedure MUST NOT be continued to Step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   inspected throughput" rate defined in Section 7.1.3.2.  The test
   equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.  The test equipment SHOULD start to
   measure and record all specified KPIs.  Continue the test until all
   traffic profile phases are completed.

   Within the test results validation criteria, the DUT/SUT is expected
   to reach the desired value of the target objective ("Target
   inspected throughput") in the sustain phase.  Follow Step 3 if the
   measured value does not meet the target value or does not fulfill
   the test results validation criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the achievable average inspected throughput within the
   test results validation criteria.  The final test iteration MUST be
   performed for the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the sustainable TCP connection
   establishment rate supported by the DUT/SUT under different
   throughput load conditions.

   To measure connections per second, test iterations MUST use
   different fixed HTTP response object sizes (the different load
   conditions) defined in Section 7.2.3.2.

7.2.2.  Test Setup

   Testbed setup SHOULD be configured as defined in Section 4.  Any
   specific testbed configuration changes (number of interfaces and
   interface type, etc.) MUST be documented.

7.2.3.  Test Parameters

   In this section, benchmarking test specific parameters SHOULD be
   defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this benchmarking test:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target connections per second: Initial value from the product
      datasheet or the value defined based on the requirement for a
      specific deployment scenario

      Initial connections per second: 10% of "Target connections per
      second" (Note: Initial connections per second is not a KPI to
      report.  This value is configured on the traffic generator and
      used to perform Step 1: "Test Initialization and Qualification"
      described under Section 7.2.4.)

   The client SHOULD negotiate HTTP and close the connection with FIN
   immediately after completion of one transaction.  In each test
   iteration, the client MUST send a GET request requesting a fixed
   HTTP response object size.

   The RECOMMENDED response object sizes are 1, 2, 4, 16, and
   64 KByte.

7.2.3.3.  Test Results Validation Criteria

   The following criteria are the test results validation criteria.
7.2.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. Number of terminated TCP connections due to unexpected TCP RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

d. Concurrent TCP connections MUST be constant during steady state and any deviation of concurrent TCP connections SHOULD be less than 10%. This confirms the DUT opens and closes TCP connections at approximately the same rate.

7.2.3.4. Measurement

TCP Connections Per Second MUST be reported for each test iteration (for each object size).

7.2.4. Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per second rate of the DUT/SUT at the sustaining period of the traffic load profile. The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (Initial connections per second) and meets the test results validation criteria when it is minimally utilized. Step 2 determines the DUT/SUT is able to reach the target performance value within the test results validation criteria. Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IP types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic distribution.

7.2.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial connections per second" as defined in Section 7.2.3.2. The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before the sustain phase. The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.2.3.3.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT continue to "Step 2".

7.2.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target connections per second") defined in Section 7.2.3.2. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase of each test iteration, other KPIs such as inspected throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support. The test results for specific test iterations SHOULD NOT be reported if the above-mentioned KPI (especially inspected throughput) reaches the maximum value.
(Example: If the test iteration with 64 KByte of HTTP response object size reached the maximum inspected throughput limitation of the DUT/SUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective ("Target connections per second") in the sustain phase. Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.2.4.3. Step 3: Test Iteration

Determine the achievable TCP connections per second within the test results validation criteria.

7.3. HTTP Throughput

7.3.1. Objective

Determine the sustainable inspected throughput of the DUT/SUT for HTTP transactions varying the HTTP response object size.

7.3.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.3.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.3.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.3.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target inspected throughput: Aggregated line rate of interface(s) used in the DUT/SUT or the value defined based on the requirements of a specific deployment scenario

Initial throughput: 10% of "Target inspected throughput". Note: Initial throughput is not a KPI to report. This value is configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.3.4.

Number of HTTP response object requests (transactions) per connection: 10

RECOMMENDED HTTP response object sizes: 1, 16, 64, 256 KByte, and mixed objects defined in Table 5.

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   | 0.2                 | 1                   |
   +---------------------+---------------------+
   | 6                   | 1                   |
   +---------------------+---------------------+
   | 8                   | 1                   |
   +---------------------+---------------------+
   | 9                   | 1                   |
   +---------------------+---------------------+
   | 10                  | 1                   |
   +---------------------+---------------------+
   | 25                  | 1                   |
   +---------------------+---------------------+
   | 26                  | 1                   |
   +---------------------+---------------------+
   | 35                  | 1                   |
   +---------------------+---------------------+
   | 59                  | 1                   |
   +---------------------+---------------------+
   | 347                 | 1                   |
   +---------------------+---------------------+

           Figure 7: Table 5: Mixed Objects
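For orientation, the mean response object size implied by Table 5 can be computed directly; since all weights are equal, it is the arithmetic mean (a small Python sketch, not part of the methodology):

   # Sketch: weighted mean response object size of the Table 5 mix.
   sizes_kbyte = [0.2, 6, 8, 9, 10, 25, 26, 35, 59, 347]
   weights     = [1] * len(sizes_kbyte)

   mean = sum(s * w for s, w in zip(sizes_kbyte, weights)) / sum(weights)
   print(f"mean object size: {mean:.2f} KByte")   # 52.52 KByte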
7.3.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. Traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

c. Concurrent TCP connections MUST be constant during steady state and any deviation of concurrent TCP connections SHOULD be less than 10%. This confirms the DUT opens and closes TCP connections at approximately the same rate.

7.3.3.4. Measurement

Inspected Throughput and HTTP Transactions per Second MUST be reported for each object size.

7.3.4. Test Procedures and Expected Results

The test procedure is designed to measure the HTTP throughput of the DUT/SUT. The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (Initial throughput) and meets the test results validation criteria when it is minimally utilized. Step 2 determines the DUT/SUT is able to reach the target performance value within the test results validation criteria. Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distribution and HTTP response object sizes.

7.3.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial throughput" as defined in Section 7.3.3.2.

The traffic load profile SHOULD be defined as described in Section 4.3.4. The DUT/SUT SHOULD reach the "Initial throughput" during the sustain phase. Measure all KPIs as defined in Section 7.3.3.4.

The measured KPIs during the sustain phase MUST meet the test results validation criterion "a" defined in Section 7.3.3.3. The test results validation criteria "b" and "c" are OPTIONAL for Step 1.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".
7.3.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target inspected throughput") defined in Section 7.3.3.2. The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective in the sustain phase. Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.3.4.3. Step 3: Test Iteration

Determine the achievable inspected throughput within the test results validation criteria and measure the KPI metric Transactions per Second. The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.4. HTTP Transaction Latency

7.4.1. Objective

Using HTTP traffic, determine the HTTP transaction latency when the DUT/SUT is running with sustainable HTTP transactions per second supported by the DUT/SUT under different HTTP response object sizes.

Test iterations MUST be performed with different HTTP response object sizes in two different scenarios: one with a single transaction and the other with multiple transactions within a single TCP connection. For consistency, both the single and multiple transaction tests MUST be configured with the same HTTP version.

Scenario 1: The client MUST negotiate HTTP and close the connection with FIN immediately after completion of a single transaction (GET and RESPONSE).

Scenario 2: The client MUST negotiate HTTP and close the connection with FIN immediately after completion of 10 transactions (GET and RESPONSE) within a single TCP connection.

7.4.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.4.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.4.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.4.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target objective for scenario 1: 50% of the connections per second measured in benchmarking test TCP/HTTP Connections Per Second (Section 7.2)

Target objective for scenario 2: 50% of the inspected throughput measured in benchmarking test HTTP Throughput (Section 7.3)

Initial objective for scenario 1: 10% of "Target objective for scenario 1"

Initial objective for scenario 2: 10% of "Target objective for scenario 2"

Note: The initial objectives are not KPIs to report. These values are configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.4.4.
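As an illustration of how these objectives derive from earlier results, with invented placeholder values (a sketch, not recommended numbers):

   # Sketch: derive the Section 7.4.3.2 objectives from previously
   # measured benchmarks. All measured values below are placeholders.
   measured_cps = 180_000            # from Section 7.2 (conn/s)
   measured_throughput_gbps = 40.0   # from Section 7.3 (Gbit/s)

   target_scenario_1 = 0.5 * measured_cps              # 90,000 conn/s
   target_scenario_2 = 0.5 * measured_throughput_gbps  # 20 Gbit/s

   initial_scenario_1 = 0.1 * target_scenario_1        # 9,000 conn/s
   initial_scenario_2 = 0.1 * target_scenario_2        # 2 Gbit/s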
HTTP transactions per TCP connection: Test scenario 1 with a single transaction and test scenario 2 with 10 transactions.

HTTP with GET request requesting a single object. The RECOMMENDED object sizes are 1, 16, and 64 KByte. For each test iteration, the client MUST request a single HTTP response object size.

7.4.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of attempted transactions.

b. Number of terminated TCP connections due to unexpected TCP RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

d. Concurrent TCP connections MUST be constant during steady state and any deviation of concurrent TCP connections SHOULD be less than 10%. This confirms the DUT opens and closes TCP connections at approximately the same rate.

e. After ramp up, the DUT/SUT MUST achieve the "Target objective" defined in Section 7.4.3.2 and remain in that state for the entire test duration (sustain phase).

7.4.3.4. Measurement

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and maximum) MUST be reported for each object size.

7.4.4. Test Procedures and Expected Results

The test procedure is designed to measure TTFB or TTLB when the DUT/SUT is operating close to 50% of its maximum achievable connections per second or inspected throughput. The test procedure consists of two major steps: Step 1 ensures the DUT/SUT is able to reach the initial performance values and meets the test results validation criteria when it is minimally utilized. Step 2 measures the latency values within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IP types (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic distribution), HTTP response object sizes, and single and multiple transactions per connection scenarios.

7.4.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial objective" as defined in Section 7.4.3.2. The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain phase. The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.4.3.3.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".

7.4.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish "Target objective" defined in Section 7.4.3.2. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.
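The two latency KPIs can be pictured with a toy measurement loop, using only Python's standard library (a sketch; starting the timer before connection setup is an assumption here, and the normative TTFB/TTLB definitions are those given earlier in this document):

   import socket, time

   def ttfb_ttlb(host, port=80, path="/object_16k.bin"):
       """Sketch: TTFB/TTLB for one HTTP/1.1 GET with connection close."""
       t0 = time.monotonic()
       s = socket.create_connection((host, port))
       req = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
              "Connection: close\r\n\r\n")
       s.sendall(req.encode())
       first_byte = None
       while True:
           chunk = s.recv(65536)
           if not chunk:
               break                      # server closed: response done
           if first_byte is None:
               first_byte = time.monotonic()
       last_byte = time.monotonic()
       s.close()
       return first_byte - t0, last_byte - t0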
The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT MUST reach the desired value of the target objective in the sustain phase.

Measure the minimum, average, and maximum values of TTFB and TTLB.

7.5. Concurrent TCP/HTTP Connection Capacity

7.5.1. Objective

Determine the number of concurrent TCP connections that the DUT/SUT sustains when using HTTP traffic.

7.5.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.5.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.5.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.5.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target concurrent connections: Initial value from product datasheet or the value defined based on the requirements of a specific deployment scenario.

Initial concurrent connections: 10% of "Target concurrent connections". Note: Initial concurrent connections is not a KPI to report. This value is configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.5.4.

Maximum connections per second during ramp up phase: 50% of maximum connections per second measured in benchmarking test TCP/HTTP Connections Per Second (Section 7.2)

Ramp up time (in traffic load profile for "Target concurrent connections"): "Target concurrent connections" / "Maximum connections per second during ramp up phase"

Ramp up time (in traffic load profile for "Initial concurrent connections"): "Initial concurrent connections" / "Maximum connections per second during ramp up phase"

The client MUST negotiate HTTP and each client MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET requests requesting a 1 KByte HTTP response object in the same TCP connection (10 transactions/TCP connection) and the delay (think time) between each transaction MUST be X seconds.

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down phase of the test. During the ramp down phase, all connections SHOULD be successfully closed with FIN.
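Worked through with placeholder numbers, the ramp up and think time arithmetic looks as follows (a sketch only; the datasheet and measured values are invented examples):

   # Sketch: ramp up time and think time per Section 7.5.3.2.
   target_concurrent = 4_000_000        # example datasheet value
   max_cps_ramp_up = 0.5 * 160_000      # 50% of the Section 7.2 result

   ramp_up_time = target_concurrent / max_cps_ramp_up    # 50 s
   steady_state_time = 300                               # example, in s

   think_time = (ramp_up_time + steady_state_time) / 10  # X = 35 s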
7.5.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. Number of terminated TCP connections due to unexpected TCP RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

7.5.3.4. Measurement

Average Concurrent TCP Connections MUST be reported for this benchmarking test.

7.5.4. Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP connection capacity of the DUT/SUT at the sustaining period of the traffic load profile. The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (Initial concurrent connections) and meets the test results validation criteria when it is minimally utilized. Step 2 determines the DUT/SUT is able to reach the target performance value within the test results validation criteria. Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distribution.

7.5.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure test equipment to establish "Initial concurrent TCP connections" defined in Section 7.5.3.2. Except for the ramp up time, the traffic load profile SHOULD be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial concurrent TCP connections". The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.5.3.3.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".

7.5.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target concurrent TCP connections"). The test equipment SHOULD follow the traffic load profile definition (except ramp up time) as described in Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as inspected throughput, TCP connections per second, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support.

The test equipment SHOULD start to measure and record KPIs defined in Section 7.5.3.4. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective in the sustain phase. Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.5.4.3. Step 3: Test Iteration

Determine the achievable concurrent TCP connection capacity within the test results validation criteria.

7.6. TCP/HTTPS Connections per Second
7.6.1. Objective

Using HTTPS traffic, determine the sustainable SSL/TLS session establishment rate supported by the DUT/SUT under different throughput load conditions.

Test iterations MUST include common cipher suites and key strengths as well as forward looking stronger keys. Specific test iterations MUST include ciphers and keys defined in Section 7.6.3.2.

For each cipher suite and key strength, test iterations MUST use a single HTTPS response object size defined in Section 7.6.3.2 to measure connections per second performance under a variety of DUT/SUT security inspection load conditions.

7.6.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.6.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.6.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.6.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target connections per second: Initial value from product datasheet or the value defined based on the requirements of a specific deployment scenario.

Initial connections per second: 10% of "Target connections per second". Note: Initial connections per second is not a KPI to report. This value is configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.6.4.

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

The client MUST negotiate HTTPS and close the connection with FIN immediately after completion of one transaction. In each test iteration, the client MUST send a GET request requesting a fixed HTTPS response object size. The RECOMMENDED object sizes are 1, 2, 4, 16, and 64 KByte.

7.6.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole test duration.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of attempted transactions.

b. Number of terminated TCP connections due to unexpected TCP RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

d. Concurrent TCP connections MUST be constant during steady state and any deviation of concurrent TCP connections SHOULD be less than 10%. This confirms the DUT opens and closes TCP connections at approximately the same rate.
7.6.3.4. Measurement

TCP connections per second MUST be reported for each test iteration (for each object size).

The KPI metric TLS Handshake Rate can be measured in this test using the 1 KByte object size.
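For example, a single handshake could be timed along these lines (a sketch with Python's standard ssl module; host and port are placeholders, and certificate verification is relaxed only because testbeds commonly use self-signed certificates):

   import socket, ssl, time

   def tls_handshake_seconds(host="198.18.0.10", port=443):
       """Sketch: wall-clock duration of a single TLS handshake."""
       ctx = ssl.create_default_context()
       ctx.check_hostname = False             # lab-only relaxation
       ctx.verify_mode = ssl.CERT_NONE        # lab-only relaxation
       raw = socket.create_connection((host, port))
       t0 = time.monotonic()
       tls = ctx.wrap_socket(raw, server_hostname=host)  # handshake here
       elapsed = time.monotonic() - t0
       tls.close()
       return elapsed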
7.6.4. Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per second rate of the DUT/SUT at the sustaining period of the traffic load profile. The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (Initial connections per second) and meets the test results validation criteria when it is minimally utilized. Step 2 determines the DUT/SUT is able to reach the target performance value within the test results validation criteria. Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distribution.

7.6.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial connections per second" as defined in Section 7.6.3.2. The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before the sustain phase. The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.6.3.3.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".

7.6.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish "Target connections per second" defined in Section 7.6.3.2. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as inspected throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support. The test results for a specific test iteration SHOULD NOT be reported if the above-mentioned KPI (especially inspected throughput) reaches the maximum value. (Example: If the test iteration with 64 KByte of HTTPS response object size reached the maximum inspected throughput limitation of the DUT/SUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective ("Target connections per second") in the sustain phase. Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.6.4.3. Step 3: Test Iteration

Determine the achievable connections per second within the test results validation criteria.

7.7. HTTPS Throughput

7.7.1. Objective

Determine the sustainable inspected throughput of the DUT/SUT for HTTPS transactions varying the HTTPS response object size.

Test iterations MUST include common cipher suites and key strengths as well as forward looking stronger keys. Specific test iterations MUST include the ciphers and keys defined in Section 7.7.3.2.

7.7.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.7.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.7.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.7.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target inspected throughput: Aggregated line rate of interface(s) used in the DUT/SUT or the value defined based on the requirements of a specific deployment scenario.

Initial throughput: 10% of "Target inspected throughput". Note: Initial throughput is not a KPI to report. This value is configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.7.4.

Number of HTTPS response object requests (transactions) per connection: 10

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

RECOMMENDED HTTPS response object sizes: 1, 16, 64, 256 KByte, and mixed objects defined in Table 5 under Section 7.3.3.2.

7.7.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of attempted transactions.

b. Traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

c. Concurrent TCP connections MUST be constant during steady state and any deviation of concurrent TCP connections SHOULD be less than 10%. This confirms the DUT opens and closes TCP connections at approximately the same rate.

7.7.3.4. Measurement

Inspected Throughput and HTTP Transactions per Second MUST be reported for each object size.

7.7.4. Test Procedures and Expected Results

The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (Initial throughput) and meets the test results validation criteria when it is minimally utilized. Step 2 determines the DUT/SUT is able to reach the target performance value within the test results validation criteria. Step 3 determines the maximum achievable performance value within the test results validation criteria.
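A harness might drive this shared three-step structure roughly as follows (a sketch under assumptions: run_load() and validate() stand in for the traffic generator's API, and the linear back-off used for Step 3 is one possible search strategy, not one prescribed by this document):

   def run_three_step_procedure(target, run_load, validate, step=0.05):
       """Sketch of Steps 1-3. run_load(value) returns sustain-phase
       KPIs; validate(kpis) applies the validation criteria."""
       # Step 1: qualification at 10% of the target objective.
       if not validate(run_load(0.10 * target)):
           raise RuntimeError("Step 1 failed; MUST NOT continue to Step 2")
       # Step 2: run at the full target objective.
       if validate(run_load(target)):
           return target
       # Step 3: iterate to find the maximum passing value.
       value = target
       while value > step * target:
           value -= step * target
           if validate(run_load(value)):
               return value
       raise RuntimeError("no passing value found")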
This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distribution and HTTPS response object sizes.

7.7.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial throughput" as defined in Section 7.7.3.2.

The traffic load profile SHOULD be defined as described in Section 4.3.4. The DUT/SUT SHOULD reach the "Initial throughput" during the sustain phase. Measure all KPIs as defined in Section 7.7.3.4.

The measured KPIs during the sustain phase MUST meet the test results validation criterion "a" defined in Section 7.7.3.3. The test results validation criteria "b" and "c" are OPTIONAL for Step 1.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".

7.7.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target inspected throughput") defined in Section 7.7.3.2. The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective in the sustain phase. Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.7.4.3. Step 3: Test Iteration

Determine the achievable average inspected throughput within the test results validation criteria. The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.8. HTTPS Transaction Latency

7.8.1. Objective

Using HTTPS traffic, determine the HTTPS transaction latency when the DUT/SUT is running with sustainable HTTPS transactions per second supported by the DUT/SUT under different HTTPS response object sizes.

Scenario 1: The client MUST negotiate HTTPS and close the connection with FIN immediately after completion of a single transaction (GET and RESPONSE).

Scenario 2: The client MUST negotiate HTTPS and close the connection with FIN immediately after completion of 10 transactions (GET and RESPONSE) within a single TCP connection.

7.8.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.8.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.8.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.8.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target objective for scenario 1: 50% of the connections per second measured in benchmarking test TCP/HTTPS Connections per Second (Section 7.6)

Target objective for scenario 2: 50% of the inspected throughput measured in benchmarking test HTTPS Throughput (Section 7.7)

Initial objective for scenario 1: 10% of "Target objective for scenario 1"

Initial objective for scenario 2: 10% of "Target objective for scenario 2"

Note: The initial objectives are not KPIs to report. These values are configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.8.4.

HTTPS transactions per TCP connection: Test scenario 1 with a single transaction and test scenario 2 with 10 transactions

HTTPS with GET request requesting a single object. The RECOMMENDED object sizes are 1, 16, and 64 KByte. For each test iteration, the client MUST request a single HTTPS response object size.

7.8.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of attempted transactions.

b. Number of terminated TCP connections due to unexpected TCP RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

d. Concurrent TCP connections MUST be constant during steady state and any deviation of concurrent TCP connections SHOULD be less than 10%. This confirms the DUT opens and closes TCP connections at approximately the same rate.

e. After ramp up, the DUT/SUT MUST achieve the "Target objective" defined in Section 7.8.3.2 and remain in that state for the entire test duration (sustain phase).

7.8.3.4. Measurement

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and maximum) MUST be reported for each object size.

7.8.4. Test Procedures and Expected Results

The test procedure is designed to measure TTFB or TTLB when the DUT/SUT is operating close to 50% of its maximum achievable connections per second or inspected throughput. The test procedure consists of two major steps: Step 1 ensures the DUT/SUT is able to reach the initial performance values and meets the test results validation criteria when it is minimally utilized. Step 2 measures the latency values within the test results validation criteria.
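Scenario 2's on-the-wire pattern, ten GET/RESPONSE exchanges on one TLS connection followed by an orderly close, might look as follows (a sketch using Python's http.client; the address and object path are placeholders, and certificate checks are relaxed for lab use only):

   import http.client, ssl

   ctx = ssl.create_default_context()
   ctx.check_hostname = False               # lab-only relaxation
   ctx.verify_mode = ssl.CERT_NONE          # lab-only relaxation

   # Sketch: ten transactions on a single persistent TLS connection.
   conn = http.client.HTTPSConnection("198.18.0.10", 443, context=ctx)
   for _ in range(10):
       conn.request("GET", "/object_16k.bin")
       resp = conn.getresponse()
       resp.read()                          # drain so the conn is reusable
       assert resp.status == 200
   conn.close()                             # FIN after the 10th response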
This test procedure MAY be repeated multiple times with different IP types (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic distribution), HTTPS response object sizes, and single and multiple transactions per connection scenarios.

7.8.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial objective" as defined in Section 7.8.3.2. The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain phase. The measured KPIs during the sustain phase MUST meet all the test results validation criteria defined in Section 7.8.3.3.

If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".

7.8.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish "Target objective" defined in Section 7.8.3.2. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT MUST reach the desired value of the target objective in the sustain phase.

Measure the minimum, average, and maximum values of TTFB and TTLB.

7.9. Concurrent TCP/HTTPS Connection Capacity

7.9.1. Objective

Determine the number of concurrent TCP connections the DUT/SUT sustains when using HTTPS traffic.

7.9.2. Test Setup

Testbed setup SHOULD be configured as defined in Section 4. Any specific testbed configuration changes (number of interfaces and interface type, etc.) MUST be documented.

7.9.3. Test Parameters

In this section, benchmarking test specific parameters SHOULD be defined.

7.9.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific benchmarking test MUST be documented.

7.9.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target concurrent connections: Initial value from product datasheet or the value defined based on the requirements of a specific deployment scenario.

Initial concurrent connections: 10% of "Target concurrent connections". Note: Initial concurrent connections is not a KPI to report. This value is configured on the traffic generator and used to perform Step 1: "Test Initialization and Qualification" described under Section 7.9.4.
Maximum connections per second during ramp up phase: 50% of maximum connections per second measured in benchmarking test TCP/HTTPS Connections per Second (Section 7.6)

Ramp up time (in traffic load profile for "Target concurrent connections"): "Target concurrent connections" / "Maximum connections per second during ramp up phase"

Ramp up time (in traffic load profile for "Initial concurrent connections"): "Initial concurrent connections" / "Maximum connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence and each client MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET requests requesting a 1 KByte HTTPS response object in the same TCP connection (10 transactions/TCP connection) and the delay (think time) between each transaction MUST be X seconds.

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down phase of the test. During the ramp down phase, all connections SHOULD be successfully closed with FIN.

7.9.3.3. Test Results Validation Criteria

The following criteria are the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. Number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. Number of terminated TCP connections due to unexpected TCP RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded at a constant rate (considered as a constant rate if any deviation of traffic forwarding rate is less than 5%).

7.9.3.4. Measurement

Average Concurrent TCP Connections MUST be reported for this benchmarking test.

7.9.4. Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP connection capacity of the DUT/SUT at the sustaining period of the traffic load profile. The test procedure consists of three major steps: Step 1 ensures the DUT/SUT is able to reach the performance value (Initial concurrent connections) and meets the test results validation criteria when it is minimally utilized. Step 2 determines the DUT/SUT is able to reach the target performance value within the test results validation criteria. Step 3 determines the maximum achievable performance value within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distribution.

7.9.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure test equipment to establish "Initial concurrent TCP connections" defined in Section 7.9.3.2. Except for the ramp up time, the traffic load profile SHOULD be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial concurrent TCP connections". The measured KPIs during the sustain phase MUST meet the test results validation criteria "a" and "b" defined in Section 7.9.3.3.
If the KPI metrics do not meet the test results validation criteria, the test procedure MUST NOT be continued to "Step 2".

7.9.4.2. Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target concurrent TCP connections"). The test equipment SHOULD follow the traffic load profile definition (except ramp up time) as described in Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as inspected throughput, TCP connections per second, and application transactions per second MUST NOT reach the maximum value that the DUT/SUT can support.

The test equipment SHOULD start to measure and record KPIs defined in Section 7.9.3.4. Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected to reach the desired value of the target objective in the sustain phase. Follow Step 3 if the measured value does not meet the target value or does not fulfill the test results validation criteria.

7.9.4.3. Step 3: Test Iteration

Determine the achievable concurrent TCP connections within the test results validation criteria.

8. IANA Considerations

The IANA has assigned IPv4 and IPv6 address blocks in [RFC6890] that have been registered for special purposes. The IPv6 address block 2001:2::/48 has been allocated for the purpose of IPv6 Benchmarking [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated for the purpose of IPv4 Benchmarking [RFC2544]. This assignment was made to minimize the chance of conflict in case a testing device were to be accidentally connected to part of the Internet.

9. Security Considerations

The primary goal of this document is to provide benchmarking terminology and methodology for next-generation network security devices. However, readers should be aware that there is some overlap between performance and security issues. Specifically, the optimal configuration for network security device performance may not be the most secure, and vice versa. The cipher suites recommended in this document are for testing purposes only. The cipher suite recommendation for a real deployment is outside the scope of this document.

10. Contributors

The following individuals contributed significantly to the creation of this document:

Alex Samonte, Amritam Putatunda, Aria Eslambolchizadeh, Chao Guo, Chris Brown, Cory Ford, David DeSanto, Jurrie Van Den Breekel, Michelle Rhines, Mike Jack, Ryan Liles, Samaresh Nair, Stephen Goudreault, Tim Carlin, and Tim Otto.

11. Acknowledgements

The authors wish to acknowledge the members of NetSecOPEN for their participation in the creation of this document. Additionally, the following members need to be acknowledged:

Anand Vijayan, Chris Marshall, Jay Lindenauer, Michael Shannon, Mike Deichman, Ryan Riese, and Toulnay Orkun.

12. References

12.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, https://www.rfc-editor.org/info/rfc2119.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, https://www.rfc-editor.org/info/rfc8174.

12.2. Informative References
[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, DOI 10.17487/RFC2544, March 1999, https://www.rfc-editor.org/info/rfc2544.

[RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, DOI 10.17487/RFC2616, June 1999, https://www.rfc-editor.org/info/rfc2616.

[RFC2647] Newman, D., "Benchmarking Terminology for Firewall Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999, https://www.rfc-editor.org/info/rfc2647.

[RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin, "Benchmarking Methodology for Firewall Performance", RFC 3511, DOI 10.17487/RFC3511, April 2003, https://www.rfc-editor.org/info/rfc3511.

[RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D. Dugatkin, "IPv6 Benchmarking Methodology for Network Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May 2008, https://www.rfc-editor.org/info/rfc5180.

[RFC6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton, "Applicability Statement for RFC 2544: Use on Production Networks Considered Harmful", RFC 6815, DOI 10.17487/RFC6815, November 2012, https://www.rfc-editor.org/info/rfc6815.

[RFC6890] Cotton, M., Vegoda, L., Bonica, R., Ed., and B. Haberman, "Special-Purpose IP Address Registries", BCP 153, RFC 6890, DOI 10.17487/RFC6890, April 2013, https://www.rfc-editor.org/info/rfc6890.

[RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, https://www.rfc-editor.org/info/rfc8446.

[RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based Multiplexed and Secure Transport", RFC 9000, DOI 10.17487/RFC9000, May 2021, https://www.rfc-editor.org/info/rfc9000.

Appendix A. Test Methodology - Security Effectiveness Evaluation

A.1. Test Objective

This test methodology verifies that the DUT/SUT is able to detect, prevent, and report vulnerabilities.

In this test, background test traffic will be generated to utilize the DUT/SUT. In parallel, the CVEs will be sent to the DUT/SUT in both encrypted and clear-text payload formats using a traffic generator. The selection of the CVEs is described in Section 4.2.1. The following items are measured in this test:

* Number of blocked CVEs

* Number of bypassed (nonblocked) CVEs

* Background traffic performance (verify if the background traffic is impacted while sending CVEs toward the DUT/SUT)

* Accuracy of DUT/SUT statistics in terms of vulnerability reporting

A.2. Testbed Setup

The same testbed MUST be used for the security effectiveness test as well as for the benchmarking test cases defined in Section 7.

A.3. Test Parameters

In this section, the benchmarking test specific parameters SHOULD be defined.

A.3.1. DUT/SUT Configuration Parameters

DUT/SUT configuration parameters MUST conform to the requirements defined in Section 4.2. The same DUT configuration MUST be used for the security effectiveness test as well as for the benchmarking test cases defined in Section 7. The DUT/SUT MUST be configured in inline mode, all detected attack traffic MUST be dropped, and the session SHOULD be reset.

A.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The same client and server IP ranges MUST be configured as used in the benchmarking test cases.
In 2512 addition, the following parameters MUST be documented for this 2513 benchmarking test: 2515 * Background Traffic: 45% of maximum HTTP throughput and 45% of 2516 Maximum HTTPS throughput supported by the DUT/SUT (measured with 2517 object size 64 KByte in the benchmarking tests "HTTP(S) 2518 Throughput" defined in Section 7.3 and Section 7.7). 2520 * RECOMMENDED CVE traffic transmission Rate: 10 CVEs per second 2522 * It is RECOMMENDED to generate each CVE multiple times 2523 (sequentially) at 10 CVEs per second 2525 * Ciphers and keys for the encrypted CVE traffic MUST use the same 2526 cipher configured for HTTPS traffic related benchmarking tests 2527 (Section 7.6 - Section 7.9) 2529 A.4. Test Results Validation Criteria 2531 The following criteria are the test results validation criteria. The 2532 test results validation criteria MUST be monitored during the whole 2533 test duration. 2535 a. Number of failed application transaction in the background 2536 traffic MUST be less than 0.01% of attempted transactions. 2538 b. Number of terminated TCP connections of the background traffic 2539 (due to unexpected TCP RST sent by DUT/SUT) MUST be less than 2540 0.01% of total initiated TCP connections in the background 2541 traffic. 2543 c. During the sustain phase, traffic SHOULD be forwarded at a 2544 constant rate (considered as a constant rate if any deviation of 2545 traffic forwarding rate is less than 5%). 2547 d. False positive MUST NOT occur in the background traffic. 2549 A.5. Measurement 2551 Following KPI metrics MUST be reported for this test scenario: 2553 Mandatory KPIs: 2555 * Blocked CVEs: It SHOULD be represented in the following ways: 2557 - Number of blocked CVEs out of total CVEs 2559 - Percentage of blocked CVEs 2561 * Unblocked CVEs: It SHOULD be represented in the following ways: 2563 - Number of unblocked CVEs out of total CVEs 2565 - Percentage of unblocked CVEs 2567 * Background traffic behavior: It SHOULD be represented one of the 2568 followings ways: 2570 - No impact: Considered as "no impact'" if any deviation of 2571 traffic forwarding rate is less than or equal to 5 % (constant 2572 rate) 2574 - Minor impact: Considered as "minor impact" if any deviation of 2575 traffic forwarding rate is greater than 5% and less than or 2576 equal to10% (i.e. small spikes) 2578 - Heavily impacted: Considered as "Heavily impacted" if any 2579 deviation of traffic forwarding rate is greater than 10% (i.e. 2580 large spikes) or reduced the background HTTP(S) throughput 2581 greater than 10% 2583 * DUT/SUT reporting accuracy: DUT/SUT MUST report all detected 2584 vulnerabilities. 2586 Optional KPIs: 2588 * List of unblocked CVEs 2590 A.6. Test Procedures and Expected Results 2592 The test procedure is designed to measure the security effectiveness 2593 of the DUT/SUT at the sustaining period of the traffic load profile. 2594 The test procedure consists of two major steps. This test procedure 2595 MAY be repeated multiple times with different IPv4 and IPv6 traffic 2596 distribution. 2598 A.6.1. Step 1: Background Traffic 2600 Generate background traffic at the transmission rate defined in 2601 Appendix A.3.2. 2603 The DUT/SUT MUST reach the target objective (HTTP(S) throughput) in 2604 sustain phase. The measured KPIs during the sustain phase MUST meet 2605 all the test results validation criteria defined in Appendix A.4. 2607 If the KPI metrics do not meet the acceptance criteria, the test 2608 procedure MUST NOT be continued to "Step 2". 2610 A.6.2. 
A.6.2. Step 2: CVE Emulation

While generating background traffic (in the sustain phase), send the CVE traffic as defined in the parameter section (Appendix A.3.2).

The test equipment SHOULD start to measure and record all specified KPIs. Continue the test until all CVEs are sent.

The measured KPIs MUST meet all the test results validation criteria defined in Appendix A.4.

In addition, the DUT/SUT SHOULD report the vulnerabilities correctly.

Appendix B. DUT/SUT Classification

This document aims to classify the DUT/SUT in four different categories based on its maximum supported firewall throughput performance number defined in the vendor datasheet. This classification MAY help users to determine a specific configuration scale (e.g., number of ACL entries), traffic profiles, and attack traffic profiles, scaling those proportionally to the DUT/SUT sizing category.

The four categories are Extra Small (XS), Small (S), Medium (M), and Large (L). The RECOMMENDED throughput values for the categories are:

Extra Small (XS) - Supported throughput less than or equal to 1 Gbit/s

Small (S) - Supported throughput greater than 1 Gbit/s and less than or equal to 5 Gbit/s

Medium (M) - Supported throughput greater than 5 Gbit/s and less than or equal to 10 Gbit/s

Large (L) - Supported throughput greater than 10 Gbit/s

Authors' Addresses

Balamuhunthan Balarajah
Berlin
Germany

Email: bm.balarajah@gmail.com

Carsten Rossenhoevel
EANTC AG
Salzufer 14
10587 Berlin
Germany

Email: cross@eantc.de

Brian Monkman
NetSecOPEN
417 Independence Court
Mechanicsburg, PA 17050
United States of America

Email: bmonkman@netsecopen.org