Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Obsoletes: 3511 (if approved)                           C. Rossenhoevel
Intended status: Informational                                  EANTC AG
Expires: 23 April 2022                                        B. Monkman
                                                              NetSecOPEN
                                                            October 2021

    Benchmarking Methodology for Network Security Device Performance
                  draft-ietf-bmwg-ngfw-performance-11

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), next-generation intrusion prevention systems
   (NGIPS), and unified threat management (UTM) implementations.  This
   document aims to improve the applicability, reproducibility, and
   transparency of benchmarks and to align the test methodology with
   today's increasingly complex, layer 7 security-centric network
   application use cases.  The main areas covered in this document are
   test terminology, test configuration parameters, and benchmarking
   methodology for NGFW and NGIPS.  This document obsoletes RFC 3511.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 23 April 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents (https://trustee.ietf.org/
   license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
       4.1.  Testbed Configuration
       4.2.  DUT/SUT Configuration
             4.2.1.  Security Effectiveness Configuration
       4.3.  Test Equipment Configuration
             4.3.1.  Client Configuration
             4.3.2.  Backend Server Configuration
             4.3.3.  Traffic Flow Definition
             4.3.4.  Traffic Load Profile
   5.  Testbed Considerations
   6.  Reporting
       6.1.  Introduction
       6.2.  Detailed Test Results
       6.3.  Benchmarks and Key Performance Indicators
   7.  Benchmarking Tests
       7.1.  Throughput Performance with Application Traffic Mix
             7.1.1.  Objective
             7.1.2.  Test Setup
             7.1.3.  Test Parameters
             7.1.4.  Test Procedures and Expected Results
       7.2.  TCP/HTTP Connections Per Second
             7.2.1.  Objective
             7.2.2.  Test Setup
             7.2.3.  Test Parameters
             7.2.4.  Test Procedures and Expected Results
       7.3.  HTTP Throughput
             7.3.1.  Objective
             7.3.2.  Test Setup
             7.3.3.  Test Parameters
             7.3.4.  Test Procedures and Expected Results
       7.4.  HTTP Transaction Latency
             7.4.1.  Objective
             7.4.2.  Test Setup
             7.4.3.  Test Parameters
             7.4.4.  Test Procedures and Expected Results
       7.5.  Concurrent TCP/HTTP Connection Capacity
             7.5.1.  Objective
             7.5.2.  Test Setup
             7.5.3.  Test Parameters
             7.5.4.  Test Procedures and Expected Results
       7.6.  TCP/HTTPS Connections per Second
             7.6.1.  Objective
             7.6.2.  Test Setup
             7.6.3.  Test Parameters
             7.6.4.  Test Procedures and Expected Results
       7.7.  HTTPS Throughput
             7.7.1.  Objective
             7.7.2.  Test Setup
             7.7.3.  Test Parameters
             7.7.4.  Test Procedures and Expected Results
       7.8.  HTTPS Transaction Latency
             7.8.1.  Objective
             7.8.2.  Test Setup
             7.8.3.  Test Parameters
             7.8.4.  Test Procedures and Expected Results
       7.9.  Concurrent TCP/HTTPS Connection Capacity
             7.9.1.  Objective
             7.9.2.  Test Setup
             7.9.3.  Test Parameters
             7.9.4.  Test Procedures and Expected Results
   8.  IANA Considerations
   9.  Security Considerations
   10. Contributors
   11. Acknowledgements
   12. References
       12.1.  Normative References
       12.2.  Informative References
   Appendix A.  Test Methodology - Security Effectiveness Evaluation
       A.1.  Test Objective
       A.2.  Testbed Setup
       A.3.  Test Parameters
             A.3.1.  DUT/SUT Configuration Parameters
             A.3.2.  Test Equipment Configuration Parameters
       A.4.  Test Results Validation Criteria
       A.5.  Measurement
       A.6.  Test Procedures and Expected Results
             A.6.1.  Step 1: Background Traffic
             A.6.2.  Step 2: CVE Emulation
   Appendix B.  DUT/SUT Classification
   Authors' Addresses

1.  Introduction

   Eighteen years have passed since the IETF first recommended test
   methodology and terminology for firewalls [RFC3511].  The
   requirements for network security element performance and
   effectiveness have increased tremendously since then.
   Security function implementations have evolved into more advanced
   areas and have diversified into intrusion detection and prevention,
   threat management, analysis of encrypted traffic, etc.  In an
   industry of growing importance, well-defined and reproducible key
   performance indicators (KPIs) are increasingly needed, as they
   enable fair and reasonable comparison of network security functions.
   All these reasons have led to the creation of a new benchmarking
   document for next-generation network security devices; this document
   obsoletes [RFC3511].

2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for modern and next-generation network security devices that are
   configured in Active ("Inline", see Figure 1 and Figure 2) mode.  It
   covers the validation of security effectiveness configurations of
   network security devices, followed by performance benchmark testing.
   This document focuses on advanced, realistic, and reproducible
   testing methods.  Additionally, it describes testbed environments,
   test tool requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document applies to all benchmarking
   tests described in Section 7.  The test setup MUST be contained
   within an Isolated Test Environment (see Section 3 of [RFC6815]).

4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not due to
   inherent physical network limitations, such as the number of
   physical links or the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.  For
   this reason, this document recommends avoiding external devices such
   as switches and routers in the testbed wherever possible.

   In some deployment scenarios, the network security devices (Device
   Under Test/System Under Test) are connected to routers and switches,
   which will reduce the number of entries in the MAC or ARP tables of
   the Device Under Test/System Under Test (DUT/SUT).  If MAC or ARP
   tables have many entries, this may impact the actual DUT/SUT
   performance due to MAC and ARP/ND (Neighbor Discovery) table lookup
   processes.  This document therefore also recommends using test
   equipment capable of emulating layer 3 routing functionality instead
   of adding external routers to the testbed.

   The testbed setup Option 1 (Figure 1) is the RECOMMENDED testbed
   setup for the benchmarking tests.
   +-----------------------+                  +-----------------------+
   | +-------------------+ |  +-----------+   | +-------------------+ |
   | | Emulated Router(s)| |  |           |   | | Emulated Router(s)| |
   | |    (Optional)     | +----- DUT/SUT +-----+    (Optional)     | |
   | +-------------------+ |  |           |   | +-------------------+ |
   | +-------------------+ |  +-----------+   | +-------------------+ |
   | |      Clients      | |                  | |      Servers      | |
   | +-------------------+ |                  | +-------------------+ |
   |                       |                  |                       |
   |    Test Equipment     |                  |    Test Equipment     |
   +-----------------------+                  +-----------------------+

                    Figure 1: Testbed Setup - Option 1

   If the test equipment used is not capable of emulating layer 3
   routing functionality, or if the number of used ports is mismatched
   between the test equipment and the DUT/SUT (creating a need for test
   equipment port aggregation), the test setup can be configured as
   shown in Figure 2.

    +-------------------+      +-----------+      +--------------------+
    |Aggregation Switch/|      |           |      | Aggregation Switch/|
    |      Router       +------+  DUT/SUT  +------+       Router       |
    |                   |      |           |      |                    |
    +----------+--------+      +-----------+      +--------+-----------+
               |                                           |
               |                                           |
   +-----------+-----------+                   +-----------+-----------+
   |                       |                   |                       |
   | +-------------------+ |                   | +-------------------+ |
   | | Emulated Router(s)| |                   | | Emulated Router(s)| |
   | |    (Optional)     | |                   | |    (Optional)     | |
   | +-------------------+ |                   | +-------------------+ |
   | +-------------------+ |                   | +-------------------+ |
   | |      Clients      | |                   | |      Servers      | |
   | +-------------------+ |                   | +-------------------+ |
   |                       |                   |                       |
   |    Test Equipment     |                   |    Test Equipment     |
   +-----------------------+                   +-----------------------+

                    Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.  Since each DUT/SUT will have its own
   unique configuration, users SHOULD configure their device with the
   same parameters and security features that would be used in the
   actual deployment of the device, or in a typical deployment, in
   order to achieve maximum network security coverage.  The DUT/SUT
   MUST be configured in "Inline" mode so that the traffic is actively
   inspected by the DUT/SUT.  Also, "Fail-Open" behavior MUST be
   disabled on the DUT/SUT.

   Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL sets
   of network security features for NGFW and NGIPS, respectively.  The
   selected security features SHOULD be consistently enabled on the
   DUT/SUT for all benchmarking tests described in Section 7.

   To improve repeatability, a summary of the DUT/SUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.
   +============================+=============+==========+
   | DUT/SUT (NGFW) Features    | RECOMMENDED | OPTIONAL |
   +============================+=============+==========+
   | SSL Inspection             |      x      |          |
   +----------------------------+-------------+----------+
   | IDS/IPS                    |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Spyware               |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Virus                 |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Botnet                |      x      |          |
   +----------------------------+-------------+----------+
   | Web Filtering              |             |    x     |
   +----------------------------+-------------+----------+
   | Data Loss Protection (DLP) |             |    x     |
   +----------------------------+-------------+----------+
   | DDoS                       |             |    x     |
   +----------------------------+-------------+----------+
   | Certificate Validation     |             |    x     |
   +----------------------------+-------------+----------+
   | Logging and Reporting      |      x      |          |
   +----------------------------+-------------+----------+
   | Application Identification |      x      |          |
   +----------------------------+-------------+----------+

                   Table 1: NGFW Security Features

   +============================+=============+==========+
   | DUT/SUT (NGIPS) Features   | RECOMMENDED | OPTIONAL |
   +============================+=============+==========+
   | SSL Inspection             |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Malware               |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Spyware               |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Botnet                |      x      |          |
   +----------------------------+-------------+----------+
   | Logging and Reporting      |      x      |          |
   +----------------------------+-------------+----------+
   | Application Identification |      x      |          |
   +----------------------------+-------------+----------+
   | Deep Packet Inspection     |      x      |          |
   +----------------------------+-------------+----------+
   | Anti-Evasion               |      x      |          |
   +----------------------------+-------------+----------+

                   Table 2: NGIPS Security Features

   The following table provides a brief description of the security
   features.

   +================+================================================+
   | DUT/SUT        | Description                                    |
   | Features       |                                                |
   +================+================================================+
   | SSL Inspection | DUT/SUT intercepts and decrypts inbound HTTPS  |
   |                | traffic between servers and clients.  Once the |
   |                | content inspection has been completed, the     |
   |                | DUT/SUT encrypts the HTTPS traffic with the    |
   |                | ciphers and keys used by the clients and       |
   |                | servers.                                       |
   +----------------+------------------------------------------------+
   | IDS/IPS        | DUT/SUT detects and blocks exploits targeting  |
   |                | known and unknown vulnerabilities across the   |
   |                | monitored network.                             |
   +----------------+------------------------------------------------+
   | Anti-Malware   | DUT/SUT detects and prevents the transmission  |
   |                | of malicious executable code and any           |
   |                | associated communications across the monitored |
   |                | network.  This includes data exfiltration as   |
   |                | well as command and control channels.          |
   +----------------+------------------------------------------------+
   | Anti-Spyware   | Anti-Spyware is a subcategory of Anti-Malware. |
   |                | Spyware transmits information without the      |
   |                | user's knowledge or permission.  DUT/SUT       |
   |                | detects and blocks the initial infection or    |
   |                | the transmission of data.                      |
   +----------------+------------------------------------------------+
   | Anti-Botnet    | DUT/SUT detects traffic to or from botnets.    |
   +----------------+------------------------------------------------+
   | Anti-Evasion   | DUT/SUT detects and mitigates attacks that     |
   |                | have been obfuscated in some manner.           |
   +----------------+------------------------------------------------+
   | Web Filtering  | DUT/SUT detects and blocks malicious websites, |
   |                | including defined classifications of websites, |
   |                | across the monitored network.                  |
   +----------------+------------------------------------------------+
   | DLP            | DUT/SUT detects and prevents data breaches and |
   |                | data exfiltration, or it detects and blocks    |
   |                | the transmission of sensitive data across the  |
   |                | monitored network.                             |
   +----------------+------------------------------------------------+
   | Certificate    | DUT/SUT validates certificates used in         |
   | Validation     | encrypted communications across the monitored  |
   |                | network.                                       |
   +----------------+------------------------------------------------+
   | Logging and    | DUT/SUT logs and reports all traffic at the    |
   | Reporting      | flow level across the monitored network.       |
   +----------------+------------------------------------------------+
   | Application    | DUT/SUT detects known applications as defined  |
   | Identification | within the selected traffic mix across the     |
   |                | monitored network.                             |
   +----------------+------------------------------------------------+

                Table 3: Security Feature Description

   Below is a summary of the DUT/SUT configuration:

   *  The DUT/SUT MUST be configured in "Inline" mode.

   *  "Fail-Open" behavior MUST be disabled.

   *  All RECOMMENDED security features are enabled.

   *  Logging SHOULD be enabled.  The DUT/SUT SHOULD log all traffic at
      the flow level; logging to an external device is permissible.

   *  Geographical location filtering and Application Identification
      and Control SHOULD be configured to trigger based on a site or
      application from the defined traffic mix.

   In addition, a realistic number of access control rules (ACL) SHOULD
   be configured on the DUT/SUT, where ACLs are configurable and
   reasonable based on the deployment scenario.  This document
   determines the number of access policy rules for four different
   classes of DUT/SUT: Extra Small (XS), Small (S), Medium (M), and
   Large (L).  A sample DUT/SUT classification is described in
   Appendix B.

   The access control rules (ACL) defined in Figure 3 MUST be
   configured from top to bottom in the order shown in the table,
   because the ACL types are listed in order of decreasing specificity,
   with "block" rules first, followed by "allow" rules, representing a
   typical ACL-based security policy.  The ACL entries SHOULD be
   configured with IP subnets that are routable by the DUT/SUT.  (Note:
   There will be differences in how security vendors implement ACL
   decision making.)  The configured ACL MUST NOT block the security
   and measurement traffic used for the benchmarking tests.

                                                     +---------------+
                                                     |    DUT/SUT    |
                                                     | Classification|
                                                     |    # Rules    |
   +-----------+-----------+--------------------+------+---+---+---+---+
   |           |   Match   |                    |      |   |   |   |   |
   | Rules Type|  Criteria |     Description    |Action| XS| S | M | L |
   +-------------------------------------------------------------------+
   |Application|Application| Any application    | block| 5 | 10| 20| 50|
   |layer      |           | not included in    |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Transport  |SRC IP and | Any SRC IP subnet  | block| 25| 50|100|250|
   |layer      |TCP/UDP    | used and any DST   |      |   |   |   |   |
   |           |DST ports  | ports not used in  |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |IP layer   |SRC/DST IP | Any SRC/DST IP     | block| 25| 50|100|250|
   |           |           | subnet not used    |      |   |   |   |   |
   |           |           | in the measurement |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Application|Application| Half of the        | allow| 10| 10| 10| 10|
   |layer      |           | applications       |      |   |   |   |   |
   |           |           | included in the    |      |   |   |   |   |
   |           |           | measurement traffic|      |   |   |   |   |
   |           |           |(see the note below)|      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Transport  |SRC IP and | Half of the SRC    | allow| >1| >1| >1| >1|
   |layer      |TCP/UDP    | IPs used and any   |      |   |   |   |   |
   |           |DST ports  | DST ports used in  |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   |           |           | (one rule per      |      |   |   |   |   |
   |           |           | subnet)            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |IP layer   |SRC IP     | The rest of the    | allow| >1| >1| >1| >1|
   |           |           | SRC IP subnet      |      |   |   |   |   |
   |           |           | range used in the  |      |   |   |   |   |
   |           |           | measurement        |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   |           |           | (one rule per      |      |   |   |   |   |
   |           |           | subnet)            |      |   |   |   |   |
   +-----------+-----------+--------------------+------+---+---+---+---+

                     Figure 3: DUT/SUT Access List

   Note: If half of the applications included in the measurement
   traffic amounts to fewer than 10, the missing number of ACL entries
   (dummy rules) can be configured for application traffic not included
   in the measurement traffic.
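   The fixed rule counts in Figure 3 can be tabulated programmatically
   when generating test configurations.  The following Python sketch is
   illustrative and non-normative; the data structure simply mirrors
   the "block" and "allow" counts of Figure 3 per DUT/SUT
   classification (see Appendix B for a sample classification):

      # Illustrative tabulation of the Figure 3 rule counts per
      # DUT/SUT class (XS, S, M, L).  "allow" rules at the transport
      # and IP layers are "one rule per subnet" and are therefore
      # only lower-bounded (>1), so they are omitted here.
      ACL_RULE_COUNTS = {
          ("application", "block"): {"XS": 5,  "S": 10, "M": 20,  "L": 50},
          ("transport",   "block"): {"XS": 25, "S": 50, "M": 100, "L": 250},
          ("ip",          "block"): {"XS": 25, "S": 50, "M": 100, "L": 250},
          ("application", "allow"): {"XS": 10, "S": 10, "M": 10,  "L": 10},
      }

      def block_rule_total(dut_class):
          """Total number of fixed 'block' rules for a DUT/SUT class."""
          return sum(counts[dut_class]
                     for (layer, action), counts in ACL_RULE_COUNTS.items()
                     if action == "block")

      print(block_rule_total("M"))  # 220 fixed block rules (Medium)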
4.2.1.  Security Effectiveness Configuration

   The security features (defined in Table 1 and Table 2) of the
   DUT/SUT MUST be configured effectively to detect, prevent, and
   report the defined security vulnerability sets.  This section
   defines the selection of the security vulnerability sets from the
   Common Vulnerabilities and Exposures (CVE) list for the testing.
   The vulnerability set SHOULD reflect a minimum of 500 CVEs, no older
   than 10 calendar years from the current year.  These CVEs SHOULD be
   selected with a focus on in-use software commonly found in business
   applications, with a Common Vulnerability Scoring System (CVSS)
   Severity of High (7-10).

   This document is primarily focused on performance benchmarking.
   However, it is RECOMMENDED to validate the security features
   configuration of the DUT/SUT by evaluating the security
   effectiveness as a prerequisite for the performance benchmarking
   tests defined in Section 7.  In case the benchmarking tests are
   performed without evaluating security effectiveness, the test report
   MUST explain the implications of this.  The methodology for
   evaluating security effectiveness is defined in Appendix A.
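   The vulnerability set selection above reduces to a simple filter.
   The following Python sketch is illustrative only; the cves iterable
   and its year and cvss attributes are assumed stand-ins for a real
   CVE feed, not an interface defined by this document:

      import datetime

      def select_vulnerability_set(cves, minimum=500):
          # Keep CVEs no older than 10 calendar years with a CVSS
          # severity of High (7-10), per Section 4.2.1.
          cutoff_year = datetime.date.today().year - 10
          chosen = [c for c in cves
                    if c.year >= cutoff_year and 7.0 <= c.cvss <= 10.0]
          if len(chosen) < minimum:
              raise ValueError(f"only {len(chosen)} matching CVEs; "
                               f"the set SHOULD contain >= {minimum}")
          return chosen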
4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters thereby influence the
   traffic flows that will be offered and impact the performance
   measurements.

   This section specifies common test equipment configuration
   parameters applicable to all benchmarking tests defined in
   Section 7.  Any benchmarking-test-specific parameters are described
   individually under the test setup section of each benchmarking test.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the RECOMMENDED values for certain parameters; these
   values are the current defaults in most client operating systems.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a congestion control algorithm at client
   and server endpoints.  The IPv4 and IPv6 Maximum Segment Size (MSS)
   SHOULD be set to 1460 bytes and 1440 bytes, respectively, and a TX
   and RX initial receive window of 64 KByte SHOULD be used.  The
   client initial congestion window SHOULD NOT exceed 10 times the MSS.
   Delayed ACKs are permitted, and the maximum client delayed ACK
   SHOULD NOT exceed 10 times the MSS before a forced ACK.  Up to three
   retries SHOULD be allowed before a timeout event is declared.  All
   traffic MUST set the TCP PSH flag to high.  The source port range
   SHOULD be 1024 - 65535.  Internal timeouts SHOULD be dynamically
   scalable per [RFC0793].  The client SHOULD initiate and close TCP
   connections.  The TCP connection MUST be initiated via a TCP
   three-way handshake (SYN, SYN/ACK, ACK), and it MUST be closed via
   either a TCP three-way close (FIN, FIN/ACK, ACK) or a TCP four-way
   close (FIN, ACK, FIN, ACK).

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.

   *  The IP blocks SHOULD consist of multiple unique, discontinuous
      static address blocks.

   *  A default gateway is permitted.

   *  The DSCP (Differentiated Services Code Point) marking is set to
      DF (Default Forwarding) '000000' in the IPv4 Type of Service
      (ToS) field and the IPv6 Traffic Class field.

   The following equation can be used to define the total number of
   client IP addresses that will be configured on the test equipment:

      Desired total number of client IPs = Target throughput [Mbit/s] /
      Average throughput per IP address [Mbit/s]

   As shown in the example list below, the value for "Average
   throughput per IP address" can vary depending on the deployment and
   use case scenario.

   (Option 1)  DUT/SUT deployment scenario 1: 6-7 Mbit/s per IP (e.g.,
               1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2)  DUT/SUT deployment scenario 2: 0.1-0.2 Mbit/s per IP
               (e.g., 50,000-100,000 IPs per 10 Gbit/s throughput)
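   The equation above is simple enough to automate when sizing test
   configurations.  The following Python sketch (illustrative,
   non-normative) computes the client IP count for both example
   deployment scenarios:

      import math

      def client_ip_count(target_throughput_mbps, avg_mbps_per_ip):
          """Desired total number of client IPs (Section 4.3.1.2)."""
          return math.ceil(target_throughput_mbps / avg_mbps_per_ip)

      # 10 Gbit/s target throughput:
      print(client_ip_count(10_000, 6.5))   # scenario 1: 1539 IPs
      print(client_ip_count(10_000, 0.15))  # scenario 2: 66667 IPs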
   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6.  The following options
   MAY be considered for the selection of a traffic mix ratio.

   (Option 1)  100% IPv4, no IPv6

   (Option 2)  80% IPv4, 20% IPv6

   (Option 3)  50% IPv4, 50% IPv6

   (Option 4)  20% IPv4, 80% IPv6

   (Option 5)  no IPv4, 100% IPv6

   Note: IANA has assigned IP address ranges for testing purposes, as
   described in Section 8.  If the test scenario requires more IP
   addresses or subnets than IANA has assigned, this document
   recommends using non-routable private IPv4 address ranges or Unique
   Local Address (ULA) IPv6 address ranges for the testing.

4.3.1.3.  Emulated Web Browser Attributes

   The client emulated web browser (emulated browser) contains
   attributes that will materially affect how traffic is loaded.  The
   objective is to emulate modern, typical browser attributes to
   improve the realism of the result set.

   For HTTP traffic emulation, the emulated browser MUST negotiate HTTP
   version 1.1 or higher.  Depending on the test scenario and the
   chosen HTTP version, the emulated browser MAY open multiple TCP
   connections per server endpoint IP at any time, depending on how
   many sequential transactions need to be processed.  For HTTP/2 or
   HTTP/3, the emulated browser MAY open multiple concurrent streams
   per connection (multiplexing).  If HTTP/3 is used, the emulated
   browser MUST open QUIC (Quick UDP Internet Connections) connections.
   HTTP settings such as the number of connections per server IP, the
   number of requests per connection, and the number of streams per
   connection MUST be documented.  This document refers to [RFC7540]
   for HTTP/2 and [RFC9000] for QUIC.  The emulated browser SHOULD
   advertise a User-Agent header.  The emulated browser SHOULD enforce
   content length validation.  Depending on the test scenario and the
   selected HTTP version, HTTP header compression MAY be enabled or
   disabled; this setting MUST be documented in the report.

   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use TLS
   version 1.2 or higher.  The TLS record size MAY be optimized for the
   HTTPS response object size, up to a record size of 16 KByte.  If
   Server Name Indication (SNI) is required in the traffic mix profile,
   the client endpoint MUST send the TLS extension Server Name
   Indication (SNI) information when opening a security tunnel.  Each
   client connection MUST perform a full handshake with the server
   certificate and MUST NOT use session reuse or resumption.

   The following TLS 1.2 cipher suites and keys are RECOMMENDED for the
   HTTPS-based benchmarking tests defined in Section 7.

   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

   Note: The above ciphers and keys are commonly used enterprise-grade
   encryption cipher suites for TLS 1.2.  It is recognized that these
   will evolve over time.  Individual certification bodies SHOULD use
   ciphers and keys that reflect evolving use cases.  These choices
   MUST be documented in the resulting test reports, with detailed
   information on the ciphers and keys used along with the reasons for
   the choices.

   [RFC8446] defines the following cipher suites for use with TLS 1.3.

   1.  TLS_AES_128_GCM_SHA256

   2.  TLS_AES_256_GCM_SHA384

   3.  TLS_CHACHA20_POLY1305_SHA256

   4.  TLS_AES_128_CCM_SHA256

   5.  TLS_AES_128_CCM_8_SHA256
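   As a minimal, non-normative illustration of these client-side
   requirements (TLS 1.2 or higher, a pinned cipher suite, SNI, and no
   session resumption), the following Python ssl sketch shows one
   possible realization on test equipment exposing a standard TLS
   stack; the hostname and CA file are hypothetical testbed values:

      import socket
      import ssl

      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2
      # Pin one of the RECOMMENDED TLS 1.2 suites listed above
      # (TLS 1.3 suites are controlled separately by the stack).
      ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")
      ctx.options |= ssl.OP_NO_TICKET           # no ticket resumption
      ctx.load_verify_locations("test-ca.pem")  # hypothetical test CA

      with socket.create_connection(("server.example.test", 443)) as sock:
          # server_hostname sends the SNI extension; a full handshake
          # is performed because no cached session is supplied.
          with ctx.wrap_socket(sock,
                               server_hostname="server.example.test") as tls:
              print(tls.version(), tls.cipher())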
4.3.2.  Backend Server Configuration

   This section specifies which parameters SHOULD be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack on the server side SHOULD be configured similarly to
   the client side configuration described in Section 4.3.1.1.  In
   addition, the server initial congestion window MUST NOT exceed 10
   times the MSS.  Delayed ACKs are permitted, and the maximum server
   delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

   The sum of the server IP space SHOULD contain the following
   attributes.

   *  The server IP blocks SHOULD consist of unique, discontinuous
      static address blocks with one IP per server Fully Qualified
      Domain Name (FQDN) endpoint per test port.

   *  A default gateway is permitted.  The DSCP (Differentiated
      Services Code Point) marking is set to DF (Default Forwarding)
      '000000' in the IPv4 Type of Service (ToS) field and the IPv6
      Traffic Class field.

   *  The server IP addresses SHOULD be distributed between IPv4 and
      IPv6 with a ratio identical to the clients' distribution ratio.

   Note: IANA has assigned IP address ranges for testing purposes, as
   described in Section 8.  If the test scenario requires more IP
   addresses or subnets than IANA has assigned, this document
   recommends using non-routable private IPv4 address ranges or Unique
   Local Address (ULA) IPv6 address ranges for the testing.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   the same HTTP version and settings chosen by the client (emulated
   web browser).  The server MUST advertise a server type in the Server
   response header [RFC2616].  For an HTTPS server, TLS 1.2 or higher
   MUST be used with a maximum record size of 16 KByte, and ticket
   resumption or session ID reuse MUST NOT be used.  The server SHOULD
   listen on TCP port 443.  The server SHALL serve a certificate to the
   client.  The HTTPS server MUST check host SNI information against
   the FQDN if SNI is in use.  The cipher suite and key size on the
   server side MUST be configured similarly to the client side
   configuration described in Section 4.3.1.3.

4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between client and server
   endpoints.  At the beginning of the test, the server endpoint
   initializes and will be ready to accept connection states, including
   initialization of the TCP stack as well as bound HTTP and HTTPS
   servers.  When a client endpoint is needed, it initializes and is
   given attributes such as a MAC and IP address.  The behavior of the
   client is to sweep through the given server IP space, generating a
   service recognizable by the DUT.  Sequential and pseudorandom sweep
   methods are acceptable; the method used MUST be stated in the final
   report.  Thus, a balanced mesh between client endpoints and server
   endpoints will be generated in client IP and port to server IP and
   port combinations.  Each client endpoint performs the same actions
   as other endpoints, with the difference being the source IP of the
   client endpoint and the target server IP pool.  The client MUST use
   the server IP address or FQDN in the Host header [RFC2616].
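   The sweep behavior described above can be modeled as a simple
   generator.  This Python sketch is illustrative and non-normative; it
   yields target server IPs either sequentially or pseudorandomly (the
   method used MUST be stated in the final report), and the example
   addresses are from the IANA benchmarking range:

      import random

      def server_sweep(server_ips, pseudorandom=False, seed=0):
          """Yield target server IPs for one client endpoint, repeatedly."""
          ips = list(server_ips)
          if pseudorandom:
              # A fixed seed keeps a pseudorandom sweep reproducible.
              random.Random(seed).shuffle(ips)
          while True:
              yield from ips

      sweep = server_sweep(["198.18.0.10", "198.18.0.11"],
                           pseudorandom=True)
      print(next(sweep), next(sweep))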
4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  This section describes how a client steps
   through different services when it initiates traffic.  Once the test
   is initialized, the client endpoints randomly hold (perform no
   operation) for a few milliseconds to better randomize the start of
   client traffic.  Each client will either open a new TCP connection
   or connect to a TCP persistence stack still open to that specific
   server.  At any point that the traffic profile requires encryption,
   a TLS encryption tunnel will form, presenting the URL or IP address
   request to the server.  If using SNI, the server MUST then perform
   an SNI name check of the proposed FQDN against the domain embedded
   in the certificate.  Only if the check succeeds will the server
   process the HTTPS response object.  The initial response object to
   the server is based on the benchmarking tests described in
   Section 7.  Multiple additional sub-URLs (response objects on the
   service page) MAY be requested simultaneously.  This MAY be to the
   same server IP as the initial URL.  Each sub-object will also use a
   canonical FQDN and URL path, as observed in the traffic mix used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  A traffic load
   profile has five phases: init, ramp up, sustain, ramp down, and
   collection.

   1.  Init phase: Testbed devices, including the client and server
       endpoints, SHOULD negotiate layer 2-3 connectivity such as MAC
       learning and ARP.  Only after successful MAC learning or ARP/ND
       resolution SHALL the test iteration move to the next phase.  No
       measurements are made in this phase.  The minimum RECOMMENDED
       time for the init phase is 5 seconds.  During this phase, the
       emulated clients SHOULD NOT initiate any sessions with the
       DUT/SUT; in contrast, the emulated servers SHOULD be ready to
       accept requests from the DUT/SUT or from the emulated clients.

   2.  Ramp up phase: The test equipment SHOULD start to generate the
       test traffic.  It SHOULD use a set of the approximate number of
       unique client IP addresses to generate traffic.  The traffic
       SHOULD ramp up from zero to the desired target objective.  The
       target objective is defined for each benchmarking test.  The
       duration of the ramp up phase MUST be configured long enough
       that the test equipment does not overwhelm the DUT/SUT's stated
       performance metrics defined in Section 6.3, namely TCP
       Connections Per Second, Inspected Throughput, Concurrent TCP
       Connections, and Application Transactions Per Second.  No
       measurements are made in this phase.

   3.  Sustain phase: Starts when all required clients are active and
       operating at their desired load condition.  In the sustain
       phase, the test equipment SHOULD continue generating traffic at
       a constant target value for a constant number of active clients.
       The minimum RECOMMENDED time duration for the sustain phase is
       300 seconds.  This is the phase where measurements occur.  The
       test equipment SHOULD measure and record statistics
       continuously; the sampling interval for collecting the raw
       results and calculating the statistics SHOULD be less than 2
       seconds.

   4.  Ramp down phase: No new connections are established, and no
       measurements are made.  The time durations of the ramp up and
       ramp down phases SHOULD be the same.

   5.  Collection phase: The last phase is administrative and occurs
       when the test equipment merges and collates the report data.
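   The five phases can be represented as a simple schedule.  The
   following Python sketch is illustrative and non-normative; the ramp
   duration is an assumed value, and the ramp up and ramp down
   durations are kept equal as required above:

      def traffic_load_profile(target_load, init_s=5, ramp_s=60,
                               sustain_s=300):
          """Phase schedule per Section 4.3.4: (name, seconds, load)."""
          return [
              ("init",       init_s,    0),           # no measurements
              ("ramp up",    ramp_s,    target_load), # 0 -> target
              ("sustain",    sustain_s, target_load), # measurements here
              ("ramp down",  ramp_s,    0),           # no measurements
              ("collection", 0,         0),           # report collation
          ]

      for phase in traffic_load_profile(target_load=100_000):
          print(phase)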
5.  Testbed Considerations

   This section describes steps for a reference test (pre-test) that
   controls the test environment, including the test equipment,
   focusing on physical and virtualized environments as well as the
   test equipment itself.  Below are the RECOMMENDED steps for the
   reference test.

   1.  Perform the reference test either by configuring the DUT/SUT in
       the most trivial setup (fast forwarding) or without the presence
       of the DUT/SUT.

   2.  Generate traffic from the traffic generator.  Choose the traffic
       profile used for the HTTP or HTTPS throughput performance test
       with the smallest object size.

   3.  Ensure that any ancillary switching or routing functions added
       in the test equipment do not limit performance by introducing
       impairments such as packet loss and additional latency.  This is
       specifically important for virtualized components (e.g.,
       vSwitches, vRouters).

   4.  Verify that the generated traffic (performance) of the test
       equipment matches and reasonably exceeds the expected maximum
       performance of the DUT/SUT.

   5.  Record the network performance metrics (packet loss and latency)
       introduced by the test environment (without the DUT/SUT).

   6.  Assert that the testbed characteristics are stable during the
       entire test session.  Several factors might influence stability,
       specifically for virtualized testbeds, for example, additional
       workloads in a virtualized system, load balancing, and movement
       of virtual machines during the test, or simple issues such as
       additional heat created by high workloads leading to an
       emergency CPU performance reduction.

   The reference test SHOULD be performed before the benchmarking tests
   (described in Section 7) start.
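   Step 4 amounts to a capacity check of the test equipment against the
   DUT/SUT.  A trivial, non-normative Python sketch follows; the 10%
   headroom factor is an assumption chosen for illustration, not a
   requirement of this document:

      def testbed_qualified(generator_max_gbps, dut_expected_max_gbps,
                            headroom=1.10):
          """True if the generator reasonably exceeds the DUT/SUT max."""
          return generator_max_gbps >= dut_expected_max_gbps * headroom

      print(testbed_qualified(100.0, 80.0))  # True: 100G gen, 80G DUT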
6.  Reporting

   This section describes how the benchmarking test report should be
   formatted and presented.  It is RECOMMENDED to include two main
   sections in the report: the introduction section and the detailed
   test results section.

6.1.  Introduction

   The following attributes SHOULD be present in the introduction
   section of the test report.

   1.  The time and date of the execution of the tests

   2.  Summary of testbed software and hardware details

       a.  DUT/SUT hardware/virtual configuration

           *  This section SHOULD clearly identify the make and model
              of the DUT/SUT

           *  The port interfaces, including speed and link information

           *  If the DUT/SUT is a Virtual Network Function (VNF), host
              (server) hardware and software details, interface
              acceleration type such as DPDK and SR-IOV, used CPU
              cores, used RAM, resource sharing (e.g., pinning details
              and NUMA node) configuration details, hypervisor version,
              and virtual switch version

           *  Details of any additional hardware relevant to the
              DUT/SUT, such as controllers

       b.  DUT/SUT software

           *  Operating system name

           *  Version

           *  Specific configuration details (if any)

       c.  DUT/SUT enabled features

           *  Configured DUT/SUT features (see Table 1 and Table 2)

           *  Attributes of the above-mentioned features

           *  Any additional relevant information about the features

       d.  Test equipment hardware and software

           *  Test equipment vendor name

           *  Hardware details, including model number and interface
              type

           *  Test equipment firmware and test application software
              versions

       e.  Key test parameters

           *  Used cipher suites and keys

           *  IPv4 and IPv6 traffic distribution

           *  Number of configured ACLs

       f.  Details of the application traffic mix used in the
           benchmarking test "Throughput Performance with Application
           Traffic Mix" (Section 7.1)

           *  Names of applications and layer 7 protocols

           *  Percentage of emulated traffic for each application and
              layer 7 protocol

           *  Percentage of encrypted traffic and the used cipher
              suites and keys (the RECOMMENDED ciphers and keys are
              defined in Section 4.3.1.3)

           *  Used object sizes for each application and layer 7
              protocol

   3.  Results Summary / Executive Summary

       a.  Results SHOULD resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy-to-read block.

6.2.  Detailed Test Results

   In the results section of the test report, the following attributes
   SHOULD be present for each benchmarking test.

   a.  KPIs MUST be documented separately for each benchmarking test.
       The format of the KPI metrics SHOULD be presented as described
       in Section 6.3.

   b.  The next level of detail SHOULD be graphs showing each of these
       metrics over the duration (sustain phase) of the test.  This
       allows the user to see changes in the measured performance
       stability over time.

6.3.  Benchmarks and Key Performance Indicators

   This section lists key performance indicators (KPIs) for the overall
   benchmarking tests.  All KPIs MUST be measured during the sustain
   phase of the traffic load profile described in Section 4.3.4.  All
   KPIs MUST be measured from the result output of the test equipment.

   *  Concurrent TCP Connections

      The aggregate number of simultaneous connections between hosts
      across the DUT/SUT, or between hosts and the DUT/SUT (defined in
      [RFC2647]).

   *  TCP Connections Per Second

      The average number of successfully established TCP connections
      per second between hosts across the DUT/SUT, or between hosts and
      the DUT/SUT.  The TCP connection MUST be initiated via a TCP
      three-way handshake (SYN, SYN/ACK, ACK).  Then the TCP session
      data is sent.  The TCP session MUST be closed via either a TCP
      three-way close (FIN, FIN/ACK, ACK) or a TCP four-way close (FIN,
      ACK, FIN, ACK), and MUST NOT be closed via RST.

   *  Application Transactions Per Second

      The average number of successfully completed transactions per
      second.  For a particular transaction to be considered
      successful, all data MUST have been transferred in its entirety.
      In the case of HTTP(S) transactions, it MUST have a valid status
      code (200 OK), and the appropriate FIN, FIN/ACK sequence MUST
      have been completed.

   *  TLS Handshake Rate

      The average number of successfully established TLS connections
      per second between hosts across the DUT/SUT, or between hosts and
      the DUT/SUT.

   *  Inspected Throughput

      The number of bits per second of examined and allowed traffic a
      network security device is able to transmit to the correct
      destination interface(s) in response to a specified offered load.
      The throughput benchmarking tests defined in Section 7 SHOULD
      measure the average layer 2 throughput value when the DUT/SUT is
      "inspecting" traffic.  This document recommends presenting the
      inspected throughput value in Gbit/s, rounded to two places of
      precision, with a more specific Kbit/s value in parentheses.

   *  Time to First Byte (TTFB)

      TTFB is the elapsed time between the start of sending the TCP SYN
      packet from the client and the client receiving the first packet
      of application data from the server or DUT/SUT.  The benchmarking
      tests HTTP Transaction Latency (Section 7.4) and HTTPS
      Transaction Latency (Section 7.8) measure the minimum, average,
      and maximum TTFB.  The value SHOULD be expressed in milliseconds.

   *  URL Response Time / Time to Last Byte (TTLB)

      URL response time / TTLB is the elapsed time between the start of
      sending the TCP SYN packet from the client and the client
      receiving the last packet of application data from the server or
      DUT/SUT.  The benchmarking tests HTTP Transaction Latency
      (Section 7.4) and HTTPS Transaction Latency (Section 7.8) measure
      the minimum, average, and maximum TTLB.  The value SHOULD be
      expressed in milliseconds.
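   The recommended presentation of the Inspected Throughput value can
   be produced with a helper like the following (illustrative Python,
   non-normative):

      def format_inspected_throughput(bits_per_second):
          """Gbit/s to two decimal places, with Kbit/s in parentheses."""
          return (f"{bits_per_second / 1e9:.2f} Gbit/s "
                  f"({bits_per_second / 1e3:.0f} Kbit/s)")

      print(format_inspected_throughput(9_876_543_210))
      # -> '9.88 Gbit/s (9876543 Kbit/s)'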
7.  Benchmarking Tests

7.1.  Throughput Performance with Application Traffic Mix

7.1.1.  Objective

   Using a relevant application traffic mix, determine the sustainable
   inspected throughput supported by the DUT/SUT.

   Based on the test customer's specific use case, testers can choose
   the relevant application traffic mix for this test.  The details of
   the traffic mix MUST be documented in the report.  At least the
   following traffic mix details MUST be documented and reported
   together with the test results:

      Names of applications and layer 7 protocols

      Percentage of emulated traffic for each application and layer 7
      protocol

      Percentage of encrypted traffic and the used cipher suites and
      keys (the RECOMMENDED ciphers and keys are defined in
      Section 4.3.1.3)

      Used object sizes for each application and layer 7 protocol

7.1.2.  Test Setup

   The testbed setup MUST be configured as defined in Section 4.  Any
   benchmarking-test-specific testbed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, the benchmarking-test-specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.  In case the DUT/SUT is
   configured without SSL inspection, the test report MUST explain the
   implications of this for the encrypted traffic within the relevant
   application traffic mix.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this benchmarking test:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target inspected throughput: Aggregated line rate of the
      interface(s) used in the DUT/SUT or the value defined based on
      the requirements of a specific deployment scenario

      Initial throughput: 10% of the "Target inspected throughput"
      (Note: Initial throughput is not a KPI to report.  This value is
      configured on the traffic generator and used to perform Step 1,
      "Test Initialization and Qualification", described in
      Section 7.1.4.)

      One of the ciphers and keys defined in Section 4.3.1.3 is
      RECOMMENDED for use in this benchmarking test.

7.1.3.3.  Traffic Profile

   Traffic profile: This test MUST be run with a relevant application
   traffic mix profile.

7.1.3.4.  Test Results Validation Criteria

   The following criteria are the test results validation criteria.
   The test results validation criteria MUST be monitored during the
   whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of total attempted transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections.
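   Both criteria are ratio checks against the 0.001% (1 in 100,000)
   threshold and can be monitored with a sketch like the following
   (illustrative Python; the counters are assumed to come from the test
   equipment's statistics output):

      MAX_FAILURE_RATIO = 0.00001   # 0.001% = 1 out of 100,000

      def validation_criteria_met(failed_transactions,
                                  total_transactions,
                                  unexpected_rsts, total_connections):
          """True if criteria (a) and (b) of Section 7.1.3.4 hold."""
          return (failed_transactions
                  < MAX_FAILURE_RATIO * total_transactions
                  and unexpected_rsts
                  < MAX_FAILURE_RATIO * total_connections)

      print(validation_criteria_met(1, 250_000, 0, 300_000))  # True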
7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this benchmarking
   test:

      Mandatory KPIs (benchmarks): Inspected Throughput, TTFB (minimum,
      average, and maximum), TTLB (minimum, average, and maximum), and
      Application Transactions Per Second

      Note: TTLB MUST be reported along with the object size used in
      the traffic profile.

      Optional KPIs: TCP Connections Per Second and TLS Handshake Rate

7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the inspected throughput
   performance of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps:
   Step 1 ensures the DUT/SUT is able to reach the performance value
   (initial throughput) and meets the test results validation criteria
   while it is minimally utilized.  Step 2 determines whether the
   DUT/SUT is able to reach the target performance value within the
   test results validation criteria.  Step 3 determines the maximum
   achievable performance value within the test results validation
   criteria.

   This test procedure MAY be repeated multiple times with different IP
   types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to generate
   test traffic at the "Initial throughput" rate, as described in
   Section 7.1.3.2.  The test equipment SHOULD follow the traffic load
   profile definition as described in Section 4.3.4.  The DUT/SUT
   SHOULD reach the "Initial throughput" during the sustain phase.
   Measure all KPIs as defined in Section 7.1.3.5.  The measured KPIs
   during the sustain phase MUST meet all the test results validation
   criteria defined in Section 7.1.3.4.

   If the KPI metrics do not meet the test results validation criteria,
   the test procedure MUST NOT continue to Step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   inspected throughput" rate defined in Section 7.1.3.2.  The test
   equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.  The test equipment SHOULD start to
   measure and record all specified KPIs.  Continue the test until all
   traffic profile phases are completed.

   Within the test results validation criteria, the DUT/SUT is expected
   to reach the desired value of the target objective ("Target
   inspected throughput") in the sustain phase.  Follow Step 3 if the
   measured value does not meet the target value or does not fulfill
   the test results validation criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the achievable average inspected throughput within the
   test results validation criteria.  The final test iteration MUST be
   performed for the test duration defined in Section 4.3.4.
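   Step 3 leaves the search strategy open.  One common, non-normative
   approach is a binary search between zero and the target objective,
   sketched below in Python; run_iteration() is an assumed test
   equipment hook that offers the given load for a full Section 4.3.4
   profile and returns whether all validation criteria passed:

      def find_max_inspected_throughput(target, run_iteration,
                                        resolution=0.01):
          """Binary-search the highest load meeting the criteria."""
          lo, hi, best = 0.0, float(target), 0.0
          while (hi - lo) > resolution * target:
              mid = (lo + hi) / 2
              if run_iteration(mid):   # criteria met at this load
                  best, lo = mid, mid
              else:
                  hi = mid
          return best   # re-run a final iteration at this value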
7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the sustainable TCP connection
   establishment rate supported by the DUT/SUT under different
   throughput load conditions.

   To measure connections per second, test iterations MUST use
   different fixed HTTP response object sizes (the different load
   conditions) defined in Section 7.2.3.2.

7.2.2.  Test Setup

   The testbed setup SHOULD be configured as defined in Section 4.  Any
   specific testbed configuration changes (number of interfaces,
   interface type, etc.) MUST be documented.

7.2.3.  Test Parameters

   In this section, the benchmarking-test-specific parameters SHOULD be
   defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this benchmarking test:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target connections per second: Initial value from the product
      datasheet or the value defined based on the requirements of a
      specific deployment scenario

      Initial connections per second: 10% of "Target connections per
      second" (Note: Initial connections per second is not a KPI to
      report.  This value is configured on the traffic generator and
      used to perform Step 1, "Test Initialization and Qualification",
      described in Section 7.2.4.)

   The client SHOULD negotiate HTTP and close the connection with FIN
   immediately after completion of one transaction.  In each test
   iteration, the client MUST send a GET request for a fixed HTTP
   response object size.

   The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64 KByte.
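   The per-connection client behavior (one GET for a fixed-size object,
   then an immediate close) can be pictured with this illustrative,
   non-normative Python sketch; the /objects/<size> URL layout is
   hypothetical:

      import http.client

      def one_transaction(server_ip, object_kbytes):
          """Open, fetch one fixed-size object, close with FIN."""
          conn = http.client.HTTPConnection(server_ip, 80)
          conn.request("GET", f"/objects/{object_kbytes}k")
          response = conn.getresponse()
          body = response.read()   # the entire transfer is required
          conn.close()             # close immediately after the transaction
          return (response.status == 200
                  and len(body) == object_kbytes * 1024)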
7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

Using HTTP traffic, determine the sustainable TCP connection
establishment rate supported by the DUT/SUT under different
throughput load conditions.

To measure connections per second, test iterations MUST use
different fixed HTTP response object sizes (the different load
conditions) defined in Section 7.2.3.2.

7.2.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.2.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.2.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target connections per second: Initial value from the product
   datasheet or the value defined based on the requirement for a
   specific deployment scenario

   Initial connections per second: 10% of "Target connections per
   second".  Note: Initial connections per second is not a KPI to
   report.  This value is configured on the traffic generator and
   used to perform Step 1, "Test Initialization and Qualification",
   described under Section 7.2.4.

The client SHOULD negotiate HTTP and close the connection with FIN
immediately after completion of one transaction.  In each test
iteration, the client MUST send a GET request requesting a fixed
HTTP response object size.

The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64 KByte.

7.2.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.  (A sketch of the deviation checks in criteria "c" and
"d" follows the list.)

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at approximately the same rate.
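Criteria "c" and "d" both reduce to a relative deviation check over
a sampled time series.  A minimal sketch, assuming the test
equipment exports periodic sustain-phase samples of the forwarding
rate and the concurrent connection count, and assuming deviation is
measured against the sustain-phase mean (one plausible reading of
"any deviation"):

   <CODE BEGINS>
   # Illustrative deviation check for criteria (c) and (d) in
   # Section 7.2.3.3.  Each list holds periodic readings (e.g.,
   # one per second) taken during the sustain phase.

   def max_deviation(samples):
       """Largest relative deviation from the sample mean."""
       mean = sum(samples) / len(samples)
       return max(abs(s - mean) / mean for s in samples)

   def rate_criteria_met(rate_samples, concurrent_samples):
       return (max_deviation(rate_samples) < 0.05 and      # (c)
               max_deviation(concurrent_samples) < 0.10)   # (d)

   # Example: small fluctuations pass both checks.
   print(rate_criteria_met([100, 99, 96, 101], [5000, 5200, 4900]))
   <CODE ENDS>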
7.2.3.4.  Measurement

TCP Connections Per Second MUST be reported for each test iteration
(for each object size).

7.2.4.  Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT during the sustain phase of the traffic
load profile.  The test procedure consists of three major steps:
Step 1 ensures that the DUT/SUT is able to reach the performance
value (Initial connections per second) and meets the test results
validation criteria while it is minimally utilized.  Step 2
determines whether the DUT/SUT is able to reach the target
performance value within the test results validation criteria.
Step 3 determines the maximum achievable performance value within
the test results validation criteria.

This test procedure MAY be repeated multiple times with different IP
types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial connections per second" as defined in
Section 7.2.3.2.  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Section 7.2.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT continue to Step 2.

7.2.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target connections per second") defined in Section 7.2.3.2.  The
test equipment SHOULD follow the traffic load profile definition as
described in Section 4.3.4.

During the ramp up and sustain phases of each test iteration, other
KPIs such as inspected throughput, concurrent TCP connections, and
application transactions per second MUST NOT reach the maximum value
the DUT/SUT can support.  The test results for specific test
iterations SHOULD NOT be reported if any of the above-mentioned KPIs
(especially inspected throughput) reaches the maximum value.
(Example: If the test iteration with a 64 KByte HTTP response object
size reaches the maximum inspected throughput limitation of the
DUT/SUT, the test iteration MAY be interrupted and the result for
64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective ("Target
connections per second") in the sustain phase.  Follow Step 3 if the
measured value does not meet the target value or does not fulfill
the test results validation criteria.

7.2.4.3.  Step 3: Test Iteration

Determine the achievable TCP connections per second within the test
results validation criteria.

7.3.  HTTP Throughput

7.3.1.  Objective

Determine the sustainable inspected throughput of the DUT/SUT for
HTTP transactions, varying the HTTP response object size.

7.3.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.3.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.3.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target inspected throughput: Aggregated line rate of interface(s)
   used in the DUT/SUT or the value defined based on the requirement
   for a specific deployment scenario

   Initial throughput: 10% of "Target inspected throughput".  Note:
   Initial throughput is not a KPI to report.  This value is
   configured on the traffic generator and used to perform Step 1,
   "Test Initialization and Qualification", described under
   Section 7.3.4.

   Number of HTTP response object requests (transactions) per
   connection: 10

   RECOMMENDED HTTP response object sizes: 1, 16, 64, and 256 KByte,
   and the mixed objects defined in Table 4.
            +=====================+=============================+
            | Object size (KByte) | Number of requests / Weight |
            +=====================+=============================+
            | 0.2                 | 1                           |
            +---------------------+-----------------------------+
            | 6                   | 1                           |
            +---------------------+-----------------------------+
            | 8                   | 1                           |
            +---------------------+-----------------------------+
            | 9                   | 1                           |
            +---------------------+-----------------------------+
            | 10                  | 1                           |
            +---------------------+-----------------------------+
            | 25                  | 1                           |
            +---------------------+-----------------------------+
            | 26                  | 1                           |
            +---------------------+-----------------------------+
            | 35                  | 1                           |
            +---------------------+-----------------------------+
            | 59                  | 1                           |
            +---------------------+-----------------------------+
            | 347                 | 1                           |
            +---------------------+-----------------------------+

                      Table 4: Mixed Objects

7.3.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate (considered a
    constant rate if any deviation of the traffic forwarding rate is
    less than 5%).

c.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at approximately the same rate.

7.3.3.4.  Measurement

Inspected Throughput and HTTP Transactions per Second MUST be
reported for each object size.  The two benchmarks are tied together
by the object size; the sketch below illustrates the relationship.
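Since each iteration fixes the response object size, a target
inspected throughput implies an approximate transaction rate (and
vice versa).  A back-of-the-envelope sketch that ignores protocol
overhead, so real transaction rates will be somewhat lower; the
10 Gbit/s target is an assumed example, and the weighted mean of the
Table 4 mix uses the same arithmetic:

   <CODE BEGINS>
   # Rough estimate of the HTTP transaction rate implied by a
   # target inspected throughput and a response object size.
   # TCP/IP and HTTP header overhead is ignored.

   def transactions_per_second(throughput_gbps, object_kbytes):
       bits_per_object = object_kbytes * 1024 * 8
       return throughput_gbps * 1e9 / bits_per_object

   # Weighted mean object size of the Table 4 mix (equal weights).
   mix_kbytes = [0.2, 6, 8, 9, 10, 25, 26, 35, 59, 347]
   mean_kbytes = sum(mix_kbytes) / len(mix_kbytes)   # 52.52 KByte

   print(round(transactions_per_second(10, 64)))          # ~19,073
   print(round(transactions_per_second(10, mean_kbytes))) # ~23,243
   <CODE ENDS>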
7.3.4.  Test Procedures and Expected Results

The test procedure is designed to measure the HTTP throughput of the
DUT/SUT.  The test procedure consists of three major steps: Step 1
ensures that the DUT/SUT is able to reach the performance value
(Initial throughput) and meets the test results validation criteria
while it is minimally utilized.  Step 2 determines whether the
DUT/SUT is able to reach the target performance value within the
test results validation criteria.  Step 3 determines the maximum
achievable performance value within the test results validation
criteria.

This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions and HTTP response object sizes.

7.3.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial throughput" as defined in Section 7.3.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial throughput"
during the sustain phase.  Measure all KPIs as defined in
Section 7.3.3.4.

The measured KPIs during the sustain phase MUST meet the test
results validation criterion "a" defined in Section 7.3.3.3.  The
test results validation criteria "b" and "c" are OPTIONAL for
Step 1.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.3.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target inspected throughput") defined in Section 7.3.3.2.  The
test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow Step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.3.4.3.  Step 3: Test Iteration

Determine the achievable inspected throughput within the test
results validation criteria and measure the KPI metric Transactions
per Second.  The final test iteration MUST be performed for the test
duration defined in Section 4.3.4.

7.4.  HTTP Transaction Latency

7.4.1.  Objective

Using HTTP traffic, determine the HTTP transaction latency when the
DUT/SUT is running with the sustainable HTTP transactions per second
supported by the DUT/SUT, under different HTTP response object
sizes.

Test iterations MUST be performed with different HTTP response
object sizes in two different scenarios: one with a single
transaction and the other with multiple transactions within a single
TCP connection.  For consistency, both the single and multiple
transaction tests MUST be configured with the same HTTP version.

Scenario 1: The client MUST negotiate HTTP and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTP and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.

7.4.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.4.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.4.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.4.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target objective for scenario 1: 50% of the connections per
   second measured in the benchmarking test TCP/HTTP Connections Per
   Second (Section 7.2)

   Target objective for scenario 2: 50% of the inspected throughput
   measured in the benchmarking test HTTP Throughput (Section 7.3)

   Initial objective for scenario 1: 10% of "Target objective for
   scenario 1"

   Initial objective for scenario 2: 10% of "Target objective for
   scenario 2"

   Note: The initial objectives are not KPIs to report.  These
   values are configured on the traffic generator and used to
   perform Step 1, "Test Initialization and Qualification",
   described under Section 7.4.4.
   HTTP transactions per TCP connection: Test scenario 1 with a
   single transaction and test scenario 2 with 10 transactions

   HTTP with a GET request requesting a single object.  The
   RECOMMENDED object sizes are 1, 16, and 64 KByte.  For each test
   iteration, the client MUST request a single HTTP response object
   size.

7.4.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at approximately the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in Section 7.4.3.2 and remain in that state for the
    entire test duration (sustain phase).

7.4.3.4.  Measurement

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and
maximum) MUST be reported for each object size.

7.4.4.  Test Procedures and Expected Results

The test procedure is designed to measure TTFB or TTLB when the
DUT/SUT is operating close to 50% of its maximum achievable
connections per second or inspected throughput.  The test procedure
consists of two major steps: Step 1 ensures that the DUT/SUT is able
to reach the initial performance values and meets the test results
validation criteria while it is minimally utilized.  Step 2 measures
the latency values within the test results validation criteria.

This test procedure MAY be repeated multiple times with different IP
types (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
distribution), HTTP response object sizes, and single and multiple
transactions per connection scenarios.

7.4.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish the "Initial objective" as defined in Section 7.4.3.2.
The traffic load profile SHOULD be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet all the
test results validation criteria defined in Section 7.4.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.4.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in Section 7.4.3.2.  The test equipment SHOULD follow the
traffic load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT MUST reach
the desired value of the target objective in the sustain phase.

Measure the minimum, average, and maximum values of TTFB and TTLB
(see the aggregation sketch below).
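A minimal sketch of this reporting step, assuming the test equipment
exports per-transaction TTFB and TTLB samples (the millisecond
values below are invented for illustration):

   <CODE BEGINS>
   # Illustrative aggregation of per-transaction latency samples
   # into the benchmarks of Section 7.4.3.4.

   def summarize(samples):
       """Return (minimum, average, maximum) for one latency KPI."""
       return min(samples), sum(samples) / len(samples), max(samples)

   ttfb_ms = [1.8, 2.1, 2.0, 2.4, 1.9]   # assumed sample export
   ttlb_ms = [3.9, 4.2, 4.0, 4.8, 4.1]   # assumed sample export

   for name, data in (("TTFB", ttfb_ms), ("TTLB", ttlb_ms)):
       lo, avg, hi = summarize(data)
       print(f"{name}: min={lo} avg={avg:.2f} max={hi} ms")
   <CODE ENDS>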
7.5.  Concurrent TCP/HTTP Connection Capacity

7.5.1.  Objective

Determine the number of concurrent TCP connections that the DUT/SUT
sustains when using HTTP traffic.

7.5.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.5.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.5.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.5.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target concurrent connections: Initial value from the product
   datasheet or the value defined based on the requirement for a
   specific deployment scenario.

   Initial concurrent connections: 10% of "Target concurrent
   connections".  Note: Initial concurrent connections is not a KPI
   to report.  This value is configured on the traffic generator and
   used to perform Step 1, "Test Initialization and Qualification",
   described under Section 7.5.4.

   Maximum connections per second during ramp up phase: 50% of the
   maximum connections per second measured in the benchmarking test
   TCP/HTTP Connections Per Second (Section 7.2)

   Ramp up time (in the traffic load profile for "Target concurrent
   connections"): "Target concurrent connections" / "Maximum
   connections per second during ramp up phase"

   Ramp up time (in the traffic load profile for "Initial concurrent
   connections"): "Initial concurrent connections" / "Maximum
   connections per second during ramp up phase"

The client MUST negotiate HTTP, and each client MAY open multiple
concurrent TCP connections per server endpoint IP.

Each client sends 10 GET requests requesting a 1 KByte HTTP response
object in the same TCP connection (10 transactions/TCP connection),
and the delay (think time) between each transaction MUST be X
seconds, where

   X = ("Ramp up time" + "steady state time") / 10

A worked example of this arithmetic follows below.  The established
connections SHOULD remain open until the ramp down phase of the
test.  During the ramp down phase, all connections SHOULD be
successfully closed with FIN.
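The ramp up and think times are simple derived quantities.  A worked
sketch, in which all input values (a four-million-connection target,
100,000 connections per second, and a 300 second steady state) are
assumptions for illustration only:

   <CODE BEGINS>
   # Worked example of the ramp up and think time arithmetic in
   # Section 7.5.3.2.  All inputs are illustrative assumptions.

   target_concurrent = 4_000_000  # "Target concurrent connections"
   max_cps_ramp      = 100_000    # 50% of CPS from Section 7.2
   steady_state_time = 300        # sustain phase, in seconds

   ramp_up_time = target_concurrent / max_cps_ramp         # 40 s
   think_time   = (ramp_up_time + steady_state_time) / 10  # 34 s

   # Ten transactions spaced think_time apart keep each connection
   # open for roughly the ramp up plus steady state interval.
   print(f"ramp up: {ramp_up_time:.0f} s, "
         f"think time: {think_time:.0f} s")
   <CODE ENDS>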
7.5.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

7.5.3.4.  Measurement

Average Concurrent TCP Connections MUST be reported for this
benchmarking test.

7.5.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps: Step 1 ensures that the DUT/SUT is able to reach the
performance value (Initial concurrent connections) and meets the
test results validation criteria while it is minimally utilized.
Step 2 determines whether the DUT/SUT is able to reach the target
performance value within the test results validation criteria.
Step 3 determines the maximum achievable performance value within
the test results validation criteria.

This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions.

7.5.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish "Initial concurrent
connections" as defined in Section 7.5.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent connections".  The measured KPIs during the sustain phase
MUST meet all the test results validation criteria defined in
Section 7.5.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.5.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target concurrent connections").  The test equipment SHOULD follow
the traffic load profile definition (except for the ramp up time) as
described in Section 4.3.4.

During the ramp up and sustain phases, the other KPIs, such as
inspected throughput, TCP connections per second, and application
transactions per second, MUST NOT reach the maximum value the
DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.5.3.4.  Continue the test until all traffic
profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow Step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.5.4.3.  Step 3: Test Iteration

Determine the achievable concurrent TCP connection capacity within
the test results validation criteria.
7.6.  TCP/HTTPS Connections Per Second

7.6.1.  Objective

Using HTTPS traffic, determine the sustainable SSL/TLS session
establishment rate supported by the DUT/SUT under different
throughput load conditions.

Test iterations MUST include common cipher suites and key strengths
as well as forward-looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in Section 7.6.3.2.

For each cipher suite and key strength, test iterations MUST use a
single HTTPS response object size defined in Section 7.6.3.2 to
measure connections per second performance under a variety of
DUT/SUT security inspection load conditions.

7.6.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.6.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.6.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.6.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target connections per second: Initial value from the product
   datasheet or the value defined based on the requirement for a
   specific deployment scenario.

   Initial connections per second: 10% of "Target connections per
   second".  Note: Initial connections per second is not a KPI to
   report.  This value is configured on the traffic generator and
   used to perform Step 1, "Test Initialization and Qualification",
   described under Section 7.6.4.

   RECOMMENDED ciphers and keys defined in Section 4.3.1.3

The client MUST negotiate HTTPS and close the connection with FIN
immediately after completion of one transaction.  In each test
iteration, the client MUST send a GET request requesting a fixed
HTTPS response object size.  The RECOMMENDED object sizes are 1, 2,
4, 16, and 64 KByte.

7.6.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole test duration.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at approximately the same rate.
7.6.3.4.  Measurement

TCP Connections Per Second MUST be reported for each test iteration
(for each object size).

The KPI metric TLS Handshake Rate can be measured in the test
iteration using the 1 KByte object size (an illustration of what
this KPI counts is given at the end of this section).

7.6.4.  Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT during the sustain phase of the traffic
load profile.  The test procedure consists of three major steps:
Step 1 ensures that the DUT/SUT is able to reach the performance
value (Initial connections per second) and meets the test results
validation criteria while it is minimally utilized.  Step 2
determines whether the DUT/SUT is able to reach the target
performance value within the test results validation criteria.
Step 3 determines the maximum achievable performance value within
the test results validation criteria.

This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions.

7.6.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial connections per second" as defined in
Section 7.6.3.2.  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Section 7.6.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.6.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target connections per
second" as defined in Section 7.6.3.2.  The test equipment SHOULD
follow the traffic load profile definition as described in
Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as inspected
throughput, concurrent TCP connections, and application transactions
per second MUST NOT reach the maximum value the DUT/SUT can support.
The test results for a specific test iteration SHOULD NOT be
reported if any of the above-mentioned KPIs (especially inspected
throughput) reaches the maximum value.  (Example: If the test
iteration with a 64 KByte HTTPS response object size reaches the
maximum inspected throughput limitation of the DUT, the test
iteration MAY be interrupted and the result for 64 KByte SHOULD NOT
be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective ("Target
connections per second") in the sustain phase.  Follow Step 3 if the
measured value does not meet the target value or does not fulfill
the test results validation criteria.

7.6.4.3.  Step 3: Test Iteration

Determine the achievable connections per second within the test
results validation criteria.
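Purely as an illustration of what the TLS Handshake Rate KPI counts
(and not as a substitute for dedicated test equipment), the
following client-side sketch times sequential TLS handshakes against
a backend server; "server.test" is a placeholder host on the
testbed, and certificate verification is disabled on the assumption
of lab-only certificates:

   <CODE BEGINS>
   # Sequential TLS handshakes per second from a single client,
   # illustrating the TLS Handshake Rate KPI of Section 7.6.3.4.

   import socket
   import ssl
   import time

   def handshakes_per_second(host, port=443, seconds=5):
       ctx = ssl.create_default_context()
       ctx.check_hostname = False
       ctx.verify_mode = ssl.CERT_NONE   # lab testbed certificates
       done = 0
       deadline = time.monotonic() + seconds
       while time.monotonic() < deadline:
           with socket.create_connection((host, port)) as raw:
               with ctx.wrap_socket(raw, server_hostname=host):
                   done += 1             # handshake completed
       return done / seconds

   # print(handshakes_per_second("server.test"))
   <CODE ENDS>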
7.7.  HTTPS Throughput

7.7.1.  Objective

Determine the sustainable inspected throughput of the DUT/SUT for
HTTPS transactions, varying the HTTPS response object size.

Test iterations MUST include common cipher suites and key strengths
as well as forward-looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in Section 7.7.3.2.

7.7.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.7.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.7.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.7.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target inspected throughput: Aggregated line rate of interface(s)
   used in the DUT/SUT or the value defined based on the requirement
   for a specific deployment scenario.

   Initial throughput: 10% of "Target inspected throughput".  Note:
   Initial throughput is not a KPI to report.  This value is
   configured on the traffic generator and used to perform Step 1,
   "Test Initialization and Qualification", described under
   Section 7.7.4.

   Number of HTTPS response object requests (transactions) per
   connection: 10

   RECOMMENDED ciphers and keys defined in Section 4.3.1.3

   RECOMMENDED HTTPS response object sizes: 1, 16, 64, and
   256 KByte, and the mixed objects defined in Table 4 under
   Section 7.3.3.2.

7.7.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate (considered a
    constant rate if any deviation of the traffic forwarding rate is
    less than 5%).

c.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at approximately the same rate.

7.7.3.4.  Measurement

Inspected Throughput and HTTP Transactions per Second MUST be
reported for each object size.

7.7.4.  Test Procedures and Expected Results

The test procedure consists of three major steps: Step 1 ensures
that the DUT/SUT is able to reach the performance value (Initial
throughput) and meets the test results validation criteria while it
is minimally utilized.  Step 2 determines whether the DUT/SUT is
able to reach the target performance value within the test results
validation criteria.  Step 3 determines the maximum achievable
performance value within the test results validation criteria.
This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions and HTTPS response object sizes.

7.7.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial throughput" as defined in Section 7.7.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial throughput"
during the sustain phase.  Measure all KPIs as defined in
Section 7.7.3.4.

The measured KPIs during the sustain phase MUST meet the test
results validation criterion "a" defined in Section 7.7.3.3.  The
test results validation criteria "b" and "c" are OPTIONAL for
Step 1.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.7.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target inspected throughput") defined in Section 7.7.3.2.  The
test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow Step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.7.4.3.  Step 3: Test Iteration

Determine the achievable average inspected throughput within the
test results validation criteria.  The final test iteration MUST be
performed for the test duration defined in Section 4.3.4.

7.8.  HTTPS Transaction Latency

7.8.1.  Objective

Using HTTPS traffic, determine the HTTPS transaction latency when
the DUT/SUT is running with the sustainable HTTPS transactions per
second supported by the DUT/SUT, under different HTTPS response
object sizes.

Scenario 1: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.

7.8.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.8.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.8.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.8.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The following parameters MUST be documented for this benchmarking
test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   RECOMMENDED cipher suites and key sizes defined in
   Section 4.3.1.3

   Target objective for scenario 1: 50% of the connections per
   second measured in the benchmarking test TCP/HTTPS Connections
   Per Second (Section 7.6)

   Target objective for scenario 2: 50% of the inspected throughput
   measured in the benchmarking test HTTPS Throughput (Section 7.7)

   Initial objective for scenario 1: 10% of "Target objective for
   scenario 1"

   Initial objective for scenario 2: 10% of "Target objective for
   scenario 2"

   Note: The initial objectives are not KPIs to report.  These
   values are configured on the traffic generator and used to
   perform Step 1, "Test Initialization and Qualification",
   described under Section 7.8.4.

   HTTPS transactions per TCP connection: Test scenario 1 with a
   single transaction and scenario 2 with 10 transactions

   HTTPS with a GET request requesting a single object.  The
   RECOMMENDED object sizes are 1, 16, and 64 KByte.  For each test
   iteration, the client MUST request a single HTTPS response object
   size.

7.8.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at approximately the same rate.

e.  After ramp up, the DUT/SUT MUST achieve the "Target objective"
    defined in Section 7.8.3.2 and remain in that state for the
    entire test duration (sustain phase).

7.8.3.4.  Measurement

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and
maximum) MUST be reported for each object size.

7.8.4.  Test Procedures and Expected Results

The test procedure is designed to measure TTFB or TTLB when the
DUT/SUT is operating close to 50% of its maximum achievable
connections per second or inspected throughput.  The test procedure
consists of two major steps: Step 1 ensures that the DUT/SUT is able
to reach the initial performance values and meets the test results
validation criteria while it is minimally utilized.  Step 2 measures
the latency values within the test results validation criteria.
This test procedure MAY be repeated multiple times with different IP
types (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
distribution), HTTPS response object sizes, and single and multiple
transactions per connection scenarios.

7.8.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish the "Initial objective" as defined in Section 7.8.3.2.
The traffic load profile SHOULD be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet all the
test results validation criteria defined in Section 7.8.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.8.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in Section 7.8.3.2.  The test equipment SHOULD follow the
traffic load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT MUST reach
the desired value of the target objective in the sustain phase.

Measure the minimum, average, and maximum values of TTFB and TTLB.

7.9.  Concurrent TCP/HTTPS Connection Capacity

7.9.1.  Objective

Determine the number of concurrent TCP connections the DUT/SUT
sustains when using HTTPS traffic.

7.9.2.  Test Setup

The testbed setup SHOULD be configured as defined in Section 4.  Any
specific testbed configuration changes (number of interfaces,
interface type, etc.) MUST be documented.

7.9.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be
defined.

7.9.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.9.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   RECOMMENDED cipher suites and key sizes defined in
   Section 4.3.1.3

   Target concurrent connections: Initial value from the product
   datasheet or the value defined based on the requirement for a
   specific deployment scenario.

   Initial concurrent connections: 10% of "Target concurrent
   connections".  Note: Initial concurrent connections is not a KPI
   to report.  This value is configured on the traffic generator and
   used to perform Step 1, "Test Initialization and Qualification",
   described under Section 7.9.4.
   Maximum connections per second during ramp up phase: 50% of the
   maximum connections per second measured in the benchmarking test
   TCP/HTTPS Connections Per Second (Section 7.6)

   Ramp up time (in the traffic load profile for "Target concurrent
   connections"): "Target concurrent connections" / "Maximum
   connections per second during ramp up phase"

   Ramp up time (in the traffic load profile for "Initial concurrent
   connections"): "Initial concurrent connections" / "Maximum
   connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence, and
each client MAY open multiple concurrent TCP connections per server
endpoint IP.

Each client sends 10 GET requests requesting 1 KByte HTTPS response
objects in the same TCP connection (10 transactions/TCP connection),
and the delay (think time) between each transaction MUST be X
seconds, where

   X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.

7.9.3.3.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole sustain phase of the traffic
load profile.

a.  The number of failed application transactions (receiving any
    HTTP response code other than 200 OK) MUST be less than 0.001%
    (1 out of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

7.9.3.4.  Measurement

Average Concurrent TCP Connections MUST be reported for this
benchmarking test.

7.9.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps: Step 1 ensures that the DUT/SUT is able to reach the
performance value (Initial concurrent connections) and meets the
test results validation criteria while it is minimally utilized.
Step 2 determines whether the DUT/SUT is able to reach the target
performance value within the test results validation criteria.
Step 3 determines the maximum achievable performance value within
the test results validation criteria.

This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions.

7.9.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish "Initial concurrent
connections" as defined in Section 7.9.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent connections".  The measured KPIs during the sustain phase
MUST meet the test results validation criteria "a" and "b" defined
in Section 7.9.3.3.
If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to Step 2.

7.9.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target concurrent connections").  The test equipment SHOULD follow
the traffic load profile definition (except for the ramp up time) as
described in Section 4.3.4.

During the ramp up and sustain phases, the other KPIs, such as
inspected throughput, TCP connections per second, and application
transactions per second, MUST NOT reach the maximum value that the
DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.9.3.4.  Continue the test until all traffic
profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow Step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.9.4.3.  Step 3: Test Iteration

Determine the achievable concurrent TCP connections within the test
results validation criteria.

8.  IANA Considerations

The IANA has assigned IPv4 and IPv6 address blocks in [RFC6890] that
have been registered for special purposes.  The IPv6 address block
2001:2::/48 has been allocated for the purpose of IPv6 benchmarking
[RFC5180], and the IPv4 address block 198.18.0.0/15 has been
allocated for the purpose of IPv4 benchmarking [RFC2544].  This
assignment was made to minimize the chance of conflict in case a
testing device were to be accidentally connected to part of the
Internet.

9.  Security Considerations

The primary goal of this document is to provide benchmarking
terminology and methodology for next-generation network security
devices.  However, readers should be aware that there is some
overlap between performance and security issues.  Specifically, the
optimal configuration for network security device performance may
not be the most secure, and vice versa.  The cipher suites
recommended in this document are for test purposes only.  Cipher
suite recommendations for real deployments are outside the scope of
this document.

10.  Contributors

The following individuals contributed significantly to the creation
of this document:

Alex Samonte, Amritam Putatunda, Aria Eslambolchizadeh, Chao Guo,
Chris Brown, Cory Ford, David DeSanto, Jurrie Van Den Breekel,
Michelle Rhines, Mike Jack, Ryan Liles, Samaresh Nair, Stephen
Goudreault, Tim Carlin, and Tim Otto.

11.  Acknowledgements

The authors wish to acknowledge the members of NetSecOPEN for their
participation in the creation of this document.  Additionally, the
following members need to be acknowledged:

Anand Vijayan, Chris Marshall, Jay Lindenauer, Michael Shannon, Mike
Deichman, Ryan Riese, and Toulnay Orkun.

12.  References

12.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.

[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
           2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
           May 2017, <https://www.rfc-editor.org/info/rfc8174>.

12.2.  Informative References
[RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544,
           DOI 10.17487/RFC2544, March 1999,
           <https://www.rfc-editor.org/info/rfc2544>.

[RFC2616]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
           Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
           Transfer Protocol -- HTTP/1.1", RFC 2616,
           DOI 10.17487/RFC2616, June 1999,
           <https://www.rfc-editor.org/info/rfc2616>.

[RFC2647]  Newman, D., "Benchmarking Terminology for Firewall
           Performance", RFC 2647, DOI 10.17487/RFC2647,
           August 1999, <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
           "Benchmarking Methodology for Firewall Performance",
           RFC 3511, DOI 10.17487/RFC3511, April 2003,
           <https://www.rfc-editor.org/info/rfc3511>.

[RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
           Dugatkin, "IPv6 Benchmarking Methodology for Network
           Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180,
           May 2008, <https://www.rfc-editor.org/info/rfc5180>.

[RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
           "Applicability Statement for RFC 2544: Use on Production
           Networks Considered Harmful", RFC 6815,
           DOI 10.17487/RFC6815, November 2012,
           <https://www.rfc-editor.org/info/rfc6815>.

[RFC6890]  Cotton, M., Vegoda, L., Bonica, R., Ed., and B. Haberman,
           "Special-Purpose IP Address Registries", BCP 153,
           RFC 6890, DOI 10.17487/RFC6890, April 2013,
           <https://www.rfc-editor.org/info/rfc6890>.

[RFC8446]  Rescorla, E., "The Transport Layer Security (TLS)
           Protocol Version 1.3", RFC 8446, DOI 10.17487/RFC8446,
           August 2018, <https://www.rfc-editor.org/info/rfc8446>.

[RFC9000]  Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based
           Multiplexed and Secure Transport", RFC 9000,
           DOI 10.17487/RFC9000, May 2021,
           <https://www.rfc-editor.org/info/rfc9000>.

Appendix A.  Test Methodology - Security Effectiveness Evaluation

A.1.  Test Objective

This test methodology verifies that the DUT/SUT is able to detect,
prevent, and report vulnerabilities.

In this test, background test traffic will be generated to utilize
the DUT/SUT.  In parallel, the CVEs will be sent to the DUT/SUT in
both encrypted and clear text payload formats using a traffic
generator.  The selection of the CVEs is described in Section 4.2.1.
This test will measure the following:

*  Number of blocked CVEs

*  Number of bypassed (non-blocked) CVEs

*  Background traffic performance (verify whether the background
   traffic is impacted while sending CVEs toward the DUT/SUT)

*  Accuracy of DUT/SUT statistics in terms of vulnerability
   reporting

A.2.  Testbed Setup

The same testbed MUST be used for the security effectiveness test as
well as for the benchmarking test cases defined in Section 7.

A.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

A.3.1.  DUT/SUT Configuration Parameters

DUT/SUT configuration parameters MUST conform to the requirements
defined in Section 4.2.  The same DUT configuration MUST be used for
the security effectiveness test as well as for the benchmarking test
cases defined in Section 7.  The DUT/SUT MUST be configured in
inline mode, all detected attack traffic MUST be dropped, and the
session SHOULD be reset.

A.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The same client and server IP
ranges MUST be configured as used in the benchmarking test cases.
In addition, the following parameters MUST be documented for this
benchmarking test (a worked example of the background traffic
derivation follows this list):

*  Background traffic: 45% of the maximum HTTP throughput and 45% of
   the maximum HTTPS throughput supported by the DUT/SUT (measured
   with the 64 KByte object size in the benchmarking tests "HTTP(S)
   Throughput" defined in Section 7.3 and Section 7.7)

*  RECOMMENDED CVE traffic transmission rate: 10 CVEs per second

*  It is RECOMMENDED to generate each CVE multiple times
   (sequentially) at 10 CVEs per second

*  Ciphers and keys for the encrypted CVE traffic MUST use the same
   cipher configured for the HTTPS traffic related benchmarking
   tests (Section 7.6 - Section 7.9)
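A small worked sketch of the background traffic derivation; the
maximum throughput values below are assumptions for illustration,
not measurements:

   <CODE BEGINS>
   # Worked example of the background traffic load in Appendix
   # A.3.2.  The measured maxima are illustrative assumptions.

   http_max_gbps  = 9.2   # 64 KByte HTTP Throughput (Sec. 7.3)
   https_max_gbps = 6.8   # 64 KByte HTTPS Throughput (Sec. 7.7)

   background_gbps = {
       "HTTP":  0.45 * http_max_gbps,    # 45% of HTTP maximum
       "HTTPS": 0.45 * https_max_gbps,   # 45% of HTTPS maximum
   }
   cve_rate = 10  # RECOMMENDED CVEs per second, sent in parallel

   for proto, gbps in background_gbps.items():
       print(f"{proto} background: {gbps:.2f} Gbit/s")
   <CODE ENDS>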
A.4.  Test Results Validation Criteria

The following criteria are the test results validation criteria.
They MUST be monitored during the whole test duration.

a.  The number of failed application transactions in the background
    traffic MUST be less than 0.01% of attempted transactions.

b.  The number of terminated TCP connections of the background
    traffic (due to unexpected TCP RST sent by the DUT/SUT) MUST be
    less than 0.01% of total initiated TCP connections in the
    background traffic.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered a constant rate if any deviation of
    the traffic forwarding rate is less than 5%).

d.  False positives MUST NOT occur in the background traffic.

A.5.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Mandatory KPIs:

*  Blocked CVEs: SHOULD be represented in the following ways:

   -  Number of blocked CVEs out of total CVEs

   -  Percentage of blocked CVEs

*  Unblocked CVEs: SHOULD be represented in the following ways:

   -  Number of unblocked CVEs out of total CVEs

   -  Percentage of unblocked CVEs

*  Background traffic behavior: SHOULD be represented in one of the
   following ways (see the classification sketch after this list):

   -  No impact: Considered "no impact" if any deviation of the
      traffic forwarding rate is less than or equal to 5% (constant
      rate)

   -  Minor impact: Considered "minor impact" if any deviation of
      the traffic forwarding rate is greater than 5% and less than
      or equal to 10% (i.e., small spikes)

   -  Heavily impacted: Considered "heavily impacted" if any
      deviation of the traffic forwarding rate is greater than 10%
      (i.e., large spikes) or the background HTTP(S) throughput is
      reduced by more than 10%

*  DUT/SUT reporting accuracy: The DUT/SUT MUST report all detected
   vulnerabilities.

Optional KPIs:

*  List of unblocked CVEs
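The three-way classification above can be expressed as a simple
threshold function.  A minimal sketch, assuming the maximum relative
deviation of the sustain-phase forwarding rate has already been
computed:

   <CODE BEGINS>
   # Illustrative classification of background traffic behavior
   # (Appendix A.5).  max_deviation is a fraction (0.07 for 7%).

   def classify_background_impact(max_deviation):
       if max_deviation <= 0.05:
           return "no impact"         # constant rate
       if max_deviation <= 0.10:
           return "minor impact"      # small spikes
       return "heavily impacted"      # large spikes or >10% loss

   print(classify_background_impact(0.04))  # no impact
   print(classify_background_impact(0.12))  # heavily impacted
   <CODE ENDS>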
A.6.  Test Procedures and Expected Results

The test procedure is designed to measure the security effectiveness
of the DUT/SUT during the sustain phase of the traffic load profile.
The test procedure consists of two major steps.  This test procedure
MAY be repeated multiple times with different IPv4 and IPv6 traffic
distributions.

A.6.1.  Step 1: Background Traffic

Generate background traffic at the transmission rate defined in
Appendix A.3.2.

The DUT/SUT MUST reach the target objective (HTTP(S) throughput) in
the sustain phase.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Appendix A.4.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

A.6.2.  Step 2: CVE Emulation

While generating background traffic (in the sustain phase), send the
CVE traffic as defined in the parameters section (Appendix A.3.2).

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all CVEs are sent.

The measured KPIs MUST meet all the test results validation criteria
defined in Appendix A.4.

In addition, the DUT/SUT SHOULD report the vulnerabilities
correctly.

Appendix B.  DUT/SUT Classification

This document aims to classify the DUT/SUT into four different
categories based on its maximum supported firewall throughput
performance number defined in the vendor datasheet.  This
classification MAY help the user determine a specific configuration
scale (e.g., number of ACL entries), traffic profiles, and attack
traffic profiles, scaling those proportionally to the DUT/SUT sizing
category.

The four categories are Extra Small (XS), Small (S), Medium (M), and
Large (L).  The RECOMMENDED throughput values for these categories
are:

Extra Small (XS) - Supported throughput less than or equal to
1 Gbit/s

Small (S) - Supported throughput greater than 1 Gbit/s and less than
or equal to 5 Gbit/s

Medium (M) - Supported throughput greater than 5 Gbit/s and less
than or equal to 10 Gbit/s

Large (L) - Supported throughput greater than 10 Gbit/s

Authors' Addresses

Balamuhunthan Balarajah
Berlin
Germany

Email: bm.balarajah@gmail.com

Carsten Rossenhoevel
EANTC AG
Salzufer 14
10587 Berlin
Germany

Email: cross@eantc.de

Brian Monkman
NetSecOPEN
417 Independence Court
Mechanicsburg, PA 17050
United States of America

Email: bmonkman@netsecopen.org