Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                          C. Rossenhoevel
Expires: October 8, 2021                                       EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                           April 6, 2021

   Benchmarking Methodology for Network Security Device Performance
                  draft-ietf-bmwg-ngfw-performance-07

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices, including next-generation
   firewalls (NGFW), next-generation intrusion detection and prevention
   systems (NGIDS/NGIPS), and unified threat management (UTM)
   implementations.  This document aims to significantly improve the
   applicability, reproducibility, and transparency of benchmarks and
   to align the test methodology with today's increasingly complex,
   layer 7 security-centric network application use cases.  The main
   areas covered in this document are test terminology, test
   configuration parameters, and benchmarking methodology for NGFW and
   NGIDS/NGIPS as a starting point.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   This Internet-Draft will expire on October 8, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
     4.1.  Test Bed Configuration
     4.2.  DUT/SUT Configuration
       4.2.1.  Security Effectiveness Configuration
     4.3.  Test Equipment Configuration
       4.3.1.  Client Configuration
       4.3.2.  Backend Server Configuration
       4.3.3.  Traffic Flow Definition
       4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
     6.1.  Introduction
     6.2.  Detailed Test Results
     6.3.  Benchmarks and Key Performance Indicators
   7.  Benchmarking Tests
     7.1.  Throughput Performance with Application Traffic Mix
       7.1.1.  Objective
       7.1.2.  Test Setup
       7.1.3.  Test Parameters
       7.1.4.  Test Procedures and Expected Results
     7.2.  TCP/HTTP Connections Per Second
       7.2.1.  Objective
       7.2.2.  Test Setup
       7.2.3.  Test Parameters
       7.2.4.  Test Procedures and Expected Results
     7.3.  HTTP Throughput
       7.3.1.  Objective
       7.3.2.  Test Setup
       7.3.3.  Test Parameters
       7.3.4.  Test Procedures and Expected Results
     7.4.  HTTP Transaction Latency
       7.4.1.  Objective
       7.4.2.  Test Setup
       7.4.3.  Test Parameters
       7.4.4.  Test Procedures and Expected Results
     7.5.  Concurrent TCP/HTTP Connection Capacity
       7.5.1.  Objective
       7.5.2.  Test Setup
       7.5.3.  Test Parameters
       7.5.4.  Test Procedures and Expected Results
     7.6.  TCP/HTTPS Connections per Second
       7.6.1.  Objective
       7.6.2.  Test Setup
       7.6.3.  Test Parameters
       7.6.4.  Test Procedures and Expected Results
     7.7.  HTTPS Throughput
       7.7.1.  Objective
       7.7.2.  Test Setup
       7.7.3.  Test Parameters
       7.7.4.  Test Procedures and Expected Results
     7.8.  HTTPS Transaction Latency
       7.8.1.  Objective
       7.8.2.  Test Setup
       7.8.3.  Test Parameters
       7.8.4.  Test Procedures and Expected Results
     7.9.  Concurrent TCP/HTTPS Connection Capacity
       7.9.1.  Objective
       7.9.2.  Test Setup
       7.9.3.  Test Parameters
       7.9.4.  Test Procedures and Expected Results
   8.  IANA Considerations
   9.  Security Considerations
   10. Contributors
   11. Acknowledgements
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix A.  Test Methodology - Security Effectiveness Evaluation
     A.1.  Test Objective
     A.2.  Test Bed Setup
     A.3.  Test Parameters
       A.3.1.  DUT/SUT Configuration Parameters
       A.3.2.  Test Equipment Configuration Parameters
     A.4.  Test Results Validation Criteria
     A.5.  Measurement
     A.6.  Test Procedures and Expected Results
       A.6.1.  Step 1: Background Traffic
       A.6.2.  Step 2: CVE Emulation
   Appendix B.  DUT/SUT Classification
   Authors' Addresses

1.  Introduction

   More than 15 years have passed since the IETF first recommended test
   methodology and terminology for firewalls [RFC3511].  The
   requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved to more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, etc.
   In an industry of growing importance, well-defined and reproducible
   key performance indicators (KPIs) are increasingly needed, as they
   enable fair and reasonable comparison of network security functions.
   All these reasons have led to the creation of a new benchmarking
   document for next-generation network security devices; this document
   supersedes [RFC3511].

2.  Requirements

   The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119], [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for modern and next-generation network security devices.  It covers
   the validation of security effectiveness configurations of network
   security devices, followed by performance benchmark testing.  This
   document focuses on advanced, realistic, and reproducible testing
   methods.  Additionally, it describes test bed environments, test
   tool requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document is applicable to all
   benchmarking tests described in Section 7.  The test setup MUST be
   contained within an Isolated Test Environment (see Section 3 of
   [RFC6815]).

4.1.  Test Bed Configuration

   The test bed configuration MUST ensure that any performance
   implications that are discovered during the benchmark testing are
   not due to inherent physical network limitations, such as the
   number of physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the test bed.
   For this reason, this document recommends avoiding external devices
   such as switches and routers in the test bed wherever possible.

   In some deployment scenarios, the network security devices (Device
   Under Test/System Under Test) are connected to routers and switches,
   which will reduce the number of entries in the MAC or ARP tables of
   the Device Under Test/System Under Test (DUT/SUT).  If MAC or ARP
   tables have many entries, this may impact the actual DUT/SUT
   performance due to MAC and ARP/ND (Neighbor Discovery) table lookup
   processes.  This document therefore also recommends using test
   equipment capable of emulating layer 3 routing functionality instead
   of adding external routers to the test bed.

   The test bed setup Option 1 (Figure 1) is the RECOMMENDED test bed
   setup for the benchmarking tests.
   +-----------------------+                     +-----------------------+
   | +-------------------+ |    +-----------+    | +-------------------+ |
   | | Emulated Router(s)| |    |           |    | | Emulated Router(s)| |
   | |    (Optional)     | +----+  DUT/SUT  +----+ |    (Optional)     | |
   | +-------------------+ |    |           |    | +-------------------+ |
   | +-------------------+ |    +-----------+    | +-------------------+ |
   | |      Clients      | |                     | |      Servers      | |
   | +-------------------+ |                     | +-------------------+ |
   |                       |                     |                       |
   |    Test Equipment     |                     |    Test Equipment     |
   +-----------------------+                     +-----------------------+

                    Figure 1: Test Bed Setup - Option 1

   If the test equipment used is not capable of emulating layer 3
   routing functionality, or if the numbers of used ports are
   mismatched between the test equipment and the DUT/SUT (requiring
   aggregation of test equipment ports), the test setup can be
   configured as shown in Figure 2.

   +-------------------+      +-----------+      +-------------------+
   |Aggregation Switch/|      |           |      |Aggregation Switch/|
   |      Router       +------+  DUT/SUT  +------+      Router       |
   |                   |      |           |      |                   |
   +---------+---------+      +-----------+      +---------+---------+
             |                                             |
             |                                             |
   +---------+-------------+                   +-----------+-----------+
   |                       |                   |                       |
   | +-------------------+ |                   | +-------------------+ |
   | | Emulated Router(s)| |                   | | Emulated Router(s)| |
   | |    (Optional)     | |                   | |    (Optional)     | |
   | +-------------------+ |                   | +-------------------+ |
   | +-------------------+ |                   | +-------------------+ |
   | |      Clients      | |                   | |      Servers      | |
   | +-------------------+ |                   | +-------------------+ |
   |                       |                   |                       |
   |    Test Equipment     |                   |    Test Equipment     |
   +-----------------------+                   +-----------------------+

                    Figure 2: Test Bed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.  Since each DUT/SUT will have its own
   unique configuration, users SHOULD configure their device with the
   same parameters and security features that would be used in the
   actual deployment of the device, or in a typical deployment, in
   order to achieve maximum network security coverage.

   Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL
   sets of network security features for NGFW and NGIDS/NGIPS,
   respectively.  The selected security features SHOULD be
   consistently enabled on the DUT/SUT for all the benchmarking tests
   described in Section 7.

   To improve repeatability, a summary of the DUT/SUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.
                    +------------------------+
                    |          NGFW          |
   +----------------+-------------+----------+
   |                |             |          |
   |DUT/SUT Features| RECOMMENDED | OPTIONAL |
   |                |             |          |
   +----------------+-------------+----------+
   |SSL Inspection  |      x      |          |
   +----------------+-------------+----------+
   |IDS/IPS         |      x      |          |
   +----------------+-------------+----------+
   |Anti-Spyware    |      x      |          |
   +----------------+-------------+----------+
   |Anti-Virus      |      x      |          |
   +----------------+-------------+----------+
   |Anti-Botnet     |      x      |          |
   +----------------+-------------+----------+
   |Web Filtering   |             |    x     |
   +----------------+-------------+----------+
   |Data Loss       |             |          |
   |Protection (DLP)|             |    x     |
   +----------------+-------------+----------+
   |DDoS            |             |    x     |
   +----------------+-------------+----------+
   |Certificate     |             |    x     |
   |Validation      |             |          |
   +----------------+-------------+----------+
   |Logging and     |      x      |          |
   |Reporting       |             |          |
   +----------------+-------------+----------+
   |Application     |      x      |          |
   |Identification  |             |          |
   +----------------+-------------+----------+

              Table 1: NGFW Security Features

                    +------------------------+
                    |      NGIDS/NGIPS       |
   +----------------+-------------+----------+
   |                |             |          |
   |DUT/SUT Features| RECOMMENDED | OPTIONAL |
   |                |             |          |
   +----------------+-------------+----------+
   |SSL Inspection  |      x      |          |
   +----------------+-------------+----------+
   |Anti-Malware    |      x      |          |
   +----------------+-------------+----------+
   |Anti-Spyware    |      x      |          |
   +----------------+-------------+----------+
   |Anti-Botnet     |      x      |          |
   +----------------+-------------+----------+
   |Logging and     |      x      |          |
   |Reporting       |             |          |
   +----------------+-------------+----------+
   |Application     |      x      |          |
   |Identification  |             |          |
   +----------------+-------------+----------+
   |Deep Packet     |      x      |          |
   |Inspection      |             |          |
   +----------------+-------------+----------+
   |Anti-Evasion    |      x      |          |
   +----------------+-------------+----------+

          Table 2: NGIDS/NGIPS Security Features

   The following table provides a brief description of the security
   features.

   +------------------+------------------------------------------------+
   | DUT/SUT Features | Description                                    |
   +------------------+------------------------------------------------+
   | SSL Inspection   | DUT/SUT intercepts and decrypts inbound HTTPS  |
   |                  | traffic between servers and clients.  Once the |
   |                  | content inspection has been completed, DUT/SUT |
   |                  | encrypts the HTTPS traffic with ciphers and    |
   |                  | keys used by the clients and servers.          |
   +------------------+------------------------------------------------+
   | IDS/IPS          | DUT/SUT detects and blocks exploits targeting  |
   |                  | known and unknown vulnerabilities across the   |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+
   | Anti-Malware     | DUT/SUT detects and prevents the transmission  |
   |                  | of malicious executable code and any           |
   |                  | associated communications across the monitored |
   |                  | network.  This includes data exfiltration as   |
   |                  | well as command and control channels.          |
   +------------------+------------------------------------------------+
   | Anti-Spyware     | Anti-Spyware is a subcategory of Anti-Malware. |
   |                  | Spyware transmits information without the      |
   |                  | user's knowledge or permission.  DUT/SUT       |
   |                  | detects and blocks initial infection or        |
   |                  | transmission of data.                          |
   +------------------+------------------------------------------------+
   | Anti-Botnet      | DUT/SUT detects traffic to or from botnets.    |
   +------------------+------------------------------------------------+
   | Anti-Evasion     | DUT/SUT detects and mitigates attacks that     |
   |                  | have been obfuscated in some manner.           |
   +------------------+------------------------------------------------+
   | Web Filtering    | DUT/SUT detects and blocks malicious websites, |
   |                  | including defined classifications of websites, |
   |                  | across the monitored network.                  |
   +------------------+------------------------------------------------+
   | DLP              | DUT/SUT detects and blocks the transmission of |
   |                  | Personally Identifiable Information (PII) and  |
   |                  | specific files across the monitored network.   |
   +------------------+------------------------------------------------+
   | Certificate      | DUT/SUT validates certificates used in         |
   | Validation       | encrypted communications across the monitored  |
   |                  | network.                                       |
   +------------------+------------------------------------------------+
   | Logging and      | DUT/SUT logs and reports all traffic at the    |
   | Reporting        | flow level across the monitored network.       |
   +------------------+------------------------------------------------+
   | Application      | DUT/SUT detects known applications as defined  |
   | Identification   | within the traffic mix selected across the     |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+

               Table 3: Security Feature Description

   In summary, a DUT/SUT SHOULD be configured as follows:

   o  All RECOMMENDED security inspection features enabled

   o  Disposition of all flows of traffic is logged - logging to an
      external device is permissible

   o  Geographical location filtering and Application Identification
      and Control configured to be triggered based on a site or
      application from the defined traffic mix

   In addition, a realistic number of access control rules (ACLs)
   SHOULD be configured on the DUT/SUT, where ACLs are configurable
   and also reasonable based on the deployment scenario.  This
   document determines the number of access policy rules for four
   different classes of DUT/SUT, namely Extra Small (XS), Small (S),
   Medium (M), and Large (L).  A sample DUT/SUT classification is
   described in Appendix B.

   The Access Control Rules (ACLs) defined in Table 4 MUST be
   configured from top to bottom in the order shown in the table.
   This is because the ACL types are listed in decreasing order of
   specificity, with "block" rules first, followed by "allow" rules,
   representing a typical ACL-based security policy.  The ACL entries
   SHOULD be configured with IP subnets routable by the DUT/SUT.
   (Note: There will be differences in how security vendors implement
   ACL decision making.)  The configured ACLs MUST NOT block the
   security and measurement traffic used for the benchmarking tests.
                                                       +---------------+
                                                       |    DUT/SUT    |
                                                       | Classification|
                                                       |    # Rules    |
   +-----------+-----------+--------------------+------+---+---+---+---+
   |           |   Match   |                    |      |   |   |   |   |
   | Rules Type|  Criteria | Description        |Action| XS| S | M | L |
   +-------------------------------------------------------------------+
   |Application|Application| Any application    | block|  5| 10| 20| 50|
   |layer      |           | not included in    |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Transport  |Src IP and | Any src IP subnet  | block| 25| 50|100|250|
   |layer      |TCP/UDP    | used and any dst   |      |   |   |   |   |
   |           |Dst ports  | ports not used in  |      |   |   |   |   |
   |           |           | the measurement    |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |IP layer   |Src/Dst IP | Any src/dst IP     | block| 25| 50|100|250|
   |           |           | subnet not used    |      |   |   |   |   |
   |           |           | in the measurement |      |   |   |   |   |
   |           |           | traffic            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Application|Application| Half of the        | allow| 10| 10| 10| 10|
   |layer      |           | applications       |      |   |   |   |   |
   |           |           | included in the    |      |   |   |   |   |
   |           |           | measurement traffic|      |   |   |   |   |
   |           |           | (see the note      |      |   |   |   |   |
   |           |           | below)             |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |Transport  |Src IP and | Half of the src IP | allow| >1| >1| >1| >1|
   |layer      |TCP/UDP    | used and any dst   |      |   |   |   |   |
   |           |Dst ports  | ports used in the  |      |   |   |   |   |
   |           |           | measurement traffic|      |   |   |   |   |
   |           |           | (one rule per      |      |   |   |   |   |
   |           |           | subnet)            |      |   |   |   |   |
   +-------------------------------------------------------------------+
   |IP layer   |Src IP     | The rest of the    | allow| >1| >1| >1| >1|
   |           |           | src IP subnet      |      |   |   |   |   |
   |           |           | range used in the  |      |   |   |   |   |
   |           |           | measurement        |      |   |   |   |   |
   |           |           | traffic (one rule  |      |   |   |   |   |
   |           |           | per subnet)        |      |   |   |   |   |
   +-----------+-----------+--------------------+------+---+---+---+---+

                      Table 4: DUT/SUT Access List

   Note: If half of the applications included in the measurement
   traffic number fewer than 10, the missing number of ACL entries
   (dummy rules) can be configured for any application traffic not
   included in the measurement traffic.

4.2.1.  Security Effectiveness Configuration

   The security features of the DUT/SUT (defined in Table 1 and
   Table 2) MUST be configured effectively, in such a way as to
   detect, prevent, and report the defined security vulnerability
   sets.  This section defines the selection of the security
   vulnerability sets from the Common Vulnerabilities and Exposures
   (CVE) list for the testing.  The vulnerability set SHOULD reflect a
   minimum of 500 CVEs, from no older than 10 calendar years to the
   current year.  These CVEs SHOULD be selected with a focus on in-use
   software commonly found in business applications, with a Common
   Vulnerability Scoring System (CVSS) Severity of High (7-10).

   This document is primarily focused on performance benchmarking.
   However, it is RECOMMENDED to validate the security features
   configuration of the DUT/SUT by evaluating the security
   effectiveness as a prerequisite for the performance benchmarking
   tests defined in Section 7.
   If the benchmarking tests are performed without evaluating security
   effectiveness, the test report MUST explain the implications of
   this.  The methodology for evaluating security effectiveness is
   defined in Appendix A.

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters influence the traffic
   flows that will be offered and thereby impact the performance
   measurements.

   This section specifies common test equipment configuration
   parameters applicable to all benchmarking tests defined in
   Section 7.  Any benchmarking test specific parameters are described
   under the test setup section of each individual benchmarking test.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the RECOMMENDED values for certain parameters.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a congestion control algorithm at the
   client and server endpoints.  The default IPv4 and IPv6 MSS SHOULD
   be set to 1460 bytes and 1440 bytes, respectively, and the initial
   TX and RX receive windows SHOULD be set to 64 KByte.  The client
   initial congestion window SHOULD NOT exceed 10 times the MSS.
   Delayed ACKs are permitted, and the maximum client delayed ACK
   SHOULD NOT exceed 10 times the MSS before a forced ACK.  Up to
   three retries SHOULD be allowed before a timeout event is declared.
   All traffic MUST set the TCP PSH flag to high.  The source port
   range SHOULD be 1024 - 65535.  Internal timeouts SHOULD be
   dynamically scalable per [RFC793].  The client SHOULD initiate and
   close TCP connections.  The TCP connection MUST be initiated via a
   TCP three-way handshake (SYN, SYN/ACK, ACK), and it MUST be closed
   via either a TCP three-way close (FIN, FIN/ACK, ACK) or a TCP
   four-way close (FIN, ACK, FIN, ACK).

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.

   o  The IP blocks SHOULD consist of multiple unique, discontinuous
      static address blocks.

   o  A default gateway is permitted.

   o  The IPv4 Type of Service (ToS) byte or the IPv6 traffic class
      should be set to '00' or '000000', respectively.

   The following equation can be used to define the total number of
   client IP addresses that will be configured on the test equipment.

      Desired total number of client IPs = Target throughput [Mbit/s]
      / Average throughput per IP address [Mbit/s]

   As shown in the example list below, the value for "Average
   throughput per IP address" can be varied depending on the
   deployment and use case scenario.

   (Option 1)  DUT/SUT deployment scenario 1: 6-7 Mbit/s per IP (e.g.,
               1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2)  DUT/SUT deployment scenario 2: 0.1-0.2 Mbit/s per IP
               (e.g., 50,000-100,000 IPs per 10 Gbit/s throughput)

   Based on the deployment and use case scenario, the client IP
   addresses SHOULD be distributed between IPv4 and IPv6.  The
   following options can be considered for the selection of a traffic
   mix ratio.

   (Option 1)  100% IPv4, no IPv6

   (Option 2)  80% IPv4, 20% IPv6

   (Option 3)  50% IPv4, 50% IPv6

   (Option 4)  20% IPv4, 80% IPv6

   (Option 5)  no IPv4, 100% IPv6

   Note: The IANA has assigned IP address ranges for testing purposes
   as described in Section 8.
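   As a worked example of the equation above, a minimal Python sketch
   follows; the per-IP throughput values are taken from the two
   deployment scenarios above, and everything else is illustrative:

      # Sketch: derive the client IP address count from the target
      # throughput, per the equation in Section 4.3.1.2.

      def client_ip_count(target_throughput_mbps: float,
                          avg_throughput_per_ip_mbps: float) -> int:
          """Desired total number of client IP addresses."""
          return round(target_throughput_mbps / avg_throughput_per_ip_mbps)

      target = 10_000  # Mbit/s, i.e., a 10 Gbit/s target throughput

      # Scenario 1: 6-7 Mbit/s per IP -> roughly 1,400-1,700 IPs.
      print(client_ip_count(target, 6.0))    # 1667
      print(client_ip_count(target, 7.0))    # 1429

      # Scenario 2: 0.1-0.2 Mbit/s per IP -> 50,000-100,000 IPs.
      print(client_ip_count(target, 0.2))    # 50000

      # Traffic mix option 2: 80% IPv4, 20% IPv6.
      total = client_ip_count(target, 6.0)
      ipv4_count, ipv6_count = round(total * 0.8), round(total * 0.2)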
4.3.1.3.  Emulated Web Browser Attributes

   The emulated web client contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate modern,
   typical browser attributes to improve the realism of the result
   set.

   For HTTP traffic emulation, the emulated browser MUST negotiate
   HTTP 1.1.  HTTP persistence MAY be enabled depending on the test
   scenario.  The browser MAY open multiple TCP connections per server
   endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  Within a TCP connection,
   multiple transactions MAY be processed if the emulated browser has
   available connections.  The browser SHOULD advertise a User-Agent
   header.  Headers MUST be sent uncompressed.  The browser SHOULD
   enforce content length validation.

   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use TLS
   version 1.2 or higher.  The TLS record size MAY be optimized for
   the HTTPS response object size, up to a record size of 16 KByte.
   The client endpoint SHOULD send the TLS Server Name Indication
   (SNI) extension when opening a security tunnel.  Each client
   connection MUST perform a full handshake with the server
   certificate and MUST NOT use session reuse or resumption.

   The following TLS 1.2 cipher suites and keys are RECOMMENDED for
   the HTTPS based benchmarking tests defined in Section 7.

   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

   Note: The above ciphers and keys are commonly used enterprise-grade
   encryption cipher suites for TLS 1.2.  It is recognized that these
   will evolve over time.  Individual certification bodies SHOULD use
   ciphers and keys that reflect evolving use cases.  These choices
   MUST be documented in the resulting test reports, with detailed
   information on the ciphers and keys used along with the reasons for
   the choices.

   [RFC8446] defines the following cipher suites for use with TLS 1.3.

   1.  TLS_AES_128_GCM_SHA256

   2.  TLS_AES_256_GCM_SHA384

   3.  TLS_CHACHA20_POLY1305_SHA256

   4.  TLS_AES_128_CCM_SHA256

   5.  TLS_AES_128_CCM_8_SHA256

4.3.2.  Backend Server Configuration

   This section specifies which parameters should be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack on the server side SHOULD be configured similar to
   the client side configuration described in Section 4.3.1.1.  In
   addition, the server initial congestion window MUST NOT exceed 10
   times the MSS.  Delayed ACKs are permitted, and the maximum server
   delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

   The sum of the server IP space SHOULD contain the following
   attributes.

   o  The server IP blocks SHOULD consist of unique, discontinuous
      static address blocks with one IP per server Fully Qualified
      Domain Name (FQDN) endpoint per test port.
   o  A default gateway is permitted.  The IPv4 ToS byte and the IPv6
      traffic class byte should be set to '00' and '000000',
      respectively.

   o  The server IP addresses SHOULD be distributed between IPv4 and
      IPv6 with a ratio identical to the client distribution ratio.

   Note: The IANA has assigned IP address ranges for testing purposes
   as described in Section 8.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   HTTP version 1.1 with persistence.  The server MUST advertise the
   server type in the Server response header [RFC2616].  For an HTTPS
   server, TLS 1.2 or higher MUST be used with a maximum record size
   of 16 KByte, and ticket resumption or Session ID reuse MUST NOT be
   used.  The server MUST listen on TCP port 443.  The server SHALL
   serve a certificate to the client.  The HTTPS server MUST check the
   Host SNI information against the FQDN if SNI is in use.  The cipher
   suite and key size on the server side MUST be configured similar to
   the client side configuration described in Section 4.3.1.3.

4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between client and
   server endpoints.  At the beginning of the test, the server
   endpoint initializes and will be ready to accept connection states,
   including initialization of the TCP stack as well as bound HTTP and
   HTTPS servers.  When a client endpoint is needed, it will
   initialize and be given attributes such as a MAC and IP address.
   The behavior of the client is to sweep through the given server IP
   space, sequentially generating a service recognizable by the DUT.
   Thus, a balanced mesh between client endpoints and server endpoints
   will be generated in a client port to server port combination.
   Each client endpoint performs the same actions as other endpoints,
   with the difference being the source IP of the client endpoint and
   the target server IP pool.  The client MUST use the server's IP
   address or Fully Qualified Domain Name (FQDN) in Host headers
   [RFC2616].  For TLS, the client MAY use Server Name Indication
   (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  When a client endpoint initiates traffic,
   this section describes how the client steps through different
   services.  Once the test is initialized, the client endpoints
   SHOULD randomly hold (perform no operation) for a few milliseconds
   to allow for better randomization of the start of client traffic.
   Each client will either open a new TCP connection or connect to a
   TCP persistence stack still open to that specific server.  At any
   point that the service profile may require encryption, a TLS
   encryption tunnel will form, presenting the URL or IP address
   request to the server.  If using SNI, the server will then perform
   an SNI name check with the proposed FQDN compared to the domain
   embedded in the certificate.  Only when correct will the server
   process the HTTPS response object.  The initial response object to
   the server MUST NOT have a fixed size; its size is based on the
   benchmarking tests described in Section 7.  Multiple additional
   sub-URLs (response objects on the service page) MAY be requested
   simultaneously.  This MAY be to the same server IP as the initial
   URL.  Each sub-object will also use a canonical FQDN and URL path,
   as observed in the traffic mix used.
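   To make the intra-client behavior concrete, the following Python
   sketch emulates a single client iteration under the assumptions of
   this section; the FQDNs, object path, and User-Agent string are
   illustrative assumptions, not values mandated by this document:

      # Sketch: one simplified client pass per Section 4.3.3.1.
      import random
      import socket
      import ssl
      import time

      def fetch(fqdn: str, path: str, use_tls: bool) -> bytes:
          # Random initial hold to randomize the start of traffic.
          time.sleep(random.uniform(0.001, 0.010))
          sock = socket.create_connection((fqdn, 443 if use_tls else 80))
          if use_tls:
              ctx = ssl.create_default_context()
              # Full handshake per connection; disabling tickets is
              # one (partial) way to avoid session resumption.
              ctx.options |= ssl.OP_NO_TICKET
              # server_hostname carries the SNI extension; the server
              # compares it with the domain in its certificate.
              sock = ctx.wrap_socket(sock, server_hostname=fqdn)
          request = (f"GET {path} HTTP/1.1\r\n"
                     f"Host: {fqdn}\r\n"
                     "User-Agent: emulated-browser/1.0\r\n"
                     "Connection: close\r\n\r\n")
          sock.sendall(request.encode())
          response = b""
          while chunk := sock.recv(65536):
              response += chunk
          sock.close()
          return response

      # Sweep the assumed server pool; sub-objects (sub-URLs) would be
      # requested after the initial object, per the traffic mix used.
      for server in ("www.pool-a.example", "www.pool-b.example"):
          fetch(server, "/index.html", use_tls=True)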
4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  The loading
   of a traffic load profile has five distinct phases: Init, ramp up,
   sustain, ramp down, and collection.

   1.  During the Init phase, test bed devices including the client
       and server endpoints should negotiate layer 2-3 connectivity
       such as MAC learning and ARP.  Only after successful MAC
       learning or ARP/ND resolution SHALL the test iteration move to
       the next phase.  No measurements are made in this phase.  The
       minimum RECOMMENDED time for the Init phase is 5 seconds.
       During this phase, the emulated clients SHOULD NOT initiate any
       sessions with the DUT/SUT; in contrast, the emulated servers
       should be ready to accept requests from the DUT/SUT or from the
       emulated clients.

   2.  In the ramp up phase, the test equipment SHOULD start to
       generate the test traffic.  It SHOULD use a set approximate
       number of unique client IP addresses actively to generate
       traffic.  The traffic SHOULD ramp from zero to the desired
       target objective.  The target objective will be defined for
       each benchmarking test.  The duration of the ramp up phase MUST
       be configured long enough that the test equipment does not
       overwhelm the DUT/SUT's stated performance metrics defined in
       Section 6.3, namely TCP Connections Per Second, Inspected
       Throughput, Concurrent TCP Connections, and Application
       Transactions Per Second.  No measurements are made in this
       phase.

   3.  The sustain phase starts when all required clients
       (connections) are active and operating at their desired load
       condition.  In the sustain phase, the test equipment SHOULD
       continue generating traffic at a constant target value for a
       constant number of active clients.  The minimum RECOMMENDED
       time duration for the sustain phase is 300 seconds.  This is
       the phase where measurements occur.

   4.  In the ramp down/close phase, no new connections are
       established, and no measurements are made.  The time duration
       for the ramp up and ramp down phases SHOULD be the same.

   5.  The last phase is administrative and will occur when the test
       equipment merges and collates the report data.

5.  Test Bed Considerations

   This section recommends steps to control the test environment and
   test equipment, specifically focusing on virtualized environments
   and virtualized test equipment.

   1.  Ensure that any ancillary switching or routing functions
       between the system under test and the test equipment do not
       limit the performance of the traffic generator.  This is
       specifically important for virtualized components (vSwitches,
       vRouters).

   2.  Verify that the performance of the test equipment matches and
       reasonably exceeds the expected maximum performance of the
       system under test.

   3.  Assert that the test bed characteristics are stable during the
       entire test session.  Several factors might influence
       stability, specifically for virtualized test beds: for example,
       additional workloads in a virtualized system, load balancing,
       and movement of virtual machines during the test, or simple
       issues such as additional heat created by high workloads
       leading to an emergency CPU performance reduction.

   Test bed reference pre-tests help to ensure that the traffic
   generator can achieve the maximum desired aspects, such as
   throughput, transactions per second, connections per second,
   concurrent connections, and latency; a minimal sketch of such a
   check follows.
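   The following Python sketch illustrates one way such a pre-test
   comparison might be recorded; the KPI names, the baseline values,
   and the headroom factor are illustrative assumptions, not
   requirements of this document:

      # Sketch: compare traffic generator baseline KPIs (measured
      # without the DUT, or with the DUT in a trivial fast-forwarding
      # setup) against the targets planned for the benchmarking tests.

      HEADROOM = 1.1  # generator should reasonably exceed DUT targets

      baseline = {  # measured in the reference pre-test
          "inspected_throughput_gbps": 42.0,
          "tcp_connections_per_second": 510_000,
          "concurrent_tcp_connections": 21_000_000,
          "application_transactions_per_second": 890_000,
      }

      targets = {  # planned DUT/SUT target objectives
          "inspected_throughput_gbps": 20.0,
          "tcp_connections_per_second": 300_000,
          "concurrent_tcp_connections": 10_000_000,
          "application_transactions_per_second": 500_000,
      }

      for kpi, target in targets.items():
          measured = baseline[kpi]
          ok = measured >= target * HEADROOM
          print(f"{kpi}: {measured} vs {target} ->"
                f" {'ok' if ok else 'INSUFFICIENT'}")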
   Test bed preparation may be performed either by configuring the DUT
   in the most trivial setup (fast forwarding) or without the presence
   of the DUT.

6.  Reporting

   This section describes how the final report should be formatted and
   presented.  The final test report MAY have two major sections: an
   introduction section and a detailed test results section.

6.1.  Introduction

   The following attributes SHOULD be present in the introduction
   section of the test report.

   1.  The time and date of the execution of the test MUST be
       prominent.

   2.  Summary of test bed software and hardware details

       A.  DUT/SUT Hardware/Virtual Configuration

           +  This section SHOULD clearly identify the make and model
              of the DUT/SUT.

           +  The port interfaces, including speed and link
              information, MUST be documented.

           +  If the DUT/SUT is a Virtual Network Function (VNF), the
              host (server) hardware and software details, the
              interface acceleration type such as DPDK and SR-IOV, the
              used CPU cores, the used RAM, and the resource sharing
              (e.g., pinning details and NUMA node) configuration MUST
              be documented.  The virtual components, such as the
              hypervisor and virtual switch versions, MUST also be
              documented.

           +  Any additional hardware relevant to the DUT/SUT, such as
              controllers, MUST be documented.

       B.  DUT/SUT Software

           +  The operating system name MUST be documented.

           +  The version MUST be documented.

           +  The specific configuration MUST be documented.

       C.  DUT/SUT Enabled Features

           +  Configured DUT/SUT features (see Table 1 and Table 2)
              MUST be documented.

           +  Attributes of those features MUST be documented.

           +  Any additional relevant information about features MUST
              be documented.

       D.  Test equipment hardware and software

           +  Test equipment vendor name

           +  Hardware details including model number and interface
              type

           +  Test equipment firmware and test application software
              version

       E.  Key test parameters

           +  Used cipher suites and keys

           +  IPv4 and IPv6 traffic distribution

           +  Number of configured ACLs

       F.  Details of the application traffic mix used in the
           benchmarking test "Throughput Performance with Application
           Traffic Mix" (Section 7.1)

           +  Name of applications and layer 7 protocols

           +  Percentage of emulated traffic for each application and
              layer 7 protocol

           +  Percentage of encrypted traffic and used cipher suites
              and keys (the RECOMMENDED ciphers and keys are defined
              in Section 4.3.1.3)

           +  Used object sizes for each application and layer 7
              protocol

   3.  Results Summary / Executive Summary

       A.  Results SHOULD resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy to read block.

6.2.  Detailed Test Results

   In the results section of the test report, the following attributes
   should be present for each benchmarking test.

   a.  KPIs MUST be documented separately for each benchmarking test.
       The format of the KPI metrics should be presented as described
       in Section 6.3.

   b.  The next level of detail SHOULD be graphs showing each of these
       metrics over the duration (sustain phase) of the test.  This
       allows the user to see the measured performance stability
       changes over time.

6.3.  Benchmarks and Key Performance Indicators

   This section lists key performance indicators (KPIs) for overall
   benchmarking tests.
   All KPIs MUST be measured during the sustain phase of the traffic
   load profile described in Section 4.3.4.  All KPIs MUST be measured
   from the result output of the test equipment.

   o  Concurrent TCP Connections
      The aggregate number of simultaneous connections between hosts
      across the DUT/SUT, or between hosts and the DUT/SUT (defined in
      [RFC2647]).

   o  TCP Connections Per Second
      The average number of successfully established TCP connections
      per second between hosts across the DUT/SUT, or between hosts
      and the DUT/SUT.  The TCP connection must be initiated via a TCP
      three-way handshake (SYN, SYN/ACK, ACK).  Then the TCP session
      data is sent.  The TCP session MUST be closed via either a TCP
      three-way close (FIN, FIN/ACK, ACK) or a TCP four-way close
      (FIN, ACK, FIN, ACK), and not by a RST.

   o  Application Transactions Per Second
      The average number of successfully completed transactions per
      second.  For a particular transaction to be considered
      successful, all data must have been transferred in its entirety.
      In the case of an HTTP(S) transaction, it must have a valid
      status code, and the appropriate FIN, FIN/ACK sequence must have
      been completed.

   o  TLS Handshake Rate
      The average number of successfully established TLS connections
      per second between hosts across the DUT/SUT, or between hosts
      and the DUT/SUT.

   o  Inspected Throughput
      The number of bits per second of allowed traffic a network
      security device is able to transmit to the correct destination
      interface(s) in response to a specified offered load.  The
      throughput benchmarking tests defined in Section 7 SHOULD
      measure the average OSI model layer 2 throughput value.  This
      document recommends presenting the throughput value in Gbit/s
      rounded to two places of precision, with a more specific Kbit/s
      value in parentheses.

   o  Time to First Byte (TTFB)
      TTFB is the elapsed time between the start of sending the TCP
      SYN packet from the client and the client receiving the first
      packet of application data from the server or DUT/SUT.  The
      benchmarking tests HTTP Transaction Latency (Section 7.4) and
      HTTPS Transaction Latency (Section 7.8) measure the minimum,
      average, and maximum TTFB.  The value SHOULD be expressed in
      milliseconds.

   o  URL Response Time / Time to Last Byte (TTLB)
      URL response time / TTLB is the elapsed time between the start
      of sending the TCP SYN packet from the client and the client
      receiving the last packet of application data from the server or
      DUT/SUT.  The benchmarking tests HTTP Transaction Latency
      (Section 7.4) and HTTPS Transaction Latency (Section 7.8)
      measure the minimum, average, and maximum TTLB.  The value
      SHOULD be expressed in milliseconds.

7.  Benchmarking Tests

7.1.  Throughput Performance with Application Traffic Mix

7.1.1.  Objective

   Using a relevant application traffic mix, determine the sustainable
   inspected throughput supported by the DUT/SUT.

   Based on the customer use case, users can choose the application
   traffic mix for this test.  The details about the traffic mix MUST
   be documented in the report.
   At least the following traffic mix details MUST be documented and
   reported together with the test results:

      Name of applications and layer 7 protocols

      Percentage of emulated traffic for each application and layer 7
      protocol

      Percentage of encrypted traffic and used cipher suites and keys
      (the RECOMMENDED ciphers and keys are defined in
      Section 4.3.1.3)

      Used object sizes for each application and layer 7 protocol

7.1.2.  Test Setup

   The test bed setup MUST be configured as defined in Section 4.  Any
   benchmarking test specific test bed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, the benchmarking test specific parameters SHOULD
   be defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.  In case the DUT is
   configured without the SSL inspection feature, the test report MUST
   explain the implications of this for the encrypted traffic of the
   relevant application traffic mix.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be noted for this benchmarking test:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target inspected throughput: Aggregated line rate of the
      interface(s) used in the DUT/SUT or the value defined based on
      the requirement for a specific deployment scenario

      Initial inspected throughput: 10% of the "Target inspected
      throughput"

   One of the cipher suites and keys defined in Section 4.3.1.3 is
   RECOMMENDED for use in this benchmarking test.

7.1.3.3.  Traffic Profile

   Traffic profile: This test MUST be run with a relevant application
   traffic mix profile.

7.1.3.4.  Test Results Validation Criteria

   The following criteria are defined as test results validation
   criteria.  Test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of total attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections.

7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this benchmarking
   test:

      Mandatory KPIs (benchmarks): Inspected Throughput, TTFB
      (minimum, average, and maximum), TTLB (minimum, average, and
      maximum), and Application Transactions Per Second

      Note: TTLB MUST be reported along with the object size used in
      the traffic profile.

      Optional KPIs: TCP Connections Per Second and TLS Handshake Rate

7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the inspected
   throughput performance of the DUT/SUT during the sustain period of
   the traffic load profile.  The test procedure consists of three
   major steps.
   This test procedure MAY be repeated multiple times with different
   IP types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   generate test traffic at the "Initial inspected throughput" rate as
   described in the parameters Section 7.1.3.2.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial inspected
   throughput" during the sustain phase.  Measure all KPIs as defined
   in Section 7.1.3.5.  The measured KPIs during the sustain phase
   MUST meet the test results validation criteria "a" and "b" defined
   in Section 7.1.3.4.

   If the KPI metrics do not meet the test results validation
   criteria, the test procedure MUST NOT be continued to step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   inspected throughput" rate defined in the parameter table.  The
   test equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.  The test equipment SHOULD start to
   measure and record all specified KPIs, and the frequency of
   measurements SHOULD be less than 2 seconds.  Continue the test
   until all traffic profile phases are completed.

   Within the test results validation criteria, the DUT/SUT is
   expected to reach the desired value of the target objective
   ("Target inspected throughput") in the sustain phase.  Follow step
   3 if the measured value does not meet the target value or does not
   fulfill the test results validation criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the achievable average inspected throughput within the
   test results validation criteria.  The final test iteration MUST be
   performed for the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the sustainable TCP connection
   establishment rate supported by the DUT/SUT under different
   throughput load conditions.

   To measure connections per second, test iterations MUST use the
   different fixed HTTP response object sizes (the different load
   conditions) defined in Section 7.2.3.2.

7.2.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as the number of
   interfaces and interface type, MUST be documented.

7.2.3.  Test Parameters

   In this section, the benchmarking test specific parameters SHOULD
   be defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
   The following parameters MUST be documented for this benchmarking
   test:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target connections per second: Initial value from the product
      datasheet or the value defined based on the requirement for a
      specific deployment scenario

      Initial connections per second: 10% of "Target connections per
      second" (an optional parameter for documentation)

   The client SHOULD negotiate HTTP 1.1 and close the connection with
   FIN immediately after completion of one transaction.  In each test
   iteration, the client MUST send a GET request for a fixed HTTP
   response object size.

   The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64
   KByte.

7.2.3.3.  Test Results Validation Criteria

   The following criteria are defined as test results validation
   criteria.  Test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of total attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  Concurrent TCP connections MUST be constant during steady
       state, and any deviation of concurrent TCP connections SHOULD
       be less than 10%.  This confirms that the DUT opens and closes
       TCP connections at almost the same rate.

7.2.3.4.  Measurement

   TCP Connections Per Second MUST be reported for each test iteration
   (for each object size).

7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT during the sustain period of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IP types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "Initial connections per second" as defined in the
   parameters Section 7.2.3.2.  The traffic load profile SHOULD be
   defined as described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "Initial connections per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet the test results validation criteria a, b, c, and d
   defined in Section 7.2.3.3.

   If the KPI metrics do not meet the test results validation
   criteria, the test procedure MUST NOT be continued to "Step 2".

7.2.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the target objective
   ("Target connections per second") defined in the parameters table.
   The test equipment SHOULD follow the traffic load profile
   definition as described in Section 4.3.4.
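   As an illustration, the validation criteria of Section 7.2.3.3
   might be checked against recorded sustain-phase counters as in the
   following sketch; the counter names and sample values are
   assumptions about the test equipment's result output:

      # Sketch: evaluate validation criteria a, b, and d from
      # sustain-phase counters (criterion c, constant forwarding,
      # would be checked analogously from rate samples).

      def validate(c: dict) -> dict:
          failed = c["failed_transactions"] / c["attempted_transactions"]
          rst = c["unexpected_rst_connections"] / c["initiated_connections"]
          samples = c["concurrent_connection_samples"]
          mean = sum(samples) / len(samples)
          deviation = max(abs(s - mean) / mean for s in samples)
          return {
              "a_failed_transactions": failed < 0.00001,   # < 0.001%
              "b_unexpected_rst": rst < 0.00001,           # < 0.001%
              "d_concurrent_deviation": deviation < 0.10,  # < 10%
          }

      print(validate({
          "failed_transactions": 2,
          "attempted_transactions": 1_000_000,
          "unexpected_rst_connections": 0,
          "initiated_connections": 1_000_000,
          "concurrent_connection_samples": [9_900, 10_100, 10_050, 9_950],
      }))  # all True -> the iteration is valid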
   During the ramp up and sustain phases of each test iteration, other
   KPIs such as inspected throughput, concurrent TCP connections, and
   application transactions per second MUST NOT reach the maximum
   value that the DUT/SUT can support.  The test results for specific
   test iterations SHOULD NOT be reported if the above mentioned KPIs
   (especially inspected throughput) reach the maximum value.
   (Example: If the test iteration with a 64 KByte HTTP response
   object size reached the maximum inspected throughput limitation of
   the DUT, the test iteration MAY be interrupted and the result for
   64 KByte SHOULD NOT be reported.)

   The test equipment SHOULD start to measure and record all specified
   KPIs, and the frequency of measurements SHOULD be less than 2
   seconds.  Continue the test until all traffic profile phases are
   completed.

   Within the test results validation criteria, the DUT/SUT is
   expected to reach the desired value of the target objective
   ("Target connections per second") in the sustain phase.  Follow
   step 3 if the measured value does not meet the target value or does
   not fulfill the test results validation criteria.

7.2.4.3.  Step 3: Test Iteration

   Determine the achievable TCP connections per second within the test
   results validation criteria.

7.3.  HTTP Throughput

7.3.1.  Objective

   Determine the sustainable inspected throughput of the DUT/SUT for
   HTTP transactions, varying the HTTP response object size.

7.3.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as the number of
   interfaces and interface type, MUST be documented.

7.3.3.  Test Parameters

   In this section, the benchmarking test specific parameters SHOULD
   be defined.

7.3.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific
   benchmarking test MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
7.3.  HTTP Throughput

7.3.1.  Objective

Determine the sustainable inspected throughput of the DUT/SUT for
HTTP transactions, varying the HTTP response object size.

7.3.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.3.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.3.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target inspected throughput: Aggregated line rate of the interface(s)
used in the DUT/SUT or the value defined based on the requirement for
a specific deployment scenario

Initial inspected throughput: 10% of "Target inspected throughput"
(an optional parameter for documentation)

Number of HTTP response object requests (transactions) per
connection: 10

RECOMMENDED HTTP response object sizes: 1, 16, 64, and 256 KByte, and
the mixed objects defined in Table 4.

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   |         0.2         |          1          |
   +---------------------+---------------------+
   |          6          |          1          |
   +---------------------+---------------------+
   |          8          |          1          |
   +---------------------+---------------------+
   |          9          |          1          |
   +---------------------+---------------------+
   |         10          |          1          |
   +---------------------+---------------------+
   |         25          |          1          |
   +---------------------+---------------------+
   |         26          |          1          |
   +---------------------+---------------------+
   |         35          |          1          |
   +---------------------+---------------------+
   |         59          |          1          |
   +---------------------+---------------------+
   |         347         |          1          |
   +---------------------+---------------------+

              Table 4: Mixed Objects

7.3.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate.

c.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

7.3.3.4.  Measurement

Inspected Throughput and HTTP Transactions per Second MUST be
reported for each object size.

7.3.4.  Test Procedures and Expected Results

The test procedure is designed to measure the HTTP throughput of the
DUT/SUT.  The test procedure consists of three major steps.  This
test procedure MAY be repeated multiple times with different IPv4 and
IPv6 traffic distributions and HTTP response object sizes.

7.3.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial inspected throughput" as defined in the parameters
Section 7.3.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial inspected
throughput" during the sustain phase.  Measure all KPIs as defined in
Section 7.3.3.4.

The measured KPIs during the sustain phase MUST meet the test results
validation criterion "a" defined in Section 7.3.3.3.
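As an illustration of how such monitoring might be automated, the
following non-normative Python sketch checks recorded sustain-phase
KPI samples against the failed-transaction budget and the
concurrent-connection deviation bound used in this document; the
sample record layout is an assumption made for the example.

   # Hypothetical sketch: checking sustain-phase KPI samples against
   # the validation criteria.  Each sample is a dict with 'attempted',
   # 'failed', and 'concurrent_conns' counters (assumed layout).
   def validate_sustain_phase(samples, max_fail_ratio=0.00001,
                              max_conn_deviation=0.10):
       if not samples:
           return False, "no samples recorded"
       attempted = sum(s["attempted"] for s in samples)
       failed = sum(s["failed"] for s in samples)
       # Failed transactions MUST stay below 0.001% of attempts.
       if attempted and failed / attempted >= max_fail_ratio:
           return False, "failed transaction ratio exceeded"
       # Concurrent connection deviation SHOULD stay below 10%.
       conns = [s["concurrent_conns"] for s in samples]
       mean = sum(conns) / len(conns)
       if mean and max(abs(c - mean) for c in conns) / mean > max_conn_deviation:
           return False, "concurrent connection deviation exceeded"
       return True, "criteria met"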
If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.3.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target inspected throughput") defined in the parameters table.  The
test equipment SHOULD start to measure and record all specified KPIs,
and the frequency of measurements SHOULD be less than 2 seconds.
Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.3.4.3.  Step 3: Test Iteration

Determine the achievable inspected throughput within the test results
validation criteria and measure the KPI metric Transactions per
Second.  The final test iteration MUST be performed for the test
duration defined in Section 4.3.4.

7.4.  HTTP Transaction Latency

7.4.1.  Objective

Using HTTP traffic, determine the HTTP transaction latency when the
DUT/SUT is running with the sustainable HTTP transactions per second
supported by the DUT/SUT under different HTTP response object sizes.

Test iterations MUST be performed with different HTTP response object
sizes in two different scenarios: one with a single transaction and
the other with multiple transactions within a single TCP connection.
For consistency, both the single and multiple transaction tests MUST
be configured with HTTP 1.1.

Scenario 1: The client MUST negotiate HTTP 1.1 and close the
connection with FIN immediately after completion of a single
transaction (GET and RESPONSE).

Scenario 2: The client MUST negotiate HTTP 1.1 and close the
connection with FIN immediately after completion of 10 transactions
(GET and RESPONSE) within a single TCP connection.
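As a non-normative illustration of the two scenarios, the following
Python sketch performs one transaction per connection (scenario 1)
versus 10 transactions on one persistent HTTP 1.1 connection
(scenario 2); the host and object path are placeholders.

   # Hypothetical sketch of scenario 1 and scenario 2 client behavior.
   from http.client import HTTPConnection

   def scenario_1(host, path="/object"):
       conn = HTTPConnection(host, timeout=10)
       conn.request("GET", path)        # single GET and RESPONSE ...
       conn.getresponse().read()
       conn.close()                     # ... then close with FIN

   def scenario_2(host, path="/object", transactions=10):
       conn = HTTPConnection(host, timeout=10)  # persistent connection
       for _ in range(transactions):    # 10 GETs on the same connection
           conn.request("GET", path)
           conn.getresponse().read()
       conn.close()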
7.4.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.4.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.4.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.4.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target objective for scenario 1: 50% of the connections per second
measured in benchmarking test TCP/HTTP Connections Per Second
(Section 7.2)

Target objective for scenario 2: 50% of the inspected throughput
measured in benchmarking test HTTP Throughput (Section 7.3)

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTP transactions per TCP connection: test scenario 1 with a single
transaction and scenario 2 with 10 transactions

HTTP 1.1 with a GET request for a single object.  The RECOMMENDED
object sizes are 1, 16, and 64 KByte.  For each test iteration, the
client MUST request a single HTTP response object size.

7.4.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  The ramp up and
ramp down phases SHOULD NOT be considered.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of the total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameters Section 7.4.3.2 and remain in that
    state for the entire test duration (sustain phase).

7.4.3.4.  Measurement

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and
maximum) MUST be reported for each object size.
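TTFB and TTLB for a single transaction could be measured as in the
following non-normative sketch; the host, port, and path are
placeholders, and a real test tool would time every transaction at
scale and align the start of measurement with the document's TTFB
definition.

   # Hypothetical sketch: timing TTFB and TTLB for one HTTP/1.1
   # transaction over a plain socket.
   import socket, time

   def measure_ttfb_ttlb(host, port, path="/"):
       start = time.monotonic()
       with socket.create_connection((host, port), timeout=10) as s:
           req = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                  "Connection: close\r\n\r\n")
           s.sendall(req.encode("ascii"))
           s.recv(4096)                      # first bytes arrive
           ttfb = time.monotonic() - start
           while s.recv(4096):               # drain until server closes
               pass
           ttlb = time.monotonic() - start
       return ttfb, ttlb

   # Example: measure_ttfb_ttlb("198.18.0.10", 80)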
7.4.4.  Test Procedures and Expected Results

The test procedure is designed to measure TTFB or TTLB when the DUT/
SUT is operating close to 50% of its maximum achievable connections
per second or inspected throughput.  This test procedure MAY be
repeated multiple times with different IP types (IPv4 only, IPv6
only, and mixed IPv4 and IPv6 traffic distribution), HTTP response
object sizes, and single and multiple transactions per connection
scenarios.

7.4.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial objective" as defined in the parameters Section 7.4.3.2.
The traffic load profile can be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the test
results validation criteria a, b, c, d, and e defined in
Section 7.4.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.4.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified
KPIs, and the frequency of measurements SHOULD be less than 2
seconds.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT MUST reach
the desired value of the target objective in the sustain phase.

Measure the minimum, average, and maximum values of TTFB and TTLB.

7.5.  Concurrent TCP/HTTP Connection Capacity

7.5.1.  Objective

Determine the number of concurrent TCP connections that the DUT/SUT
sustains when using HTTP traffic.

7.5.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.5.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.5.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.5.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target concurrent connections: Initial value from the product
datasheet or the value defined based on the requirement for a
specific deployment scenario

Initial concurrent connections: 10% of "Target concurrent
connections" (an optional parameter for documentation)

Maximum connections per second during ramp up phase: 50% of the
maximum connections per second measured in benchmarking test TCP/
HTTP Connections Per Second (Section 7.2)

Ramp up time (in the traffic load profile for "Target concurrent
connections"): "Target concurrent connections" / "Maximum connections
per second during ramp up phase"

Ramp up time (in the traffic load profile for "Initial concurrent
connections"): "Initial concurrent connections" / "Maximum
connections per second during ramp up phase"

The client MUST negotiate HTTP 1.1 with persistence, and each client
MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET requests for a 1 KByte HTTP response object
in the same TCP connection (10 transactions/TCP connection), and the
delay (think time) between each transaction MUST be X seconds, where

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.
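To make the relationship between these parameters concrete, the
following sketch derives the ramp up time and the think time X; the
numeric inputs are assumptions for the example.

   # Illustrative calculation of ramp up time and think time X.
   def connection_capacity_timing(target_concurrent, max_cps_ramp,
                                  steady_state_time_s):
       # Ramp up time = "Target concurrent connections" /
       #   "Maximum connections per second during ramp up phase"
       ramp_up_s = target_concurrent / max_cps_ramp
       # X spreads the 10 transactions per connection across the ramp
       # up and steady state phases.
       think_time_s = (ramp_up_s + steady_state_time_s) / 10
       return ramp_up_s, think_time_s

   ramp, think = connection_capacity_timing(
       target_concurrent=1_000_000,  # assumed datasheet value
       max_cps_ramp=50_000,          # 50% of measured CPS (Section 7.2)
       steady_state_time_s=300)      # assumed sustain duration
   print(f"ramp up: {ramp:.0f} s, think time X: {think:.1f} s")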
7.5.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of the total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

7.5.3.4.  Measurement

Average Concurrent TCP Connections MUST be reported for this
benchmarking test.

7.5.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distributions.

7.5.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish the "Initial concurrent
connections" defined in Section 7.5.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent connections".  The measured KPIs during the sustain phase
MUST meet the test results validation criteria "a" and "b" defined in
Section 7.5.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.5.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target concurrent connections").  The test equipment SHOULD follow
the traffic load profile definition (except for the ramp up time) as
described in Section 4.3.4.

During the ramp up and sustain phases, the other KPIs, such as
inspected throughput, TCP connections per second, and application
transactions per second, MUST NOT reach the maximum value that the
DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.5.3.4.  The frequency of measurements SHOULD be
less than 2 seconds.  Continue the test until all traffic profile
phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.5.4.3.  Step 3: Test Iteration

Determine the achievable concurrent TCP connection capacity within
the test results validation criteria.

7.6.  TCP/HTTPS Connections per Second

7.6.1.  Objective

Using HTTPS traffic, determine the sustainable SSL/TLS session
establishment rate supported by the DUT/SUT under different
throughput load conditions.
Test iterations MUST include common cipher suites and key strengths
as well as forward-looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in Section 7.6.3.2.

For each cipher suite and key strength, test iterations MUST use a
single HTTPS response object size defined in the test equipment
configuration parameters Section 7.6.3.2 to measure connections per
second performance under a variety of DUT security inspection load
conditions.

7.6.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.6.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.6.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.6.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target connections per second: Initial value from the product
datasheet or the value defined based on the requirement for a
specific deployment scenario

Initial connections per second: 10% of "Target connections per
second" (an optional parameter for documentation)

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

The client MUST negotiate HTTPS 1.1 and close the connection with FIN
immediately after completion of one transaction.  In each test
iteration, the client MUST send a GET request for a fixed HTTPS
response object size.  The RECOMMENDED object sizes are 1, 2, 4, 16,
and 64 KByte.

7.6.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of the total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

7.6.3.4.  Measurement

TCP Connections Per Second MUST be reported for each test iteration
(for each object size).

The KPI metric TLS Handshake Rate can be measured in the test using a
1 KByte object size.
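As a non-normative illustration of this KPI, the following Python
sketch estimates a TLS handshake rate by timing repeated full
handshakes against a server endpoint.  The host, port, and iteration
count are placeholders; certificate verification is disabled only
because test beds commonly use test certificates, and a real tool
would also disable TLS session reuse so that every handshake is a
full one.

   # Hypothetical sketch: estimating the TLS handshake rate.
   import socket, ssl, time

   def tls_handshake_rate(host, port, iterations=100):
       ctx = ssl.create_default_context()
       ctx.check_hostname = False          # test certificates assumed
       ctx.verify_mode = ssl.CERT_NONE
       start = time.monotonic()
       for _ in range(iterations):
           with socket.create_connection((host, port), timeout=10) as raw:
               with ctx.wrap_socket(raw, server_hostname=host):
                   pass                    # handshake completes here
       elapsed = time.monotonic() - start
       return iterations / elapsed         # handshakes per second

   # Example: print(tls_handshake_rate("198.18.0.10", 443))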
7.6.4.  Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT during the sustain phase of the traffic
load profile.  The test procedure consists of three major steps.
This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions.

7.6.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial connections per second" as defined in Section 7.6.3.2.
The traffic load profile SHOULD be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet the test results validation criteria a, b, c, and d defined in
Section 7.6.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.6.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target connections per
second" defined in the parameters table.  The test equipment SHOULD
follow the traffic load profile definition as described in
Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as inspected
throughput, concurrent TCP connections, and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.  The test results for a specific test iteration SHOULD NOT
be reported if any of the above-mentioned KPIs (especially inspected
throughput) reaches its maximum value.  (Example: If the test
iteration with a 64 KByte HTTPS response object size reaches the
maximum inspected throughput limitation of the DUT, the test
iteration MAY be interrupted and the result for 64 KByte SHOULD NOT
be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurements SHOULD be less than 2 seconds.
Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective ("Target
connections per second") in the sustain phase.  Follow step 3 if the
measured value does not meet the target value or does not fulfill the
test results validation criteria.

7.6.4.3.  Step 3: Test Iteration

Determine the achievable connections per second within the test
results validation criteria.

7.7.  HTTPS Throughput

7.7.1.  Objective

Determine the sustainable inspected throughput of the DUT/SUT for
HTTPS transactions, varying the HTTPS response object size.

Test iterations MUST include common cipher suites and key strengths
as well as forward-looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in the parameters
Section 7.7.3.2.

7.7.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.7.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.7.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.
7.7.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target inspected throughput: Aggregated line rate of the interface(s)
used in the DUT/SUT or the value defined based on the requirement for
a specific deployment scenario

Initial inspected throughput: 10% of "Target inspected throughput"
(an optional parameter for documentation)

Number of HTTPS response object requests (transactions) per
connection: 10

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

RECOMMENDED HTTPS response object sizes: 1, 16, 64, and 256 KByte,
and the mixed objects defined in Table 5.

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   |         0.2         |          1          |
   +---------------------+---------------------+
   |          6          |          1          |
   +---------------------+---------------------+
   |          8          |          1          |
   +---------------------+---------------------+
   |          9          |          1          |
   +---------------------+---------------------+
   |         10          |          1          |
   +---------------------+---------------------+
   |         25          |          1          |
   +---------------------+---------------------+
   |         26          |          1          |
   +---------------------+---------------------+
   |         35          |          1          |
   +---------------------+---------------------+
   |         59          |          1          |
   +---------------------+---------------------+
   |         347         |          1          |
   +---------------------+---------------------+

              Table 5: Mixed Objects

7.7.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate.

c.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

7.7.3.4.  Measurement

Inspected Throughput and HTTP Transactions per Second MUST be
reported for each object size.

7.7.4.  Test Procedures and Expected Results

The test procedure consists of three major steps.  This test
procedure MAY be repeated multiple times with different IPv4 and IPv6
traffic distributions and HTTPS response object sizes.

7.7.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial inspected throughput" as defined in the parameters
Section 7.7.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial inspected
throughput" during the sustain phase.  Measure all KPIs as defined in
Section 7.7.3.4.
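As a non-normative illustration, the average inspected throughput
over the sustain phase might be derived from periodic byte counters
as in the following sketch; the sampling interval and sample format
are assumptions for the example.

   # Illustrative sketch: average inspected throughput (Mbit/s) from
   # per-interval byte counters recorded during the sustain phase.
   def average_throughput_mbps(byte_samples, interval_s=2.0):
       if not byte_samples:
           return 0.0
       total_bits = sum(byte_samples) * 8
       return total_bits / (len(byte_samples) * interval_s) / 1_000_000

   # Example: three 2-second samples of ~2.5 GByte each is roughly
   # 10,000 Mbit/s of inspected throughput.
   print(average_throughput_mbps([2.5e9, 2.48e9, 2.51e9]))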
The measured KPIs during the sustain phase MUST meet the test results
validation criterion "a" defined in Section 7.7.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.7.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target inspected throughput") defined in the parameters table.  The
test equipment SHOULD start to measure and record all specified KPIs.
The frequency of measurements SHOULD be less than 2 seconds.
Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.7.4.3.  Step 3: Test Iteration

Determine the achievable average inspected throughput within the test
results validation criteria.  The final test iteration MUST be
performed for the test duration defined in Section 4.3.4.

7.8.  HTTPS Transaction Latency

7.8.1.  Objective

Using HTTPS traffic, determine the HTTPS transaction latency when the
DUT/SUT is running with the sustainable HTTPS transactions per second
supported by the DUT/SUT under different HTTPS response object sizes.

Scenario 1: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.

7.8.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.8.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.8.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.8.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The following parameters MUST be documented for this benchmarking
test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target objective for scenario 1: 50% of the connections per second
measured in benchmarking test TCP/HTTPS Connections per Second
(Section 7.6)

Target objective for scenario 2: 50% of the inspected throughput
measured in benchmarking test HTTPS Throughput (Section 7.7)

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTPS transactions per TCP connection: test scenario 1 with a single
transaction and scenario 2 with 10 transactions

HTTPS 1.1 with a GET request for a single object.  The RECOMMENDED
object sizes are 1, 16, and 64 KByte.  For each test iteration, the
client MUST request a single HTTPS response object size.

7.8.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  The ramp up and
ramp down phases SHOULD NOT be considered.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of the total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during steady state,
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameters Section 7.8.3.2 and remain in that
    state for the entire test duration (sustain phase).

7.8.3.4.  Measurement

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and
maximum) MUST be reported for each object size.

7.8.4.  Test Procedures and Expected Results

The test procedure is designed to measure TTFB or TTLB when the DUT/
SUT is operating close to 50% of its maximum achievable connections
per second or inspected throughput.  This test procedure MAY be
repeated multiple times with different IP types (IPv4 only, IPv6
only, and mixed IPv4 and IPv6 traffic distribution), HTTPS response
object sizes, and single and multiple transactions per connection
scenarios.

7.8.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial objective" as defined in the parameters Section 7.8.3.2.
The traffic load profile can be defined as described in
Section 4.3.4.
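For clarity, the scenario objectives referenced in this step can be
derived from the earlier HTTPS benchmark results; the following
sketch shows the arithmetic with assumed baseline numbers.

   # Illustrative arithmetic only; the baseline values are assumptions.
   measured_https_cps = 40_000        # from Section 7.6 (assumed)
   measured_https_tput_mbps = 9_500   # from Section 7.7 (assumed)

   target_scenario_1 = 0.5 * measured_https_cps        # 50% of CPS
   target_scenario_2 = 0.5 * measured_https_tput_mbps  # 50% of throughput
   initial_scenario_1 = 0.1 * target_scenario_1        # optional 10%
   initial_scenario_2 = 0.1 * target_scenario_2
   print(target_scenario_1, target_scenario_2,
         initial_scenario_1, initial_scenario_2)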
The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the test
results validation criteria a, b, c, d, and e defined in
Section 7.8.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.8.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurements SHOULD be less than 2 seconds.
Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT MUST reach
the desired value of the target objective in the sustain phase.

Measure the minimum, average, and maximum values of TTFB and TTLB.

7.9.  Concurrent TCP/HTTPS Connection Capacity

7.9.1.  Objective

Determine the number of concurrent TCP connections that the DUT/SUT
sustains when using HTTPS traffic.

7.9.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number of
interfaces and interface type, MUST be documented.

7.9.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

7.9.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific
benchmarking test MUST be documented.

7.9.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this benchmarking test:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target concurrent connections: Initial value from the product
datasheet or the value defined based on the requirement for a
specific deployment scenario

Initial concurrent connections: 10% of "Target concurrent
connections" (an optional parameter for documentation)

Maximum connections per second during ramp up phase: 50% of the
maximum connections per second measured in benchmarking test TCP/
HTTPS Connections per Second (Section 7.6)

Ramp up time (in the traffic load profile for "Target concurrent
connections"): "Target concurrent connections" / "Maximum connections
per second during ramp up phase"

Ramp up time (in the traffic load profile for "Initial concurrent
connections"): "Initial concurrent connections" / "Maximum
connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence, and each
client MAY open multiple concurrent TCP connections per server
endpoint IP.

Each client sends 10 GET requests for a 1 KByte HTTPS response object
in the same TCP connection (10 transactions/TCP connection), and the
delay (think time) between each transaction MUST be X seconds, where
X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.

7.9.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of the total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of the total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

7.9.3.4.  Measurement

Average Concurrent TCP Connections MUST be reported for this
benchmarking test.

7.9.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distributions.

7.9.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish the "Initial concurrent
connections" defined in Section 7.9.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent connections".  The measured KPIs during the sustain phase
MUST meet the test results validation criteria "a" and "b" defined in
Section 7.9.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.9.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the target objective
("Target concurrent connections").  The test equipment SHOULD follow
the traffic load profile definition (except for the ramp up time) as
described in Section 4.3.4.

During the ramp up and sustain phases, the other KPIs, such as
inspected throughput, TCP connections per second, and application
transactions per second, MUST NOT reach the maximum value that the
DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.9.3.4.  The frequency of measurements SHOULD be
less than 2 seconds.  Continue the test until all traffic profile
phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3 if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.9.4.3.  Step 3: Test Iteration

Determine the achievable concurrent TCP connections within the test
results validation criteria.

8.  IANA Considerations

The IANA has allocated 2001:2::/48 for IPv6 benchmark testing, a
48-bit prefix registered for this purpose in [RFC5180].
For IPv4 testing, the IP subnet 198.18.0.0/15 has been assigned to
the BMWG by the IANA.  This assignment was made to minimize the
chance of conflict in case a testing device were to be accidentally
connected to part of the Internet.  The specific use of the IPv4
addresses is detailed in Appendix C of [RFC2544].

9.  Security Considerations

The primary goal of this document is to provide benchmarking
terminology and methodology for next-generation network security
devices.  However, readers should be aware that there is some overlap
between performance and security issues.  Specifically, the optimal
configuration for network security device performance may not be the
most secure, and vice versa.  The cipher suites recommended in this
document are for testing purposes only.  Cipher suite recommendations
for real deployments are outside the scope of this document.

10.  Contributors

The following individuals contributed significantly to the creation
of this document:

Alex Samonte, Amritam Putatunda, Aria Eslambolchizadeh, David
DeSanto, Jurrie Van Den Breekel, Ryan Liles, Samaresh Nair, Stephen
Goudreault, and Tim Otto

11.  Acknowledgements

The authors wish to acknowledge the members of NetSecOPEN for their
participation in the creation of this document.  Additionally, the
following members need to be acknowledged:

Anand Vijayan, Baski Mohan, Chao Guo, Chris Brown, Chris Marshall,
Jay Lindenauer, Michael Shannon, Mike Deichman, Ray Vinson, Ryan
Riese, Tim Carlin, and Toulnay Orkun

12.  References

12.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.

[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
           2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
           May 2017, <https://www.rfc-editor.org/info/rfc8174>.

12.2.  Informative References

[RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544,
           DOI 10.17487/RFC2544, March 1999,
           <https://www.rfc-editor.org/info/rfc2544>.

[RFC2616]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
           Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
           Transfer Protocol -- HTTP/1.1", RFC 2616,
           DOI 10.17487/RFC2616, June 1999,
           <https://www.rfc-editor.org/info/rfc2616>.

[RFC2647]  Newman, D., "Benchmarking Terminology for Firewall
           Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999,
           <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
           "Benchmarking Methodology for Firewall Performance",
           RFC 3511, DOI 10.17487/RFC3511, April 2003,
           <https://www.rfc-editor.org/info/rfc3511>.

[RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
           Dugatkin, "IPv6 Benchmarking Methodology for Network
           Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180,
           May 2008, <https://www.rfc-editor.org/info/rfc5180>.

[RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
           "Applicability Statement for RFC 2544: Use on Production
           Networks Considered Harmful", RFC 6815,
           DOI 10.17487/RFC6815, November 2012,
           <https://www.rfc-editor.org/info/rfc6815>.

[RFC8446]  Rescorla, E., "The Transport Layer Security (TLS) Protocol
           Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018,
           <https://www.rfc-editor.org/info/rfc8446>.

Appendix A.  Test Methodology - Security Effectiveness Evaluation

A.1.  Test Objective

This test methodology verifies that the DUT/SUT is able to detect,
prevent, and report vulnerabilities.
In this test, background test traffic will be generated in order to
utilize the DUT/SUT.  In parallel, CVEs will be sent to the DUT/SUT
in both encrypted and clear text payload formats using a traffic
generator.  The selection of the CVEs is described in Section 4.2.1.
The following aspects are measured in this test:

o  Number of blocked CVEs

o  Number of bypassed (non-blocked) CVEs

o  Background traffic performance (verify whether the background
   traffic is impacted while sending CVEs toward the DUT/SUT)

o  Accuracy of DUT/SUT statistics in terms of vulnerability reporting

A.2.  Test Bed Setup

The same test bed MUST be used for the security effectiveness test as
well as for the benchmarking test cases defined in Section 7.

A.3.  Test Parameters

In this section, the benchmarking test specific parameters SHOULD be
defined.

A.3.1.  DUT/SUT Configuration Parameters

DUT/SUT configuration parameters MUST conform to the requirements
defined in Section 4.2.  The same DUT configuration MUST be used for
the security effectiveness test as well as for the benchmarking test
cases defined in Section 7.  The DUT/SUT MUST be configured in inline
mode, all detected attack traffic MUST be dropped, and the session
SHOULD be reset.

A.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The same client and server IP
ranges MUST be configured as used in the benchmarking test cases.  In
addition, the following parameters MUST be documented for this
benchmarking test:

o  Background traffic: 45% of the maximum HTTP throughput and 45% of
   the maximum HTTPS throughput supported by the DUT/SUT (measured
   with a 64 KByte object size in the benchmarking tests "HTTP(S)
   Throughput" defined in Section 7.3 and Section 7.7)

o  RECOMMENDED CVE traffic transmission rate: 10 CVEs per second

o  It is RECOMMENDED to generate each CVE multiple times
   (sequentially) at 10 CVEs per second

o  Ciphers and keys for the encrypted CVE traffic MUST use the same
   cipher configured for the HTTPS traffic related benchmarking tests
   (Section 7.6 - Section 7.9)

A.4.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole test duration.

a.  The number of failed application transactions in the background
    traffic MUST be less than 0.01% of attempted transactions.

b.  The number of terminated TCP connections of the background
    traffic (due to unexpected TCP RST sent by the DUT/SUT) MUST be
    less than 0.01% of the total initiated TCP connections in the
    background traffic.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  False positives MUST NOT occur in the background traffic.

A.5.  Measurement

The following KPI metrics MUST be reported for this test scenario (a
non-normative calculation sketch follows the list):

Mandatory KPIs:

o  Blocked CVEs: SHOULD be represented in the following ways:

   *  Number of blocked CVEs out of total CVEs

   *  Percentage of blocked CVEs

o  Unblocked CVEs: SHOULD be represented in the following ways:

   *  Number of unblocked CVEs out of total CVEs

   *  Percentage of unblocked CVEs

o  Background traffic behavior: SHOULD be represented in one of the
   following ways:

   *  No impact (traffic transmission at a constant rate)

   *  Minor impact (e.g., small spikes of +/- 100 Mbit/s)

   *  Heavy impact (e.g., large spikes and a reduction of the
      background HTTP(S) throughput by more than 100 Mbit/s)

o  DUT/SUT reporting accuracy: The DUT/SUT MUST report all detected
   vulnerabilities.

Optional KPIs:

o  List of unblocked CVEs
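As a non-normative illustration, the mandatory KPI calculations above
could be automated as in the following Python sketch; the result
structure and the example CVE identifiers are assumptions made for
the example.

   # Hypothetical sketch of the blocked/unblocked CVE KPI calculation.
   def cve_effectiveness(results):
       """results: mapping of CVE id -> True if the DUT/SUT blocked it."""
       total = len(results)
       blocked = sum(1 for ok in results.values() if ok)
       unblocked = total - blocked
       return {
           "blocked": f"{blocked} out of {total}",
           "blocked_pct": 100.0 * blocked / total if total else 0.0,
           "unblocked": f"{unblocked} out of {total}",
           "unblocked_pct": 100.0 * unblocked / total if total else 0.0,
           # Optional KPI: list of unblocked CVEs.
           "unblocked_cves": sorted(c for c, ok in results.items() if not ok),
       }

   # Example (hypothetical CVE ids):
   # cve_effectiveness({"CVE-2021-0001": True, "CVE-2021-0002": False})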
A.6.  Test Procedures and Expected Results

The test procedure is designed to measure the security effectiveness
of the DUT/SUT during the sustain phase of the traffic load profile.
The test procedure consists of two major steps.  This test procedure
MAY be repeated multiple times with different IPv4 and IPv6 traffic
distributions.

A.6.1.  Step 1: Background Traffic

Generate the background traffic at the transmission rate defined in
the parameters section.

The DUT/SUT MUST reach the target objective (HTTP(S) throughput) in
the sustain phase.  The measured KPIs during the sustain phase MUST
meet the test results validation criteria a, b, c, and d defined in
Appendix A.4.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

A.6.2.  Step 2: CVE Emulation

While generating the background traffic (in the sustain phase), send
the CVE traffic as defined in the parameters section.

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurements MUST be less than 2 seconds.
Continue the test until all CVEs are sent.

The measured KPIs MUST meet all the test results validation criteria
a, b, c, and d defined in Appendix A.4.

In addition, the DUT/SUT SHOULD report the vulnerabilities correctly.

Appendix B.  DUT/SUT Classification

This document attempts to classify the DUT/SUT in four different
categories based on the maximum supported firewall throughput
performance number defined in the vendor datasheet.  This
classification MAY help the user determine a specific configuration
scale (e.g., the number of ACL entries), traffic profiles, and attack
traffic profiles, scaling those proportionally to the DUT/SUT sizing
category.

The four categories are Extra Small, Small, Medium, and Large.  The
RECOMMENDED throughput values for these categories are:

Extra Small (XS) - supported throughput less than 1 Gbit/s

Small (S) - supported throughput less than 5 Gbit/s

Medium (M) - supported throughput greater than 5 Gbit/s and less than
10 Gbit/s

Large (L) - supported throughput greater than 10 Gbit/s

Authors' Addresses

Balamuhunthan Balarajah
Berlin
Germany

Email: bm.balarajah@gmail.com

Carsten Rossenhoevel
EANTC AG
Salzufer 14
Berlin 10587
Germany

Email: cross@eantc.de

Brian Monkman
NetSecOPEN
417 Independence Court
Mechanicsburg, PA 17050
USA

Email: bmonkman@netsecopen.org