Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                           C. Rossenhoevel
Expires: May 3, 2021                                            EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                        October 30, 2020


    Benchmarking Methodology for Network Security Device Performance
                  draft-ietf-bmwg-ngfw-performance-05

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices, including next-generation
   firewalls (NGFW), next-generation intrusion detection and prevention
   systems (NGIDS/NGIPS), and unified threat management (UTM)
   implementations.  This document aims to improve the applicability,
   reproducibility, and transparency of benchmarks and to align the
   test methodology with today's increasingly complex layer 7
   application use cases.  The main areas covered are test terminology,
   test configuration parameters, and benchmarking methodology,
   initially for NGFW and NGIDS/NGIPS.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 3, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
     4.1.  Testbed Configuration
     4.2.  DUT/SUT Configuration
       4.2.1.  Security Effectiveness Configuration
     4.3.  Test Equipment Configuration
       4.3.1.  Client Configuration
       4.3.2.  Backend Server Configuration
       4.3.3.  Traffic Flow Definition
       4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
     6.1.  Key Performance Indicators
   7.  Benchmarking Tests
     7.1.  Throughput Performance With Application Traffic Mix
       7.1.1.  Objective
       7.1.2.  Test Setup
       7.1.3.  Test Parameters
       7.1.4.  Test Procedures and Expected Results
     7.2.  TCP/HTTP Connections Per Second
       7.2.1.  Objective
       7.2.2.  Test Setup
       7.2.3.  Test Parameters
       7.2.4.  Test Procedures and Expected Results
     7.3.  HTTP Throughput
       7.3.1.  Objective
       7.3.2.  Test Setup
       7.3.3.  Test Parameters
       7.3.4.  Test Procedures and Expected Results
     7.4.  TCP/HTTP Transaction Latency
       7.4.1.  Objective
       7.4.2.  Test Setup
       7.4.3.  Test Parameters
       7.4.4.  Test Procedures and Expected Results
     7.5.  Concurrent TCP/HTTP Connection Capacity
       7.5.1.  Objective
       7.5.2.  Test Setup
       7.5.3.  Test Parameters
       7.5.4.  Test Procedures and Expected Results
     7.6.  TCP/HTTPS Connections per Second
       7.6.1.  Objective
       7.6.2.  Test Setup
       7.6.3.  Test Parameters
       7.6.4.  Test Procedures and Expected Results
     7.7.  HTTPS Throughput
       7.7.1.  Objective
       7.7.2.  Test Setup
       7.7.3.  Test Parameters
       7.7.4.  Test Procedures and Expected Results
     7.8.  HTTPS Transaction Latency
       7.8.1.  Objective
       7.8.2.  Test Setup
       7.8.3.  Test Parameters
       7.8.4.  Test Procedures and Expected Results
     7.9.  Concurrent TCP/HTTPS Connection Capacity
       7.9.1.  Objective
       7.9.2.  Test Setup
       7.9.3.  Test Parameters
       7.9.4.  Test Procedures and Expected Results
   8.  IANA Considerations
   9.  Security Considerations
   10. Contributors
   11. Acknowledgements
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix A.  Test Methodology - Security Effectiveness Evaluation
     A.1.  Test Objective
     A.2.  Testbed Setup
     A.3.  Test Parameters
       A.3.1.  DUT/SUT Configuration Parameters
       A.3.2.  Test Equipment Configuration Parameters
     A.4.  Test Results Validation Criteria
     A.5.  Measurement
     A.6.  Test Procedures and Expected Results
       A.6.1.  Step 1: Background Traffic
       A.6.2.  Step 2: CVE Emulation
   Authors' Addresses

1.  Introduction

   Fifteen years have passed since the IETF initially recommended test
   methodology and terminology for firewalls ([RFC2647], [RFC3511]).
   The requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved to more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, etc.  In an industry of
   growing importance, well-defined and reproducible key performance
   indicators (KPIs) are increasingly needed, as they enable fair and
   reasonable comparison of network security functions.  All these
   reasons have led to the creation of a new next-generation security
   device benchmarking document.
2.  Requirements

   The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119], [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for next-generation security devices.  It covers the validation of
   security effectiveness configurations of the security devices,
   followed by performance benchmark testing.  This document focuses
   on advanced, realistic, and reproducible testing methods.
   Additionally, it describes testbed environments, test tool
   requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document is applicable to all
   benchmarking test scenarios described in Section 7.

4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not due to
   inherent physical network limitations, such as the number of
   physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.
   For this reason, this document recommends avoiding external devices
   such as switches and routers in the testbed wherever possible.

   However, in a typical deployment, the security devices (Device
   Under Test/System Under Test) are connected to routers and
   switches, which reduces the number of entries in the MAC or ARP
   tables of the Device Under Test/System Under Test (DUT/SUT).  If
   MAC or ARP tables have many entries, this may impact the actual
   DUT/SUT performance due to MAC and ARP/ND (Neighbor Discovery)
   table lookup processes.  Therefore, it is RECOMMENDED to connect
   aggregation switches or routers between the test equipment and the
   DUT/SUT, as shown in Figure 1.  The aggregation switches or routers
   can also be used to aggregate the test equipment or DUT/SUT ports,
   if the number of ports used on the test equipment and the DUT/SUT
   do not match.

   If the test equipment is capable of emulating layer 3 routing
   functionality and there is no need for test equipment port
   aggregation, it is RECOMMENDED to configure the test setup as shown
   in Figure 2.
   +-------------------+      +-----------+      +--------------------+
   |Aggregation Switch/|      |           |      | Aggregation Switch/|
   | Router            +------+  DUT/SUT  +------+ Router             |
   |                   |      |           |      |                    |
   +----------+--------+      +-----------+      +--------+-----------+
              |                                           |
              |                                           |
   +----------+------------+                 +------------+----------+
   |                       |                 |                       |
   | +-------------------+ |                 | +-------------------+ |
   | | Emulated Router(s)| |                 | | Emulated Router(s)| |
   | |    (Optional)     | |                 | |    (Optional)     | |
   | +-------------------+ |                 | +-------------------+ |
   | +-------------------+ |                 | +-------------------+ |
   | |      Clients      | |                 | |      Servers      | |
   | +-------------------+ |                 | +-------------------+ |
   |                       |                 |                       |
   |    Test Equipment     |                 |    Test Equipment     |
   +-----------------------+                 +-----------------------+

                    Figure 1: Testbed Setup - Option 1

   +-----------------------+                 +-----------------------+
   | +-------------------+ |  +-----------+  | +-------------------+ |
   | | Emulated Router(s)| |  |           |  | | Emulated Router(s)| |
   | |    (Optional)     +----+  DUT/SUT  +----+    (Optional)     | |
   | +-------------------+ |  |           |  | +-------------------+ |
   | +-------------------+ |  +-----------+  | +-------------------+ |
   | |      Clients      | |                 | |      Servers      | |
   | +-------------------+ |                 | +-------------------+ |
   |                       |                 |                       |
   |    Test Equipment     |                 |    Test Equipment     |
   +-----------------------+                 +-----------------------+

                    Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.  Since each DUT/SUT will have its own
   unique configuration, users SHOULD configure their device with the
   same parameters and security features that would be used in the
   actual deployment of the device, or in a typical deployment, in
   order to achieve maximum security coverage.

   This document attempts to define the recommended security features
   which SHOULD be consistently enabled for all the benchmarking tests
   described in Section 7.  Table 1 and Table 2 below describe the
   security features that SHOULD be configured on NGFW and NGIDS/NGIPS
   DUT/SUTs, respectively.

   To improve repeatability, a summary of the DUT/SUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.
   +----------------+-------------+----------+
   |                  NGFW                   |
   +----------------+-------------+----------+
   | DUT Features   | RECOMMENDED | OPTIONAL |
   +----------------+-------------+----------+
   | SSL Inspection |      x      |          |
   +----------------+-------------+----------+
   | IDS/IPS        |      x      |          |
   +----------------+-------------+----------+
   | Anti Spyware   |      x      |          |
   +----------------+-------------+----------+
   | Antivirus      |      x      |          |
   +----------------+-------------+----------+
   | Anti Botnet    |      x      |          |
   +----------------+-------------+----------+
   | Web Filtering  |             |    x     |
   +----------------+-------------+----------+
   | DLP            |             |    x     |
   +----------------+-------------+----------+
   | DDoS           |             |    x     |
   +----------------+-------------+----------+
   | Certificate    |             |    x     |
   | Validation     |             |          |
   +----------------+-------------+----------+
   | Logging and    |      x      |          |
   | Reporting      |             |          |
   +----------------+-------------+----------+
   | Application    |      x      |          |
   | Identification |             |          |
   +----------------+-------------+----------+

           Table 1: NGFW Security Features

   +----------------+-------------+----------+
   |               NGIDS/NGIPS               |
   +----------------+-------------+----------+
   | DUT Features   | RECOMMENDED | OPTIONAL |
   +----------------+-------------+----------+
   | SSL Inspection |      x      |          |
   +----------------+-------------+----------+
   | Anti Spyware   |      x      |          |
   +----------------+-------------+----------+
   | Antivirus      |      x      |          |
   +----------------+-------------+----------+
   | Anti Botnet    |      x      |          |
   +----------------+-------------+----------+
   | Logging and    |      x      |          |
   | Reporting      |             |          |
   +----------------+-------------+----------+
   | Application    |      x      |          |
   | Identification |             |          |
   +----------------+-------------+----------+
   | Deep Packet    |      x      |          |
   | Inspection     |             |          |
   +----------------+-------------+----------+
   | Anti Evasion   |      x      |          |
   +----------------+-------------+----------+

        Table 2: NGIDS/NGIPS Security Features

   The following table provides a brief description of the security
   features.

   +------------------+------------------------------------------------+
   | DUT/SUT Features | Description                                    |
   +------------------+------------------------------------------------+
   | SSL Inspection   | The DUT/SUT intercepts and decrypts inbound    |
   |                  | HTTPS traffic between servers and clients.     |
   |                  | Once the content inspection has been           |
   |                  | completed, the DUT/SUT MUST encrypt the HTTPS  |
   |                  | traffic with the ciphers and keys used by the  |
   |                  | clients and servers.                           |
   +------------------+------------------------------------------------+
   | IDS/IPS          | The DUT/SUT MUST detect and block exploits     |
   |                  | targeting known and unknown vulnerabilities    |
   |                  | across the monitored network.                  |
   +------------------+------------------------------------------------+
   | Anti Malware     | The DUT/SUT MUST detect and prevent the        |
   |                  | transmission of malicious executable code and  |
   |                  | any associated communications across the       |
   |                  | monitored network.  This includes data         |
   |                  | exfiltration as well as command and control    |
   |                  | channels.                                      |
   +------------------+------------------------------------------------+
   | Anti Spyware     | Anti Spyware is a subcategory of Anti          |
   |                  | Malware.  Spyware transmits information        |
   |                  | without the user's knowledge or permission.    |
   |                  | The DUT/SUT MUST detect and block the initial  |
   |                  | infection or the transmission of data.         |
   +------------------+------------------------------------------------+
   | Anti Botnet      | The DUT/SUT MUST detect traffic to or from     |
   |                  | botnets.                                       |
   +------------------+------------------------------------------------+
   | Anti Evasion     | The DUT/SUT MUST detect and mitigate attacks   |
   |                  | that have been obfuscated in some manner.      |
   +------------------+------------------------------------------------+
   | Web Filtering    | The DUT/SUT MUST detect and block malicious    |
   |                  | websites, including defined classifications    |
   |                  | of websites, across the monitored network.     |
   +------------------+------------------------------------------------+
   | DLP              | The DUT/SUT MUST detect and block the          |
   |                  | transmission of Personally Identifiable        |
   |                  | Information (PII) and specific files across    |
   |                  | the monitored network.                         |
   +------------------+------------------------------------------------+
   | Certificate      | The DUT/SUT MUST validate certificates used    |
   | Validation       | in encrypted communications across the         |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+
   | Logging and      | The DUT/SUT MUST be able to log and report     |
   | Reporting        | all traffic at the flow level across the       |
   |                  | monitored network.                             |
   +------------------+------------------------------------------------+
   | Application      | The DUT/SUT MUST detect known applications,    |
   | Identification   | as defined within the selected traffic mix,    |
   |                  | across the monitored network.                  |
   +------------------+------------------------------------------------+

                Table 3: Security Feature Description

   In summary, the DUT/SUT SHOULD be configured as follows:

   o  All RECOMMENDED security inspection enabled

   o  Disposition of all flows of traffic is logged - logging to an
      external device is permissible

   o  Geographical location filtering and Application Identification
      and Control configured to be triggered based on a site or an
      application from the defined traffic mix

   In addition, a realistic number of access control rules (ACLs) MUST
   be configured on the DUT/SUT.  However, this applies only to
   security devices where ACLs are configurable; on NGIDS/NGIPS
   devices, the ACL configuration is OPTIONAL.  This document
   determines the number of access policy rules for four different
   classes of DUT/SUT.  The classification of the DUT/SUT MAY be based
   on its maximum supported firewall throughput performance number
   defined in the vendor datasheet.  This document classifies the
   DUT/SUT into four categories: Extra Small, Small, Medium, and
   Large.

   The RECOMMENDED throughput values for these classes are:

   Extra Small (XS) - supported throughput less than 1 Gbit/s

   Small (S) - supported throughput less than 5 Gbit/s

   Medium (M) - supported throughput greater than 5 Gbit/s and less
   than 10 Gbit/s

   Large (L) - supported throughput greater than 10 Gbit/s

   The Access Control Rules (ACLs) defined in Table 4 MUST be
   configured from top to bottom in the order shown in the table.  The
   ACL entries MUST be configured in the Forwarding Information Base
   (FIB) table of the DUT/SUT.  (Note: There will be differences
   between how security vendors implement ACL decision making.)  The
   configured ACLs MUST NOT block the security and performance test
   traffic used for the benchmarking test scenarios.
                                                       +---------------+
                                                       |    DUT/SUT    |
                                                       | Classification|
                                                       |    # Rules    |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |           |   Match   |                  |        |   |   |   |   |
   | Rules Type|  Criteria |   Description    | Action | XS| S | M | L |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Any application  | block  | 5 | 10| 20| 50|
   |layer      |           | traffic NOT      |        |   |   |   |   |
   |           |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Any src IP subnet| block  | 25| 50|100|250|
   |layer      |TCP/UDP    | used in the test |        |   |   |   |   |
   |           |Dst ports  | AND any dst ports|        |   |   |   |   |
   |           |           | NOT used in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src/Dst IP | Any src/dst IP   | block  | 25| 50|100|250|
   |           |           | subnet NOT used  |        |   |   |   |   |
   |           |           | in the test      |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Applications     | allow  | 10| 10| 10| 10|
   |layer      |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Half of the src  | allow  | 1 | 1 | 1 | 1 |
   |layer      |TCP/UDP    | IP used in the   |        |   |   |   |   |
   |           |Dst ports  | test AND any dst |        |   |   |   |   |
   |           |           | ports used in the|        |   |   |   |   |
   |           |           | test traffic.    |        |   |   |   |   |
   |           |           | One rule per     |        |   |   |   |   |
   |           |           | subnet           |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src IP     | The rest of the  | allow  | 1 | 1 | 1 | 1 |
   |           |           | src IP subnet    |        |   |   |   |   |
   |           |           | range used in    |        |   |   |   |   |
   |           |           | the test.  One   |        |   |   |   |   |
   |           |           | rule per subnet  |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+

                      Table 4: DUT/SUT Access List

4.2.1.  Security Effectiveness Configuration

   The security features of the DUT/SUT (defined in Table 1 and
   Table 2) MUST be configured effectively, in such a way as to
   detect, prevent, and report the defined security vulnerability
   sets.  This section defines the selection of the security
   vulnerability sets from the Common Vulnerabilities and Exposures
   (CVE) list for the testing.  The vulnerability set MUST reflect a
   minimum of 500 CVEs, from no older than 10 calendar years to the
   current year.  These CVEs SHOULD be selected with a focus on in-use
   software commonly found in business applications, with a Common
   Vulnerability Scoring System (CVSS) Severity of High (7-10).

   This document is mainly focused on performance benchmarking.
   However, it is strongly RECOMMENDED to validate the security
   configuration of the DUT/SUT by evaluating the security
   effectiveness as a prerequisite for the performance benchmarking
   tests defined in Section 7.  The methodology for evaluating
   security effectiveness is defined in Appendix A.

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters influence the traffic
   flows which will be offered and thereby impact the performance
   measurements.

   This section specifies common test equipment configuration
   parameters applicable to all test scenarios defined in Section 7.
   Any test scenario specific parameters are described under the test
   setup section of each test scenario individually.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the RECOMMENDED values for certain parameters.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno [RFC5681] variant, which
   includes congestion avoidance, backoff and windowing, fast
   retransmission, and fast recovery on every TCP connection between
   client and server endpoints.  The default IPv4 and IPv6 MSS MUST be
   set to 1460 bytes and 1440 bytes, respectively, and the TX and RX
   receive windows to 64 KByte.  The client initial congestion window
   MUST NOT exceed 10 times the MSS.  Delayed ACKs are permitted, and
   the maximum client delayed ACK MUST NOT exceed 10 times the MSS
   before a forced ACK.  Up to 3 retries SHOULD be allowed before a
   timeout event is declared.  All traffic MUST set the TCP PSH flag
   to high.  The source port range SHOULD be 1024 - 65535.  Internal
   timeouts SHOULD be dynamically scalable per RFC 793.  The client
   SHOULD initiate and close TCP connections.  TCP connections MUST be
   closed via FIN.

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes:

   o  The IP blocks SHOULD consist of multiple unique, discontinuous
      static address blocks.

   o  A default gateway is permitted.

   o  The IPv4 Type of Service (ToS) byte or IPv6 traffic class should
      be set to '00' or '000000', respectively.

   The following equation can be used to determine the required total
   number of client IP addresses:

      Desired total number of client IPs =
         Target throughput [Mbit/s] /
         Throughput per IP address [Mbit/s]

   Based on the deployment and use case scenario, the value for
   "Throughput per IP address" can vary:

   (Option 1)  DUT/SUT deployment scenario 1: 6-7 Mbit/s per IP (e.g.,
               1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2)  DUT/SUT deployment scenario 2: 0.1-0.2 Mbit/s per IP
               (e.g., 50,000-100,000 IPs per 10 Gbit/s throughput)

   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6.  The following options
   can be considered for the selection of the traffic mix ratio:

   (Option 1)  100% IPv4, no IPv6

   (Option 2)  80% IPv4, 20% IPv6

   (Option 3)  50% IPv4, 50% IPv6

   (Option 4)  20% IPv4, 80% IPv6

   (Option 5)  no IPv4, 100% IPv6

4.3.1.3.  Emulated Web Browser Attributes

   The emulated web client contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate modern,
   typical browser attributes to improve the realism of the result
   set.

   For HTTP traffic emulation, the emulated browser MUST negotiate
   HTTP 1.1.  HTTP persistence MAY be enabled depending on the test
   scenario.  The browser MAY open multiple TCP connections per server
   endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  Within a TCP connection,
   multiple transactions MAY be processed if the emulated browser has
   available connections.  The browser SHOULD advertise a User-Agent
   header.  Headers MUST be sent uncompressed.  The browser SHOULD
   enforce content length validation.
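   As an informative illustration only (real benchmarks use dedicated
   test equipment), the following minimal sketch shows these browser
   attributes with Python's standard http.client module.  The server
   address, object path, and User-Agent string are assumptions made
   for the example, not part of this methodology.

      # Minimal sketch of the emulated browser attributes above,
      # assuming a reachable HTTP server; all names are illustrative.
      import http.client

      SERVER = "192.0.2.10"   # assumed server endpoint (TEST-NET-1)

      conn = http.client.HTTPConnection(SERVER, 80)   # HTTP 1.1
      conn.request("GET", "/index.html", headers={
          "Host": SERVER,                       # IP or FQDN in Host header
          "User-Agent": "EmulatedBrowser/1.0",  # advertised User-Agent
          "Accept-Encoding": "identity",        # headers/body uncompressed
          "Connection": "keep-alive",           # optional HTTP persistence
      })
      response = conn.getresponse()
      body = response.read()

      # Content length validation: the received body should match the
      # advertised Content-Length.
      expected = response.getheader("Content-Length")
      if expected is not None and len(body) != int(expected):
          raise ValueError("content length mismatch")
      conn.close()                              # connection closed via FIN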
   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use
   TLS 1.2 or higher.  The TLS record size MAY be optimized for the
   HTTPS response object size, up to a record size of 16 KByte.  The
   client endpoint SHOULD send the TLS Extension Server Name
   Indication (SNI) information when opening a security tunnel.  Each
   client connection MUST perform a full handshake with the server
   certificate and MUST NOT use session reuse or resumption.

   The following ciphers and keys are RECOMMENDED for the HTTPS based
   benchmarking tests defined in Section 7:

   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

   Note: The above ciphers and keys are commonly used enterprise grade
   encryption cipher suites.  It is recognized that these will evolve
   over time.  Individual certification bodies SHOULD use ciphers and
   keys that reflect evolving use cases.  These choices MUST be
   documented in the resulting test reports, with detailed information
   on the ciphers and keys used, along with the reasons for the
   choices.

4.3.2.  Backend Server Configuration

   This section specifies which parameters should be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack on the server side SHOULD be configured similar to
   the client side configuration described in Section 4.3.1.1.  In
   addition, the server initial congestion window MUST NOT exceed 10
   times the MSS.  Delayed ACKs are permitted, and the maximum server
   delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

   The sum of the server IP space SHOULD contain the following
   attributes:

   o  The server IP blocks SHOULD consist of unique, discontinuous
      static address blocks, with one IP per server Fully Qualified
      Domain Name (FQDN) endpoint per test port.

   o  A default gateway is permitted.  The IPv4 ToS byte and the IPv6
      traffic class bytes should be set to '00' and '000000',
      respectively.

   o  The server IP addresses SHOULD be distributed between IPv4 and
      IPv6 with a ratio identical to the client distribution ratio.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   HTTP version 1.1 with persistence.  The server MUST advertise a
   server type in the Server response header [RFC2616].  For the HTTPS
   server, TLS 1.2 or higher MUST be used, with a maximum record size
   of 16 KByte, and ticket resumption or Session ID reuse MUST NOT be
   used.  The server MUST listen on TCP port 443.  The server SHALL
   serve a certificate to the client.  The HTTPS server MUST check the
   Host SNI information against the FQDN if SNI is in use.  The cipher
   suite and key size on the server side MUST be configured similar to
   the client side configuration described in Section 4.3.1.3.
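   As a rough, informative illustration of these server-side TLS
   requirements, the sketch below configures a TLS endpoint with
   Python's ssl module: TLS 1.2 as the floor, session tickets
   disabled, and an SNI check against the served FQDN.  The
   certificate paths and FQDN are assumptions.  Note that the ssl
   module does not expose the server-side Session ID cache, so full
   suppression of Session ID reuse is left to the test equipment.

      # Minimal sketch of an HTTPS server endpoint per the paragraph
      # above; certificate paths and the FQDN are assumptions.
      import socket
      import ssl

      FQDN = "www.example.com"                  # assumed served FQDN

      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.2 or higher
      ctx.options |= ssl.OP_NO_TICKET               # no ticket resumption
      ctx.num_tickets = 0                           # no TLS 1.3 tickets
      ctx.load_cert_chain("server.crt", "server.key")

      def check_sni(conn, server_name, context):
          # Reject handshakes whose SNI does not match the served FQDN.
          if server_name != FQDN:
              return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
          return None

      ctx.sni_callback = check_sni

      with socket.create_server(("", 443)) as listener:
          with ctx.wrap_socket(listener, server_side=True) as tls:
              conn, addr = tls.accept()         # full handshake per client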
4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between client and
   server endpoints.  At the beginning of the test, the server
   endpoint initializes and will be ready to accept connections,
   including initialization of the TCP stack as well as bound HTTP and
   HTTPS servers.  When a client endpoint is needed, it initializes
   and is given attributes such as a MAC and IP address.  The behavior
   of the client is to sweep through the given server IP space,
   sequentially generating a service recognizable by the DUT.  Thus, a
   balanced mesh between client endpoints and server endpoints will be
   generated in a client-port/server-port combination.  Each client
   endpoint performs the same actions as the other endpoints, the
   differences being the source IP of the client endpoint and the
   target server IP pool.  The client MUST use the server's IP address
   or Fully Qualified Domain Name (FQDN) in the Host header.  For TLS,
   the client MAY use the Server Name Indication (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of the other clients that are
   concurrently executing.  This section describes how a client steps
   through different services when it initiates traffic.  Once the
   test is initialized, the client endpoints SHOULD randomly hold
   (perform no operation) for a few milliseconds to allow for better
   randomization of the start of client traffic.  Each client will
   either open a new TCP connection or connect to a TCP persistence
   stack still open to that specific server.  At any point that the
   service profile may require encryption, a TLS encryption tunnel
   will form, presenting the URL or IP address request to the server.
   If SNI is used, the server will then perform an SNI name check,
   comparing the proposed FQDN with the domain embedded in the
   certificate.  Only if they match will the server process the HTTPS
   response object.  The initial response object MUST NOT have a fixed
   size; its size is based on the benchmarking tests described in
   Section 7.  Multiple additional sub-URLs (response objects on the
   service page) MAY be requested simultaneously.  These MAY be to the
   same server IP as the initial URL.  Each sub-object will also use a
   canonical FQDN and URL path, as observed in the traffic mix used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  A traffic
   load profile has five distinct phases: init, ramp up, sustain, ramp
   down, and collection.

   1.  During the init phase, testbed devices, including the client
       and server endpoints, should negotiate layer 2-3 connectivity
       such as MAC learning and ARP.  Only after successful MAC
       learning or ARP/ND resolution SHALL the test iteration move to
       the next phase.  No measurements are made in this phase.  The
       minimum RECOMMENDED time for the init phase is 5 seconds.
       During this phase, the emulated clients SHOULD NOT initiate any
       sessions with the DUT/SUT; in contrast, the emulated servers
       should be ready to accept requests from the DUT/SUT or from the
       emulated clients.

   2.  In the ramp up phase, the test equipment SHOULD start to
       generate the test traffic.  It SHOULD actively use a set,
       approximate number of unique client IP addresses to generate
       traffic.  The traffic SHOULD ramp from zero to the desired
       target objective.  The target objective is defined for each
       benchmarking test.
The 709 duration for the ramp up phase MUST be configured long enough, so 710 that the test equipment does not overwhelm DUT/SUT's supported 711 performance metrics namely; connections per second, throughput, 712 concurrent TCP connections, and application transactions per 713 second. No measurements are made in this phase. 715 3. In the sustain phase, the test equipment SHOULD continue 716 generating traffic to constant target value for a constant number 717 of active client IPs. The minimum RECOMMENDED time duration for 718 sustain phase is 300 seconds. This is the phase where 719 measurements occur. 721 4. In the ramp down/close phase, no new connections are established, 722 and no measurements are made. The time duration for ramp up and 723 ramp down phase SHOULD be the same. 725 5. The last phase is administrative and will occur when the test 726 equipment merges and collates the report data. 728 5. Test Bed Considerations 730 This section recommends steps to control the test environment and 731 test equipment, specifically focusing on virtualized environments and 732 virtualized test equipment. 734 1. Ensure that any ancillary switching or routing functions between 735 the system under test and the test equipment do not limit the 736 performance of the traffic generator. This is specifically 737 important for virtualized components (vSwitches, vRouters). 739 2. Verify that the performance of the test equipment matches and 740 reasonably exceeds the expected maximum performance of the system 741 under test. 743 3. Assert that the testbed characteristics are stable during the 744 entire test session. Several factors might influence stability 745 specifically for virtualized test beds. For example additional 746 workloads in a virtualized system, load balancing, and movement 747 of virtual machines during the test, or simple issues such as 748 additional heat created by high workloads leading to an emergency 749 CPU performance reduction. 751 Testbed reference pre-tests help to ensure that the maximum desired 752 traffic generator aspects such as throughput, transaction per second, 753 connection per second, concurrent connection, and latency. 755 Once the desired maximum performance goals for the system under test 756 have been identified, a safety margin of 10% SHOULD be added for 757 throughput and subtracted for maximum latency and maximum packet 758 loss. 760 Testbed preparation may be performed either by configuring the DUT in 761 the most trivial setup (fast forwarding) or without presence of the 762 DUT. 764 6. Reporting 766 This section describes how the final report should be formatted and 767 presented. The final test report MAY have two major sections; 768 Introduction and result sections. The following attributes SHOULD be 769 present in the introduction section of the test report. 771 1. The time and date of the execution of the test MUST be prominent. 773 2. Summary of testbed software and Hardware details 775 A. DUT/SUT Hardware/Virtual Configuration 777 + This section SHOULD clearly identify the make and model of 778 the DUT/SUT 780 + The port interfaces, including speed and link information 781 MUST be documented. 783 + If the DUT/SUT is a Virtual Network Function (VNF), 784 host(server) hardware and software details, interface 785 acceleration type such as DPDK and SR-IOV used CPU cores, 786 used RAM, and the resource sharing (e.g. Pinning details 787 and NUMA Node) configuration MUST be documented. 
The 788 virtual components such as Hypervisor, virtual switch 789 version MUST be also documented. 791 + Any additional hardware relevant to the DUT/SUT such as 792 controllers MUST be documented 794 B. DUT/SUT Software 796 + The operating system name MUST be documented 798 + The version MUST be documented 800 + The specific configuration MUST be documented 802 C. DUT/SUT Enabled Features 803 + Configured DUT/SUT features (see Table 1 and Table 2) MUST 804 be documented 806 + Attributes of those featured MUST be documented 808 + Any additional relevant information about features MUST be 809 documented 811 D. Test equipment hardware and software 813 + Test equipment vendor name 815 + Hardware details including model number, interface type 817 + Test equipment firmware and test application software 818 version 820 E. Key test parameters 822 + Used cipher suites and keys 824 + IPv4 and IPv6 traffic distribution 826 + Number of configured ACL 828 F. Details of application traffic mix used in the test scenario 829 Throughput Performance With Application Traffic Mix 830 (Section 7.1) 832 + Name of applications and layer 7 protocols 834 + Percentage of emulated traffic for each application and 835 layer 7 protocols 837 + Percentage of encrypted traffic and used cipher suites and 838 keys (The RECOMMENDED ciphers and keys are defined in 839 Section 4.3.1.3) 841 + Used object sizes for each application and layer 7 842 protocols 844 3. Results Summary / Executive Summary 846 1. Results SHOULD resemble a pyramid in how it is reported, with 847 the introduction section documenting the summary of results 848 in a prominent, easy to read block. 850 2. In the result section of the test report, the following 851 attributes should be present for each test scenario. 853 a. KPIs MUST be documented separately for each test 854 scenario. The format of the KPI metrics should be 855 presented as described in Section 6.1. 857 b. The next level of details SHOULD be graphs showing each 858 of these metrics over the duration (sustain phase) of the 859 test. This allows the user to see the measured 860 performance stability changes over time. 862 6.1. Key Performance Indicators 864 This section lists key performance indicators (KPIs) for overall 865 benchmarking test scenarios. All KPIs MUST be measured during the 866 sustain phase of the traffic load profile described in Section 4.3.4. 867 All KPIs MUST be measured from the result output of test equipment. 869 o Concurrent TCP Connections 870 This KPI measures the average concurrent open TCP connections in 871 the sustaining period. 873 o TCP Connections Per Second 874 This KPI measures the average established TCP connections per 875 second in the sustaining period. Also this KPI measures average 876 established and terminated TCP connections per second 877 simultaneously for the test scenarios "TCP/HTTP(S) Connection Per 878 Second" defined in Section 7.2 and Section 7.6. 880 o Application Transactions Per Second 881 This KPI measures the average successfully completed application 882 transactions per second in the sustaining period. 884 o TLS Handshake Rate 885 This KPI measures the average TLS 1.2 or higher session formation 886 rate within the sustaining period. 888 o Throughput 889 This KPI measures the average Layer 2 throughput within the 890 sustaining period as well as average packets per seconds within 891 the same period. 
      The value of throughput SHOULD be presented in Gbit/s, rounded
      to two places of precision, with a more specific Kbit/s value in
      parentheses.  Optionally, goodput MAY also be logged as an
      average goodput rate measured over the same period.  The goodput
      result SHALL be presented in the same format as throughput.

   o  URL Response time / Time to Last Byte (TTLB)
      This KPI measures the minimum, average, and maximum per URL
      response time in the sustaining period.  The latency is measured
      at the client and, in this case, is the time duration between
      sending a GET request from the client and the receipt of the
      complete response from the server.

   o  Time to First Byte (TTFB)
      This KPI measures the minimum, average, and maximum time to
      first byte.  TTFB is the elapsed time between the client sending
      the SYN packet and receiving the first byte of application data
      from the DUT/SUT.  TTFB SHOULD be expressed in milliseconds.

7.  Benchmarking Tests

7.1.  Throughput Performance With Application Traffic Mix

7.1.1.  Objective

   Using a relevant application traffic mix, determine the maximum
   sustainable throughput performance supported by the DUT/SUT.

   Based on the customer use case, users can choose the application
   traffic mix for this test.  The details of the traffic mix MUST be
   documented in the report.  At least the following traffic mix
   details MUST be documented and reported together with the test
   results:

      Name of applications and layer 7 protocols

      Percentage of emulated traffic for each application and layer 7
      protocol

      Percentage of encrypted traffic and used cipher suites and keys
      (the RECOMMENDED ciphers and keys are defined in
      Section 4.3.1.3)

      Used object sizes for each application and layer 7 protocol

7.1.2.  Test Setup

   The testbed setup MUST be configured as defined in Section 4.  Any
   test scenario specific testbed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, the test scenario specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.  In case the DUT is configured without
   the SSL inspection feature, the test report MUST explain the
   implications of this for the encrypted traffic in the relevant
   application traffic mix.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be noted for this test scenario:

      Client IP address range, defined in Section 4.3.1.2

      Server IP address range, defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6, defined in
      Section 4.3.1.2

      Target throughput: It can be defined based on requirements.
      Otherwise, it represents the aggregated line rate of the
      interface(s) used in the DUT/SUT.

      Initial throughput: 10% of the "Target throughput"

   One of the ciphers and keys defined in Section 4.3.1.3 is
   RECOMMENDED for use in this test scenario.

7.1.3.3.  Traffic Profile

   Traffic profile: The test scenario MUST be run with a relevant
   application traffic mix profile.
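   As an informative illustration only, a documented application
   traffic mix could be captured in machine-readable form as sketched
   below.  The application names and percentages are hypothetical
   examples, not a recommended mix.

      # Hypothetical application traffic mix profile; percentages and
      # application names are examples only.
      TRAFFIC_MIX = {
          # application: (share of traffic in %, encrypted?)
          "Web browsing":    (30, True),
          "Video streaming": (25, True),
          "Email":           (15, False),
          "File transfer":   (20, False),
          "DNS":             (10, False),
      }

      assert sum(share for share, _ in TRAFFIC_MIX.values()) == 100

      # Share of encrypted traffic implied by this mix (here: 55%).
      encrypted = sum(s for s, enc in TRAFFIC_MIX.values() if enc)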
7.1.3.4.  Test Results Validation Criteria

   The following criteria are defined as the test results validation
   criteria.  The test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of the total attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RSTs sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of the total initiated TCP connections.

7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this test scenario.

   Mandatory KPIs: average Throughput, TTFB (minimum, average, and
   maximum), TTLB (minimum, average, and maximum), and average
   Application Transactions Per Second.

   Note: TTLB MUST be reported along with the minimum, maximum, and
   average object size used in the traffic profile.

   Optional KPIs: average TCP Connections Per Second and average TLS
   Handshake Rate.

7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the throughput
   performance of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   generate test traffic at the "Initial throughput" rate, as
   described in the parameters Section 7.1.3.2.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial throughput"
   during the sustain phase.  Measure all KPIs as defined in
   Section 7.1.3.5.  The measured KPIs during the sustain phase MUST
   meet the validation criteria "a" and "b" defined in
   Section 7.1.3.4.

   If the KPI metrics do not meet the validation criteria, the test
   procedure MUST NOT be continued to step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   throughput" rate defined in the parameter table.  The test
   equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.  The test equipment SHOULD start to
   measure and record all specified KPIs.  The frequency of KPI metric
   measurements SHOULD be 2 seconds.  Continue the test until all
   traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target throughput
   during the sustain phase.  In addition, the measured KPIs MUST meet
   all validation criteria.  Follow step 3 if the KPI metrics do not
   meet the validation criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable throughput within the
   validation criteria.  The final test iteration MUST be performed
   for the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the maximum sustainable TCP
   connection establishment rate supported by the DUT/SUT under
   different throughput load conditions.

   To measure connections per second, test iterations MUST use the
   different fixed HTTP response object sizes defined in
   Section 7.2.3.2.

7.2.2.  Test Setup

   The testbed setup SHOULD be configured as defined in Section 4.
   Any specific testbed configuration changes, such as the number of
   interfaces and interface type, MUST be documented.
7.2.3.  Test Parameters

   In this section, the test scenario specific parameters SHOULD be
   defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

      Client IP address range, defined in Section 4.3.1.2

      Server IP address range, defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6, defined in
      Section 4.3.1.2

      Target connections per second: initial value from the product
      datasheet (if known)

      Initial connections per second: 10% of the "Target connections
      per second" (an optional parameter for documentation)

   The client SHOULD negotiate HTTP 1.1 and close the connection with
   FIN immediately after completion of one transaction.  In each test
   iteration, the client MUST send a GET request for a fixed HTTP
   response object size.

   The RECOMMENDED response object sizes are 1, 2, 4, 16, and
   64 KByte.

7.2.3.3.  Test Results Validation Criteria

   The following criteria are defined as the test results validation
   criteria.  The test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of the total attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RSTs sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of the total initiated TCP connections.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  Concurrent TCP connections MUST be constant during steady
       state, and any deviation of concurrent TCP connections SHOULD
       be less than 10%.  This confirms that the DUT opens and closes
       TCP connections at almost the same rate.

7.2.3.4.  Measurement

   The following KPI metric MUST be reported for each test iteration:

      average TCP Connections Per Second

7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IP types: IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic
   distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "Initial connections per second", as defined in the
   parameters Section 7.2.3.2.  The traffic load profile SHOULD be
   defined as described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "Initial connections per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet the validation criteria a, b, c, and d defined in
   Section 7.2.3.3.
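   To make the thresholds concrete, a hypothetical post-run check over
   counters exported by the test equipment might look as follows.  The
   counter names are illustrative assumptions, since the actual result
   output format is vendor specific.

      # Hypothetical check of validation criteria a, b, and d above;
      # counter names are illustrative, not a vendor's actual output.
      def meets_validation_criteria(counters):
          failed = (counters["failed_transactions"] /
                    counters["attempted_transactions"])
          rst = (counters["unexpected_rst_connections"] /
                 counters["initiated_tcp_connections"])
          samples = counters["concurrent_tcp_samples"]  # sustain phase
          mean = sum(samples) / len(samples)
          deviation = max(abs(s - mean) for s in samples) / mean
          return (failed < 0.00001       # criterion a: less than 0.001%
                  and rst < 0.00001      # criterion b: less than 0.001%
                  and deviation < 0.10)  # criterion d: less than 10%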
   If the KPI metrics do not meet the validation criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.2.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target connections
   per second" defined in the parameters table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases of each test iteration, other
   KPIs, such as throughput, concurrent TCP connections, and
   application transactions per second, MUST NOT reach the maximum
   value that the DUT/SUT can support.  The test results for specific
   test iterations SHOULD NOT be reported if the above-mentioned KPIs
   (especially throughput) reach the maximum value.  (Example: If the
   test iteration with a 64 KByte HTTP response object size reached
   the maximum throughput limitation of the DUT, the test iteration
   MAY be interrupted and the result for 64 KByte SHOULD NOT be
   reported.)

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
   the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target connections per
   second rate in the sustain phase.  In addition, the measured KPIs
   MUST meet all validation criteria.

   Follow step 3 if the KPI metrics do not meet the validation
   criteria.

7.2.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable connections per second
   within the validation criteria.

7.3.  HTTP Throughput

7.3.1.  Objective

   Determine the throughput for HTTP transactions, varying the HTTP
   response object size.

7.3.2.  Test Setup

   The testbed setup SHOULD be configured as defined in Section 4.
   Any specific testbed configuration changes, such as the number of
   interfaces and interface type, MUST be documented.

7.3.3.  Test Parameters

   In this section, the test scenario specific parameters SHOULD be
   defined.

7.3.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
   The following parameters MUST be documented for this test scenario:

      Client IP address range, defined in Section 4.3.1.2

      Server IP address range, defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6, defined in
      Section 4.3.1.2

      Target Throughput: initial value from the product datasheet (if
      known)

      Initial Throughput: 10% of the "Target Throughput" (an optional
      parameter for documentation)

      Number of HTTP response object requests (transactions) per
      connection: 10

      RECOMMENDED HTTP response object sizes: 1 KByte, 16 KByte,
      64 KByte, 256 KByte, and the mixed objects defined in Table 5

             +---------------------+---------------------+
             | Object size (KByte) | Number of requests/ |
             |                     | Weight              |
             +---------------------+---------------------+
             |         0.2         |          1          |
             +---------------------+---------------------+
             |          6          |          1          |
             +---------------------+---------------------+
             |          8          |          1          |
             +---------------------+---------------------+
             |          9          |          1          |
             +---------------------+---------------------+
             |         10          |          1          |
             +---------------------+---------------------+
             |         25          |          1          |
             +---------------------+---------------------+
             |         26          |          1          |
             +---------------------+---------------------+
             |         35          |          1          |
             +---------------------+---------------------+
             |         59          |          1          |
             +---------------------+---------------------+
             |         347         |          1          |
             +---------------------+---------------------+

                      Table 5: Mixed Objects

7.3.3.3.  Test Results Validation Criteria

   The following criteria are defined as the test results validation
   criteria.  The test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of the attempted transactions.

   b.  Traffic should be forwarded at a constant rate.

   c.  Concurrent TCP connections MUST be constant during steady
       state, and any deviation of concurrent TCP connections SHOULD
       be less than 10%.  This confirms that the DUT opens and closes
       TCP connections at almost the same rate.

7.3.3.4.  Measurement

   The following KPI metrics MUST be reported for this test scenario:

      average Throughput and average HTTP Transactions Per Second

7.3.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the HTTP throughput of
   the DUT/SUT.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IPv4 and IPv6 traffic distributions and HTTP response object sizes.

7.3.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "Initial Throughput", as defined in the parameters
   Section 7.3.3.2.

   The traffic load profile SHOULD be defined as described in
   Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial Throughput"
   during the sustain phase.  Measure all KPIs as defined in
   Section 7.3.3.4.

   The measured KPIs during the sustain phase MUST meet the validation
   criterion "a" defined in Section 7.3.3.3.

   If the KPI metrics do not meet the validation criteria, the test
   procedure MUST NOT be continued to "Step 2".
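   For orientation, the sketch below estimates the HTTP transaction
   rate implied by a target throughput under the Table 5 mixed-object
   profile.  It ignores TCP/HTTP protocol overhead and assumes
   1 KByte = 1024 bytes, so it is only a rough sizing aid, not part of
   the methodology.

      # Rough sizing aid: transactions per second implied by a target
      # throughput for the Table 5 mixed-object profile (equal
      # weights).  Ignores protocol overhead; 1 KByte = 1024 bytes.
      MIX_KBYTE = [0.2, 6, 8, 9, 10, 25, 26, 35, 59, 347]

      def implied_transactions_per_second(target_gbit_per_s):
          mean_object_bits = sum(MIX_KBYTE) / len(MIX_KBYTE) * 1024 * 8
          return target_gbit_per_s * 1e9 / mean_object_bits

      # Example: a 10 Gbit/s target implies roughly 23,000 HTTP
      # transactions per second with this mix.
      print(round(implied_transactions_per_second(10)))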
7.3.4.2.  Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired "Target Throughput" in
the sustain phase.  In addition, the measured KPIs MUST meet all
validation criteria.

Perform the test separately for each HTTP response object size.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.3.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable throughput within the
validation criteria.  The final test iteration MUST be performed for
the test duration defined in Section 4.3.4.

7.4.  TCP/HTTP Transaction Latency

7.4.1.  Objective

Using HTTP traffic, determine the average HTTP transaction latency
when the DUT is running with the sustainable HTTP transactions per
second supported by the DUT/SUT under different HTTP response object
sizes.

Test iterations MUST be performed with different HTTP response object
sizes in two different scenarios: one with a single transaction and
the other with multiple transactions within a single TCP connection.
For consistency, both the single and multiple transaction tests MUST
be configured with HTTP 1.1.

Scenario 1: The client MUST negotiate HTTP 1.1 and close the
connection with FIN immediately after completion of a single
transaction (GET and RESPONSE).

Scenario 2: The client MUST negotiate HTTP 1.1 and close the
connection with FIN immediately after completion of 10 transactions
(GET and RESPONSE) within a single TCP connection.

7.4.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number and type
of interfaces, MUST be documented.

7.4.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.4.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.4.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target objective for scenario 1: 50% of the maximum connections per
second measured in test scenario TCP/HTTP Connections Per Second
(Section 7.2)

Target objective for scenario 2: 50% of the maximum throughput
measured in test scenario HTTP Throughput (Section 7.3)

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTP transactions per TCP connection: test scenario 1 with a single
transaction and test scenario 2 with 10 transactions

HTTP 1.1 with GET command requesting a single object.  The
RECOMMENDED object sizes are 1, 16, or 64 KByte.  For each test
iteration, the client MUST request a single HTTP response object
size.
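The objectives above are simple derivations from earlier test
results.  A non-normative sketch (Python), with assumed example
values standing in for the Section 7.2 and Section 7.3 measurements:

   # Non-normative derivation of the latency test objectives.
   # Assumed example inputs from earlier test scenarios:
   max_cps = 180_000          # max connections/s from Section 7.2
   max_throughput_gbps = 40   # max HTTP throughput from Section 7.3

   target_scenario_1 = 0.5 * max_cps              # 50% of max CPS
   target_scenario_2 = 0.5 * max_throughput_gbps  # 50% of max tput

   # Optional documentation parameters:
   initial_scenario_1 = 0.1 * target_scenario_1   # 10% of target
   initial_scenario_2 = 0.1 * target_scenario_2   # 10% of target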
7.4.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole sustain phase of the traffic load profile.  The ramp
up and ramp down phases SHOULD NOT be considered.

Generic criteria:

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during the steady
    state, and any deviation of concurrent TCP connections SHOULD be
    less than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameters (Section 7.4.3.2) and remain in that
    state for the entire test duration (sustain phase).

7.4.3.4.  Measurement

The following KPI metrics MUST be reported for each test scenario and
each HTTP response object size separately:

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and
maximum)

All KPIs are measured once the target throughput achieves the steady
state.

7.4.4.  Test Procedures and Expected Results

The test procedure is designed to measure the average application
transaction latency or TTLB when the DUT is operating close to 50% of
its maximum achievable throughput or connections per second.  This
test procedure MAY be repeated multiple times with different IP types
(IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic
distributions), HTTP response object sizes, and single and multiple
transactions per connection scenarios.

7.4.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial objective" as defined in the parameters
(Section 7.4.3.2).  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the
validation criteria a, b, c, d, and e defined in Section 7.4.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT continue to "Step 2".

7.4.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as throughput,
concurrent TCP connections, and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for a specific test iteration SHOULD NOT be reported if
any of the above-mentioned KPIs (especially throughput) reaches the
maximum value.  (Example: If the test iteration with a 64 KByte HTTP
response object size reaches the maximum throughput limitation of the
DUT, the test iteration MAY be interrupted and the result for 64
KByte SHOULD NOT be reported.)
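The TTFB and TTLB KPIs required by Section 7.4.3.4 are per-transaction
timestamp differences.  The following non-normative sketch (Python)
shows one way test equipment might derive them; the transaction
record fields are assumptions of the sketch, not a defined format:

   from statistics import mean

   def latency_kpis(transactions):
       """Compute TTFB/TTLB KPIs from a list of transaction records.
       Each record is assumed to be a dict with timestamps (seconds):
         't_request'    - GET request sent
         't_first_byte' - first byte of the response received
         't_last_byte'  - last byte of the response received
       """
       ttfb = [t["t_first_byte"] - t["t_request"] for t in transactions]
       ttlb = [t["t_last_byte"] - t["t_request"] for t in transactions]
       return {
           "TTFB": (min(ttfb), mean(ttfb), max(ttfb)),
           "TTLB": (min(ttlb), mean(ttlb), max(ttlb)),
       }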
The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.  The DUT/SUT
is expected to reach the desired "Target objective" in the sustain
phase.  In addition, the measured KPIs MUST meet all validation
criteria.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.4.4.3.  Step 3: Test Iteration

Determine the maximum achievable connections per second within the
validation criteria and measure the latency values.

7.5.  Concurrent TCP/HTTP Connection Capacity

7.5.1.  Objective

Determine the maximum number of concurrent TCP connections that the
DUT/SUT sustains when using HTTP traffic.

7.5.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number and type
of interfaces, MUST be documented.

7.5.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.5.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.5.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target concurrent connections: Initial value from the product
datasheet (if known)

Initial concurrent connections: 10% of "Target concurrent
connections" (an optional parameter for documentation)

Maximum connections per second during ramp up phase: 50% of the
maximum connections per second measured in test scenario TCP/HTTP
Connections Per Second (Section 7.2)

Ramp up time (in the traffic load profile for "Target concurrent
connections"): "Target concurrent connections" / "Maximum connections
per second during ramp up phase"

Ramp up time (in the traffic load profile for "Initial concurrent
connections"): "Initial concurrent connections" / "Maximum
connections per second during ramp up phase"

The client MUST negotiate HTTP 1.1 with persistence, and each client
MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting a 1 KByte HTTP response
object in the same TCP connection (10 transactions/TCP connection),
and the delay (think time) between the transactions MUST be X
seconds, where:

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.
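A non-normative sketch (Python) of the ramp up time and think time
calculations above, with assumed example values for the measured and
target parameters:

   # Non-normative derivation of ramp up and think times.
   # Assumed example inputs:
   target_concurrent = 4_000_000   # target concurrent connections
   max_cps_measured = 200_000      # max CPS from Section 7.2
   steady_state_time = 300         # sustain phase length in seconds

   ramp_up_cps = 0.5 * max_cps_measured            # 50% of max CPS
   ramp_up_time = target_concurrent / ramp_up_cps  # seconds

   # Think time X between the 10 transactions of each connection,
   # chosen so connections stay open across ramp up and sustain:
   think_time = (ramp_up_time + steady_state_time) / 10

   print(ramp_up_time, think_time)   # 40.0 seconds, 34.0 seconds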
7.5.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded constantly.

7.5.3.4.  Measurement

The following KPI metric MUST be reported for this test scenario:

average Concurrent TCP Connections

7.5.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distributions.

7.5.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish the "Initial concurrent TCP
connections" defined in Section 7.5.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the validation criteria "a" and "b" defined in
Section 7.5.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT continue to "Step 2".

7.5.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target concurrent TCP
connections".  The test equipment SHOULD follow the traffic load
profile definition (except for the ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phases, the other KPIs such as
throughput, TCP connections per second, and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.5.3.4.  The frequency of measurement SHOULD be
2 seconds.  Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target concurrent
connections in the sustain phase.  In addition, the measured KPIs
MUST meet all validation criteria.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.5.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP
connection capacity within the validation criteria.

7.6.  TCP/HTTPS Connections per Second

7.6.1.  Objective

Using HTTPS traffic, determine the maximum sustainable SSL/TLS
session establishment rate supported by the DUT/SUT under different
throughput load conditions.

Test iterations MUST include common cipher suites and key strengths
as well as forward looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in Section 7.6.3.2.
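On test equipment built on a general-purpose TLS library, iterating
over cipher suites usually means constraining the client TLS context
per iteration.  A non-normative sketch using Python's ssl module; the
cipher suite string is an example only, and the normative list is
given in Section 4.3.1.3:

   import ssl

   def client_context(cipher_suite,
                      tls_version=ssl.TLSVersion.TLSv1_2):
       """Build a client-side TLS context restricted to a single
       cipher suite for one test iteration."""
       ctx = ssl.create_default_context()
       ctx.minimum_version = tls_version
       ctx.maximum_version = tls_version
       # Example suite only; controls TLS 1.2 suites (OpenSSL
       # selects TLS 1.3 suites separately).
       ctx.set_ciphers(cipher_suite)  # e.g. "ECDHE-RSA-AES128-GCM-SHA256"
       # Test lab only: the emulated servers use test certificates.
       ctx.check_hostname = False
       ctx.verify_mode = ssl.CERT_NONE
       return ctx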
For each cipher suite and key strength, test iterations MUST use a
single HTTPS response object size defined in the test equipment
configuration parameters (Section 7.6.3.2) to measure connections per
second performance under a variety of DUT security inspection load
conditions.

7.6.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number and type
of interfaces, MUST be documented.

7.6.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.6.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.6.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target connections per second: Initial value from the product
datasheet (if known)

Initial connections per second: 10% of "Target connections per
second" (an optional parameter for documentation)

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

The client MUST negotiate HTTP 1.1 over TLS (HTTPS) and close the
connection with FIN immediately after completion of one transaction.
In each test iteration, the client MUST send a GET command requesting
a fixed HTTPS response object size.  The RECOMMENDED object sizes are
1, 2, 4, 16, and 64 KByte.

7.6.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria:

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during the steady
    state, and any deviation of concurrent TCP connections SHOULD be
    less than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

7.6.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

average TCP Connections Per Second and average TLS Handshake Rate
(the TLS Handshake Rate can be measured in the test scenario using a
1 KByte object size)

7.6.4.  Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT during the sustain phase of the traffic
load profile.  The test procedure consists of three major steps.
This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distributions.

7.6.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.
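A non-normative sketch of this link-status check on a Linux-based
tester, reading the kernel's operational state for each test port;
the interface names are examples only:

   from pathlib import Path

   def all_links_up(interfaces):
       """Return True if every interface reports operstate 'up'."""
       for name in interfaces:
           state = (Path("/sys/class/net") / name /
                    "operstate").read_text()
           if state.strip() != "up":
               return False
       return True

   # Example: verify the client- and server-side test ports.
   assert all_links_up(["eth1", "eth2"]), "links not UP; abort test"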
Configure the traffic load profile of the test equipment to establish
the "Initial connections per second" as defined in Section 7.6.3.2.
The traffic load profile SHOULD be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet the validation criteria a, b, c, and d defined in
Section 7.6.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT continue to "Step 2".

7.6.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target connections per
second" defined in the parameters table.  The test equipment SHOULD
follow the traffic load profile definition as described in
Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as throughput,
concurrent TCP connections, and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for a specific test iteration SHOULD NOT be reported if
any of the above-mentioned KPIs (especially throughput) reaches the
maximum value.  (Example: If the test iteration with a 64 KByte HTTPS
response object size reaches the maximum throughput limitation of the
DUT, the test iteration MAY be interrupted and the result for 64
KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target connections per
second rate in the sustain phase.  In addition, the measured KPIs
MUST meet all validation criteria.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.6.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable connections per second
within the validation criteria.

7.7.  HTTPS Throughput

7.7.1.  Objective

Determine the throughput for HTTPS transactions while varying the
HTTPS response object size.

Test iterations MUST include common cipher suites and key strengths
as well as forward looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in the parameters
(Section 7.7.3.2).

7.7.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number and type
of interfaces, MUST be documented.

7.7.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.7.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.7.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target Throughput: Initial value from the product datasheet (if
known)

Initial Throughput: 10% of "Target Throughput" (an optional parameter
for documentation)

Number of HTTPS response object requests (transactions) per
connection: 10

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

RECOMMENDED HTTPS response object sizes: 1 KByte, 2 KByte, 4 KByte,
16 KByte, 64 KByte, 256 KByte, and mixed objects as defined in
Table 5.

        +---------------------+---------------------+
        | Object size (KByte) | Number of requests/ |
        |                     | Weight              |
        +---------------------+---------------------+
        | 0.2                 | 1                   |
        +---------------------+---------------------+
        | 6                   | 1                   |
        +---------------------+---------------------+
        | 8                   | 1                   |
        +---------------------+---------------------+
        | 9                   | 1                   |
        +---------------------+---------------------+
        | 10                  | 1                   |
        +---------------------+---------------------+
        | 25                  | 1                   |
        +---------------------+---------------------+
        | 26                  | 1                   |
        +---------------------+---------------------+
        | 35                  | 1                   |
        +---------------------+---------------------+
        | 59                  | 1                   |
        +---------------------+---------------------+
        | 347                 | 1                   |
        +---------------------+---------------------+

                  Table 5: Mixed Objects

7.7.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate.

c.  Concurrent TCP connections MUST be constant during the steady
    state, and any deviation of concurrent TCP connections SHOULD be
    less than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

7.7.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

average Throughput and average HTTPS Transactions Per Second

7.7.4.  Test Procedures and Expected Results

The test procedure consists of three major steps.  This test
procedure MAY be repeated multiple times with different IPv4 and IPv6
traffic distributions and HTTPS response object sizes.

7.7.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial Throughput" as defined in the parameters
(Section 7.7.3.2).

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial Throughput"
during the sustain phase.  Measure all KPIs as defined in
Section 7.7.3.4.

The measured KPIs during the sustain phase MUST meet the validation
criterion "a" defined in Section 7.7.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT continue to "Step 2".
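Step 2 below requires all specified KPIs to be sampled every
2 seconds.  A non-normative sketch (Python) of such a sampler,
assuming a hypothetical read_kpis() hook into the test equipment:

   import time

   def record_kpis(read_kpis, duration, interval=2.0):
       """Sample KPIs every 'interval' seconds for 'duration'
       seconds.  read_kpis() is a hypothetical callback returning a
       dict of the KPIs defined for the test scenario."""
       samples = []
       end = time.monotonic() + duration
       while time.monotonic() < end:
           samples.append((time.monotonic(), read_kpis()))
           time.sleep(interval)
       return samples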
7.7.4.2.  Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired "Target Throughput" in
the sustain phase.  In addition, the measured KPIs MUST meet all
validation criteria.

Perform the test separately for each HTTPS response object size.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.7.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable throughput within the
validation criteria.  The final test iteration MUST be performed for
the test duration defined in Section 4.3.4.

7.8.  HTTPS Transaction Latency

7.8.1.  Objective

Using HTTPS traffic, determine the average HTTPS transaction latency
when the DUT is running with the sustainable HTTPS transactions per
second supported by the DUT/SUT under different HTTPS response object
sizes.

Scenario 1: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.

7.8.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number and type
of interfaces, MUST be documented.

7.8.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.8.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.8.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target objective for scenario 1: 50% of the maximum connections per
second measured in test scenario TCP/HTTPS Connections per Second
(Section 7.6)

Target objective for scenario 2: 50% of the maximum throughput
measured in test scenario HTTPS Throughput (Section 7.7)

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTPS transactions per TCP connection: test scenario 1 with a single
transaction and test scenario 2 with 10 transactions

HTTPS (HTTP 1.1 over TLS) with GET command requesting a single 1, 16,
or 64 KByte object.  For each test iteration, the client MUST request
a single HTTPS response object size.
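A non-normative sketch (Python) of the per-connection transaction
pattern of the two scenarios, based on the standard http.client
module; the host, object path, and TLS context are test bed
assumptions:

   import http.client

   def run_connection(host, path, tls_ctx, transactions=1):
       """Open one HTTPS connection, perform 'transactions' GET
       requests on it (scenario 1: 1, scenario 2: 10), then close
       the connection (TCP FIN after the last transaction)."""
       conn = http.client.HTTPSConnection(host, context=tls_ctx)
       for _ in range(transactions):
           conn.request("GET", path)      # e.g. a 1 KByte object
           resp = conn.getresponse()
           resp.read()                    # drain body; keep-alive
           assert resp.status == 200      # criterion "a" check
       conn.close()                       # FIN after completion

HTTP 1.1 keep-alive is the default here, so all transactions of
scenario 2 reuse the single TCP connection, matching the scenario
definition above.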
7.8.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole sustain phase of the traffic load profile.  The ramp
up and ramp down phases SHOULD NOT be considered.

Generic criteria:

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during the steady
    state, and any deviation of concurrent TCP connections SHOULD be
    less than 10%.  This confirms that the DUT opens and closes TCP
    connections at almost the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameters (Section 7.8.3.2) and remain in that
    state for the entire test duration (sustain phase).

7.8.3.4.  Measurement

The following KPI metrics MUST be reported for each test scenario and
each HTTPS response object size separately:

TTFB (minimum, average, and maximum) and TTLB (minimum, average, and
maximum)

All KPIs are measured once the target connections per second achieves
the steady state.

7.8.4.  Test Procedures and Expected Results

The test procedure is designed to measure the average TTFB or TTLB
when the DUT is operating close to 50% of its maximum achievable
connections per second.  This test procedure MAY be repeated multiple
times with different IP types (IPv4 only, IPv6 only, and mixed IPv4
and IPv6 traffic distributions), HTTPS response object sizes, and
single and multiple transactions per connection scenarios.

7.8.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
the "Initial objective" as defined in the parameters
(Section 7.8.3.2).  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the
validation criteria a, b, c, d, and e defined in Section 7.8.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT continue to "Step 2".

7.8.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as throughput,
concurrent TCP connections, and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for a specific test iteration SHOULD NOT be reported if
any of the above-mentioned KPIs (especially throughput) reaches the
maximum value.  (Example: If the test iteration with a 64 KByte HTTPS
response object size reaches the maximum throughput limitation of the
DUT, the test iteration MAY be interrupted and the result for 64
KByte SHOULD NOT be reported.)
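A non-normative sketch (Python) of this reporting rule: an
iteration's result is withheld when throughput came too close to the
DUT/SUT's known maximum during ramp up or sustain.  The 95% margin is
an example choice, not a normative value:

   def iteration_reportable(throughput_samples, dut_max_throughput,
                            margin=0.95):
       """Return False if throughput came within 'margin' of the
       DUT/SUT's maximum, in which case the iteration's result
       SHOULD NOT be reported."""
       return max(throughput_samples) < margin * dut_max_throughput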
The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.  The DUT/SUT
is expected to reach the desired "Target objective" in the sustain
phase.  In addition, the measured KPIs MUST meet all validation
criteria.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.8.4.3.  Step 3: Test Iteration

Determine the maximum achievable connections per second within the
validation criteria and measure the latency values.

7.9.  Concurrent TCP/HTTPS Connection Capacity

7.9.1.  Objective

Determine the maximum number of concurrent TCP connections that the
DUT/SUT sustains when using HTTPS traffic.

7.9.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as the number and type
of interfaces, MUST be documented.

7.9.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.9.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.9.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target concurrent connections: Initial value from the product
datasheet (if known)

Initial concurrent connections: 10% of "Target concurrent
connections" (an optional parameter for documentation)

Maximum connections per second during ramp up phase: 50% of the
maximum connections per second measured in test scenario TCP/HTTPS
Connections per Second (Section 7.6)

Ramp up time (in the traffic load profile for "Target concurrent
connections"): "Target concurrent connections" / "Maximum connections
per second during ramp up phase"

Ramp up time (in the traffic load profile for "Initial concurrent
connections"): "Initial concurrent connections" / "Maximum
connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence, and each
client MAY open multiple concurrent TCP connections per server
endpoint IP.

Each client sends 10 GET commands requesting 1 KByte HTTPS response
objects in the same TCP connection (10 transactions/TCP connection),
and the delay (think time) between the transactions MUST be X
seconds, where:

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.

7.9.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole sustain phase of the traffic load profile.
a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.

b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded constantly.

7.9.3.4.  Measurement

The following KPI metric MUST be reported for this test scenario:

average Concurrent TCP Connections

7.9.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distributions.

7.9.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish the "Initial concurrent TCP
connections" defined in Section 7.9.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the validation criteria "a" and "b" defined in
Section 7.9.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT continue to "Step 2".

7.9.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target concurrent TCP
connections".  The test equipment SHOULD follow the traffic load
profile definition (except for the ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phases, the other KPIs such as
throughput, TCP connections per second, and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.9.3.4.  The frequency of measurement SHOULD be
2 seconds.  Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target concurrent
connections in the sustain phase.  In addition, the measured KPIs
MUST meet all validation criteria.

Follow Step 3 if the KPI metrics do not meet the validation criteria.

7.9.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP
connections within the validation criteria.

8.  IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an
RFC.

9.  Security Considerations

The primary goal of this document is to provide benchmarking
terminology and methodology for next-generation network security
devices.  However, readers should be aware that there is some overlap
between performance and security issues.  Specifically, the optimal
configuration for network security device performance may not be the
most secure, and vice versa.  The cipher suites recommended in this
document are for test purposes only.
Cipher suite recommendations for real deployments are outside the
scope of this document.

10.  Contributors

The following individuals contributed significantly to the creation
of this document:

Alex Samonte, Amritam Putatunda, Aria Eslambolchizadeh, David
DeSanto, Jurrie Van Den Breekel, Ryan Liles, Samaresh Nair, Stephen
Goudreault, and Tim Otto

11.  Acknowledgements

The authors wish to acknowledge the members of NetSecOPEN for their
participation in the creation of this document.  Additionally, the
following members need to be acknowledged:

Anand Vijayan, Baski Mohan, Chao Guo, Chris Brown, Chris Marshall,
Jay Lindenauer, Michael Shannon, Mike Deichman, Ray Vinson, Ryan
Riese, Tim Carlin, Tim Otto, and Toulnay Orkun

12.  References

12.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.

[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
           2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
           May 2017, <https://www.rfc-editor.org/info/rfc8174>.

12.2.  Informative References

[RFC2616]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
           Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
           Transfer Protocol -- HTTP/1.1", RFC 2616,
           DOI 10.17487/RFC2616, June 1999,
           <https://www.rfc-editor.org/info/rfc2616>.

[RFC2647]  Newman, D., "Benchmarking Terminology for Firewall
           Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999,
           <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
           "Benchmarking Methodology for Firewall Performance",
           RFC 3511, DOI 10.17487/RFC3511, April 2003,
           <https://www.rfc-editor.org/info/rfc3511>.

[RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
           Control", RFC 5681, DOI 10.17487/RFC5681, September 2009,
           <https://www.rfc-editor.org/info/rfc5681>.

Appendix A.  Test Methodology - Security Effectiveness Evaluation

A.1.  Test Objective

This test methodology verifies that the DUT/SUT is able to detect,
prevent, and report vulnerabilities.

In this test, background test traffic will be generated in order to
utilize the DUT/SUT.  In parallel, the CVEs will be sent to the
DUT/SUT in encrypted as well as clear text payload formats using a
traffic generator.  The selection of the CVEs is described in
Section 4.2.1.  The following aspects are evaluated:

o  Number of blocked CVEs

o  Number of bypassed (non-blocked) CVEs

o  Background traffic performance (verify whether the background
   traffic is impacted while sending CVEs toward the DUT/SUT)

o  Accuracy of DUT/SUT statistics in terms of vulnerability reporting

A.2.  Testbed Setup

The same testbed MUST be used for the security effectiveness test as
well as for the benchmarking test cases defined in Section 7.

A.3.  Test Parameters

In this section, the test scenario specific parameters SHOULD be
defined.

A.3.1.  DUT/SUT Configuration Parameters

DUT/SUT configuration parameters MUST conform to the requirements
defined in Section 4.2.  The same DUT configuration MUST be used for
the security effectiveness test as well as for the benchmarking test
cases defined in Section 7.  The DUT/SUT MUST be configured in inline
mode, all detected attack traffic MUST be dropped, and the session
SHOULD be reset.

A.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The same client and server IP ranges MUST be configured as used in
the benchmarking test cases.  In addition, the following parameters
MUST be documented for this test scenario:

o  Background Traffic: 45% of the maximum HTTP throughput and 45% of
   the maximum HTTPS throughput supported by the DUT/SUT (measured
   with a 64 KByte object size in the "HTTP(S) Throughput" test
   scenarios defined in Section 7.3 and Section 7.7)

o  RECOMMENDED CVE traffic transmission rate: 10 CVEs per second

o  It is RECOMMENDED to generate each CVE multiple times
   (sequentially) at 10 CVEs per second

o  Ciphers and keys for the encrypted CVE traffic MUST use the same
   cipher configured for the HTTPS traffic related benchmarking test
   scenarios (Section 7.6 - Section 7.9)

A.4.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole test duration.

a.  The number of failed application transactions in the background
    traffic MUST be less than 0.01% of attempted transactions.

b.  The number of terminated TCP connections of the background
    traffic (due to unexpected TCP RST sent by the DUT/SUT) MUST be
    less than 0.01% of total initiated TCP connections in the
    background traffic.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  False positives MUST NOT occur in the background traffic.

A.5.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Mandatory KPIs:

o  Blocked CVEs: It should be represented in the following ways:

   *  Number of blocked CVEs out of total CVEs

   *  Percentage of blocked CVEs

o  Unblocked CVEs: It should be represented in the following ways:

   *  Number of unblocked CVEs out of total CVEs

   *  Percentage of unblocked CVEs

o  Background traffic behavior: It should be represented in one of
   the following ways:

   *  No impact (traffic transmitted at a constant rate)

   *  Minor impact (e.g., small spikes of +/- 100 Mbit/s)

   *  Heavily impacted (e.g., large spikes and a reduction of the
      background throughput by more than 100 Mbit/s)

o  DUT/SUT reporting accuracy: the DUT/SUT MUST report all detected
   vulnerabilities.

Optional KPIs:

o  List of unblocked CVEs

A.6.  Test Procedures and Expected Results

The test procedure is designed to measure the security effectiveness
of the DUT/SUT during the sustain phase of the traffic load profile.
The test procedure consists of two major steps.  This test procedure
MAY be repeated multiple times with different IPv4 and IPv6 traffic
distributions.

A.6.1.  Step 1: Background Traffic

Generate the background traffic at the transmission rate defined in
the parameters section (Appendix A.3.2).

The DUT/SUT MUST reach the target objective (throughput) in the
sustain phase.  The measured KPIs during the sustain phase MUST meet
the test validation criteria a, b, c, and d defined in Appendix A.4.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT continue to "Step 2".

A.6.2.  Step 2: CVE Emulation

While generating the background traffic (in the sustain phase), send
the CVE traffic as defined in the parameters section.

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement MUST be less than 2 seconds.
Continue the test until all CVEs are sent.
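A non-normative sketch (Python) of the Appendix A.3.2
background-traffic derivation and the paced CVE transmission; the
measured throughput values and the send_cve() hook are assumptions:

   import time

   # Assumed inputs from the Section 7.3 / Section 7.7 throughput
   # tests (64 KByte object size):
   max_http_gbps = 40
   max_https_gbps = 12

   # Background traffic per Appendix A.3.2: 45% of each maximum.
   bg_http_gbps = 0.45 * max_http_gbps    # 18.0 Gbit/s
   bg_https_gbps = 0.45 * max_https_gbps  # 5.4 Gbit/s

   def send_cves(cve_set, send_cve, rate=10):
       """Send each CVE in 'cve_set' at 'rate' CVEs per second.
       send_cve() is a hypothetical traffic generator hook."""
       for cve in cve_set:
           send_cve(cve)
           time.sleep(1.0 / rate)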
The measured KPIs MUST meet all test validation criteria a, b, c, and
d defined in Appendix A.4.

In addition, the DUT/SUT SHOULD report the vulnerabilities correctly.

Authors' Addresses

Balamuhunthan Balarajah
Berlin
Germany

Email: bm.balarajah@gmail.com

Carsten Rossenhoevel
EANTC AG
Salzufer 14
Berlin 10587
Germany

Email: cross@eantc.de

Brian Monkman
NetSecOPEN
417 Independence Court
Mechanicsburg, PA 17050
USA

Email: bmonkman@netsecopen.org