Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft                                           C. Rossenhoevel
Intended status: Informational                                  EANTC AG
Expires: April 17, 2019                                 October 14, 2018

    Benchmarking Methodology for Network Security Device Performance
                draft-balarajah-bmwg-ngfw-performance-05

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), intrusion detection and prevention solutions
   (IDS/IPS), and unified threat management (UTM) implementations.  The
   document aims to improve the applicability, reproducibility, and
   transparency of benchmarks and to align the test methodology with
   today's increasingly complex layer 7 application use cases.  The
   main areas covered in this document are test terminology, traffic
   profiles, and benchmarking methodology for NGFWs as the initial
   scope.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on April 17, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
     4.1.  Testbed Configuration
     4.2.  DUT/SUT Configuration
     4.3.  Test Equipment Configuration
       4.3.1.  Client Configuration
       4.3.2.  Backend Server Configuration
       4.3.3.  Traffic Flow Definition
       4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
     6.1.  Key Performance Indicators
   7.  Benchmarking Tests
     7.1.  Throughput Performance With NetSecOPEN Traffic Mix
       7.1.1.  Objective
       7.1.2.  Test Setup
       7.1.3.  Test Parameters
       7.1.4.  Test Procedures and Expected Results
     7.2.  TCP/HTTP Connections Per Second
       7.2.1.  Objective
       7.2.2.  Test Setup
       7.2.3.  Test Parameters
       7.2.4.  Test Procedures and Expected Results
     7.3.  HTTP Transactions per Second
       7.3.1.  Objective
       7.3.2.  Test Setup
       7.3.3.  Test Parameters
       7.3.4.  Test Procedures and Expected Results
     7.4.  TCP/HTTP Transaction Latency
       7.4.1.  Objective
       7.4.2.  Test Setup
       7.4.3.  Test Parameters
       7.4.4.  Test Procedures and Expected Results
     7.5.  HTTP Throughput
       7.5.1.  Objective
       7.5.2.  Test Setup
       7.5.3.  Test Parameters
       7.5.4.  Test Procedures and Expected Results
     7.6.  Concurrent TCP/HTTP Connection Capacity
       7.6.1.  Objective
       7.6.2.  Test Setup
       7.6.3.  Test Parameters
       7.6.4.  Test Procedures and Expected Results
     7.7.  TCP/HTTPS Connections per Second
       7.7.1.  Objective
       7.7.2.  Test Setup
       7.7.3.  Test Parameters
       7.7.4.  Test Procedures and Expected Results
     7.8.  HTTPS Transactions per Second
       7.8.1.  Objective
       7.8.2.  Test Setup
       7.8.3.  Test Parameters
       7.8.4.  Test Procedures and Expected Results
     7.9.  HTTPS Transaction Latency
       7.9.1.  Objective
       7.9.2.  Test Setup
       7.9.3.  Test Parameters
       7.9.4.  Test Procedures and Expected Results
     7.10. HTTPS Throughput
       7.10.1. Objective
       7.10.2. Test Setup
       7.10.3. Test Parameters
       7.10.4. Test Procedures and Expected Results
     7.11. Concurrent TCP/HTTPS Connection Capacity
       7.11.1. Objective
       7.11.2. Test Setup
       7.11.3. Test Parameters
       7.11.4. Test Procedures and Expected Results
   8.  Formal Syntax
   9.  IANA Considerations
   10. Acknowledgements
   11. Contributors
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix A.  NetSecOPEN Basic Traffic Mix
   Authors' Addresses

1.  Introduction

   15 years have passed since the IETF initially recommended test
   methodology and terminology for firewalls ([RFC2647], [RFC3511]).
   The requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved to more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, etc.  In an industry of
   growing importance, well-defined and reproducible key performance
   indicators (KPIs) are increasingly needed: they enable fair and
   reasonable comparison of network security functions.  All these
   reasons have led to the creation of a new next-generation firewall
   benchmarking document.
2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

3.  Scope

   This document provides testing terminology and testing methodology
   for next-generation firewalls and related security functions.  It
   covers two main areas: performance benchmarks and security
   effectiveness testing.  The document focuses on advanced, realistic,
   and reproducible testing methods.  Additionally, it describes test
   bed environments, test tool requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document is applicable to all
   benchmarking test scenarios described in Section 7.

4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not caused
   by inherent physical network limitations such as the number of
   physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.  For
   this reason, this document recommends avoiding external devices such
   as switches and routers in the testbed where possible.

   However, in typical deployments, the security devices (DUT/SUT) are
   connected to routers and switches, which reduces the number of
   entries in the MAC or ARP tables of the DUT/SUT.  If MAC or ARP
   tables have many entries, this may impact the actual DUT/SUT
   performance due to MAC and ARP/ND table lookup processes.
   Therefore, it is RECOMMENDED to connect Layer 3 device(s) between
   the test equipment and the DUT/SUT as shown in Figure 1.

   If the test equipment is capable of emulating layer 3 routing
   functionality and there is no need for test equipment port
   aggregation, it is RECOMMENDED to configure the test setup as shown
   in Figure 2.

   +-------------------+      +-----------+      +--------------------+
   |Aggregation Switch/|      |           |      | Aggregation Switch/|
   | Router            +------+  DUT/SUT  +------+ Router             |
   |                   |      |           |      |                    |
   +-----------+-------+      +-----------+      +-----------+--------+
               |                                             |
               |                                             |
   +-----------+-----------+                     +-----------+-----------+
   |                       |                     |                       |
   | +-------------------+ |                     | +-------------------+ |
   | | Emulated Router(s)| |                     | | Emulated Router(s)| |
   | | (Optional)        | |                     | | (Optional)        | |
   | +-------------------+ |                     | +-------------------+ |
   | +-------------------+ |                     | +-------------------+ |
   | |      Clients      | |                     | |      Servers      | |
   | +-------------------+ |                     | +-------------------+ |
   |                       |                     |                       |
   |    Test Equipment     |                     |    Test Equipment     |
   +-----------------------+                     +-----------------------+

                    Figure 1: Testbed Setup - Option 1

   +-----------------------+                     +-----------------------+
   | +-------------------+ |    +-----------+    | +-------------------+ |
   | | Emulated Router(s)| |    |           |    | | Emulated Router(s)| |
   | | (Optional)        | +----+  DUT/SUT  +----+ | (Optional)        | |
   | +-------------------+ |    |           |    | +-------------------+ |
   | +-------------------+ |    +-----------+    | +-------------------+ |
   | |      Clients      | |                     | |      Servers      | |
   | +-------------------+ |                     | +-------------------+ |
   |                       |                     |                       |
   |    Test Equipment     |                     |    Test Equipment     |
   +-----------------------+                     +-----------------------+

                    Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.
   Since each DUT/SUT will have its own unique configuration, testers
   SHOULD configure their device with the same parameters that would be
   used in the actual deployment of the device or a typical deployment.
   Users MUST enable security features on the DUT/SUT to achieve
   maximum security coverage for a specific deployment scenario.

   This document attempts to define the recommended security features
   which SHOULD be consistently enabled for all the benchmarking tests
   described in Section 7.  Table 1 below describes the RECOMMENDED set
   of features which SHOULD be configured on the DUT/SUT.

   Based on the customer use case, the user can decide to enable or
   disable the SSL inspection feature for the "Throughput Performance
   with NetSecOPEN Traffic Mix" test scenario described in Section 7.1.

   To improve repeatability, a summary of the DUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.

   +------------------------------------------+
   |                   NGFW                   |
   +--------------+-------+----------+--------+
   |              |       |Included  |Added to|
   |DUT Features  |Feature|in initial|future  |
   |              |       |Scope     |Scope   |
   +--------------+-------+----------+--------+
   |SSL Inspection|   x   |    x     |        |
   +--------------+-------+----------+--------+
   |IDS/IPS       |   x   |    x     |        |
   +--------------+-------+----------+--------+
   |Web Filtering |   x   |          |   x    |
   +--------------+-------+----------+--------+
   |Antivirus     |   x   |    x     |        |
   +--------------+-------+----------+--------+
   |Anti Spyware  |   x   |    x     |        |
   +--------------+-------+----------+--------+
   |Anti Botnet   |   x   |    x     |        |
   +--------------+-------+----------+--------+
   |DLP           |   x   |          |   x    |
   +--------------+-------+----------+--------+
   |DDoS          |   x   |          |   x    |
   +--------------+-------+----------+--------+
   |Certificate   |   x   |          |   x    |
   |Validation    |       |          |        |
   +--------------+-------+----------+--------+
   |Logging and   |   x   |    x     |        |
   |Reporting     |       |          |        |
   +--------------+-------+----------+--------+
   |Application   |   x   |    x     |        |
   |Identification|       |          |        |
   +--------------+-------+----------+--------+

                      Table 1: DUT/SUT Feature List

   In summary, the DUT/SUT SHOULD be configured as follows:

   o  All security inspection enabled

   o  Disposition of all traffic is logged - logging to an external
      device is permissible

   o  CVEs matching the following characteristics in the National
      Vulnerability Database (NVD):

      *  CVSS Version: 2

      *  CVSS V2 Metrics: AV:N/Au:N/I:C/A:C

      *  AV=Attack Vector, Au=Authentication, I=Integrity and
         A=Availability

      *  CVSS V2 Severity: High (7-10)

      *  If doing a group test, the published start date and published
         end date should be the same

   o  Geographical location filtering and Application Identification
      and Control configured to be triggered based on a site or
      application from the defined traffic mix

   In addition, it is RECOMMENDED to configure a realistic number of
   access policy rules on the DUT/SUT.  This document determines the
   number of access policy rules for four different classes of DUT/SUT.
   The classification of the DUT/SUT MAY be based on its maximum
   supported firewall throughput performance number defined in the
   vendor data sheet.  This document classifies the DUT/SUT into four
   different categories: extra small, small, medium, and large.

   The RECOMMENDED throughput values for these classes are:

   Extra Small (XS) - supported throughput less than 1 Gbit/s

   Small (S) - supported throughput less than 5 Gbit/s

   Medium (M) - supported throughput greater than 5 Gbit/s and less
   than 10 Gbit/s

   Large (L) - supported throughput greater than 10 Gbit/s
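   For illustration, the class boundaries above can be expressed as a
   small lookup.  The following Python sketch is illustrative only; the
   function name classify_dut is not defined by this document, and the
   handling of a throughput of exactly 5 Gbit/s is an assumption, since
   the class definitions above leave that boundary open:

   <CODE BEGINS>
   # Illustrative sketch: map a vendor data sheet throughput figure
   # (Gbit/s) to the DUT/SUT classes defined above.
   def classify_dut(max_throughput_gbps: float) -> str:
       if max_throughput_gbps < 1:
           return "XS"   # Extra Small: less than 1 Gbit/s
       if max_throughput_gbps < 5:
           return "S"    # Small: less than 5 Gbit/s
       if max_throughput_gbps < 10:
           return "M"    # Medium: between 5 and 10 Gbit/s
       return "L"        # Large: greater than 10 Gbit/s

   assert classify_dut(3.5) == "S"
   assert classify_dut(40.0) == "L"
   <CODE ENDS>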
   The access rules defined in Table 2 MUST be configured from top to
   bottom in the correct order shown in the table.  The configured
   access policy rules MUST NOT block the test traffic used for the
   benchmarking test scenarios.

   +---------------------------------------------------+---------------+
   |                                                   |    DUT/SUT    |
   |                                                   |Classification |
   |                                                   |    # Rules    |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |           | Match     |                  |        |   |   |   |   |
   | Rules Type| Criteria  | Description      | Action | XS| S | M | L |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Any application  | block  | 5 | 10| 20| 50|
   |layer      |           | traffic NOT      |        |   |   |   |   |
   |           |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Any src IP used  | block  | 25| 50|100|250|
   |layer      |TCP/UDP    | in the test AND  |        |   |   |   |   |
   |           |Dst ports  | any dst ports NOT|        |   |   |   |   |
   |           |           | used in the test |        |   |   |   |   |
   |           |           | traffic          |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src/Dst IP | Any src/dst IP   | block  | 25| 50|100|250|
   |           |           | NOT used in the  |        |   |   |   |   |
   |           |           | test             |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Applications     | allow  | 10| 10| 10| 10|
   |layer      |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Half of the src  | allow  | 1 | 1 | 1 | 1 |
   |layer      |TCP/UDP    | IP used in the   |        |   |   |   |   |
   |           |Dst ports  | test AND any dst |        |   |   |   |   |
   |           |           | ports used in the|        |   |   |   |   |
   |           |           | test traffic. One|        |   |   |   |   |
   |           |           | rule per subnet  |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src IP     | The rest of the  | allow  | 1 | 1 | 1 | 1 |
   |           |           | src IP subnet    |        |   |   |   |   |
   |           |           | range used in the|        |   |   |   |   |
   |           |           | test. One rule   |        |   |   |   |   |
   |           |           | per subnet       |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+

                       Table 2: DUT/SUT Access List

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters at
   different protocol levels.  These parameters influence the traffic
   flows which will be offered and impact the performance measurements.

   This document specifies common test equipment configuration
   parameters applicable to all test scenarios defined in Section 7.
   Any test scenario specific parameters are described individually in
   the test setup section of each test scenario.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the recommended values for certain parameters.
4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno variant, which includes
   congestion avoidance, back off and windowing, retransmission, and
   recovery on every TCP connection between client and server
   endpoints.  The default IPv4 and IPv6 MSS segment sizes MUST be set
   to 1460 bytes and 1440 bytes respectively, with TX and RX receive
   windows of 32768 bytes.  The client initial congestion window MUST
   NOT exceed 10 times the MSS.  Delayed ACKs are permitted, and the
   maximum client delayed ACK MUST NOT exceed 10 times the MSS before a
   forced ACK.  Up to 3 retries SHOULD be allowed before a timeout
   event is declared.  All traffic MUST set the TCP PSH flag to high.
   The source port range SHOULD be in the range of 1024 - 65535.
   Internal timeouts SHOULD be dynamically scalable per RFC 793.  The
   client SHOULD initiate and close TCP connections.  TCP connections
   MUST be closed via FIN.

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.  The traffic blocks SHOULD consist of multiple unique,
   discontinuous static address blocks.  A default gateway is
   permitted.  The IPv4 ToS byte or IPv6 traffic class should be set to
   '00' or '000000' respectively.

   The following equation can be used to determine the required total
   number of client IP addresses:

   Desired total number of client IPs = Target throughput [Mbit/s] /
   Throughput per IP address [Mbit/s]

   (Idea 1) 6-7 Mbps per IP (e.g. 1,400-1,700 IPs per 10Gbit/s
   throughput)

   (Idea 2) 0.1-0.2 Mbps per IP (e.g. 50,000-100,000 IPs per 10Gbit/s
   throughput)

   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between the IPv4 and IPv6 types.  This
   document recommends using the following ratio(s) between IPv4 and
   IPv6:

   (Idea 1) 100 % IPv4, no IPv6

   (Idea 2) 80 % IPv4, 20 % IPv6

   (Idea 3) 50 % IPv4, 50 % IPv6

   (Idea 4) 0 % IPv4, 100 % IPv6

4.3.1.3.  Emulated Web Browser Attributes

   The emulated web browser contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate the
   attributes of a modern, typical browser to improve the realism of
   the result set.

   For HTTP traffic emulation, the emulated browser MUST negotiate HTTP
   1.1.  HTTP persistency MAY be enabled depending on the test
   scenario.  The browser MAY open multiple TCP connections per server
   endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  Within a TCP connection,
   multiple transactions MAY be processed if the emulated browser has
   available connections.  The browser SHOULD advertise a User-Agent
   header.  Headers MUST be sent uncompressed.  The browser SHOULD
   enforce content length validation.

   For encrypted traffic, the following attributes shall define the
   negotiated encryption parameters.  The tests MUST use TLSv1.2 or
   higher with a record size of 16383, a commonly used cipher suite,
   and key strength.  Depending on the test scenario, session reuse or
   ticket resumption MAY be used for subsequent connections to the same
   server endpoint IP.  The client endpoint MUST send the TLS extension
   Server Name Indication (SNI) when opening a security tunnel.  Cipher
   suite and certificate size should be defined in the parameter
   section of each test scenario.
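   As an illustration of the encryption attributes above (TLSv1.2 or
   higher, SNI sent when opening the tunnel), the following Python
   sketch shows how an emulated client might enforce them with the
   standard library.  The host name is an example value, not a
   requirement of this document:

   <CODE BEGINS>
   # Illustrative sketch: a client enforcing TLSv1.2 or higher and
   # sending SNI in the ClientHello.
   import socket
   import ssl

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # TLSv1.2 or higher
   ctx.load_default_certs()   # or a test CA bundle for lab use

   server_fqdn = "www.example.com"    # example FQDN, also the SNI
   with socket.create_connection((server_fqdn, 443)) as raw:
       # server_hostname populates the SNI extension
       with ctx.wrap_socket(raw, server_hostname=server_fqdn) as tls:
           print(tls.version(), tls.cipher())
   <CODE ENDS>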
4.3.2.  Backend Server Configuration

   This section specifies which parameters SHOULD be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno variant, which includes
   congestion avoidance, back off and windowing, retransmission, and
   recovery on every TCP connection between client and server
   endpoints.  The default IPv4 and IPv6 MSS segment sizes MUST be set
   to 1460 bytes and 1440 bytes respectively, with TX and RX receive
   windows of at least 32768 bytes.  The server initial congestion
   window MUST NOT exceed 10 times the MSS.  Delayed ACKs are
   permitted, and the maximum server delayed ACK MUST NOT exceed 10
   times the MSS before a forced ACK.  Up to 3 retries SHOULD be
   allowed before a timeout event is declared.  All traffic MUST set
   the TCP PSH flag to high.  The source port range SHOULD be in the
   range of 1024 - 65535.  Internal timeouts should be dynamically
   scalable per RFC 793.

4.3.2.2.  Server Endpoint IP Addressing

   The server IP blocks SHOULD consist of unique, discontinuous static
   address blocks with one IP per server Fully Qualified Domain Name
   (FQDN) endpoint per test port.  The IPv4 ToS byte and IPv6 traffic
   class bytes should be set to '00' and '000000' respectively.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   HTTP version 1.1 with persistence.  The server MUST advertise a
   server type.  For HTTPS servers, TLS 1.2 or higher MUST be used with
   a record size of 16383 bytes, and ticket resumption or Session ID
   reuse SHOULD be enabled based on the test scenario.  The server MUST
   listen on TCP port 443.  The server shall serve a certificate to the
   client.  It is REQUIRED that the HTTPS server also check the Host
   SNI information against the FQDN.  Cipher suite and certificate size
   should be defined in the parameter section of each test scenario.

4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between the client and
   server endpoints.  At the beginning of the test, the server endpoint
   initializes and will be in a state ready to accept connections,
   including initialization of the TCP stack as well as bound HTTP and
   HTTPS servers.  When a client endpoint is needed, it will initialize
   and be given attributes such as the MAC and IP address.  The
   behavior of the client is to sweep through the given server IP
   space, sequentially generating a service recognizable by the DUT.
   Thus, a balanced mesh between client endpoints and server endpoints
   will be generated in a client-port to server-port combination.  Each
   client endpoint performs the same actions as other endpoints, with
   the difference being the source IP of the client endpoint and the
   target server IP pool.  The client shall use Fully Qualified Domain
   Names (FQDN) in Host headers and for TLS Server Name Indication
   (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  When a client endpoint initiates traffic,
   this section describes how the client steps through different
   services.  Once initialized, the client should randomly hold
   (perform no operation) for a few milliseconds to allow for better
   randomization of the start of client traffic.
   The client will then either open a new TCP connection or connect to
   a TCP persistence stack still open to that specific server.  At any
   point that the service profile may require encryption, a TLS
   encryption tunnel will form, presenting the URL request to the
   server.  The server will then perform an SNI name check with the
   proposed FQDN compared to the domain embedded in the certificate.
   Only when correct will the server process the HTTPS response object.
   The initial response object from the server MUST NOT have a fixed
   size; its size is based on the benchmarking tests described in
   Section 7.  Multiple additional sub-URLs (response objects on the
   service page) MAY be requested simultaneously.  This may or may not
   be to the same server IP as the initial URL.  Each sub-object will
   also use a canonical FQDN and URL path, as observed in the traffic
   mix used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  The loading of
   a traffic load profile has five distinct phases: init, ramp up,
   sustain, ramp down, and collection.

   During the init phase, test bed devices including the client and
   server endpoints should negotiate layer 2-3 connectivity such as MAC
   learning and ARP.  Only after successful MAC learning or ARP/ND
   resolution shall the test iteration move to the next phase.  No
   measurements are made in this phase.  The minimum RECOMMENDED time
   for the init phase is 5 seconds.  During this phase, the emulated
   clients SHOULD NOT initiate any sessions with the DUT/SUT; in
   contrast, the emulated servers should be ready to accept requests
   from the DUT/SUT or from emulated clients.

   In the ramp up phase, the test equipment SHOULD start to generate
   the test traffic.  It SHOULD actively use a set approximate number
   of unique client IP addresses to generate traffic.  The traffic
   should ramp from zero to the desired target objective.  The target
   objective will be defined for each benchmarking test.  The duration
   of the ramp up phase MUST be configured long enough that the test
   equipment does not overwhelm the DUT/SUT's supported performance
   metrics, namely: connections per second, concurrent TCP connections,
   and application transactions per second.  The RECOMMENDED time
   duration for the ramp up phase is 180-300 seconds.  No measurements
   are made in this phase.

   In the sustain phase, the test equipment SHOULD continue generating
   traffic at a constant target value for a constant number of active
   client IPs.  The RECOMMENDED time duration for the sustain phase is
   600 seconds.  This is the phase where measurements occur.

   In the ramp down/close phase, no new connections are established,
   and no measurements are made.  The time durations for the ramp up
   and ramp down phases SHOULD be the same.  The RECOMMENDED duration
   of this phase is between 180 and 300 seconds.

   The last phase is administrative and will be when the tester merges
   and collates the report data.
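   The five phases above can be summarized as a data structure that a
   test harness might consume.  The following sketch is illustrative
   only; it uses the RECOMMENDED durations, with 300 seconds chosen
   from the 180-300 second range for ramp up and ramp down:

   <CODE BEGINS>
   # Illustrative sketch: the traffic load profile as configuration
   # data.  "target" is filled in per benchmarking test.
   TRAFFIC_LOAD_PROFILE = [
       # init: L2/L3 negotiation only (MAC learning, ARP/ND)
       {"phase": "init", "duration_s": 5, "measure": False},
       {"phase": "ramp up", "duration_s": 300,
        "load": "0 -> target", "measure": False},
       # sustain: the only phase in which KPIs are measured
       {"phase": "sustain", "duration_s": 600,
        "load": "target", "measure": True},
       {"phase": "ramp down", "duration_s": 300,
        "load": "no new connections", "measure": False},
       # collection: administrative, merge and collate report data
       {"phase": "collection", "duration_s": None, "measure": False},
   ]
   <CODE ENDS>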
5.  Test Bed Considerations

   This section recommends steps to control the test environment and
   test equipment, specifically focusing on virtualized environments
   and virtualized test equipment.

   1.  Ensure that any ancillary switching or routing functions between
       the system under test and the test equipment do not limit the
       performance of the traffic generator.  This is specifically
       important for virtualized components (vSwitches, vRouters).

   2.  Verify that the performance of the test equipment matches and
       reasonably exceeds the expected maximum performance of the
       system under test.

   3.  Assert that the test bed characteristics are stable during the
       entire test session.  Several factors might influence stability,
       specifically for virtualized test beds, for example additional
       workloads in a virtualized system, load balancing and movement
       of virtual machines during the test, or simple issues such as
       additional heat created by high workloads leading to an
       emergency CPU performance reduction.

   Test bed reference pre-tests help to ensure that the desired traffic
   generator aspects such as maximum throughput and the network
   performance metrics such as maximum latency and maximum packet loss
   are met.

   Once the desired maximum performance goals for the system under test
   have been identified, a safety margin of 10% SHOULD be added for
   throughput and subtracted for maximum latency and maximum packet
   loss.

   Test bed preparation may be performed either by configuring the DUT
   in the most trivial setup (fast forwarding) or without the presence
   of the DUT.

6.  Reporting

   This section describes how the final report should be formatted and
   presented.  The final test report MAY have two major sections: an
   introduction section and a results section.  The following
   attributes SHOULD be present in the introduction section of the test
   report.

   1.  The name of the NetSecOPEN traffic mix (see Appendix A) MUST be
       prominent.

   2.  The time and date of the execution of the test MUST be
       prominent.

   3.  Summary of testbed software and hardware details

       A.  DUT Hardware/Virtual Configuration

           +  This section SHOULD clearly identify the make and model
              of the DUT

           +  The port interfaces, including speed and link
              information, MUST be documented.

           +  If the DUT is a virtual VNF, interface acceleration such
              as DPDK and SR-IOV MUST be documented, as well as cores
              used, RAM used, and the pinning / resource sharing
              configuration.  The hypervisor and version MUST be
              documented.

           +  Any additional hardware relevant to the DUT such as
              controllers MUST be documented

       B.  DUT Software

           +  The operating system name MUST be documented

           +  The version MUST be documented

           +  The specific configuration MUST be documented

       C.  DUT Enabled Features

           +  Specific features, such as logging, NGFW, DPI, MUST be
              documented

           +  Attributes of those features MUST be documented

           +  Any additional relevant information about features MUST
              be documented

       D.  Test equipment hardware and software

           +  Test equipment vendor name

           +  Hardware details including model number, interface type

           +  Test equipment firmware and test application software
              version

   4.  Results Summary / Executive Summary

       1.  Results should resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy to read block.

       2.  In the results section of the test report, the following
           attributes should be present for each test scenario.

           a.  KPIs MUST be documented separately for each test
               scenario.  The format of the KPI metrics should be
               presented as described in Section 6.1.

           b.  The next level of details SHOULD be graphs showing each
               of these metrics over the duration (sustain phase) of
               the test.  This allows the user to see the measured
               performance stability changes over time.
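   As an illustration of the introduction attributes listed above, a
   machine-readable report header might be organized as in the
   following sketch.  All field names and values are examples; this
   document does not define a report file format:

   <CODE BEGINS>
   # Illustrative sketch: the report introduction attributes
   # collected in one structure.
   report_introduction = {
       "traffic_mix": "NetSecOPEN Basic Traffic Mix",  # Appendix A
       "executed_at": "2018-10-14T09:00:00Z",   # time and date
       "dut": {
           "make_model": "ExampleVendor NGFW-1000",
           "ports": ["4 x 10GbE, link up"],
           "software": {"os": "ExampleOS", "version": "1.2.3"},
           "enabled_features": ["SSL Inspection", "IDS/IPS",
                                "Logging and Reporting"],
       },
       "test_equipment": {
           "vendor": "ExampleTester",
           "hardware": "Model X, 10GbE interfaces",
           "software_version": "5.0",
       },
   }
   <CODE ENDS>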
6.1.  Key Performance Indicators

   This section lists KPIs for the overall benchmarking test scenarios.
   All KPIs MUST be measured during the sustain phase of the traffic
   load profile described in Section 4.3.4.  All KPIs MUST be measured
   from the result output of the test equipment.

   o  Concurrent TCP Connections
      This key performance indicator measures the average concurrent
      open TCP connections in the sustaining period.

   o  TCP Connections Per Second
      This key performance indicator measures the average established
      TCP connections per second in the sustaining period.  For the
      "TCP/HTTP(S) Connections Per Second" benchmarking test scenarios,
      the KPI is measured as the average established and terminated TCP
      connections per second simultaneously.

   o  Application Transactions Per Second
      This key performance indicator measures the average successfully
      completed application transactions per second in the sustaining
      period.

   o  TLS Handshake Rate
      This key performance indicator measures the average TLS 1.2 or
      higher session formation rate within the sustaining period.

   o  Throughput
      This key performance indicator measures the average Layer 2
      throughput within the sustaining period as well as the average
      packets per second within the same period.  The value of
      throughput SHOULD be presented in Gbit/s rounded to two places of
      precision with a more specific kbps in parentheses.  Optionally,
      goodput MAY also be logged as an average goodput rate measured
      over the same period.  Goodput results SHALL also be presented in
      the same format as throughput.

   o  URL Response Time / Time to Last Byte (TTLB)
      This key performance indicator measures the minimum, average, and
      maximum per-URL response time in the sustaining period.  The
      latency is measured at the client and in this case would be the
      time duration between sending a GET request from the client and
      the receipt of the complete response from the server.

   o  Application Transaction Latency
      This key performance indicator measures the minimum, average, and
      maximum amount of time needed to receive all objects from the
      server.  The value of application transaction latency SHOULD be
      presented in milliseconds rounded to zero decimal places.

   o  Time to First Byte (TTFB)
      This key performance indicator will measure the minimum, average,
      and maximum time to first byte.  TTFB is the elapsed time between
      sending the SYN packet from the client and receiving the first
      byte of application data from the DUT/SUT.  TTFB SHOULD be
      expressed in milliseconds.
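   For illustration, the following sketch derives TTFB and TTLB for a
   single HTTP GET according to the definitions above (TTFB starting at
   the SYN, TTLB ending with the last byte of the complete response).
   The host and path are example values, and a real test tool would
   measure at far higher precision and volume:

   <CODE BEGINS>
   # Illustrative sketch: TTFB and TTLB for one HTTP transaction.
   import socket
   import time

   host, path = "server.example.net", "/object-16k"  # examples

   t_syn = time.monotonic()
   sock = socket.create_connection((host, 80))  # SYN sent here
   t_get = time.monotonic()
   sock.sendall(("GET %s HTTP/1.1\r\nHost: %s\r\n"
                 "Connection: close\r\n\r\n" % (path, host)).encode())

   first = sock.recv(4096)               # first application bytes
   ttfb_ms = (time.monotonic() - t_syn) * 1000.0

   while sock.recv(65536):               # drain the full response
       pass
   ttlb_ms = (time.monotonic() - t_get) * 1000.0
   sock.close()

   print("TTFB: %.0f ms, TTLB: %.0f ms" % (ttfb_ms, ttlb_ms))
   <CODE ENDS>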
7.  Benchmarking Tests

7.1.  Throughput Performance With NetSecOPEN Traffic Mix

7.1.1.  Objective

   Using the NetSecOPEN traffic mix, determine the maximum sustainable
   throughput performance supported by the DUT/SUT.  (See Appendix A
   for details about the traffic mix.)

7.1.2.  Test Setup

   The test bed setup MUST be configured as defined in Section 4.  Any
   test scenario specific test bed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

   It is RECOMMENDED to perform this test scenario twice: once with the
   SSL inspection feature enabled and once with the SSL inspection
   feature disabled on the DUT/SUT.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be noted for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Traffic load objective or specification type (e.g., throughput,
      SimUsers, etc.)

      Target throughput: It can be defined based on requirements.
      Otherwise, it represents the aggregated line rate of the
      interface(s) used in the DUT/SUT

      Initial throughput: 10% of the "Target throughput"

7.1.3.3.  Traffic Profile

   Traffic profile: The test scenario MUST be run with a single
   application traffic mix profile (see Appendix A for details about
   the traffic mix).  The name of the NetSecOPEN traffic mix MUST be
   documented.

7.1.3.4.  Test Results Acceptance Criteria

   The following criteria are defined as the test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored during
   the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions MUST be less than
       0.01% of the total attempted transactions

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.01% of the total
       initiated TCP connections

   c.  The maximum deviation (max. dev) of application transaction time
       or TTLB (Time To Last Byte) MUST be less than X (the value for
       "X" will be finalized and updated after completion of PoC
       tests).  The following equation MUST be used to calculate the
       deviation of application transaction latency or TTLB (a worked
       example follows this list):

          max. dev = max((avg_latency - min_latency),
                         (max_latency - avg_latency)) / initial latency

       where the initial latency is calculated using the following
       equation.  For this calculation, the latency values (min', avg',
       and max') MUST be measured during test procedure step 1 as
       defined in Section 7.1.4.1.  The variable latency represents
       application transaction latency or TTLB.

          initial latency := min((avg' latency - min' latency),
                                 (max' latency - avg' latency))

   d.  The maximum value of Time to First Byte MUST be less than X
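   The following Python sketch shows the acceptance criterion "c"
   computation on example numbers.  The function names are illustrative
   and the latency values are invented for the example:

   <CODE BEGINS>
   # Illustrative sketch: criterion "c".  min_p/avg_p/max_p are the
   # step 1 latencies (Section 7.1.4.1); min_/avg_/max_lat are from
   # the sustain phase of the target run.  Units must match.
   def initial_latency(min_p, avg_p, max_p):
       return min(avg_p - min_p, max_p - avg_p)

   def max_deviation(min_lat, avg_lat, max_lat, init_lat):
       return max(avg_lat - min_lat, max_lat - avg_lat) / init_lat

   # Example: step 1 latencies 8/10/13 ms -> initial latency 2 ms;
   # target run latencies 9/12/16 ms -> max. dev = 4/2 = 2.0 (< X?)
   init = initial_latency(8.0, 10.0, 13.0)
   print(max_deviation(9.0, 12.0, 16.0, init))   # -> 2.0
   <CODE ENDS>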
7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this test scenario.

   Mandatory KPIs: average throughput, average concurrent TCP
   connections, TTLB/application transaction latency (minimum, average,
   and maximum), and average application transactions per second.

   Optional KPIs: average TCP connections per second, average TLS
   handshake rate, and TTFB.

7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the throughput
   performance of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to generate
   test traffic at the "initial throughput" rate as described in the
   parameters section.  The test equipment SHOULD follow the traffic
   load profile definition as described in Section 4.3.4.  The DUT/SUT
   SHOULD reach the "initial throughput" during the sustain phase.
   Measure all KPIs as defined in Section 7.1.3.5.  The measured KPIs
   during the sustain phase MUST meet the acceptance criteria "a" and
   "b" defined in Section 7.1.3.4.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   throughput" rate defined in the parameter table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.  The test equipment SHOULD start to measure and
   record all specified KPIs.  The frequency of KPI metric measurements
   MUST be less than 5 seconds.  Continue the test until all traffic
   profile phases are completed.

   The DUT/SUT is expected to reach the desired target throughput
   during the sustain phase.  In addition, the measured KPIs MUST meet
   all acceptance criteria.  Follow step 3 if the KPI metrics do not
   meet the acceptance criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable throughput within the
   acceptance criteria.  The final test iteration MUST be performed for
   the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the maximum sustainable TCP connection
   establishment rate supported by the DUT/SUT under different
   throughput load conditions.

   To measure connections per second, test iterations MUST use the
   different fixed HTTP response object sizes defined in the test
   equipment configuration parameters (Section 7.2.3.2).

7.2.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes such as number of
   interfaces and interface type, etc. MUST be documented.

7.2.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target connections per second: Initial value from the product
      data sheet (if known)

      Initial connections per second: 10% of "Target connections per
      second"

   The client SHOULD negotiate HTTP 1.1 and close the connection with
   FIN immediately after completion of one transaction.  In each test
   iteration, the client MUST send a GET request for a fixed-size HTTP
   response object.

   The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64 KByte.
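   For illustration, a single emulated-client transaction as specified
   above (HTTP 1.1, one GET for a fixed-size object, connection closed
   via FIN after the transaction) could look like the following sketch.
   Host, port, and path are example values:

   <CODE BEGINS>
   # Illustrative sketch: one transaction for the connections-per-
   # second scenario.
   import socket

   def one_transaction(host, port=80, path="/object-4k"):
       sock = socket.create_connection((host, port))
       # "Connection: close" ends the HTTP 1.1 session after this
       # single transaction.
       sock.sendall(("GET %s HTTP/1.1\r\nHost: %s\r\n"
                     "Connection: close\r\n\r\n"
                     % (path, host)).encode())
       chunks = []
       while True:
           data = sock.recv(65536)
           if not data:
               break
           chunks.append(data)
       sock.close()       # client closes; TCP teardown via FIN
       return b"".join(chunks)
   <CODE ENDS>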
7.2.3.3.  Test Results Acceptance Criteria

   The following criteria are defined as the test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored during
   the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions MUST be less than
       0.01% of the total attempted transactions

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.01% of the total
       initiated TCP connections

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate

   d.  Concurrent TCP connections SHOULD be constant during steady
       state.  The deviation of concurrent TCP connections MUST be less
       than 10%.  This confirms that the DUT opens and closes TCP
       connections at almost the same rate

7.2.3.4.  Measurement

   The following KPI metrics MUST be reported for each test iteration.

   Mandatory KPIs: average TCP connections per second, average
   throughput, and average Time to First Byte (TTFB).

7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different IP
   types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "initial connections per second" as defined in the
   parameters section.  The traffic load profile SHOULD be defined as
   described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "initial connections per second" before
   the sustain phase.  The measured KPIs during the sustain phase MUST
   meet the acceptance criteria a, b, c, and d defined in
   Section 7.2.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.2.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target connections
   per second" defined in the parameters table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases of each test iteration, other
   KPIs such as throughput, concurrent TCP connections, and application
   transactions per second MUST NOT reach the maximum value the DUT/SUT
   can support.  The test results for specific test iterations SHOULD
   NOT be reported if the above mentioned KPIs (especially throughput)
   reach the maximum value.  (Example: If the test iteration with a
   64 KByte HTTP response object size reached the maximum throughput
   limitation of the DUT, the test iteration MAY be interrupted and the
   result for 64 KByte SHOULD NOT be reported.)

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement MUST be less than 5 seconds.
   Continue the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target connections per
   second rate in the sustain phase.  In addition, the measured KPIs
   MUST meet all acceptance criteria.

   Follow step 3 if the KPI metrics do not meet the acceptance
   criteria.

7.2.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable connections per second
   within the acceptance criteria.
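   Step 3 leaves the search strategy open.  One common approach is a
   binary search over the load target, as in the following illustrative
   sketch, where run_iteration() is a placeholder for a complete
   traffic-profile run that reports whether all acceptance criteria
   were met:

   <CODE BEGINS>
   # Illustrative sketch: binary search for the highest passing
   # connections-per-second target.
   def find_max_rate(low, high, run_iteration, precision=0.01):
       """Search (low, high] for the highest passing load target."""
       best = low
       while (high - low) / high > precision:
           candidate = (low + high) / 2.0
           if run_iteration(candidate):  # all criteria met?
               best, low = candidate, candidate
           else:
               high = candidate
       return best
   <CODE ENDS>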
7.3.  HTTP Transactions per Second

7.3.1.  Objective

   Using HTTP 1.1 traffic, determine the maximum sustainable HTTP
   transactions per second supported by the DUT/SUT under different
   throughput load conditions.

   To measure transactions per second performance under a variety of
   DUT security inspection load conditions, each test iteration MUST
   use the different fixed HTTP response object sizes defined in the
   test equipment configuration parameters (Section 7.3.3.2).

7.3.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes such as number of
   interfaces and interface type, etc. MUST be documented.

7.3.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.3.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target transactions per second: Initial value from the product
      data sheet (if known)

      Initial transactions per second: 10% of "Target transactions per
      second"

   The test scenario SHOULD be run with a single traffic profile with
   the following attributes:

   The client MUST negotiate HTTP 1.1 and close the connection with FIN
   immediately after completion of 10 transactions.  In each test
   iteration, the client MUST send a GET request for a fixed-size HTTP
   response object.  The RECOMMENDED object sizes are 1, 16, and 64
   KByte.

7.3.3.3.  Test Results Acceptance Criteria

   The following criteria are defined as the test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored during
   the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions MUST be zero

   b.  The number of terminated HTTP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.01% of the total
       initiated HTTP sessions

   c.  Traffic should be forwarded at a constant rate

   d.  The average Time to TCP First Byte MUST be constant and not
       increase by more than 10%

   e.  The deviation of concurrent TCP connections MUST be less than
       10%

7.3.3.4.  Measurement

   The following KPI metrics MUST be reported for this test scenario:

   average TCP connections per second, average throughput, average Time
   to TCP First Byte, and average application transaction latency.
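   For illustration, the per-connection client behavior defined in
   Section 7.3.3.2 (10 GET transactions on one persistent HTTP 1.1
   connection, then close) might look like the following sketch using
   Python's standard library.  Host, port, and path are example values:

   <CODE BEGINS>
   # Illustrative sketch: 10 transactions over one persistent
   # HTTP/1.1 connection, closed with FIN after the last response.
   import http.client

   def ten_transactions(host, path="/object-16k"):
       conn = http.client.HTTPConnection(host, 80)
       for _ in range(10):
           conn.request("GET", path)   # HTTP/1.1 is the default
           resp = conn.getresponse()
           body = resp.read()          # complete one transaction
           assert resp.status == 200 and body
       conn.close()                    # FIN after 10 transactions
   <CODE ENDS>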
7.3.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the HTTP transactions per
   second of the DUT/SUT during the sustain phase of the traffic load
   profile.  The test procedure consists of three major steps.  This
   test procedure MAY be repeated multiple times with different IP
   types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.3.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "initial HTTP transactions per second" as defined in
   the parameters section.  The traffic load profile can be defined as
   described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "initial HTTP transactions per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet the acceptance criteria a, b, c, and d defined in
   Section 7.3.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.3.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target HTTP
   transactions per second" defined in the parameters table.  The test
   equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.

   During the ramp up and sustain phases of each test iteration, other
   KPIs such as throughput, concurrent TCP connections, and connections
   per second MUST NOT reach the maximum value the DUT/SUT can support.
   The test results for specific test iterations SHOULD NOT be reported
   if the above mentioned KPIs (especially throughput) reach the
   maximum value.  (Example: If the test iteration with a 64 KByte HTTP
   response object size reached the maximum throughput limitation of
   the DUT, the test iteration MAY be interrupted and the result for
   64 KByte SHOULD NOT be reported.)

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement MUST be less than 5 seconds.
   Continue the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target HTTP
   transactions per second in the sustain phase.  In addition, the
   measured KPIs MUST meet all acceptance criteria.

   Follow step 3 if the KPI metrics do not meet the acceptance
   criteria.

7.3.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable HTTP transactions per
   second within the acceptance criteria.  The final test iteration
   MUST be performed for the test duration defined in Section 4.3.4.

7.4.  TCP/HTTP Transaction Latency

7.4.1.  Objective

   Using HTTP traffic, determine the average HTTP transaction latency
   when the DUT is running with the sustainable HTTP transactions per
   second supported by the DUT/SUT under different HTTP response object
   sizes.

   Test iterations MUST be performed with different HTTP response
   object sizes twice: once with a single transaction and once with
   multiple transactions within a single TCP connection.  For
   consistency, both the single and multiple transaction tests need to
   be configured with HTTP 1.1.

7.4.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes such as number of
   interfaces and interface type, etc. MUST be documented.

7.4.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.4.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.4.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
   The following parameters MUST be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target connections per second: 50% of the value measured in test
      scenario TCP/HTTP Connections Per Second (Section 7.2)

      Initial connections per second: 10% of "Target connections per
      second"

      HTTP transactions per TCP connection: one test scenario with a
      single transaction and another scenario with 10 transactions

   The test scenario SHOULD be run with a single traffic profile with
   the following attributes:

   To measure application transaction latency with a single connection
   per transaction and a single connection with multiple transactions,
   the tests should run twice:

   1st test run: The client MUST negotiate HTTP 1.1 and close the
   connection with FIN immediately after completion of the transaction.

   2nd test run: The client MUST negotiate HTTP 1.1 and close the
   connection after 10 transactions (GET and RESPONSE) within a single
   TCP connection.

   HTTP 1.1 is used with a GET request for a single 1, 16, or 64 KByte
   object.  For each test iteration, the client MUST request a single
   HTTP response object size.

7.4.3.3.  Test Results Acceptance Criteria

   The following criteria are defined as the test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored during
   the whole sustain phase of the traffic load profile.  The ramp up
   and ramp down phases SHOULD NOT be considered.

   Generic criteria:

   a.  The number of failed application transactions MUST be zero.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be zero.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  During the sustain phase, the average connect time and average
       transaction time MUST be constant, and the latency deviation
       SHOULD NOT increase by more than 10%.

   e.  Concurrent TCP connections should be constant during steady
       state.  This confirms that the DUT opens and closes TCP
       connections at the same rate.

   f.  After ramp up, the DUT MUST achieve the target connections per
       second objective defined in the parameters section
       (Section 7.4.3.2) and remain in that state for the entire test
       duration (sustain phase).

7.4.3.4.  Measurement

   The following KPI metrics MUST be reported for each test scenario
   and each HTTP response object size separately:

   average TCP connections per second and average application
   transaction latency need to be recorded.

   All KPIs are measured once the target connections per second rate
   reaches steady state.

7.4.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the average application
   transaction latency or TTLB when the DUT is operating close to 50%
   of its maximum achievable connections per second.  This test
   procedure can be repeated multiple times with different IP types
   (IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution), HTTP response object sizes, and single and multiple
   transactions per connection scenarios.

7.4.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.
Configure the traffic load profile of the test equipment to establish "initial connections per second" as defined in the parameters section.  The traffic load profile MAY be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "initial connections per second" before the sustain phase.  The measured KPIs during the sustain phase MUST meet the acceptance criteria a, b, c, d, e, and f defined in Section 7.4.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.4.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target connections per second" defined in the parameters table.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support.  The test results for specific test iterations SHOULD NOT be reported if any of the above mentioned KPIs (especially throughput) reaches the maximum value.  (Example: If the test iteration with 64 KByte HTTP response object size reached the maximum throughput limitation of the DUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.  The DUT/SUT is expected to reach the desired target connections per second rate at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.4.4.3.  Step 3: Test Iteration

Determine the maximum achievable connections per second within the acceptance criteria and measure the latency values.

7.5.  HTTP Throughput

7.5.1.  Objective

Determine the throughput for HTTP transactions, varying the HTTP response object size.

7.5.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.5.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.5.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.

7.5.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target Throughput: Initial value from the product data sheet (if known)

Number of HTTP response object requests (transactions) per connection: 10

HTTP response object size: 16KB, 64KB, 256KB, and the mixed objects defined in Table 3.

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   | 0.2                 | 1                   |
   +---------------------+---------------------+
   | 6                   | 1                   |
   +---------------------+---------------------+
   | 8                   | 1                   |
   +---------------------+---------------------+
   | 9                   | 1                   |
   +---------------------+---------------------+
   | 10                  | 1                   |
   +---------------------+---------------------+
   | 25                  | 1                   |
   +---------------------+---------------------+
   | 26                  | 1                   |
   +---------------------+---------------------+
   | 35                  | 1                   |
   +---------------------+---------------------+
   | 59                  | 1                   |
   +---------------------+---------------------+
   | 347                 | 1                   |
   +---------------------+---------------------+

            Table 3: Mixed Objects

7.5.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria.  Test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions MUST be less than 0.01% of attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate.

c.  The deviation of concurrent TCP connections MUST be less than 10%.

d.  The deviation of average HTTP transaction latency MUST be less than 10%.

7.5.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Average throughput, concurrent connections, and average TCP connections per second.

7.5.4.  Test Procedures and Expected Results

The test procedure is designed to measure the HTTP throughput of the DUT/SUT.  The test procedure consists of three major steps.  This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions and HTTP response object sizes.

7.5.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "initial throughput" as defined in the parameters section.

The traffic load profile SHOULD be defined as described in Section 4.3.4.  The DUT/SUT SHOULD reach the "initial throughput" during the sustain phase.  Measure all KPIs as defined in Section 7.5.3.4.

The measured KPIs during the sustain phase MUST meet the acceptance criterion "a" defined in Section 7.5.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.5.4.2.  Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified KPIs.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.
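The recording requirement above can be sketched as a simple polling loop; read_kpis() is a hypothetical stand-in for the test equipment's statistics API, and the duration is a placeholder:

   import time

   SAMPLE_INTERVAL = 2.0             # seconds; MUST be less than 5

   def read_kpis():
       # Hypothetical stand-in for the tester's statistics API.
       return {"throughput_bps": 0, "concurrent_tcp": 0, "cps": 0}

   samples = []
   test_end = time.monotonic() + 300.0        # placeholder duration
   while time.monotonic() < test_end:
       samples.append((time.monotonic(), read_kpis()))
       time.sleep(SAMPLE_INTERVAL)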
The DUT/SUT is expected to reach the desired target throughput at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

Perform the test separately for each HTTP response object size (16KB, 64KB, 256KB, and mixed HTTP response objects).

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.5.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable throughput within the acceptance criteria.  The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.6.  Concurrent TCP/HTTP Connection Capacity

7.6.1.  Objective

Determine the maximum number of concurrent TCP connections that the DUT/SUT sustains when using HTTP traffic.

7.6.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.6.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.6.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.

7.6.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.  The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target concurrent connections: Initial value from the product data sheet (if known)

Initial concurrent connections: 10% of "Target concurrent connections"

The client MUST negotiate HTTP 1.1 with persistence and each client MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting a 1 KByte HTTP response object in the same TCP connection (10 transactions/TCP connection) and the delay (think time) between transactions MUST be X seconds.  The value for the think time (X) MUST be defined to achieve 15% of the maximum throughput measured in test scenario 7.5.

The established connections SHOULD remain open until the ramp down phase of the test.  During the ramp down phase, all connections SHOULD be successfully closed with FIN.

7.6.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria.  Test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions MUST be zero.

b.  The number of TCP connections terminated due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.01% of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded constantly at the rate defined in the parameters section 7.6.3.2.

d.  During the sustain phase, the maximum deviation (max. dev) of application transaction latency or TTLB (Time To Last Byte) MUST be less than 10%.

7.6.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Average throughput; minimum, average, and maximum concurrent TCP connections; TTLB/application transaction latency (minimum, average, and maximum); and average application transactions per second.
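The think time X defined in Section 7.6.3.2 can be estimated from the target background load; a rough sketch under the simplifying assumption that transfer time is negligible compared to think time, with all input values as placeholders:

   # Placeholders: replace with configured/measured values.
   max_throughput_bps = 10e9       # maximum throughput from scenario 7.5
   load_fraction      = 0.15       # target: 15% of maximum throughput
   concurrent_conns   = 1_000_000  # concurrent TCP connections
   object_bits        = 1024 * 8   # 1 KByte HTTP response object

   # Each connection transfers one object per think interval, so the
   # aggregate rate is about concurrent_conns * object_bits / X bit/s.
   think_time_x = (concurrent_conns * object_bits /
                   (load_fraction * max_throughput_bps))
   print("think time X = %.2f s" % think_time_x)   # ~5.46 s here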
7.6.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP connection capacity of the DUT/SUT during the sustain phase of the traffic load profile.  The test procedure consists of three major steps.  This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions.

7.6.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the test equipment to generate background traffic as defined in Section 7.6.3.2.  Measure throughput, concurrent TCP connections, and TCP connections per second.

While generating the background traffic, configure another traffic profile on the test equipment to establish "initial concurrent TCP connections" defined in Section 7.6.3.2.  The traffic load profile MAY be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "initial concurrent TCP connections" plus the concurrent TCP connections measured in the background traffic.  The measured KPIs during the sustain phase MUST meet the acceptance criteria "a" and "b" defined in Section 7.6.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.6.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target concurrent TCP connections" minus the concurrent TCP connections measured in the background traffic.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as throughput, TCP connections per second, and application transactions per second MUST NOT reach the maximum value that the DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs defined in Section 7.6.3.4.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target concurrent connections at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.6.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP connection capacity within the acceptance criteria.

7.7.  TCP/HTTPS Connections per Second

7.7.1.  Objective

Using HTTPS traffic, determine the maximum sustainable SSL/TLS session establishment rate supported by the DUT/SUT under different throughput load conditions.

Test iterations MUST include common cipher suites and key strengths as well as forward looking stronger keys.
Specific test iterations MUST include the ciphers and keys defined in the parameters section 7.7.3.2.

For each cipher suite and key strength, test iterations MUST use a single HTTPS response object size defined in the test equipment configuration parameters section 7.7.3.2 to measure connections per second performance under a variety of DUT/SUT security inspection load conditions.

7.7.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.7.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.7.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.

7.7.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.  The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target connections per second: Initial value from the product data sheet (if known)

Initial connections per second: 10% of "Target connections per second"

Ciphers and keys:

1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)

2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash Algorithm: ecdsa_secp256r1_sha384 and Supported group: secp521r1)

4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

The client MUST negotiate HTTPS 1.1 and close the connection with FIN immediately after completion of one transaction.  In each test iteration, the client MUST send a GET command requesting a fixed HTTPS response object size.  The RECOMMENDED object sizes are 1, 2, 4, 16, and 64 KByte.

Each client connection MUST perform a full handshake with the server certificate (no certificate on the client side) and MUST NOT use session reuse or resumption.  The TLS record size MAY be optimized for the HTTPS response object size up to a record size of 16K.

7.7.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria:

a.  The number of failed application transactions MUST be less than 0.01% of attempted transactions.

b.  The number of TCP connections terminated due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.01% of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a constant rate.

d.  Concurrent TCP connections SHOULD be constant during steady state.  This confirms that the DUT/SUT opens and closes TCP connections at the same rate.

7.7.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Mandatory KPIs: average TCP connections per second, average throughput, and average Time to TCP First Byte.
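A minimal sketch of the client behaviour required in Section 7.7.3.2, pinning one of the listed cipher suites and forbidding resumption; the server address and object path are placeholders, and a real tester implements this natively:

   import socket, ssl

   HOST, PORT = "198.51.100.20", 443             # placeholder server
   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.check_hostname = False                    # lab certificates
   ctx.verify_mode = ssl.CERT_NONE
   ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # so the cipher pin applies
   ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")
   ctx.options |= ssl.OP_NO_TICKET               # no session resumption

   raw = socket.create_connection((HOST, PORT))
   tls = ctx.wrap_socket(raw)                    # full TLS handshake
   tls.sendall(b"GET /16kB.bin HTTP/1.1\r\nHost: s\r\n"
               b"Connection: close\r\n\r\n")
   while tls.recv(65536):                        # read one transaction
       pass
   tls.close()                                   # FIN after one transaction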
7.7.4.  Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per second rate of the DUT/SUT during the sustain phase of the traffic load profile.  The test procedure consists of three major steps.  This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions.

7.7.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "initial connections per second" as defined in the parameters section.  The traffic load profile MAY be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "initial connections per second" before the sustain phase.  The measured KPIs during the sustain phase MUST meet the acceptance criteria a, b, c, and d defined in Section 7.7.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.7.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target connections per second" defined in the parameters table.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support.  The test results for specific test iterations SHOULD NOT be reported if any of the above mentioned KPIs (especially throughput) reaches the maximum value.  (Example: If the test iteration with 64 KByte HTTPS response object size reached the maximum throughput limitation of the DUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target connections per second rate at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.7.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable connections per second within the acceptance criteria.

7.8.  HTTPS Transactions per Second

7.8.1.  Objective

Using HTTPS traffic, determine the maximum sustainable HTTPS transactions per second supported by the DUT/SUT under different throughput load conditions.

To measure transactions per second performance under a variety of DUT/SUT security inspection load conditions, each test iteration MUST use a different fixed HTTPS response object size defined in the test equipment configuration parameters section 7.8.3.2.

Test iterations MUST include common cipher suites and key strengths as well as forward looking stronger keys.  Specific test iterations MUST include the ciphers and keys defined in the parameters section 7.8.3.2.

7.8.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.
Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.8.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.8.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.

7.8.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.  The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target transactions per second: Initial value from the product data sheet (if known)

Initial transactions per second: 10% of "Target transactions per second"

Ciphers and keys:

1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)

2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash Algorithm: ecdsa_secp256r1_sha384 and Supported group: secp521r1)

4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

The client MUST negotiate HTTPS 1.1 and close the connection with FIN immediately after completion of 10 transactions.

HTTPS 1.1 with GET command requesting a single 1, 16, or 64 KByte object.

Each client connection MUST perform a full handshake with the server certificate and SHOULD NOT use session reuse or resumption.

The TLS record size MAY be optimized for the object size up to a record size of 16K.

7.8.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria.  Test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.  The ramp up and ramp down phases SHOULD NOT be considered.

a.  The number of failed application transactions MUST be zero.

b.  The number of HTTP connections terminated due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.01% of total initiated HTTP sessions.

c.  The average Time to TCP First Byte MUST be constant and MUST NOT increase more than 10%.

d.  The deviation of concurrent TCP connections MUST be less than 10%.

7.8.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Average TCP connections per second, average throughput, average Time to TCP First Byte, and average application transaction latency.

7.8.4.  Test Procedures and Expected Results

The test procedure is designed to measure the HTTPS transactions per second rate of the DUT/SUT during the sustain phase of the traffic load profile.  The test procedure consists of three major steps.  This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions, HTTPS response object sizes, and ciphers and keys.
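The 10-transactions-per-connection behaviour defined in Section 7.8.3.2 can be sketched with the standard library; the server address, lab certificate handling, and object path are placeholders:

   import http.client, ssl

   ctx = ssl.create_default_context()
   ctx.check_hostname = False        # lab PKI
   ctx.verify_mode = ssl.CERT_NONE

   conn = http.client.HTTPSConnection("198.51.100.20", 443, context=ctx)
   for _ in range(10):               # 10 GET/RESPONSE transactions
       conn.request("GET", "/16kB.bin")
       conn.getresponse().read()     # keep-alive between transactions
   conn.close()                      # close with FIN after the 10th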
7.8.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "initial HTTPS transactions per second" as defined in the parameters section.  The traffic load profile MAY be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "initial HTTPS transactions per second" before the sustain phase.  The measured KPIs during the sustain phase MUST meet the acceptance criteria a, b, c, and d defined in Section 7.8.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.8.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target HTTPS transactions per second" defined in the parameters table.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase of each test iteration, other KPIs such as throughput, concurrent TCP connections, and connections per second MUST NOT reach the maximum value the DUT/SUT can support.  The test results for specific test iterations SHOULD NOT be reported if any of the above mentioned KPIs (especially throughput) reaches the maximum value.  (Example: If the test iteration with 64 KByte HTTPS response object size reached the maximum throughput limitation of the DUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target HTTPS transactions per second rate at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.8.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable HTTPS transactions per second within the acceptance criteria.  The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.9.  HTTPS Transaction Latency

7.9.1.  Objective

Using HTTPS traffic, determine the average HTTPS transaction latency when the DUT/SUT is running at the sustainable HTTPS transactions per second rate it supports, under different HTTPS response object sizes.

Test iterations MUST be performed with different HTTPS response object sizes twice: once with a single transaction and once with multiple transactions within a single TCP connection.

7.9.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.9.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.9.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.
7.9.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.  The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Cipher suite and key size: ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 key size (Signature Hash Algorithm: ecdsa_secp256r1_sha384 and Supported group: secp521r1)

Target connections per second: 50% of the value measured in test scenario TCP/HTTPS Connections per Second (Section 7.7)

Initial transactions per second: 10% of "Target transactions per second"

HTTPS transactions per connection: one test scenario with a single transaction and another scenario with 10 transactions

The test scenario SHOULD be run with a single traffic profile with the following attributes:

To measure application transaction latency with a single connection per transaction and with a single connection carrying multiple transactions, the tests are run twice:

1st test run: The client MUST negotiate HTTPS 1.1 and close the connection with FIN immediately after completion of the transaction.

2nd test run: The client MUST negotiate HTTPS 1.1 and close the connection after 10 transactions (GET and RESPONSE) within a single TCP connection.

HTTPS 1.1 with GET command requesting a single 1, 16, or 64 KByte object.  For each test iteration, the client MUST request a single HTTPS response object size.

7.9.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria.  Test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.  The ramp up and ramp down phases SHOULD NOT be considered.

Generic criteria:

a.  The number of failed application transactions MUST be zero.

b.  The number of TCP connections terminated due to unexpected TCP RST sent by the DUT/SUT MUST be zero.

c.  During the sustain phase, traffic SHOULD be forwarded at a constant rate.

d.  During the sustain phase, the average application transaction latency MUST be constant and the latency deviation SHOULD NOT increase more than 10%.

e.  Concurrent TCP connections SHOULD be constant during steady state.  This confirms the DUT/SUT opens and closes TCP connections at the same rate.

f.  After ramp up, the DUT/SUT MUST achieve the target connections per second objective defined in the parameters section and remain in that state for the entire duration of the sustain phase.

7.9.3.4.  Measurement

The following KPI metrics MUST be reported for each test scenario and each HTTPS response object size separately:

Average TCP connections per second and average application transaction latency or TTLB.

All KPIs are measured once the target connections per second rate reaches steady state.
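TTLB as used in this scenario can be taken as the time from sending the request to receiving the final byte of the response; a minimal sketch over an already-established TLS socket (tls), assuming the server closes the connection after the response:

   import time

   def measure_ttlb(tls, request_bytes):
       # Time To Last Byte: request sent -> final response byte.
       t0 = time.monotonic()
       tls.sendall(request_bytes)
       while tls.recv(65536):        # drain until the peer closes
           pass
       return time.monotonic() - t0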
7.9.4.  Test Procedures and Expected Results

The test procedure is designed to measure the average application transaction latency or TTLB when the DUT/SUT is operating close to 50% of its maximum achievable connections per second.  This test procedure MAY be repeated multiple times with different IP types (IPv4 only, IPv6 only, and IPv4/IPv6 mixed traffic distribution), HTTPS response object sizes, and single and multiple transactions per connection scenarios.

7.9.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "initial connections per second" as defined in the parameters section.  The traffic load profile MAY be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "initial connections per second" before the sustain phase.  The measured KPIs during the sustain phase MUST meet the acceptance criteria a, b, c, d, e, and f defined in Section 7.9.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.9.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target connections per second" defined in the parameters table.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support.

The test equipment SHOULD start to measure and record all specified KPIs.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target connections per second rate at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.9.4.3.  Step 3: Test Iteration

Determine the maximum achievable connections per second within the acceptance criteria and measure the latency values.

7.10.  HTTPS Throughput

7.10.1.  Objective

Determine the throughput for HTTPS transactions, varying the HTTPS response object size.

Test iterations MUST include common cipher suites and key strengths as well as forward looking stronger keys.  Specific test iterations MUST include the ciphers and keys defined in the parameters section 7.10.3.2.

7.10.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.10.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.10.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.

7.10.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target Throughput: Initial value from the product data sheet (if known)

Number of HTTPS response object requests (transactions) per connection: 10

Ciphers and keys:

1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)

2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash Algorithm: ecdsa_secp256r1_sha384 and Supported group: secp521r1)

4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

HTTPS response object size: 16KB, 64KB, 256KB, and the mixed objects defined in Table 4 below.

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   | 0.2                 | 1                   |
   +---------------------+---------------------+
   | 6                   | 1                   |
   +---------------------+---------------------+
   | 8                   | 1                   |
   +---------------------+---------------------+
   | 9                   | 1                   |
   +---------------------+---------------------+
   | 10                  | 1                   |
   +---------------------+---------------------+
   | 25                  | 1                   |
   +---------------------+---------------------+
   | 26                  | 1                   |
   +---------------------+---------------------+
   | 35                  | 1                   |
   +---------------------+---------------------+
   | 59                  | 1                   |
   +---------------------+---------------------+
   | 347                 | 1                   |
   +---------------------+---------------------+

            Table 4: Mixed Objects

Each client connection MUST perform a full handshake with the server certificate (no certificate on the client side) and 50% of the connections SHOULD use session reuse or resumption.

The TLS record size MAY be optimized for the HTTPS response object size up to a record size of 16K.

7.10.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria.  Test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions MUST be less than 0.01% of attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate.

c.  The deviation of concurrent TCP connections MUST be less than 10%.

d.  The deviation of average application transaction latency MUST be less than 10%.

7.10.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Average throughput, concurrent connections, and average TCP connections per second.

7.10.4.  Test Procedures and Expected Results

The test procedure consists of three major steps.  This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions and HTTPS response object sizes.

7.10.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.
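Section 7.10.3.2 asks for 50% of connections to use session reuse; a rough sketch alternating full and resumed handshakes with the standard library's session objects (the server address is a placeholder, and TLS 1.3 ticket delivery timing is glossed over):

   import socket, ssl

   HOST, PORT = "198.51.100.20", 443       # placeholder server
   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.check_hostname = False              # lab certificates
   ctx.verify_mode = ssl.CERT_NONE

   saved = None
   for i in range(10):
       raw = socket.create_connection((HOST, PORT))
       if i % 2 and saved:                 # every 2nd connection resumes
           tls = ctx.wrap_socket(raw, session=saved)
       else:                               # full handshake
           tls = ctx.wrap_socket(raw)
           saved = tls.session             # cache for the next resumption
       tls.close()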
Configure the traffic load profile of the test equipment to establish "initial throughput" as defined in the parameters section.

The traffic load profile SHOULD be defined as described in Section 4.3.4.  The DUT/SUT SHOULD reach the "initial throughput" during the sustain phase.  Measure all KPIs as defined in Section 7.10.3.4.

The measured KPIs during the sustain phase MUST meet the acceptance criterion "a" defined in Section 7.10.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.10.4.2.  Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified KPIs.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target throughput at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

Perform the test separately for each HTTPS response object size (16KB, 64KB, 256KB, and mixed HTTPS response objects).

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.10.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable throughput within the acceptance criteria.  The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.11.  Concurrent TCP/HTTPS Connection Capacity

7.11.1.  Objective

Determine the maximum number of concurrent TCP connections that the DUT/SUT sustains when using HTTPS traffic.

7.11.2.  Test Setup

The test bed setup SHOULD be configured as defined in Section 4.  Any specific test bed configuration changes such as number of interfaces and interface type, etc. MUST be documented.

7.11.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.11.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.  Any configuration changes for this specific test scenario MUST be documented.

7.11.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Cipher suite and key size: ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 key size (Signature Hash Algorithm: ecdsa_secp256r1_sha384 and Supported group: secp521r1)

Target concurrent connections: Initial value from the product data sheet (if known)

Initial concurrent connections: 10% of "Target concurrent connections"

Maximum connections per second during ramp up phase: 50% of the maximum connections per second measured in test scenario TCP/HTTPS Connections per Second (Section 7.7)

Throughput for background traffic: 10% of the maximum throughput measured in test scenario HTTPS Throughput (Section 7.10), using an HTTPS response object size of 16 KByte with a cipher and key size matching what is being tested in this test

The client MUST perform HTTPS transactions with persistence and each client MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting a 1 KByte HTTPS response object in the same TCP connection (10 transactions/TCP connection) and the delay (think time) between transactions MUST be X seconds.  The value for the think time (X) MUST be defined to achieve 15% of the maximum throughput measured in test scenario 7.10.

The established connections (except the background traffic connections) SHOULD remain open until the end phase of the test.  During the ramp down phase, all connections SHOULD be successfully closed with FIN.

7.11.3.3.  Test Results Acceptance Criteria

The following criteria are defined as test results acceptance criteria.  Test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions MUST be zero.

b.  The number of TCP connections terminated due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.01% of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded constantly at the rate defined in the parameters section 7.11.3.2.

d.  During the sustain phase, the maximum deviation (max. dev) of application transaction latency or TTLB (Time To Last Byte) MUST be less than 10%.

7.11.3.4.  Measurement

The following KPI metrics MUST be reported for this test scenario:

Average throughput; minimum, average, and maximum concurrent TCP connections; TTLB/application transaction latency; and average application transactions per second.

7.11.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP connection capacity of the DUT/SUT during the sustain phase of the traffic load profile.  The test procedure consists of three major steps.  This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions.

7.11.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All interfaces are expected to be in "UP" status.

Configure the test equipment to generate background traffic as defined in Section 7.11.3.2.  Measure throughput, concurrent TCP connections, and connections per second.
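For illustration, the connection objectives used in the two steps below can be derived as follows; the data sheet and measured values are placeholders:

   # Placeholders: data sheet value and background-traffic measurement.
   target_concurrent     = 10_000_000
   background_concurrent = 120_000    # measured with background only

   initial_concurrent = 0.10 * target_concurrent       # Step 1 profile
   step1_expected = initial_concurrent + background_concurrent
   step2_profile  = target_concurrent - background_concurrent  # Step 2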
While generating the background traffic, configure another traffic profile on the test equipment to establish "initial concurrent TCP connections" defined in Section 7.11.3.2.  The traffic load profile MAY be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "initial concurrent TCP connections" plus the concurrent TCP connections measured in the background traffic.  The measured KPIs during the sustain phase MUST meet the acceptance criteria "a" and "b" defined in Section 7.11.3.3.

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT continue to "Step 2".

7.11.4.2.  Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target concurrent TCP connections" minus the concurrent TCP connections measured in the background traffic.  The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as throughput, TCP connections per second, and application transactions per second MUST NOT reach the maximum value that the DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs defined in Section 7.11.3.4.  The measurement interval MUST be less than 5 seconds.  Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target concurrent TCP connections at the sustain phase.  In addition, the measured KPIs MUST meet all acceptance criteria.

If the KPI metrics do not meet the acceptance criteria, follow Step 3.

7.11.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP connections within the acceptance criteria.

8.  Formal Syntax

9.  IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

10.  Acknowledgements

Acknowledgements will be added in a future release.

11.  Contributors

The authors would like to thank the many people that contributed their time and knowledge to this effort.

Specifically, thanks to the co-chairs of the NetSecOPEN Test Methodology working group and the NetSecOPEN Security Effectiveness working group: Alex Samonte, Aria Eslambolchizadeh, Carsten Rossenhoevel, and David DeSanto.

Additionally, the following people provided input and comments and spent time reviewing the myriad of drafts.  If we have missed anyone, the fault is entirely our own.  Thanks to: Amritam Putatunda, Balamuhunthan Balarajah, Brian Monkman, Chris Chapman, Chris Pearson, Chuck McAuley, David White, Jurrie Van Den Breekel, Michelle Rhines, Rob Andrews, Samaresh Nair, and Tim Winters.

12.  References

12.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.

12.2.  Informative References

[RFC2647]  Newman, D., "Benchmarking Terminology for Firewall Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999, <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin, "Benchmarking Methodology for Firewall Performance", RFC 3511, DOI 10.17487/RFC3511, April 2003, <https://www.rfc-editor.org/info/rfc3511>.
Appendix A.  NetSecOPEN Basic Traffic Mix

A traffic mix for testing the performance of next generation firewalls MUST scale to stress the DUT based on real-world conditions.  In order to achieve this, the following MUST be included:

o  Clients connecting to multiple different server FQDNs per application

o  Clients loading apps and pages with connections and objects in specific orders

o  Multiple unique certificates for HTTPS/TLS

o  A wide variety of different object sizes

o  Different URL paths

o  Mix of HTTP and HTTPS

A traffic mix for testing the performance of next generation firewalls MUST also facilitate application identification using different detection methods, with and without decryption of the traffic, such as:

o  HTTP HOST based application detection

o  HTTPS/TLS Server Name Indication (SNI)

o  Certificate Subject Common Name (CN)

The mix MUST be of sufficient complexity and volume to render differences in individual apps statistically insignificant.  For example, like apps behave alike: one type of video service vs. another both consist of larger objects, whereas one news site vs. another both typically have more connections than other apps because of trackers and embedded advertising content.  To achieve sufficient complexity, a mix MUST have:

o  Thousands of URLs each client walks through

o  Hundreds of FQDNs each client connects to

o  Hundreds of unique certificates for HTTPS/TLS

o  Thousands of different object sizes per client in orders matching applications

The following is a description of what a popular application in an enterprise traffic mix contains.

Table 5 lists the FQDNs, number of transactions, and bytes transferred as an example client interacts with Office 365 Outlook, Word, Excel, Powerpoint, Sharepoint, and Skype.
   +---------------------------------+------------+--------------+
   | Office365 FQDN                  | Bytes      | Transactions |
   +=============================================================+
   | r1.res.office365.com            | 14,056,960 | 192          |
   +---------------------------------+------------+--------------+
   | s1-word-edit-15.cdn.office.net  | 6,731,019  | 22           |
   +---------------------------------+------------+--------------+
   | company1-my.sharepoint.com      | 6,269,492  | 42           |
   +---------------------------------+------------+--------------+
   | swx.cdn.skype.com               | 6,100,027  | 12           |
   +---------------------------------+------------+--------------+
   | static.sharepointonline.com     | 6,036,947  | 41           |
   +---------------------------------+------------+--------------+
   | spoprod-a.akamaihd.net          | 3,904,250  | 25           |
   +---------------------------------+------------+--------------+
   | s1-excel-15.cdn.office.net      | 2,767,941  | 16           |
   +---------------------------------+------------+--------------+
   | outlook.office365.com           | 2,047,301  | 86           |
   +---------------------------------+------------+--------------+
   | shellprod.msocdn.com            | 1,008,370  | 11           |
   +---------------------------------+------------+--------------+
   | word-edit.officeapps.live.com   | 932,080    | 25           |
   +---------------------------------+------------+--------------+
   | res.delve.office.com            | 760,146    | 2            |
   +---------------------------------+------------+--------------+
   | s1-powerpoint-15.cdn.office.net | 557,604    | 3            |
   +---------------------------------+------------+--------------+
   | appsforoffice.microsoft.com     | 511,171    | 5            |
   +---------------------------------+------------+--------------+
   | powerpoint.officeapps.live.com  | 471,625    | 14           |
   +---------------------------------+------------+--------------+
   | excel.officeapps.live.com       | 342,040    | 14           |
   +---------------------------------+------------+--------------+
   | s1-officeapps-15.cdn.office.net | 331,343    | 5            |
   +---------------------------------+------------+--------------+
   | webdir0a.online.lync.com        | 66,930     | 15           |
   +---------------------------------+------------+--------------+
   | portal.office.com               | 13,956     | 1            |
   +---------------------------------+------------+--------------+
   | config.edge.skype.com           | 6,911      | 2            |
   +---------------------------------+------------+--------------+
   | clientlog.portal.office.com     | 6,608      | 8            |
   +---------------------------------+------------+--------------+
   | webdir.online.lync.com          | 4,343      | 5            |
   +---------------------------------+------------+--------------+
   | graph.microsoft.com             | 2,289      | 2            |
   +---------------------------------+------------+--------------+
   | nam.loki.delve.office.com       | 1,812      | 5            |
   +---------------------------------+------------+--------------+
   | login.microsoftonline.com       | 464        | 2            |
   +---------------------------------+------------+--------------+
   | login.windows.net               | 232        | 1            |
   +---------------------------------+------------+--------------+

            Table 5: Office365

Clients MUST connect to multiple server FQDNs in the same order as real applications.  Connections MUST be made when the client is interacting with the application and NOT all set up in advance.  Connections SHOULD stay open per client for subsequent transactions to the same FQDN, similar to how a web browser behaves.
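The per-FQDN connection reuse described above can be sketched as a small connection cache, similar to a browser pool; paths are shortened placeholders and error handling is omitted:

   import http.client

   pool = {}                         # one open connection per FQDN

   def get(fqdn, path):
       # Reuse an existing connection to this FQDN, like a browser.
       conn = pool.get(fqdn)
       if conn is None:
           conn = http.client.HTTPSConnection(fqdn, 443)
           pool[fqdn] = conn
       conn.request("GET", path)
       return conn.getresponse().read()

   # Transactions hit FQDNs in application order, reusing connections:
   get("company1-my.sharepoint.com", "/personal")
   get("word-edit.officeapps.live.com", "/we/WsaUpload.ashx")
   get("company1-my.sharepoint.com", "/WebResource")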
Clients MUST use different URL paths and object sizes in orders as they are observed in real applications.  Clients MAY also set up multiple connections per FQDN to process multiple transactions in a sequence at the same time.  Table 6 has a partial example sequence of the Office 365 Word application transactions.

   +---------------------------------+----------------------+----------+
   | FQDN                            | URL Path             | Object   |
   |                                 |                      | size     |
   +===================================================================+
   | company1-my.sharepoint.com      | /personal...         | 23,132   |
   +---------------------------------+----------------------+----------+
   | word-edit.officeapps.live.com   | /we/WsaUpload.ashx   | 2        |
   +---------------------------------+----------------------+----------+
   | static.sharepointonline.com     | /bld/.../blank.js    | 454      |
   +---------------------------------+----------------------+----------+
   | static.sharepointonline.com     | /bld/.../            | 23,254   |
   |                                 | initstrings.js       |          |
   +---------------------------------+----------------------+----------+
   | static.sharepointonline.com     | /bld/.../init.js     | 292,740  |
   +---------------------------------+----------------------+----------+
   | company1-my.sharepoint.com      | /ScriptResource...   | 102,774  |
   +---------------------------------+----------------------+----------+
   | company1-my.sharepoint.com      | /ScriptResource...   | 40,329   |
   +---------------------------------+----------------------+----------+
   | company1-my.sharepoint.com      | /WebResource...      | 23,063   |
   +---------------------------------+----------------------+----------+
   | word-edit.officeapps.live.com   | /we/wordeditorframe. | 60,657   |
   |                                 | aspx...              |          |
   +---------------------------------+----------------------+----------+
   | static.sharepointonline.com     | /bld/_layouts/.../   | 454      |
   |                                 | blank.js             |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 19,201   |
   |                                 | EditSurface.css      |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 221,397  |
   |                                 | WordEditor.css       |          |
   +---------------------------------+----------------------+----------+
   | s1-officeapps-15.cdn.office.net | /we/s/.../           | 107,571  |
   |                                 | MicrosoftAjax.js     |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 39,981   |
   |                                 | wacbootwe.js         |          |
   +---------------------------------+----------------------+----------+
   | s1-officeapps-15.cdn.office.net | /we/s/.../           | 51,749   |
   |                                 | CommonIntl.js        |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 6,050    |
   |                                 | Compat.js            |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 54,158   |
   |                                 | Box4Intl.js          |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 24,946   |
   |                                 | WoncaIntl.js         |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 53,515   |
   |                                 | WordEditorIntl.js    |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../           | 1,978,712|
   |                                 | WordEditorExp.js     |          |
   +---------------------------------+----------------------+----------+
   | s1-word-edit-15.cdn.office.net  | /we/s/.../jSanity.js | 10,912   |
   +---------------------------------+----------------------+----------+
   | word-edit.officeapps.live.com   | /we/OneNote.ashx     | 145,708  |
   +---------------------------------+----------------------+----------+

            Table 6: Office365 Word Transactions

For application identification, the HTTPS/TLS traffic MUST include realistic Certificate Subject Common Name (CN) data as well as Server Name Indications.  For example, a DUT may detect Facebook Chat traffic by inspecting the certificate and detecting *.facebook.com in the certificate subject CN, and subsequently detect the word chat in the FQDN 5-edge-chat.facebook.com, identifying the traffic on the connection as Facebook Chat.

Table 7 includes further examples of SNI and CN pairs for several FQDNs of Office 365.

   +------------------------------+----------------------------------+
   | Server Name Indication (SNI) | Certificate Subject              |
   |                              | Common Name (CN)                 |
   +=================================================================+
   | r1.res.office365.com         | *.res.outlook.com                |
   +------------------------------+----------------------------------+
   | login.windows.net            | graph.windows.net                |
   +------------------------------+----------------------------------+
   | webdir0a.online.lync.com     | *.online.lync.com                |
   +------------------------------+----------------------------------+
   | login.microsoftonline.com    | stamp2.login.microsoftonline.com |
   +------------------------------+----------------------------------+
   | webdir.online.lync.com       | *.online.lync.com                |
   +------------------------------+----------------------------------+
   | graph.microsoft.com          | graph.microsoft.com              |
   +------------------------------+----------------------------------+
   | outlook.office365.com        | outlook.com                      |
   +------------------------------+----------------------------------+
   | appsforoffice.microsoft.com  | appsforoffice.microsoft.com      |
   +------------------------------+----------------------------------+

            Table 7: Office365 SNI and CN Pairs Examples

NetSecOPEN has provided a reference enterprise perimeter traffic mix with dozens of applications, hundreds of connections, and thousands of transactions.

The enterprise perimeter traffic mix consists of 70% HTTPS and 30% HTTP by bytes, and 58% HTTPS and 42% HTTP by transactions.  By connections, with a single connection per FQDN, the mix consists of 43% HTTPS and 57% HTTP.  With multiple connections per FQDN, the HTTPS percentage is higher.

Table 8 is a summary of the NetSecOPEN enterprise perimeter traffic mix sorted by bytes, with unique FQDNs and transactions per application.
NetSecOPEN has provided a reference enterprise perimeter traffic mix
with dozens of applications, hundreds of connections, and thousands of
transactions.

The enterprise perimeter traffic mix consists of 70% HTTPS and 30% HTTP
by bytes, and 58% HTTPS and 42% HTTP by transactions.  Counted by
connections, with a single connection per FQDN, the mix consists of 43%
HTTPS and 57% HTTP; with multiple connections per FQDN, the HTTPS share
is higher.
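As a worked example of how such per-protocol shares fall out of a mix
definition, the sketch below tallies records by bytes, transactions,
and connections.  The two input rows are invented, chosen only so that
the output reproduces the shares quoted above; they do not reproduce
the NetSecOPEN data set.

   from collections import Counter

   # Hypothetical aggregate rows, one per protocol:
   # (protocol, connections, transactions, bytes).  Invented numbers.
   MIX = [
       ("HTTPS", 3, 115, 700000),
       ("HTTP",  4,  85, 300000),
   ]

   def shares(mix, field):
       """Percentage of the grand total per protocol for one field."""
       idx = {"connections": 1, "transactions": 2, "bytes": 3}[field]
       totals = Counter()
       for row in mix:
           totals[row[0]] += row[idx]
       grand = sum(totals.values())
       return {p: round(100 * v / grand) for p, v in totals.items()}

   for field in ("bytes", "transactions", "connections"):
       # Prints 70/30, 58/42, and 43/57 respectively.
       print(field, shares(MIX, field))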
Table 8 summarizes the NetSecOPEN enterprise perimeter traffic mix,
sorted by bytes, with the number of unique FQDNs and transactions per
application.

   +------------------+-------+--------------+-------------+
   | Application      | FQDNs | Transactions | Bytes       |
   +=======================================================+
   | Office365        |    26 |          558 |  52,931,947 |
   +------------------+-------+--------------+-------------+
   | Box              |     4 |           90 |  23,276,089 |
   +------------------+-------+--------------+-------------+
   | Salesforce       |     6 |          365 |  23,137,548 |
   +------------------+-------+--------------+-------------+
   | Gmail            |    13 |          139 |  16,399,289 |
   +------------------+-------+--------------+-------------+
   | Linkedin         |    10 |          206 |  15,040,918 |
   +------------------+-------+--------------+-------------+
   | DailyMotion      |     8 |           77 |  14,751,514 |
   +------------------+-------+--------------+-------------+
   | GoogleDocs       |     2 |           71 |  14,205,476 |
   +------------------+-------+--------------+-------------+
   | Wikia            |    15 |          159 |  13,909,777 |
   +------------------+-------+--------------+-------------+
   | Foxnews          |    82 |          499 |  13,758,899 |
   +------------------+-------+--------------+-------------+
   | Yahoo Finance    |    33 |          254 |  13,134,011 |
   +------------------+-------+--------------+-------------+
   | Youtube          |     8 |           97 |  13,056,216 |
   +------------------+-------+--------------+-------------+
   | Facebook         |     4 |          207 |  12,726,231 |
   +------------------+-------+--------------+-------------+
   | CNBC             |    77 |          275 |  11,939,566 |
   +------------------+-------+--------------+-------------+
   | Lightreading     |    27 |          304 |  11,200,864 |
   +------------------+-------+--------------+-------------+
   | BusinessInsider  |    16 |          142 |  11,001,575 |
   +------------------+-------+--------------+-------------+
   | Alexa            |     5 |          153 |  10,475,151 |
   +------------------+-------+--------------+-------------+
   | CNN              |    41 |          206 |  10,423,740 |
   +------------------+-------+--------------+-------------+
   | Twitter Video    |     2 |           72 |  10,112,820 |
   +------------------+-------+--------------+-------------+
   | Cisco Webex      |     1 |          213 |   9,988,417 |
   +------------------+-------+--------------+-------------+
   | Slack            |     3 |           40 |   9,938,686 |
   +------------------+-------+--------------+-------------+
   | Google Maps      |     5 |          191 |   8,771,873 |
   +------------------+-------+--------------+-------------+
   | SpectrumIEEE     |     7 |          145 |   8,682,629 |
   +------------------+-------+--------------+-------------+
   | Yelp             |     9 |          146 |   8,607,645 |
   +------------------+-------+--------------+-------------+
   | Vimeo            |    12 |           74 |   8,555,960 |
   +------------------+-------+--------------+-------------+
   | Wikihow          |    11 |          140 |   8,042,314 |
   +------------------+-------+--------------+-------------+
   | Netflix          |     3 |           31 |   7,839,256 |
   +------------------+-------+--------------+-------------+
   | Instagram        |     3 |          114 |   7,230,883 |
   +------------------+-------+--------------+-------------+
   | Morningstar      |    30 |          150 |   7,220,121 |
   +------------------+-------+--------------+-------------+
   | Docusign         |     5 |           68 |   6,972,738 |
   +------------------+-------+--------------+-------------+
   | Twitter          |     1 |          100 |   6,939,150 |
   +------------------+-------+--------------+-------------+
   | Tumblr           |    11 |           70 |   6,877,200 |
   +------------------+-------+--------------+-------------+
   | Whatsapp         |     3 |           46 |   6,829,848 |
   +------------------+-------+--------------+-------------+
   | Imdb             |    16 |          251 |   6,505,227 |
   +------------------+-------+--------------+-------------+
   | NOAAgov          |     1 |           44 |   6,316,283 |
   +------------------+-------+--------------+-------------+
   | IndustryWeek     |    23 |          192 |   6,242,403 |
   +------------------+-------+--------------+-------------+
   | Spotify          |    18 |          119 |   6,231,013 |
   +------------------+-------+--------------+-------------+
   | AutoNews         |    16 |          165 |   6,115,354 |
   +------------------+-------+--------------+-------------+
   | Evernote         |     3 |           47 |   6,063,168 |
   +------------------+-------+--------------+-------------+
   | NatGeo           |    34 |          104 |   6,026,344 |
   +------------------+-------+--------------+-------------+
   | BBC News         |    18 |          156 |   5,898,572 |
   +------------------+-------+--------------+-------------+
   | Investopedia     |    38 |          241 |   5,792,038 |
   +------------------+-------+--------------+-------------+
   | Pinterest        |     8 |          102 |   5,658,994 |
   +------------------+-------+--------------+-------------+
   | SuccessFactors   |     2 |          112 |   5,049,001 |
   +------------------+-------+--------------+-------------+
   | AbaJournal       |     6 |           93 |   4,985,626 |
   +------------------+-------+--------------+-------------+
   | Pbworks          |     4 |           78 |   4,670,980 |
   +------------------+-------+--------------+-------------+
   | NetworkWorld     |    42 |          153 |   4,651,354 |
   +------------------+-------+--------------+-------------+
   | WebMD            |    24 |          280 |   4,416,736 |
   +------------------+-------+--------------+-------------+
   | OilGasJournal    |    14 |          105 |   4,095,255 |
   +------------------+-------+--------------+-------------+
   | Trello           |     5 |           39 |   4,080,182 |
   +------------------+-------+--------------+-------------+
   | BusinessWire     |     5 |          109 |   4,055,331 |
   +------------------+-------+--------------+-------------+
   | Dropbox          |     5 |           17 |   4,023,469 |
   +------------------+-------+--------------+-------------+
   | Nejm             |    20 |          190 |   4,003,657 |
   +------------------+-------+--------------+-------------+
   | OilGasDaily      |     7 |          199 |   3,970,498 |
   +------------------+-------+--------------+-------------+
   | Chase            |     6 |           52 |   3,719,232 |
   +------------------+-------+--------------+-------------+
   | MedicalNews      |     6 |          117 |   3,634,187 |
   +------------------+-------+--------------+-------------+
   | Marketwatch      |    25 |          142 |   3,291,226 |
   +------------------+-------+--------------+-------------+
   | Imgur            |     5 |           48 |   3,189,919 |
   +------------------+-------+--------------+-------------+
   | NPR              |     9 |           83 |   3,184,303 |
   +------------------+-------+--------------+-------------+
   | Onelogin         |     2 |           31 |   3,132,707 |
   +------------------+-------+--------------+-------------+
   | Concur           |     2 |           50 |   3,066,326 |
   +------------------+-------+--------------+-------------+
   | Service-now      |     1 |           37 |   2,985,329 |
   +------------------+-------+--------------+-------------+
   | Apple iTunes     |    14 |           80 |   2,843,744 |
   +------------------+-------+--------------+-------------+
   | BerkeleyEdu      |     3 |           69 |   2,622,009 |
   +------------------+-------+--------------+-------------+
   | MSN              |    39 |          203 |   2,532,972 |
   +------------------+-------+--------------+-------------+
   | Indeed           |     3 |           47 |   2,325,197 |
   +------------------+-------+--------------+-------------+
   | MayoClinic       |     6 |           56 |   2,269,085 |
   +------------------+-------+--------------+-------------+
   | Ebay             |     9 |          164 |   2,219,223 |
   +------------------+-------+--------------+-------------+
   | UCLAedu          |     3 |           42 |   1,991,311 |
   +------------------+-------+--------------+-------------+
   | ConstructionDive |     5 |          125 |   1,828,428 |
   +------------------+-------+--------------+-------------+
   | EducationNews    |     4 |           78 |   1,605,427 |
   +------------------+-------+--------------+-------------+
   | BofA             |    12 |           68 |   1,584,851 |
   +------------------+-------+--------------+-------------+
   | ScienceDirect    |     7 |           26 |   1,463,951 |
   +------------------+-------+--------------+-------------+
   | Reddit           |     8 |           55 |   1,441,909 |
   +------------------+-------+--------------+-------------+
   | FoodBusinessNews |     5 |           49 |   1,378,298 |
   +------------------+-------+--------------+-------------+
   | Amex             |     8 |           42 |   1,270,696 |
   +------------------+-------+--------------+-------------+
   | Weather          |     4 |           50 |   1,243,826 |
   +------------------+-------+--------------+-------------+
   | Wikipedia        |     3 |           27 |     958,935 |
   +------------------+-------+--------------+-------------+
   | Bing             |     1 |           52 |     697,514 |
   +------------------+-------+--------------+-------------+
   | ADP              |     1 |           30 |     508,654 |
   +------------------+-------+--------------+-------------+
   | Grand Total      |   983 |       10,021 | 569,819,095 |
   +------------------+-------+--------------+-------------+
   Table 8: Summary of NetSecOPEN Enterprise Perimeter Traffic Mix

Authors' Addresses

   Balamuhunthan Balarajah
   EANTC AG
   Salzufer 14
   Berlin 10587
   Germany

   Email: balarajah@eantc.de

   Carsten Rossenhoevel
   EANTC AG
   Salzufer 14
   Berlin 10587
   Germany

   Email: cross@eantc.de