Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                           C. Rossenhoevel
Expires: March 13, 2021                                         EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                       September 9, 2020

   Benchmarking Methodology for Network Security Device Performance
                  draft-ietf-bmwg-ngfw-performance-04

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices, including next-generation
   firewalls (NGFW), intrusion detection and prevention solutions
   (IDS/IPS), and unified threat management (UTM) implementations.
   This document aims to improve the applicability, reproducibility,
   and transparency of benchmarks and to align the test methodology
   with today's increasingly complex layer 7 application use cases.
   The main areas covered in this document are test terminology,
   traffic profiles, and benchmarking methodology for NGFWs.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 13, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.
   All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
       4.1.  Testbed Configuration
       4.2.  DUT/SUT Configuration
       4.3.  Test Equipment Configuration
             4.3.1.  Client Configuration
             4.3.2.  Backend Server Configuration
             4.3.3.  Traffic Flow Definition
             4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
       6.1.  Key Performance Indicators
   7.  Benchmarking Tests
       7.1.  Throughput Performance With NetSecOPEN Traffic Mix
             7.1.1.  Objective
             7.1.2.  Test Setup
             7.1.3.  Test Parameters
             7.1.4.  Test Procedures and Expected Results
       7.2.  TCP/HTTP Connections Per Second
             7.2.1.  Objective
             7.2.2.  Test Setup
             7.2.3.  Test Parameters
             7.2.4.  Test Procedures and Expected Results
       7.3.  HTTP Throughput
             7.3.1.  Objective
             7.3.2.  Test Setup
             7.3.3.  Test Parameters
             7.3.4.  Test Procedures and Expected Results
       7.4.  TCP/HTTP Transaction Latency
             7.4.1.  Objective
             7.4.2.  Test Setup
             7.4.3.  Test Parameters
             7.4.4.  Test Procedures and Expected Results
       7.5.  Concurrent TCP/HTTP Connection Capacity
             7.5.1.  Objective
             7.5.2.  Test Setup
             7.5.3.  Test Parameters
             7.5.4.  Test Procedures and Expected Results
       7.6.  TCP/HTTPS Connections per Second
             7.6.1.  Objective
             7.6.2.  Test Setup
             7.6.3.  Test Parameters
             7.6.4.  Test Procedures and Expected Results
       7.7.  HTTPS Throughput
             7.7.1.  Objective
             7.7.2.  Test Setup
             7.7.3.  Test Parameters
             7.7.4.  Test Procedures and Expected Results
       7.8.  HTTPS Transaction Latency
             7.8.1.  Objective
             7.8.2.  Test Setup
             7.8.3.  Test Parameters
             7.8.4.  Test Procedures and Expected Results
       7.9.  Concurrent TCP/HTTPS Connection Capacity
             7.9.1.  Objective
             7.9.2.  Test Setup
             7.9.3.  Test Parameters
             7.9.4.  Test Procedures and Expected Results
   8.  IANA Considerations
   9.  Security Considerations
   10. Acknowledgements
   11. Contributors
   12. References
       12.1.  Normative References
       12.2.  Informative References
   Appendix A.  NetSecOPEN Basic Traffic Mix
   Authors' Addresses

1.  Introduction

   15 years have passed since the IETF initially recommended test
   methodology and terminology for firewalls ([RFC2647], [RFC3511]).
   The requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved into more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, etc.  In an industry of
   growing importance, well-defined and reproducible key performance
   indicators (KPIs) are increasingly needed, as they enable fair and
   reasonable comparison of network security functions.  All these
   reasons have led to the creation of a new next-generation firewall
   benchmarking document.

2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119], [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for next-generation firewall security devices.  It covers security
   effectiveness configurations, followed by performance benchmark
   testing.  This document focuses on advanced, realistic, and
   reproducible testing methods.  Additionally, it describes test bed
   environments, test tool requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document is applicable to all
   benchmarking test scenarios described in Section 7.
4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not due to
   inherent physical network limitations, such as the number of
   physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.
   For this reason, this document recommends avoiding external devices
   such as switches and routers in the testbed wherever possible.

   However, in the typical deployment, the security devices (Device
   Under Test/System Under Test) are connected to routers and
   switches, which will reduce the number of entries in MAC or ARP
   tables of the Device Under Test/System Under Test (DUT/SUT).  If
   MAC or ARP tables have many entries, this may impact the actual
   DUT/SUT performance due to MAC and ARP/ND table lookup processes.
   Therefore, it is RECOMMENDED to connect aggregation switches or
   routers between test equipment and DUT/SUT as shown in Figure 1.
   The aggregation switches or routers can also be used to aggregate
   the test equipment or DUT/SUT ports, if the number of used ports is
   mismatched between test equipment and DUT/SUT.

   If the test equipment is capable of emulating layer 3 routing
   functionality and there is no need for test equipment port
   aggregation, it is RECOMMENDED to configure the test setup as shown
   in Figure 2.

   +-------------------+     +-----------+     +--------------------+
   |Aggregation Switch/|     |           |     | Aggregation Switch/|
   | Router            +-----+  DUT/SUT  +-----+ Router             |
   |                   |     |           |     |                    |
   +----------+--------+     +-----------+     +--------+-----------+
              |                                         |
              |                                         |
   +----------+------------+                +-----------+-----------+
   |                       |                |                       |
   | +-------------------+ |                | +-------------------+ |
   | | Emulated Router(s)| |                | | Emulated Router(s)| |
   | | (Optional)        | |                | | (Optional)        | |
   | +-------------------+ |                | +-------------------+ |
   | +-------------------+ |                | +-------------------+ |
   | |      Clients      | |                | |      Servers      | |
   | +-------------------+ |                | +-------------------+ |
   |                       |                |                       |
   |    Test Equipment     |                |    Test Equipment     |
   +-----------------------+                +-----------------------+

                  Figure 1: Testbed Setup - Option 1

   +-----------------------+  +-----------+  +-----------------------+
   | +-------------------+ |  |           |  | +-------------------+ |
   | | Emulated Router(s)| |  |           |  | | Emulated Router(s)| |
   | | (Optional)        | +--+  DUT/SUT  +--+ | (Optional)        | |
   | +-------------------+ |  |           |  | +-------------------+ |
   | +-------------------+ |  +-----------+  | +-------------------+ |
   | |      Clients      | |                 | |      Servers      | |
   | +-------------------+ |                 | +-------------------+ |
   |                       |                 |                       |
   |    Test Equipment     |                 |    Test Equipment     |
   +-----------------------+                 +-----------------------+

                  Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.  Since each DUT/SUT will have its own
   unique configuration, users SHOULD configure their device with the
   same parameters and security features that would be used in the
   actual deployment of the device or a typical deployment, in order
   to achieve maximum security coverage.

   This document attempts to define the recommended security features
   which SHOULD be consistently enabled for all the benchmarking tests
   described in Section 7.
   Table 1 below describes the set of security features which SHOULD
   be configured on the DUT/SUT.

   Based on the customer use case, users MAY enable or disable the SSL
   inspection feature for the "Throughput Performance with NetSecOPEN
   Traffic Mix" test scenario described in Section 7.1.

   To improve repeatability, a summary of the DUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.

   +----------------+-------------+----------+
   |                   NGFW                   |
   +----------------+-------------+----------+
   | DUT Features   | RECOMMENDED | OPTIONAL |
   +----------------+-------------+----------+
   | SSL Inspection |      x      |          |
   +----------------+-------------+----------+
   | IDS/IPS        |      x      |          |
   +----------------+-------------+----------+
   | Anti Spyware   |      x      |          |
   +----------------+-------------+----------+
   | Antivirus      |      x      |          |
   +----------------+-------------+----------+
   | Anti Botnet    |      x      |          |
   +----------------+-------------+----------+
   | Web Filtering  |             |    x     |
   +----------------+-------------+----------+
   | DLP            |             |    x     |
   +----------------+-------------+----------+
   | DDoS           |             |    x     |
   +----------------+-------------+----------+
   | Certificate    |             |    x     |
   | Validation     |             |          |
   +----------------+-------------+----------+
   | Logging and    |      x      |          |
   | Reporting      |             |          |
   +----------------+-------------+----------+
   | Application    |      x      |          |
   | Identification |             |          |
   +----------------+-------------+----------+

            Table 1: DUT/SUT Features

   The following table provides a brief description of the security
   features.

   +------------------+------------------------------------------------+
   | DUT/SUT Features | Description                                    |
   +------------------+------------------------------------------------+
   | SSL Inspection   | The DUT/SUT intercepts and decrypts inbound    |
   |                  | HTTPS traffic between servers and clients.     |
   |                  | Once the content inspection has been           |
   |                  | completed, the DUT/SUT MUST encrypt the HTTPS  |
   |                  | traffic with ciphers and keys used by the      |
   |                  | clients and servers.                           |
   +------------------+------------------------------------------------+
   | IDS/IPS          | The DUT MUST detect and block exploits         |
   |                  | targeting known and unknown vulnerabilities    |
   |                  | across the monitored network.                  |
   +------------------+------------------------------------------------+
   | Anti Malware     | The DUT MUST detect and prevent the            |
   |                  | transmission of malicious executable code and  |
   |                  | any associated communications across the       |
   |                  | monitored network.  This includes data         |
   |                  | exfiltration as well as command and control    |
   |                  | channels.                                      |
   +------------------+------------------------------------------------+
   | Web Filtering    | The DUT MUST detect and block malicious        |
   |                  | websites, including defined classifications    |
   |                  | of websites, across the monitored network.     |
   +------------------+------------------------------------------------+
   | DLP              | The DUT MUST detect and block the transmission |
   |                  | of Personally Identifiable Information (PII)   |
   |                  | and specific files across the monitored        |
   |                  | network.                                       |
   +------------------+------------------------------------------------+
   | Certificate      | The DUT MUST validate certificates used in     |
   | Validation       | encrypted communications across the monitored  |
   |                  | network.                                       |
   +------------------+------------------------------------------------+
   | Logging and      | The DUT MUST be able to log and report all     |
   | Reporting        | traffic at the flow level across the monitored |
   |                  | network.                                       |
   +------------------+------------------------------------------------+
   | Application      | The DUT MUST detect known applications as      |
   | Identification   | defined within the traffic mix selected across |
   |                  | the monitored network.                         |
   +------------------+------------------------------------------------+

            Table 2: NGFW Security Feature Description

   In summary, the DUT/SUT SHOULD be configured as follows:

   o  All security inspection enabled

   o  Disposition of all flows of traffic are logged - Logging to an
      external device is permissible

   o  Detection of Common Vulnerabilities and Exposures (CVE) matching
      the following characteristics when searching the National
      Vulnerability Database (NVD)

      *  Common Vulnerability Scoring System (CVSS) Version: 2

      *  CVSS V2 Metrics: AV:N/Au:N/I:C/A:C

      *  AV=Attack Vector, Au=Authentication, I=Integrity and
         A=Availability

      *  CVSS V2 Severity: High (7-10)

      *  If doing a group test, the published start date and published
         end date SHOULD be the same

   o  Geographical location filtering and Application Identification
      and Control configured to be triggered based on a site or
      application from the defined traffic mix

   In addition, a realistic number of access control rules (ACL) MUST
   be configured on the DUT/SUT.  However, this is applicable only for
   security devices where ACLs are configurable.  This document
   determines the number of access policy rules for four different
   classes of DUT/SUT.  The classification of the DUT/SUT MAY be based
   on its maximum supported firewall throughput performance number
   defined in the vendor datasheet.  This document classifies the
   DUT/SUT into four categories: Extra Small, Small, Medium, and
   Large.

   The RECOMMENDED throughput values for these classes are:

   Extra Small (XS) - supported throughput less than 1 Gbit/s

   Small (S) - supported throughput less than 5 Gbit/s

   Medium (M) - supported throughput greater than 5 Gbit/s and less
   than 10 Gbit/s

   Large (L) - supported throughput greater than 10 Gbit/s
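   As an informal illustration (not part of the methodology), the
   classification above can be expressed in a few lines of Python.
   The function name is illustrative, and the treatment of exact
   boundary values is an assumption, since the document leaves it
   unspecified:

   <CODE BEGINS>
   def classify_dut(datasheet_throughput_gbps: float) -> str:
       """Classify a DUT/SUT by its vendor-datasheet maximum
       firewall throughput, per the XS/S/M/L classes above."""
       if datasheet_throughput_gbps < 1:
           return "XS"  # Extra Small
       if datasheet_throughput_gbps < 5:
           return "S"   # Small
       if datasheet_throughput_gbps < 10:
           return "M"   # Medium
       return "L"       # Large
   <CODE ENDS>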
   The Access Control Rules (ACL) defined in Table 3 MUST be
   configured from top to bottom in the correct order as shown in the
   table.  The ACL entries MUST be configured in the Forwarding
   Information Base (FIB) table of the DUT/SUT.  (Note: There will be
   differences between how security vendors implement ACL decision
   making.)  The configured ACL MUST NOT block the security and
   performance test traffic used for the benchmarking test scenarios.
   In Table 3, the XS, S, M, and L columns indicate the number of
   rules for each DUT/SUT classification.

   +-----------+-----------+------------------+--------+---+---+---+---+
   | Rules     | Match     | Description      | Action | XS| S | M | L |
   | Type      | Criteria  |                  |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Any application  | block  | 5 | 10| 20| 50|
   |layer      |           | traffic NOT      |        |   |   |   |   |
   |           |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Any src IP subnet| block  | 25| 50|100|250|
   |layer      |TCP/UDP    | used in the test |        |   |   |   |   |
   |           |Dst ports  | AND any dst ports|        |   |   |   |   |
   |           |           | NOT used in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src/Dst IP | Any src/dst IP   | block  | 25| 50|100|250|
   |           |           | subnet NOT used  |        |   |   |   |   |
   |           |           | in the test      |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Applications     | allow  | 10| 10| 10| 10|
   |layer      |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Half of the src  | allow  | 1 | 1 | 1 | 1 |
   |layer      |TCP/UDP    | IP used in the   |        |   |   |   |   |
   |           |Dst ports  | test AND any dst |        |   |   |   |   |
   |           |           | ports used in the|        |   |   |   |   |
   |           |           | test traffic. One|        |   |   |   |   |
   |           |           | rule per subnet  |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src IP     | The rest of the  | allow  | 1 | 1 | 1 | 1 |
   |           |           | src IP subnet    |        |   |   |   |   |
   |           |           | range used in the|        |   |   |   |   |
   |           |           | test. One rule   |        |   |   |   |   |
   |           |           | per subnet       |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+

                    Table 3: DUT/SUT Access List
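   For test tooling, the rule counts of Table 3 can be captured in a
   small data structure; this non-normative Python sketch simply
   restates the table:

   <CODE BEGINS>
   # Number of ACL rules per DUT/SUT class (XS, S, M, L), keyed by
   # (rule type, action), as listed in Table 3.
   ACL_RULE_COUNTS = {
       ("application", "block"): {"XS": 5, "S": 10, "M": 20, "L": 50},
       ("transport", "block"): {"XS": 25, "S": 50, "M": 100, "L": 250},
       ("ip", "block"): {"XS": 25, "S": 50, "M": 100, "L": 250},
       ("application", "allow"): {"XS": 10, "S": 10, "M": 10, "L": 10},
       ("transport", "allow"): {"XS": 1, "S": 1, "M": 1, "L": 1},
       ("ip", "allow"): {"XS": 1, "S": 1, "M": 1, "L": 1},
   }
   <CODE ENDS>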
4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters thereby influence the
   traffic flows which will be offered and impact performance
   measurements.

   This section specifies common test equipment configuration
   parameters applicable for all test scenarios defined in Section 7.
   Any test scenario specific parameters are described under the test
   setup section of each test scenario individually.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the RECOMMENDED values for certain parameters.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno [RFC5681] variant, which
   includes congestion avoidance, back off and windowing, fast
   retransmission, and fast recovery on every TCP connection between
   client and server endpoints.  The default IPv4 and IPv6 MSS segment
   sizes MUST be set to 1460 bytes and 1440 bytes respectively, with
   TX and RX receive windows of 64 KByte.  The client initial
   congestion window MUST NOT exceed 10 times the MSS.  Delayed ACKs
   are permitted, and the maximum client delayed ACK MUST NOT exceed
   10 times the MSS before a forced ACK.  Up to 3 retries SHOULD be
   allowed before a timeout event is declared.  All traffic MUST have
   the TCP PSH flag set to high.  The source ports SHOULD be in the
   range of 1024 - 65535.  Internal timeouts SHOULD be dynamically
   scalable per RFC 793.  The client SHOULD initiate and close TCP
   connections.  TCP connections MUST be closed via FIN.
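   As a non-normative illustration, some of the client stack
   attributes above can be approximated on a general purpose operating
   system.  The socket options shown are Linux oriented and are an
   assumption; real test equipment exposes these as stack-wide
   settings:

   <CODE BEGINS>
   import socket

   # Approximate the Section 4.3.1.1 client attributes for one IPv4
   # connection: 1460 byte MSS and 64 KByte TX/RX windows.
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1460)
   s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
   s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
   # Retry limits, delayed ACK behavior, and the initial congestion
   # window are kernel-level settings, not per-socket options.
   <CODE ENDS>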
4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.

   o  The IP blocks SHOULD consist of multiple unique, discontinuous
      static address blocks.

   o  A default gateway is permitted.

   o  The IPv4 Type of Service (ToS) byte or IPv6 traffic class should
      be set to '00' or '000000' respectively.

   The following equation can be used to determine the required total
   number of client IP addresses:

      Desired total number of client IPs = Target throughput [Mbit/s]
      / Throughput per IP address [Mbit/s]

   Based on the deployment and use case scenario, the value for
   "Throughput per IP address" can be varied.

   (Option 1) DUT/SUT deployment scenario 1: 6-7 Mbit/s per IP (e.g.,
   1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2) DUT/SUT deployment scenario 2: 0.1-0.2 Mbit/s per IP
   (e.g., 50,000-100,000 IPs per 10 Gbit/s throughput)
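   The equation above is straightforward to apply; the following
   non-normative Python sketch computes the client IP count for both
   deployment scenarios:

   <CODE BEGINS>
   import math

   def required_client_ips(target_throughput_mbps: float,
                           throughput_per_ip_mbps: float) -> int:
       """Desired total number of client IPs = target throughput /
       throughput per IP address (rounded up)."""
       return math.ceil(target_throughput_mbps /
                        throughput_per_ip_mbps)

   # 10 Gbit/s target, scenario 1 (6-7 Mbit/s per IP):
   print(required_client_ips(10_000, 6))    # ~1,667 IPs
   # 10 Gbit/s target, scenario 2 (0.1-0.2 Mbit/s per IP):
   print(required_client_ips(10_000, 0.2))  # 50,000 IPs
   <CODE ENDS>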
   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6 types.  The following
   options can be considered when selecting the traffic mix ratio.

   (Option 1) 100% IPv4, no IPv6

   (Option 2) 80% IPv4, 20% IPv6

   (Option 3) 50% IPv4, 50% IPv6

   (Option 4) 20% IPv4, 80% IPv6

   (Option 5) no IPv4, 100% IPv6

4.3.1.3.  Emulated Web Browser Attributes

   The emulated web client contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate modern,
   typical browser attributes to improve the realism of the result
   set.

   For HTTP traffic emulation, the emulated browser MUST negotiate
   HTTP 1.1.  HTTP persistence MAY be enabled depending on the test
   scenario.  The browser MAY open multiple TCP connections per server
   endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  Within a TCP connection,
   multiple transactions MAY be processed if the emulated browser has
   available connections.  The browser SHOULD advertise a User-Agent
   header.  Headers MUST be sent uncompressed.  The browser SHOULD
   enforce content length validation.

   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use
   TLSv1.2 or higher.  The TLS record size MAY be optimized for the
   HTTPS response object size, up to a record size of 16 KByte.  The
   client endpoint SHOULD send the TLS Extension Server Name
   Indication (SNI) information when opening a security tunnel.  Each
   client connection MUST perform a full handshake with the server
   certificate and MUST NOT use session reuse or resumption.

   The following ciphers and keys are RECOMMENDED for the HTTPS based
   benchmarking tests defined in Section 7.

   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

   Note: The above ciphers and keys reflect commonly used enterprise-
   grade encryption cipher suites.  It is recognised that these will
   evolve over time.  Individual certification bodies SHOULD use
   ciphers and keys that reflect evolving use cases.  These choices
   MUST be documented in the resulting test reports, with detailed
   information on the ciphers and keys used along with reasons for the
   choices.
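   The client TLS behavior described above (TLSv1.2 or higher, SNI, a
   full handshake without resumption, and one of the RECOMMENDED
   ciphers) can be sketched with a stock TLS library.  This
   non-normative Python example uses a placeholder server name; real
   test equipment implements this natively:

   <CODE BEGINS>
   import socket
   import ssl

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.minimum_version = ssl.TLSVersion.TLSv1_2
   ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")
   ctx.options |= ssl.OP_NO_TICKET  # no session ticket resumption
   ctx.load_default_certs()

   host = "server1.example.com"  # placeholder FQDN
   with socket.create_connection((host, 443)) as sock:
       # server_hostname triggers the SNI extension; a fresh context
       # per connection yields a full TLS handshake every time.
       with ctx.wrap_socket(sock, server_hostname=host) as tls:
           tls.sendall(b"GET / HTTP/1.1\r\n"
                       b"Host: server1.example.com\r\n\r\n")
   <CODE ENDS>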
4.3.2.  Backend Server Configuration

   This section specifies which parameters should be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack on the server side SHOULD be configured similar to
   the client side configuration described in Section 4.3.1.1.  In
   addition, the server initial congestion window MUST NOT exceed 10
   times the MSS.  Delayed ACKs are permitted, and the maximum server
   delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

   The sum of the server IP space SHOULD contain the following
   attributes.

   o  The server IP blocks SHOULD consist of unique, discontinuous
      static address blocks with one IP per Server Fully Qualified
      Domain Name (FQDN) endpoint per test port.

   o  A default gateway is permitted.  The IPv4 ToS byte and IPv6
      traffic class bytes should be set to '00' and '000000'
      respectively.

   o  The server IP addresses SHOULD be distributed between IPv4 and
      IPv6 with a ratio identical to the clients' distribution ratio.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   HTTP version 1.1 with persistence.  The server MUST advertise the
   server type in the Server response header [RFC2616].  For the HTTPS
   server, TLS 1.2 or higher MUST be used with a maximum record size
   of 16 KByte and MUST NOT use ticket resumption or Session ID reuse.
   The server MUST listen on TCP port 443.  The server SHALL serve a
   certificate to the client.  The HTTPS server MUST check the Host
   SNI information against the FQDN if SNI is in use.  The cipher
   suite and key size on the server side MUST be configured similar to
   the client side configuration described in Section 4.3.1.3.

4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between client and
   server endpoints.  At the beginning of the test, the server
   endpoint initializes and will be ready to accept connection states,
   including initialization of the TCP stack as well as bound HTTP and
   HTTPS servers.  When a client endpoint is needed, it will
   initialize and be given attributes such as a MAC and IP address.
   The behavior of the client is to sweep through the given server IP
   space, sequentially generating a recognizable service by the DUT.
   Thus, a balanced mesh between client endpoints and server endpoints
   will be generated in a client port-server port combination.  Each
   client endpoint performs the same actions as other endpoints, with
   the difference being the source IP of the client endpoint and the
   target server IP pool.  The client MUST use the server's IP address
   or Fully Qualified Domain Name (FQDN) in Host headers.  For TLS,
   the client MAY use Server Name Indication (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  When a client endpoint initiates traffic,
   this section describes how the client steps through different
   services.  Once the test is initialized, the client endpoints
   SHOULD randomly hold (perform no operation) for a few milliseconds
   to allow for better randomization of the start of client traffic.
   Each client will either open a new TCP connection or connect to a
   TCP persistence stack still open to that specific server.  At any
   point that the service profile may require encryption, a TLS
   encryption tunnel will form, presenting the URL or IP address
   request to the server.  If using SNI, the server will then perform
   an SNI name check with the proposed FQDN compared to the domain
   embedded in the certificate.  Only when correct will the server
   process the HTTPS response object.  The initial response object to
   the server MUST NOT have a fixed size; its size is based on the
   benchmarking tests described in Section 7.  Multiple additional
   sub-URLs (response objects on the service page) MAY be requested
   simultaneously.  This MAY be to the same server IP as the initial
   URL.  Each sub-object will also use a canonical FQDN and URL path,
   as observed in the traffic mix used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  The loading
   of a traffic load profile has five distinct phases: Init, ramp up,
   sustain, ramp down, and collection.

   1.  During the Init phase, test bed devices, including the client
       and server endpoints, should negotiate layer 2-3 connectivity
       such as MAC learning and ARP.  Only after successful MAC
       learning or ARP/ND resolution SHALL the test iteration move to
       the next phase.  No measurements are made in this phase.  The
       minimum RECOMMENDED time for the Init phase is 5 seconds.
       During this phase, the emulated clients SHOULD NOT initiate any
       sessions with the DUT/SUT; in contrast, the emulated servers
       should be ready to accept requests from the DUT/SUT or from the
       emulated clients.

   2.  In the ramp up phase, the test equipment SHOULD start to
       generate the test traffic.  It SHOULD use a set approximate
       number of unique client IP addresses actively to generate
       traffic.  The traffic SHOULD ramp from zero to the desired
       target objective.  The target objective will be defined for
       each benchmarking test.  The duration of the ramp up phase MUST
       be configured long enough that the test equipment does not
       overwhelm the DUT/SUT's supported performance metrics, namely:
       connections per second, throughput, concurrent TCP connections,
       and application transactions per second.  No measurements are
       made in this phase.

   3.  In the sustain phase, the test equipment SHOULD continue
       generating traffic at a constant target value for a constant
       number of active client IPs.  The minimum RECOMMENDED time
       duration for the sustain phase is 300 seconds.  This is the
       phase where measurements occur.

   4.  In the ramp down/close phase, no new connections are
       established, and no measurements are made.  The time durations
       for the ramp up and ramp down phases SHOULD be the same.

   5.  The last phase is administrative and will occur when the test
       equipment merges and collates the report data.
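   The phase structure above can be represented as a simple profile
   object in test tooling.  This non-normative sketch uses
   illustrative ramp durations; only the Init and sustain minimums are
   RECOMMENDED by this document:

   <CODE BEGINS>
   from dataclasses import dataclass

   @dataclass
   class TrafficLoadProfile:
       """Five-phase load profile of Section 4.3.4 (the collection
       phase is administrative and has no fixed duration)."""
       init_seconds: int = 5         # minimum RECOMMENDED Init time
       ramp_up_seconds: int = 180    # illustrative; MUST be long
                                     # enough not to overwhelm the DUT
       sustain_seconds: int = 300    # minimum RECOMMENDED duration;
                                     # all measurements happen here
       ramp_down_seconds: int = 180  # SHOULD equal the ramp up time
   <CODE ENDS>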
5.  Test Bed Considerations

   This section recommends steps to control the test environment and
   test equipment, specifically focusing on virtualized environments
   and virtualized test equipment.

   1.  Ensure that any ancillary switching or routing functions
       between the system under test and the test equipment do not
       limit the performance of the traffic generator.  This is
       specifically important for virtualized components (vSwitches,
       vRouters).

   2.  Verify that the performance of the test equipment matches and
       reasonably exceeds the expected maximum performance of the
       system under test.

   3.  Assert that the test bed characteristics are stable during the
       entire test session.  Several factors might influence
       stability, specifically for virtualized test beds; for example,
       additional workloads in a virtualized system, load balancing
       and movement of virtual machines during the test, or simple
       issues such as additional heat created by high workloads
       leading to an emergency CPU performance reduction.

   Test bed reference pre-tests help to ensure that the test equipment
   can achieve the maximum desired traffic generator aspects, such as
   throughput, transactions per second, connections per second,
   concurrent connections, and latency.

   Once the desired maximum performance goals for the system under
   test have been identified, a safety margin of 10% SHOULD be added
   for throughput and subtracted for maximum latency and maximum
   packet loss.

   Test bed preparation may be performed either by configuring the DUT
   in the most trivial setup (fast forwarding) or without the presence
   of the DUT.
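   The 10% safety margin can be applied mechanically to the pre-test
   results; a non-normative sketch, assuming the three goals are
   available as plain numbers:

   <CODE BEGINS>
   def apply_safety_margin(max_throughput: float,
                           max_latency: float,
                           max_packet_loss: float,
                           margin: float = 0.10):
       """Derive test equipment targets from reference pre-test
       results per Section 5: add the margin for throughput,
       subtract it for maximum latency and maximum packet loss."""
       return (max_throughput * (1.0 + margin),
               max_latency * (1.0 - margin),
               max_packet_loss * (1.0 - margin))
   <CODE ENDS>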
6.  Reporting

   This section describes how the final report should be formatted and
   presented.  The final test report MAY have two major sections: an
   introduction section and a results section.  The following
   attributes SHOULD be present in the introduction section of the
   test report.

   1.  The name of the NetSecOPEN traffic mix (see Appendix A) MUST be
       prominent.

   2.  The time and date of the execution of the test MUST be
       prominent.

   3.  Summary of test bed software and hardware details

       A.  DUT Hardware/Virtual Configuration

           +  This section SHOULD clearly identify the make and model
              of the DUT

           +  The port interfaces, including speed and link
              information, MUST be documented.

           +  If the DUT is a Virtual Network Function (VNF),
              interface acceleration such as DPDK and SR-IOV MUST be
              documented, as well as the cores used, RAM used, and the
              pinning / resource sharing configuration.  The
              hypervisor and its version MUST be documented.

           +  Any additional hardware relevant to the DUT, such as
              controllers, MUST be documented

       B.  DUT Software

           +  The operating system name MUST be documented

           +  The version MUST be documented

           +  The specific configuration MUST be documented

       C.  DUT Enabled Features

           +  Configured DUT/SUT features (see Table 1) MUST be
              documented

           +  Attributes of those features MUST be documented

           +  Any additional relevant information about features MUST
              be documented

       D.  Test equipment hardware and software

           +  Test equipment vendor name

           +  Hardware details, including model number and interface
              type

           +  Test equipment firmware and test application software
              version

   4.  Results Summary / Executive Summary

       1.  Results SHOULD resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy to read block.

       2.  In the results section of the test report, the following
           attributes should be present for each test scenario.

           a.  KPIs MUST be documented separately for each test
               scenario.  The format of the KPI metrics should be
               presented as described in Section 6.1.

           b.  The next level of detail SHOULD be graphs showing each
               of these metrics over the duration (sustain phase) of
               the test.  This allows the user to see the measured
               performance stability changes over time.

6.1.  Key Performance Indicators

   This section lists KPIs for all benchmarking test scenarios.  All
   KPIs MUST be measured during the sustain phase of the traffic load
   profile described in Section 4.3.4.  All KPIs MUST be measured from
   the result output of the test equipment.

   o  Concurrent TCP Connections
      This key performance indicator measures the average concurrent
      open TCP connections in the sustaining period.

   o  TCP Connections Per Second
      This key performance indicator measures the average established
      TCP connections per second in the sustaining period.  For the
      "TCP/HTTP(S) Connections Per Second" benchmarking test scenario,
      the KPI is the average of the established and terminated TCP
      connections per second, measured simultaneously.

   o  Application Transactions Per Second
      This key performance indicator measures the average successfully
      completed application transactions per second in the sustaining
      period.

   o  TLS Handshake Rate
      This key performance indicator measures the average TLS 1.2 or
      higher session formation rate within the sustaining period.

   o  Throughput
      This key performance indicator measures the average Layer 2
      throughput within the sustaining period, as well as the average
      packets per second within the same period.  The value of
      throughput SHOULD be presented in Gbit/s, rounded to two places
      of precision, with a more specific Kbit/s in parentheses.
      Optionally, goodput MAY also be logged as an average goodput
      rate measured over the same period.  The goodput result SHALL
      also be presented in the same format as throughput.

   o  URL Response time / Time to Last Byte (TTLB)
      This key performance indicator measures the minimum, average,
      and maximum per-URL response time in the sustaining period.  The
      latency is measured at the client and in this case is the time
      duration between sending a GET request from the client and
      receipt of the complete response from the server.

   o  Time to First Byte (TTFB)
      This key performance indicator measures the minimum, average,
      and maximum time to first byte.  TTFB is the elapsed time
      between sending the SYN packet from the client and receiving the
      first byte of application data from the DUT/SUT.  TTFB SHOULD be
      expressed in milliseconds.
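   For example, the RECOMMENDED throughput presentation (Gbit/s to two
   places of precision with a more specific Kbit/s in parentheses) can
   be produced as follows; the helper name is illustrative:

   <CODE BEGINS>
   def format_throughput(bits_per_second: float) -> str:
       """Render a measured average throughput per Section 6.1."""
       gbps = bits_per_second / 1e9
       kbps = bits_per_second / 1e3
       return f"{gbps:.2f} Gbit/s ({kbps:,.0f} Kbit/s)"

   # Example:
   print(format_throughput(9_876_543_210))
   # -> "9.88 Gbit/s (9,876,543 Kbit/s)"
   <CODE ENDS>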
7.  Benchmarking Tests

7.1.  Throughput Performance With NetSecOPEN Traffic Mix

7.1.1.  Objective

   Using the NetSecOPEN traffic mix, determine the maximum sustainable
   throughput performance supported by the DUT/SUT (see Appendix A for
   details about the traffic mix).

   It is RECOMMENDED to perform this test scenario twice: once with
   the SSL inspection feature enabled on the DUT/SUT and once with it
   disabled.

7.1.2.  Test Setup

   The test bed setup MUST be configured as defined in Section 4.  Any
   test scenario specific test bed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be noted for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target throughput: It can be defined based on requirements.
      Otherwise, it represents the aggregated line rate of the
      interface(s) used in the DUT/SUT

      Initial throughput: 10% of the "Target throughput"

   One of the ciphers and keys defined in Section 4.3.1.3 is
   RECOMMENDED for this test scenario.

7.1.3.3.  Traffic Profile

   Traffic profile: The test scenario MUST be run with a single
   application traffic mix profile (see Appendix A for details about
   the traffic mix).  The name of the NetSecOPEN traffic mix MUST be
   documented.

7.1.3.4.  Test Results Validation Criteria

   The following criteria are defined as test results validation
   criteria.  Test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of total attempted transactions

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections

   c.  The maximum deviation (max. dev) of the URL Response Time or
       TTLB (Time To Last Byte) MUST be less than X (the value for "X"
       will be finalized and updated after completion of PoC tests).
       The following equation MUST be used to calculate the deviation
       of the URL Response Time or TTLB:

          max. dev = max((avg_latency - min_latency),
                         (max_latency - avg_latency))
                     / (initial latency)

       where the initial latency is calculated using the following
       equation.  For this calculation, the latency values (min',
       avg', and max') MUST be measured during test procedure step 1
       as defined in Section 7.1.4.1.  The variable latency represents
       the URL Response Time or TTLB.

          initial latency := min((avg' latency - min' latency),
                                 (max' latency - avg' latency))

   d.  The maximum value of Time to First Byte (TTFB) MUST be less
       than X
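   The deviation calculation of criterion "c" can be implemented
   directly from the two equations above; a non-normative sketch:

   <CODE BEGINS>
   def max_deviation(min_lat: float, avg_lat: float,
                     max_lat: float, min_i: float,
                     avg_i: float, max_i: float) -> float:
       """Maximum TTLB / URL response time deviation per Section
       7.1.3.4.  min_i/avg_i/max_i are the min'/avg'/max' latencies
       measured during test procedure step 1 (Section 7.1.4.1)."""
       initial_latency = min(avg_i - min_i, max_i - avg_i)
       dev = max(avg_lat - min_lat, max_lat - avg_lat)
       return dev / initial_latency
   <CODE ENDS>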
7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this test scenario.

   Mandatory KPIs: average Throughput, TTFB (minimum, average and
   maximum), TTLB (minimum, average and maximum) and average
   Application Transactions Per Second

   Note: TTLB MUST be reported along with the min, max and avg object
   size used in the traffic profile.

   Optional KPIs: average TCP Connections Per Second and average TLS
   Handshake Rate

7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the throughput
   performance of the DUT/SUT at the sustaining period of the traffic
   load profile.  The test procedure consists of three major steps.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   generate test traffic at the "Initial throughput" rate as described
   in the parameters Section 7.1.3.2.  The test equipment SHOULD
   follow the traffic load profile definition as described in
   Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial throughput"
   during the sustain phase.  Measure all KPIs as defined in
   Section 7.1.3.5.  The measured KPIs during the sustain phase MUST
   meet validation criteria "a" and "b" defined in Section 7.1.3.4.

   If the KPI metrics do not meet the validation criteria, the test
   procedure MUST NOT be continued to step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   throughput" rate defined in the parameter table.  The test
   equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.  The test equipment SHOULD start to
   measure and record all specified KPIs.  The frequency of KPI metric
   measurements SHOULD be 2 seconds.  Continue the test until all
   traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target throughput
   during the sustain phase.  In addition, the measured KPIs MUST meet
   all validation criteria.  Follow step 3, if the KPI metrics do not
   meet the validation criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable throughput within the
   validation criteria.  The final test iteration MUST be performed
   for the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the maximum sustainable TCP
   connection establishment rate supported by the DUT/SUT under
   different throughput load conditions.

   To measure connections per second, test iterations MUST use the
   different fixed HTTP response object sizes defined in
   Section 7.2.3.2.

7.2.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as the number of
   interfaces and interface type, etc., MUST be documented.

7.2.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target connections per second: Initial value from the product
      datasheet (if known)

      Initial connections per second: 10% of "Target connections per
      second" (an optional parameter for documentation)

   The client SHOULD negotiate HTTP 1.1 and close the connection with
   FIN immediately after completion of one transaction.  In each test
   iteration, the client MUST send a GET command requesting a fixed
   HTTP response object size.
   The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64
   KByte.

7.2.3.3.  Test Results Validation Criteria

   The following criteria are defined as test results validation
   criteria.  Test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of total attempted transactions

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate

   d.  Concurrent TCP connections MUST be constant during steady state
       and any deviation of concurrent TCP connections SHOULD be less
       than 10%.  This confirms that the DUT opens and closes TCP
       connections at almost the same rate

7.2.3.4.  Measurement

   The following KPI metric MUST be reported for each test iteration:

   average TCP Connections Per Second
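   A test harness can check criteria "a", "b", and "d" of Section
   7.2.3.3 numerically.  This non-normative sketch assumes
   per-interval samples of concurrent TCP connections collected
   during the sustain phase:

   <CODE BEGINS>
   def sustain_phase_valid(failed_txn: int, total_txn: int,
                           rst_closed: int, total_conn: int,
                           concurrent_samples: list) -> bool:
       """Criteria of Section 7.2.3.3: a) < 0.001% failed
       transactions, b) < 0.001% unexpected RST terminations,
       d) < 10% deviation of concurrent TCP connections."""
       a = failed_txn / total_txn < 0.00001
       b = rst_closed / total_conn < 0.00001
       avg = sum(concurrent_samples) / len(concurrent_samples)
       d = all(abs(s - avg) / avg < 0.10
               for s in concurrent_samples)
       return a and b and d
   <CODE ENDS>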
7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT at the sustaining period of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IP types: IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic
   distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "Initial connections per second" as defined in the
   parameters Section 7.2.3.2.  The traffic load profile SHOULD be
   defined as described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "Initial connections per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet validation criteria a, b, c, and d defined in
   Section 7.2.3.3.

   If the KPI metrics do not meet the validation criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.2.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target connections
   per second" defined in the parameters table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases of each test iteration, other
   KPIs such as throughput, concurrent TCP connections, and
   application transactions per second MUST NOT reach the maximum
   value the DUT/SUT can support.  The test results for specific test
   iterations SHOULD NOT be reported if the above mentioned KPIs
   (especially throughput) reach the maximum value.  (Example: If the
   test iteration with a 64 KByte HTTP response object size reached
   the maximum throughput limitation of the DUT, the test iteration
   MAY be interrupted and the result for 64 KByte SHOULD NOT be
   reported.)

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
   the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target connections per
   second rate at the sustain phase.  In addition, the measured KPIs
   MUST meet all validation criteria.

   Follow step 3, if the KPI metrics do not meet the validation
   criteria.

7.2.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable connections per second
   within the validation criteria.

7.3.  HTTP Throughput

7.3.1.  Objective

   Determine the throughput for HTTP transactions varying the HTTP
   response object size.

7.3.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as the number of
   interfaces and interface type, etc., MUST be documented.

7.3.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.3.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target Throughput: Initial value from the product datasheet (if
      known)

      Initial Throughput: 10% of "Target Throughput" (an optional
      parameter for documentation)

      Number of HTTP response object requests (transactions) per
      connection: 10

      RECOMMENDED HTTP response object sizes: 1 KByte, 16 KByte, 64
      KByte, 256 KByte, and the mixed objects defined in Table 4

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   | 0.2                 | 1                   |
   +---------------------+---------------------+
   | 6                   | 1                   |
   +---------------------+---------------------+
   | 8                   | 1                   |
   +---------------------+---------------------+
   | 9                   | 1                   |
   +---------------------+---------------------+
   | 10                  | 1                   |
   +---------------------+---------------------+
   | 25                  | 1                   |
   +---------------------+---------------------+
   | 26                  | 1                   |
   +---------------------+---------------------+
   | 35                  | 1                   |
   +---------------------+---------------------+
   | 59                  | 1                   |
   +---------------------+---------------------+
   | 347                 | 1                   |
   +---------------------+---------------------+

               Table 4: Mixed Objects
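   When the mixed-object profile of Table 4 is used, each object size
   carries a weight of 1, i.e., requests are spread uniformly across
   the ten sizes; a non-normative sketch:

   <CODE BEGINS>
   import random

   # Object sizes in KByte from Table 4; all weights are 1, so a
   # uniform choice reproduces the mix.
   MIXED_OBJECT_SIZES_KBYTE = [0.2, 6, 8, 9, 10, 25, 26, 35, 59, 347]

   def next_response_object_size() -> float:
       return random.choice(MIXED_OBJECT_SIZES_KBYTE)
   <CODE ENDS>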
7.3.3.3.  Test Results Validation Criteria

   The following criteria are defined as test results validation
   criteria.  Test results validation criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions (receiving any
       HTTP response code other than 200 OK) MUST be less than 0.001%
       (1 out of 100,000 transactions) of attempted transactions.

   b.  Traffic should be forwarded constantly.

   c.  Concurrent TCP connections MUST be constant during steady state
       and any deviation of concurrent TCP connections SHOULD be less
       than 10%.  This confirms that the DUT opens and closes TCP
       connections at almost the same rate

7.3.3.4.  Measurement

   The following KPI metrics MUST be reported for this test scenario:

   average Throughput and average HTTP Transactions per Second

7.3.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the HTTP throughput of
   the DUT/SUT.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IPv4 and IPv6 traffic distributions and HTTP response object sizes.

7.3.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "Initial Throughput" as defined in the parameters
   Section 7.3.3.2.

   The traffic load profile SHOULD be defined as described in
   Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial Throughput"
   during the sustain phase.  Measure all KPIs as defined in
   Section 7.3.3.4.

   The measured KPIs during the sustain phase MUST meet the validation
   criterion "a" defined in Section 7.3.3.3.

   If the KPI metrics do not meet the validation criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.3.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target Throughput"
   defined in the parameters table.  The test equipment SHOULD start
   to measure and record all specified KPIs.  The frequency of
   measurement SHOULD be 2 seconds.  Continue the test until all
   traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired "Target Throughput" at
   the sustain phase.  In addition, the measured KPIs MUST meet all
   validation criteria.

   Perform the test separately for each HTTP response object size.

   Follow step 3, if the KPI metrics do not meet the validation
   criteria.

7.3.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable throughput within the
   validation criteria.  The final test iteration MUST be performed
   for the test duration defined in Section 4.3.4.

7.4.  TCP/HTTP Transaction Latency

7.4.1.  Objective

   Using HTTP traffic, determine the average HTTP transaction latency
   when the DUT is running with the sustainable HTTP transactions per
   second supported by the DUT/SUT under different HTTP response
   object sizes.

   Test iterations MUST be performed with different HTTP response
   object sizes in two different scenarios: one with a single
   transaction and the other with multiple transactions within a
   single TCP connection.  For consistency, both the single and
   multiple transaction tests MUST be configured with HTTP 1.1.

   Scenario 1: The client MUST negotiate HTTP 1.1 and close the
   connection with FIN immediately after completion of a single
   transaction (GET and RESPONSE).

   Scenario 2: The client MUST negotiate HTTP 1.1 and close the
   connection with FIN immediately after completion of 10 transactions
   (GET and RESPONSE) within a single TCP connection.

7.4.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as the number of
   interfaces and interface type, etc., MUST be documented.

7.4.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.4.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.
7.4.2. Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as number of interfaces
and interface type, MUST be documented.

7.4.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.4.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.4.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target objective for scenario 1: 50% of the maximum connections per
second measured in test scenario TCP/HTTP Connections Per Second
(Section 7.2)

Target objective for scenario 2: 50% of the maximum throughput
measured in test scenario HTTP Throughput (Section 7.3)

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTP transactions per TCP connection: test scenario 1 with a single
transaction and test scenario 2 with 10 transactions

HTTP 1.1 with GET command requesting a single object.  The
RECOMMENDED object sizes are 1, 16 or 64 KByte.  For each test
iteration, the client MUST request a single HTTP response object
size.

7.4.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  The ramp up and
ramp down phases SHOULD NOT be considered.

Generic criteria:

a.  Number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections almost at the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameter Section 7.4.3.2 and remain in that state
    for the entire test duration (sustain phase).

7.4.3.4. Measurement

The following KPI metrics MUST be reported for each test scenario and
HTTP response object size separately:

TTFB (minimum, average and maximum) and TTLB (minimum, average and
maximum)

All KPIs are measured once the target throughput achieves steady
state.

7.4.4. Test Procedures and Expected Results

The test procedure is designed to measure the average application
transaction latencies or TTLB when the DUT is operating close to 50%
of its maximum achievable throughput or connections per second.  This
test procedure MAY be repeated multiple times with different IP types
(IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic distribution),
HTTP response object sizes and single and multiple transactions per
connection scenarios.

7.4.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.
Configure the traffic load profile of the test equipment to establish
"Initial objective" as defined in the parameters Section 7.4.3.2.
The traffic load profile SHOULD be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the
validation criteria a, b, c, d and e defined in Section 7.4.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT be continued to "Step 2".

7.4.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as throughput,
concurrent TCP connections and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for specific test iterations SHOULD NOT be reported if
the above mentioned KPIs (especially throughput) reach the maximum
value.  (Example: If the test iteration with a 64 KByte HTTP response
object size reached the maximum throughput limitation of the DUT, the
test iteration MAY be interrupted and the result for 64 KByte SHOULD
NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.  The DUT/SUT
is expected to reach the desired "Target objective" at the sustain
phase.  In addition, the measured KPIs MUST meet all validation
criteria.

If the KPI metrics do not meet the validation criteria, follow
Step 3.

7.4.4.3. Step 3: Test Iteration

Determine the maximum achievable connections per second within the
validation criteria and measure the latency values.

7.5. Concurrent TCP/HTTP Connection Capacity

7.5.1. Objective

Determine the maximum number of concurrent TCP connections that the
DUT/SUT sustains when using HTTP traffic.

7.5.2. Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as number of interfaces
and interface type, MUST be documented.

7.5.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.5.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.5.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target concurrent connections: Initial value from product datasheet
(if known)

Initial concurrent connections: 10% of "Target concurrent
connections" (an optional parameter for documentation)

Maximum connections per second during ramp up phase: 50% of maximum
connections per second measured in test scenario TCP/HTTP Connections
Per Second (Section 7.2)

Ramp up time (in traffic load profile for "Target concurrent
connections"): "Target concurrent connections" / "Maximum connections
per second during ramp up phase"

Ramp up time (in traffic load profile for "Initial concurrent
connections"): "Initial concurrent connections" / "Maximum
connections per second during ramp up phase"

The client MUST negotiate HTTP 1.1 with persistence and each client
MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting a 1 KByte HTTP response
object in the same TCP connection (10 transactions/TCP connection)
and the delay (think time) between transactions MUST be X seconds.

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.
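The two derived quantities above are simple divisions; a worked
sketch (Python; the example figures are purely illustrative):

   def capacity_test_timing(target_concurrent, ramp_cps, steady_state):
       """Derive the ramp up time and the think time X defined in
       Section 7.5.3.2 (times in seconds)."""
       # Ramp up time = "Target concurrent connections" divided by
       # "Maximum connections per second during ramp up phase".
       ramp_up_time = target_concurrent / ramp_cps
       # X = ("Ramp up time" + "steady state time") / 10
       think_time = (ramp_up_time + steady_state) / 10
       return ramp_up_time, think_time

   # Example: 1,000,000 target concurrent connections ramped at
   # 5,000 connections per second with a 600 second steady state
   # yields a 200 second ramp up and a think time X of 80 seconds.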
7.5.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded constantly.

7.5.3.4. Measurement

The following KPI metric MUST be reported for this test scenario:

average Concurrent TCP Connections

7.5.4. Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distribution.

7.5.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish "Initial concurrent TCP
connections" defined in Section 7.5.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the validation criteria "a" and "b" defined in
Section 7.5.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT be continued to "Step 2".

7.5.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target concurrent TCP
connections".  The test equipment SHOULD follow the traffic load
profile definition (except ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phases, the other KPIs such as
throughput, TCP connections per second and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.5.3.4.  The frequency of measurement SHOULD be
2 seconds.  Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target concurrent
connections at the sustain phase.  In addition, the measured KPIs
MUST meet all validation criteria.

If the KPI metrics do not meet the validation criteria, follow
Step 3.

7.5.4.3. Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP
connection capacity within the validation criteria.

7.6. TCP/HTTPS Connections per Second

7.6.1. Objective

Using HTTPS traffic, determine the maximum sustainable SSL/TLS
session establishment rate supported by the DUT/SUT under different
throughput load conditions.

Test iterations MUST include common cipher suites and key strengths
as well as forward-looking stronger keys.  Specific test iterations
MUST include ciphers and keys defined in Section 7.6.3.2.

For each cipher suite and key strength, test iterations MUST use a
single HTTPS response object size defined in the test equipment
configuration parameters (Section 7.6.3.2) to measure connections per
second performance under a variety of DUT security inspection load
conditions.

7.6.2. Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as number of interfaces
and interface type, MUST be documented.

7.6.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.6.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.6.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target connections per second: Initial value from product datasheet
(if known)

Initial connections per second: 10% of "Target connections per
second" (an optional parameter for documentation)

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

The client MUST negotiate HTTP 1.1 over TLS (HTTPS) and close the
connection with FIN immediately after completion of one transaction.
In each test iteration, the client MUST send a GET command requesting
a fixed HTTPS response object size.  The RECOMMENDED object sizes are
1, 2, 4, 16 and 64 KByte.
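The TLS Handshake Rate named in the measurement subsection below can
be approximated with a loop of full handshakes.  A minimal
single-threaded sketch (Python; the server address is a placeholder,
certificate verification is disabled only because benchmarking test
beds commonly use test CAs, and real test equipment parallelizes this
across many client sources):

   import socket
   import ssl
   import time

   def tls_handshake_rate(host, port, duration):
       """Count completed TLS handshakes per second against one
       server; each connection is closed right after the handshake."""
       ctx = ssl.create_default_context()
       ctx.check_hostname = False
       ctx.verify_mode = ssl.CERT_NONE        # test environments only
       handshakes = 0
       deadline = time.monotonic() + duration
       while time.monotonic() < deadline:
           with socket.create_connection((host, port)) as raw:
               with ctx.wrap_socket(raw, server_hostname=host):
                   handshakes += 1            # full handshake done
       return handshakes / duration

   # rate = tls_handshake_rate("server.example.test", 443, 10)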
7.6.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria:

a.  Number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections almost at the same rate.

7.6.3.4. Measurement

The following KPI metrics MUST be reported for this test scenario:

average TCP Connections Per Second, average TLS Handshake Rate (the
TLS Handshake Rate can be measured in the test scenario using the
1 KByte object size)

7.6.4. Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT during the sustain phase of the traffic
load profile.  The test procedure consists of three major steps.
This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distribution.

7.6.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
"Initial connections per second" as defined in Section 7.6.3.2.  The
traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet the validation criteria a, b, c and d defined in
Section 7.6.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT be continued to "Step 2".

7.6.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target connections per
second" defined in the parameters table.  The test equipment SHOULD
follow the traffic load profile definition as described in
Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as throughput,
concurrent TCP connections and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for a specific test iteration SHOULD NOT be reported if
the above mentioned KPIs (especially throughput) reach the maximum
value.  (Example: If the test iteration with a 64 KByte HTTPS
response object size reached the maximum throughput limitation of the
DUT, the test iteration MAY be interrupted and the result for
64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target connections per
second rate at the sustain phase.  In addition, the measured KPIs
MUST meet all validation criteria.

If the KPI metrics do not meet the validation criteria, follow
Step 3.
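The test iteration described in Step 3 below (and the corresponding
Step 3 of the other test scenarios) is commonly automated as a search
between the last passing and the first failing load level.  The
search strategy itself is not mandated by this methodology; a binary
search sketch (Python):

   def find_maximum(run_trial, target, precision=0.01):
       """Search for the highest load level that still meets all
       validation criteria.

       run_trial(level) -- runs one complete traffic load profile at
                           the given target level and returns True if
                           all validation criteria were met during
                           the sustain phase.
       """
       low, high, best = 0.0, float(target), None
       while high - low > high * precision:
           mid = (low + high) / 2
           if run_trial(mid):
               best, low = mid, mid   # criteria met: search upwards
           else:
               high = mid             # criteria failed: back off
       return best

The last passing level is then reported as the maximum achievable
value.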
7.6.4.3. Step 3: Test Iteration

Determine the maximum and average achievable connections per second
within the validation criteria.

7.7. HTTPS Throughput

7.7.1. Objective

Determine the throughput for HTTPS transactions varying the HTTPS
response object size.

Test iterations MUST include common cipher suites and key strengths
as well as forward-looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in the parameter
Section 7.7.3.2.

7.7.2. Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as number of interfaces
and interface type, MUST be documented.

7.7.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.7.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.7.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target Throughput: Initial value from product datasheet (if known)

Initial Throughput: 10% of "Target Throughput" (an optional parameter
for documentation)

Number of HTTPS response object requests (transactions) per
connection: 10

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

RECOMMENDED HTTPS response object sizes: 1 KByte, 2 KByte, 4 KByte,
16 KByte, 64 KByte, 256 KByte and mixed objects defined in the table
below.

+---------------------+---------------------+
| Object size (KByte) | Number of requests/ |
|                     | Weight              |
+---------------------+---------------------+
| 0.2                 | 1                   |
+---------------------+---------------------+
| 6                   | 1                   |
+---------------------+---------------------+
| 8                   | 1                   |
+---------------------+---------------------+
| 9                   | 1                   |
+---------------------+---------------------+
| 10                  | 1                   |
+---------------------+---------------------+
| 25                  | 1                   |
+---------------------+---------------------+
| 26                  | 1                   |
+---------------------+---------------------+
| 35                  | 1                   |
+---------------------+---------------------+
| 59                  | 1                   |
+---------------------+---------------------+
| 347                 | 1                   |
+---------------------+---------------------+

Table 5: Mixed Objects

7.7.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Traffic SHOULD be forwarded constantly.

c.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections almost at the same rate.
7.7.3.4. Measurement

The following KPI metrics MUST be reported for this test scenario:

average Throughput and average HTTPS Transactions Per Second

7.7.4. Test Procedures and Expected Results

The test procedure consists of three major steps.  This test
procedure MAY be repeated multiple times with different IPv4 and IPv6
traffic distribution and HTTPS response object sizes.

7.7.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
"Initial Throughput" as defined in the parameters Section 7.7.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial Throughput"
during the sustain phase.  Measure all KPIs as defined in
Section 7.7.3.4.

The measured KPIs during the sustain phase MUST meet the validation
criterion "a" defined in Section 7.7.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT be continued to "Step 2".

7.7.4.2. Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired "Target Throughput" at
the sustain phase.  In addition, the measured KPIs MUST meet all
validation criteria.

Perform the test separately for each HTTPS response object size.

If the KPI metrics do not meet the validation criteria, follow
Step 3.

7.7.4.3. Step 3: Test Iteration

Determine the maximum and average achievable throughput within the
validation criteria.  The final test iteration MUST be performed for
the test duration defined in Section 4.3.4.

7.8. HTTPS Transaction Latency

7.8.1. Objective

Using HTTPS traffic, determine the average HTTPS transaction latency
when the DUT is running with sustainable HTTPS transactions per
second supported by the DUT/SUT under different HTTPS response object
sizes.

Scenario 1: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.

7.8.2. Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as number of interfaces
and interface type, MUST be documented.

7.8.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.8.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.8.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target objective for scenario 1: 50% of the maximum connections per
second measured in test scenario TCP/HTTPS Connections per Second
(Section 7.6)

Target objective for scenario 2: 50% of the maximum throughput
measured in test scenario HTTPS Throughput (Section 7.7)

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTPS transactions per TCP connection: test scenario 1 with a single
transaction and test scenario 2 with 10 transactions

HTTP 1.1 over TLS (HTTPS) with GET command requesting a single 1, 16
or 64 KByte object.  For each test iteration, the client MUST request
a single HTTPS response object size.

7.8.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  The ramp up and
ramp down phases SHOULD NOT be considered.

Generic criteria:

a.  Number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms that the DUT opens and closes TCP
    connections almost at the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameter Section 7.8.3.2 and remain in that state
    for the entire test duration (sustain phase).

7.8.3.4. Measurement

The following KPI metrics MUST be reported for each test scenario and
HTTPS response object size separately:

TTFB (minimum, average and maximum) and TTLB (minimum, average and
maximum)

All KPIs are measured once the target connections per second rate
achieves steady state.

7.8.4. Test Procedures and Expected Results

The test procedure is designed to measure the average TTFB or TTLB
when the DUT is operating close to 50% of its maximum achievable
connections per second.  This test procedure MAY be repeated multiple
times with different IP types (IPv4 only, IPv6 only and IPv4 and IPv6
mixed traffic distribution), HTTPS response object sizes and single
and multiple transactions per connection scenarios.

7.8.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
"Initial objective" as defined in the parameters Section 7.8.3.2.
The traffic load profile SHOULD be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the
validation criteria a, b, c, d and e defined in Section 7.8.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT be continued to "Step 2".

7.8.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish the "Target objective"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phases, other KPIs such as throughput,
concurrent TCP connections and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for specific test iterations SHOULD NOT be reported if
the above mentioned KPIs (especially throughput) reach the maximum
value.  (Example: If the test iteration with a 64 KByte HTTPS
response object size reached the maximum throughput limitation of the
DUT, the test iteration MAY be interrupted and the result for
64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement SHOULD be 2 seconds.  Continue
the test until all traffic profile phases are completed.  The DUT/SUT
is expected to reach the desired "Target objective" at the sustain
phase.  In addition, the measured KPIs MUST meet all validation
criteria.

If the KPI metrics do not meet the validation criteria, follow
Step 3.

7.8.4.3. Step 3: Test Iteration

Determine the maximum achievable connections per second within the
validation criteria and measure the latency values.

7.9. Concurrent TCP/HTTPS Connection Capacity

7.9.1. Objective

Determine the maximum number of concurrent TCP connections that the
DUT/SUT sustains when using HTTPS traffic.

7.9.2. Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes, such as number of interfaces
and interface type, MUST be documented.

7.9.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.9.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.9.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

RECOMMENDED cipher suites and key sizes defined in Section 4.3.1.3

Target concurrent connections: Initial value from product datasheet
(if known)

Initial concurrent connections: 10% of "Target concurrent
connections" (an optional parameter for documentation)

Maximum connections per second during ramp up phase: 50% of maximum
connections per second measured in test scenario TCP/HTTPS
Connections per Second (Section 7.6)

Ramp up time (in traffic load profile for "Target concurrent
connections"): "Target concurrent connections" / "Maximum connections
per second during ramp up phase"

Ramp up time (in traffic load profile for "Initial concurrent
connections"): "Initial concurrent connections" / "Maximum
connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence and each
client MAY open multiple concurrent TCP connections per server
endpoint IP.

Each client sends 10 GET commands requesting 1 KByte HTTPS response
objects in the same TCP connection (10 transactions/TCP connection)
and the delay (think time) between transactions MUST be X seconds.

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.
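The ramp up behavior described above, opening connections at a fixed
pace and holding them open with spaced transactions, can be sketched
as follows (Python; single-threaded and purely illustrative, with a
placeholder server address; certificate verification is disabled only
because benchmarking test beds commonly use test CAs):

   import socket
   import ssl
   import time

   def ramp_up_tls_pool(host, port, count, cps):
       """Open "count" TLS connections at "cps" connections per
       second and keep them open, as the ramp up phase of this test
       does."""
       ctx = ssl.create_default_context()
       ctx.check_hostname = False
       ctx.verify_mode = ssl.CERT_NONE        # test environments only
       pool = []
       for _ in range(count):
           raw = socket.create_connection((host, port))
           pool.append(ctx.wrap_socket(raw, server_hostname=host))
           time.sleep(1.0 / cps)              # fixed ramp up pace
       return pool   # held open; GETs spaced by the think time X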
7.9.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded constantly.

7.9.3.4. Measurement

The following KPI metric MUST be reported for this test scenario:

average Concurrent TCP Connections

7.9.4. Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distribution.

7.9.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the test equipment to establish "Initial concurrent TCP
connections" defined in Section 7.9.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the validation criteria "a" and "b" defined in
Section 7.9.3.3.

If the KPI metrics do not meet the validation criteria, the test
procedure MUST NOT be continued to "Step 2".

7.9.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target concurrent TCP
connections".  The test equipment SHOULD follow the traffic load
profile definition (except ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phases, the other KPIs such as
throughput, TCP connections per second and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.

The test equipment SHOULD start to measure and record the KPIs
defined in Section 7.9.3.4.  The frequency of measurement SHOULD be
2 seconds.  Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target concurrent
connections at the sustain phase.  In addition, the measured KPIs
MUST meet all validation criteria.

If the KPI metrics do not meet the validation criteria, follow
Step 3.

7.9.4.3. Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP
connections within the validation criteria.

8. IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an
RFC.

9. Security Considerations

The primary goal of this document is to provide benchmarking
terminology and methodology for next-generation network security
devices.  However, readers should be aware that there is some overlap
between performance and security issues.  Specifically, the optimal
configuration for network security device performance may not be the
most secure, and vice versa.  The cipher suites recommended in this
document are for testing purposes only.  Cipher suite recommendations
for real deployments are outside the scope of this document.

10. Acknowledgements

Acknowledgements will be added in a future release.

11. Contributors

The authors would like to thank the many people that contributed
their time and knowledge to this effort.

Specifically, to the co-chairs of the NetSecOPEN Test Methodology
working group and the NetSecOPEN Security Effectiveness working group
- Alex Samonte, Aria Eslambolchizadeh, Carsten Rossenhoevel and David
DeSanto.

Additionally, the following people provided input, comments and spent
time reviewing the myriad of drafts.  If we have missed anyone, the
fault is entirely our own.  Thanks to - Amritam Putatunda, Chao Guo,
Chris Chapman, Chris Pearson, Chuck McAuley, David White, Jurrie Van
Den Breekel, Michelle Rhines, Rob Andrews, Samaresh Nair, and Tim
Winters.

12. References

12.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119,
          DOI 10.17487/RFC2119, March 1997,
          <https://www.rfc-editor.org/info/rfc2119>.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
          2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
          May 2017, <https://www.rfc-editor.org/info/rfc8174>.

12.2. Informative References

[RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
          Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
          Transfer Protocol -- HTTP/1.1", RFC 2616,
          DOI 10.17487/RFC2616, June 1999,
          <https://www.rfc-editor.org/info/rfc2616>.
[RFC2647] Newman, D., "Benchmarking Terminology for Firewall
          Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999,
          <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
          "Benchmarking Methodology for Firewall Performance",
          RFC 3511, DOI 10.17487/RFC3511, April 2003,
          <https://www.rfc-editor.org/info/rfc3511>.

[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
          Control", RFC 5681, DOI 10.17487/RFC5681, September 2009,
          <https://www.rfc-editor.org/info/rfc5681>.

Appendix A. NetSecOPEN Basic Traffic Mix

A traffic mix for testing the performance of next generation
firewalls MUST scale to stress the DUT based on real-world
conditions.  In order to achieve this, the following MUST be
included:

o  Clients connecting to multiple different server FQDNs per
   application

o  Clients loading apps and pages with connections and objects in
   specific orders

o  Multiple unique certificates for HTTPS/TLS

o  A wide variety of different object sizes

o  Different URL paths

o  Mix of HTTP and HTTPS

A traffic mix for testing the performance of next generation
firewalls MUST also facilitate application identification using
different detection methods, with and without decryption of the
traffic, such as:

o  HTTP HOST based application detection

o  HTTPS/TLS Server Name Indication (SNI)

o  Certificate Subject Common Name (CN)

The mix MUST be of sufficient complexity and volume to render
differences in individual apps statistically insignificant.  For
example, changes between like applications are modest: one type of
video service vs. another both consist of larger objects, whereas one
news site vs. another both typically have more connections than other
apps because of trackers and embedded advertising content.  To
achieve sufficient complexity, a mix MUST have:

o  Thousands of URLs each client walks through

o  Hundreds of FQDNs each client connects to

o  Hundreds of unique certificates for HTTPS/TLS

o  Thousands of different object sizes per client in orders matching
   applications
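In a machine readable form, such a mix can be described per
application and checked against the complexity floors listed above.
A sketch (Python; the input format is an assumption of this
illustration, and the entry shown reuses the Office365 figures from
Table 9 below):

   # One entry per application; figures taken from Table 9 below.
   mix = [
       {"app": "Office365", "fqdns": 26, "transactions": 558,
        "bytes": 52931947},
       # ... dozens of further applications ...
   ]

   def sufficiently_complex(mix, urls_per_client, object_sizes):
       """Check the complexity floors: thousands of URLs, hundreds
       of FQDNs and thousands of distinct object sizes per client."""
       return (urls_per_client >= 1000
               and sum(app["fqdns"] for app in mix) >= 100
               and len(object_sizes) >= 1000)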
The following is a description of what a popular application in an
enterprise traffic mix contains.

As an example, Table 6 lists the FQDNs, the number of transactions
and the bytes transferred for client interactions with Office 365
Outlook, Word, Excel, PowerPoint, SharePoint and Skype.

+---------------------------------+------------+--------------+
| Office365 FQDN                  | Bytes      | Transactions |
+==============================================================+
| r1.res.office365.com            | 14,056,960 | 192          |
+---------------------------------+------------+--------------+
| s1-word-edit-15.cdn.office.net  | 6,731,019  | 22           |
+---------------------------------+------------+--------------+
| company1-my.sharepoint.com      | 6,269,492  | 42           |
+---------------------------------+------------+--------------+
| swx.cdn.skype.com               | 6,100,027  | 12           |
+---------------------------------+------------+--------------+
| static.sharepointonline.com     | 6,036,947  | 41           |
+---------------------------------+------------+--------------+
| spoprod-a.akamaihd.net          | 3,904,250  | 25           |
+---------------------------------+------------+--------------+
| s1-excel-15.cdn.office.net      | 2,767,941  | 16           |
+---------------------------------+------------+--------------+
| outlook.office365.com           | 2,047,301  | 86           |
+---------------------------------+------------+--------------+
| shellprod.msocdn.com            | 1,008,370  | 11           |
+---------------------------------+------------+--------------+
| word-edit.officeapps.live.com   | 932,080    | 25           |
+---------------------------------+------------+--------------+
| res.delve.office.com            | 760,146    | 2            |
+---------------------------------+------------+--------------+
| s1-powerpoint-15.cdn.office.net | 557,604    | 3            |
+---------------------------------+------------+--------------+
| appsforoffice.microsoft.com     | 511,171    | 5            |
+---------------------------------+------------+--------------+
| powerpoint.officeapps.live.com  | 471,625    | 14           |
+---------------------------------+------------+--------------+
| excel.officeapps.live.com       | 342,040    | 14           |
+---------------------------------+------------+--------------+
| s1-officeapps-15.cdn.office.net | 331,343    | 5            |
+---------------------------------+------------+--------------+
| webdir0a.online.lync.com        | 66,930     | 15           |
+---------------------------------+------------+--------------+
| portal.office.com               | 13,956     | 1            |
+---------------------------------+------------+--------------+
| config.edge.skype.com           | 6,911      | 2            |
+---------------------------------+------------+--------------+
| clientlog.portal.office.com     | 6,608      | 8            |
+---------------------------------+------------+--------------+
| webdir.online.lync.com          | 4,343      | 5            |
+---------------------------------+------------+--------------+
| graph.microsoft.com             | 2,289      | 2            |
+---------------------------------+------------+--------------+
| nam.loki.delve.office.com       | 1,812      | 5            |
+---------------------------------+------------+--------------+
| login.microsoftonline.com       | 464        | 2            |
+---------------------------------+------------+--------------+
| login.windows.net               | 232        | 1            |
+---------------------------------+------------+--------------+

Table 6: Office365

Clients MUST connect to multiple server FQDNs in the same order as
real applications.  Connections MUST be made when the client is
interacting with the application and MUST NOT all be set up in
advance.  Connections SHOULD stay open per client for subsequent
transactions to the same FQDN, similar to how a web browser behaves.
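This browser-like behavior, connecting on first use and then reusing
the connection for later transactions to the same FQDN, can be
sketched as follows (Python; illustrative only, plain HTTP without
TLS for brevity):

   import http.client

   class FqdnPool:
       """One persistent HTTP 1.1 connection per FQDN, opened when
       the application first needs it and reused afterwards."""

       def __init__(self):
           self._conns = {}

       def get(self, fqdn, path):
           conn = self._conns.get(fqdn)
           if conn is None:
               # Connect only when the client interacts with the
               # application; never set up all connections in advance.
               conn = http.client.HTTPConnection(fqdn)
               self._conns[fqdn] = conn
           conn.request("GET", path)
           return conn.getresponse().read()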
Clients MUST use different URL paths and object sizes in the orders
in which they are observed in real applications.  Clients MAY also
set up multiple connections per FQDN to process multiple transactions
in a sequence at the same time.  Table 7 shows a partial example
sequence of the Office 365 Word application transactions.

+---------------------------------+----------------------+----------+
| FQDN                            | URL Path             | Object   |
|                                 |                      | size     |
+===================================================================+
| company1-my.sharepoint.com      | /personal...         | 23,132   |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com   | /we/WsaUpload.ashx   | 2        |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/.../blank.js    | 454      |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/.../            | 23,254   |
|                                 | initstrings.js       |          |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/.../init.js     | 292,740  |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com      | /ScriptResource...   | 102,774  |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com      | /ScriptResource...   | 40,329   |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com      | /WebResource...      | 23,063   |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com   | /we/wordeditorframe. | 60,657   |
|                                 | aspx...              |          |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/_layouts/.../   | 454      |
|                                 | blank.js             |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 19,201   |
|                                 | EditSurface.css      |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 221,397  |
|                                 | WordEditor.css       |          |
+---------------------------------+----------------------+----------+
| s1-officeapps-15.cdn.office.net | /we/s/.../           | 107,571  |
|                                 | MicrosoftAjax.js     |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 39,981   |
|                                 | wacbootwe.js         |          |
+---------------------------------+----------------------+----------+
| s1-officeapps-15.cdn.office.net | /we/s/.../           | 51,749   |
|                                 | CommonIntl.js        |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 6,050    |
|                                 | Compat.js            |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 54,158   |
|                                 | Box4Intl.js          |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 24,946   |
|                                 | WoncaIntl.js         |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 53,515   |
|                                 | WordEditorIntl.js    |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 1,978,712|
|                                 | WordEditorExp.js     |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../jSanity.js | 10,912   |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com   | /we/OneNote.ashx     | 145,708  |
+---------------------------------+----------------------+----------+

Table 7: Office365 Word Transactions

For application identification, the HTTPS/TLS traffic MUST include
realistic Certificate Subject Common Name (CN) data as well as Server
Name Indications (SNI).  For example, a DUT MAY detect Facebook Chat
traffic by inspecting the certificate, detecting *.facebook.com in
the certificate subject CN, subsequently detecting the word "chat" in
the FQDN 5-edge-chat.facebook.com, and identifying traffic on the
connection to be Facebook Chat.

Table 8 includes further example SNI and CN pairs for several FQDNs
of Office 365.

+------------------------------+----------------------------------+
| Server Name Indication (SNI) | Certificate Subject              |
|                              | Common Name (CN)                 |
+=================================================================+
| r1.res.office365.com         | *.res.outlook.com                |
+------------------------------+----------------------------------+
| login.windows.net            | graph.windows.net                |
+------------------------------+----------------------------------+
| webdir0a.online.lync.com     | *.online.lync.com                |
+------------------------------+----------------------------------+
| login.microsoftonline.com    | stamp2.login.microsoftonline.com |
+------------------------------+----------------------------------+
| webdir.online.lync.com       | *.online.lync.com                |
+------------------------------+----------------------------------+
| graph.microsoft.com          | graph.microsoft.com              |
+------------------------------+----------------------------------+
| outlook.office365.com        | outlook.com                      |
+------------------------------+----------------------------------+
| appsforoffice.microsoft.com  | appsforoffice.microsoft.com      |
+------------------------------+----------------------------------+

Table 8: Office365 SNI and CN Pairs Examples

NetSecOPEN has provided a reference enterprise perimeter traffic mix
with dozens of applications, hundreds of connections, and thousands
of transactions.

The enterprise perimeter traffic mix consists of 70% HTTPS and 30%
HTTP by bytes, and 58% HTTPS and 42% HTTP by transactions.  By
connections, with a single connection per FQDN, the mix consists of
43% HTTPS and 57% HTTP.  With multiple connections per FQDN, the
HTTPS percentage is higher.
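These percentages follow directly from the per-application byte and
transaction counts.  A sketch of the arithmetic (Python; the input
format is an assumption of this illustration):

   def https_share(entries):
       """Compute the HTTPS share of a mix by bytes and by
       transactions; "entries" is a list of (scheme, bytes,
       transactions) tuples, one per FQDN or transaction group."""
       total_bytes = sum(b for _, b, _ in entries)
       total_tx = sum(t for _, _, t in entries)
       https_bytes = sum(b for s, b, _ in entries if s == "https")
       https_tx = sum(t for s, _, t in entries if s == "https")
       return (100.0 * https_bytes / total_bytes,
               100.0 * https_tx / total_tx)

   # For the reference mix this yields roughly 70% HTTPS by bytes
   # and 58% HTTPS by transactions, as stated above.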
Table 9 is a summary of the NetSecOPEN enterprise perimeter traffic
mix, sorted by bytes, with unique FQDNs and transactions per
application.

+------------------+-------+--------------+-------------+
| Application      | FQDNs | Transactions | Bytes       |
+=======================================================+
| Office365        | 26    | 558          | 52,931,947  |
+------------------+-------+--------------+-------------+
| Box              | 4     | 90           | 23,276,089  |
+------------------+-------+--------------+-------------+
| Salesforce       | 6     | 365          | 23,137,548  |
+------------------+-------+--------------+-------------+
| Gmail            | 13    | 139          | 16,399,289  |
+------------------+-------+--------------+-------------+
| Linkedin         | 10    | 206          | 15,040,918  |
+------------------+-------+--------------+-------------+
| DailyMotion      | 8     | 77           | 14,751,514  |
+------------------+-------+--------------+-------------+
| GoogleDocs       | 2     | 71           | 14,205,476  |
+------------------+-------+--------------+-------------+
| Wikia            | 15    | 159          | 13,909,777  |
+------------------+-------+--------------+-------------+
| Foxnews          | 82    | 499          | 13,758,899  |
+------------------+-------+--------------+-------------+
| Yahoo Finance    | 33    | 254          | 13,134,011  |
+------------------+-------+--------------+-------------+
| Youtube          | 8     | 97           | 13,056,216  |
+------------------+-------+--------------+-------------+
| Facebook         | 4     | 207          | 12,726,231  |
+------------------+-------+--------------+-------------+
| CNBC             | 77    | 275          | 11,939,566  |
+------------------+-------+--------------+-------------+
| Lightreading     | 27    | 304          | 11,200,864  |
+------------------+-------+--------------+-------------+
| BusinessInsider  | 16    | 142          | 11,001,575  |
+------------------+-------+--------------+-------------+
| Alexa            | 5     | 153          | 10,475,151  |
+------------------+-------+--------------+-------------+
| CNN              | 41    | 206          | 10,423,740  |
+------------------+-------+--------------+-------------+
| Twitter Video    | 2     | 72           | 10,112,820  |
+------------------+-------+--------------+-------------+
| Cisco Webex      | 1     | 213          | 9,988,417   |
+------------------+-------+--------------+-------------+
| Slack            | 3     | 40           | 9,938,686   |
+------------------+-------+--------------+-------------+
| Google Maps      | 5     | 191          | 8,771,873   |
+------------------+-------+--------------+-------------+
| SpectrumIEEE     | 7     | 145          | 8,682,629   |
+------------------+-------+--------------+-------------+
| Yelp             | 9     | 146          | 8,607,645   |
+------------------+-------+--------------+-------------+
| Vimeo            | 12    | 74           | 8,555,960   |
+------------------+-------+--------------+-------------+
| Wikihow          | 11    | 140          | 8,042,314   |
+------------------+-------+--------------+-------------+
| Netflix          | 3     | 31           | 7,839,256   |
+------------------+-------+--------------+-------------+
| Instagram        | 3     | 114          | 7,230,883   |
+------------------+-------+--------------+-------------+
| Morningstar      | 30    | 150          | 7,220,121   |
+------------------+-------+--------------+-------------+
| Docusign         | 5     | 68           | 6,972,738   |
+------------------+-------+--------------+-------------+
| Twitter          | 1     | 100          | 6,939,150   |
+------------------+-------+--------------+-------------+
| Tumblr           | 11    | 70           | 6,877,200   |
+------------------+-------+--------------+-------------+
| Whatsapp         | 3     | 46           | 6,829,848   |
+------------------+-------+--------------+-------------+
| Imdb             | 16    | 251          | 6,505,227   |
+------------------+-------+--------------+-------------+
| NOAAgov          | 1     | 44           | 6,316,283   |
+------------------+-------+--------------+-------------+
| IndustryWeek     | 23    | 192          | 6,242,403   |
+------------------+-------+--------------+-------------+
| Spotify          | 18    | 119          | 6,231,013   |
+------------------+-------+--------------+-------------+
| AutoNews         | 16    | 165          | 6,115,354   |
+------------------+-------+--------------+-------------+
| Evernote         | 3     | 47           | 6,063,168   |
+------------------+-------+--------------+-------------+
| NatGeo           | 34    | 104          | 6,026,344   |
+------------------+-------+--------------+-------------+
| BBC News         | 18    | 156          | 5,898,572   |
+------------------+-------+--------------+-------------+
| Investopedia     | 38    | 241          | 5,792,038   |
+------------------+-------+--------------+-------------+
| Pinterest        | 8     | 102          | 5,658,994   |
+------------------+-------+--------------+-------------+
| Succesfactors    | 2     | 112          | 5,049,001   |
+------------------+-------+--------------+-------------+
| AbaJournal       | 6     | 93           | 4,985,626   |
+------------------+-------+--------------+-------------+
| Pbworks          | 4     | 78           | 4,670,980   |
+------------------+-------+--------------+-------------+
| NetworkWorld     | 42    | 153          | 4,651,354   |
+------------------+-------+--------------+-------------+
| WebMD            | 24    | 280          | 4,416,736   |
+------------------+-------+--------------+-------------+
| OilGasJournal    | 14    | 105          | 4,095,255   |
+------------------+-------+--------------+-------------+
| Trello           | 5     | 39           | 4,080,182   |
+------------------+-------+--------------+-------------+
| BusinessWire     | 5     | 109          | 4,055,331   |
+------------------+-------+--------------+-------------+
| Dropbox          | 5     | 17           | 4,023,469   |
+------------------+-------+--------------+-------------+
| Nejm             | 20    | 190          | 4,003,657   |
+------------------+-------+--------------+-------------+
| OilGasDaily      | 7     | 199          | 3,970,498   |
+------------------+-------+--------------+-------------+
| Chase            | 6     | 52           | 3,719,232   |
+------------------+-------+--------------+-------------+
| MedicalNews      | 6     | 117          | 3,634,187   |
+------------------+-------+--------------+-------------+
| Marketwatch      | 25    | 142          | 3,291,226   |
+------------------+-------+--------------+-------------+
| Imgur            | 5     | 48           | 3,189,919   |
+------------------+-------+--------------+-------------+
| NPR              | 9     | 83           | 3,184,303   |
+------------------+-------+--------------+-------------+
| Onelogin         | 2     | 31           | 3,132,707   |
+------------------+-------+--------------+-------------+
| Concur           | 2     | 50           | 3,066,326   |
+------------------+-------+--------------+-------------+
| Service-now      | 1     | 37           | 2,985,329   |
+------------------+-------+--------------+-------------+
| Apple itunes     | 14    | 80           | 2,843,744   |
+------------------+-------+--------------+-------------+
| BerkeleyEdu      | 3     | 69           | 2,622,009   |
+------------------+-------+--------------+-------------+
| MSN              | 39    | 203          | 2,532,972   |
+------------------+-------+--------------+-------------+
| Indeed           | 3     | 47           | 2,325,197   |
+------------------+-------+--------------+-------------+
| MayoClinic       | 6     | 56           | 2,269,085   |
+------------------+-------+--------------+-------------+
| Ebay             | 9     | 164          | 2,219,223   |
+------------------+-------+--------------+-------------+
| UCLAedu          | 3     | 42           | 1,991,311   |
+------------------+-------+--------------+-------------+
| ConstructionDive | 5     | 125          | 1,828,428   |
+------------------+-------+--------------+-------------+
| EducationNews    | 4     | 78           | 1,605,427   |
+------------------+-------+--------------+-------------+
| BofA             | 12    | 68           | 1,584,851   |
+------------------+-------+--------------+-------------+
| ScienceDirect    | 7     | 26           | 1,463,951   |
+------------------+-------+--------------+-------------+
| Reddit           | 8     | 55           | 1,441,909   |
+------------------+-------+--------------+-------------+
| FoodBusinessNews | 5     | 49           | 1,378,298   |
+------------------+-------+--------------+-------------+
| Amex             | 8     | 42           | 1,270,696   |
+------------------+-------+--------------+-------------+
| Weather          | 4     | 50           | 1,243,826   |
+------------------+-------+--------------+-------------+
| Wikipedia        | 3     | 27           | 958,935     |
+------------------+-------+--------------+-------------+
| Bing             | 1     | 52           | 697,514     |
+------------------+-------+--------------+-------------+
| ADP              | 1     | 30           | 508,654     |
+------------------+-------+--------------+-------------+
| Grand Total      | 983   | 10,021       | 569,819,095 |
+------------------+-------+--------------+-------------+

Table 9: Summary of NetSecOPEN Enterprise Perimeter Traffic Mix

Authors' Addresses

Balamuhunthan Balarajah

Email: bm.balarajah@gmail.com

Carsten Rossenhoevel
EANTC AG
Salzufer 14
Berlin 10587
Germany

Email: cross@eantc.de

Brian Monkman
NetSecOPEN
417 Independence Court
Mechanicsburg, PA 17050
USA

Email: bmonkman@netsecopen.org