Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                           C. Rossenhoevel
Expires: March 6, 2020                                          EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                       September 3, 2019

    Benchmarking Methodology for Network Security Device Performance
                   draft-ietf-bmwg-ngfw-performance-01

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices, including next-generation
   firewalls (NGFW), intrusion detection and prevention solutions
   (IDS/IPS), and unified threat management (UTM) implementations.
   This document aims to significantly improve the applicability,
   reproducibility, and transparency of benchmarks and to align the
   test methodology with today's increasingly complex layer 7
   application use cases.  The main areas covered in this document are
   test terminology, traffic profiles, and benchmarking methodology
   for NGFWs.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on March 6, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
     4.1.  Testbed Configuration
     4.2.  DUT/SUT Configuration
     4.3.  Test Equipment Configuration
       4.3.1.  Client Configuration
       4.3.2.  Backend Server Configuration
       4.3.3.  Traffic Flow Definition
       4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
     6.1.  Key Performance Indicators
   7.  Benchmarking Tests
     7.1.  Throughput Performance With NetSecOPEN Traffic Mix
       7.1.1.  Objective
       7.1.2.  Test Setup
       7.1.3.  Test Parameters
       7.1.4.  Test Procedures and Expected Results
     7.2.  TCP/HTTP Connections Per Second
       7.2.1.  Objective
       7.2.2.  Test Setup
       7.2.3.  Test Parameters
       7.2.4.  Test Procedures and Expected Results
     7.3.  HTTP Throughput
       7.3.1.  Objective
       7.3.2.  Test Setup
       7.3.3.  Test Parameters
       7.3.4.  Test Procedures and Expected Results
     7.4.  TCP/HTTP Transaction Latency
       7.4.1.  Objective
       7.4.2.  Test Setup
       7.4.3.  Test Parameters
       7.4.4.  Test Procedures and Expected Results
     7.5.  Concurrent TCP/HTTP Connection Capacity
       7.5.1.  Objective
       7.5.2.  Test Setup
       7.5.3.  Test Parameters
       7.5.4.  Test Procedures and Expected Results
     7.6.  TCP/HTTPS Connections Per Second
       7.6.1.  Objective
       7.6.2.  Test Setup
       7.6.3.  Test Parameters
       7.6.4.  Test Procedures and Expected Results
     7.7.  HTTPS Throughput
       7.7.1.  Objective
       7.7.2.  Test Setup
       7.7.3.  Test Parameters
       7.7.4.  Test Procedures and Expected Results
     7.8.  HTTPS Transaction Latency
       7.8.1.  Objective
       7.8.2.  Test Setup
       7.8.3.  Test Parameters
       7.8.4.  Test Procedures and Expected Results
     7.9.  Concurrent TCP/HTTPS Connection Capacity
       7.9.1.  Objective
       7.9.2.  Test Setup
       7.9.3.  Test Parameters
       7.9.4.  Test Procedures and Expected Results
   8.  Formal Syntax
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgements
   12. Contributors
   13. References
     13.1.  Normative References
     13.2.  Informative References
   Appendix A.  NetSecOPEN Basic Traffic Mix
   Authors' Addresses

1.  Introduction

   Fifteen years have passed since the IETF initially recommended test
   methodology and terminology for firewalls ([RFC2647], [RFC3511]).
   The requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved to more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, etc.  In an industry of
   growing importance, well-defined and reproducible key performance
   indicators (KPIs) are increasingly needed: they enable fair and
   reasonable comparison of network security functions.  All these
   reasons have led to the creation of a new next-generation firewall
   benchmarking document.

2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY",
   and "OPTIONAL" in this document are to be interpreted as described
   in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in
   all capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for next-generation firewalls and related security functions.  It
   covers two main areas: security effectiveness configurations,
   followed by performance benchmark testing.  This document focuses
   on advanced, realistic, and reproducible testing methods.
   Additionally, it describes test bed environments, test tool
   requirements, and test result formats.

4.  Test Setup

   The test setup defined in this document is applicable to all
   benchmarking test scenarios described in Section 7.

4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not due to
   inherent physical network limitations, such as the number of
   physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.
   For this reason, this document recommends avoiding external devices
   such as switches and routers in the testbed wherever possible.

   However, in a typical deployment, the security devices (Device
   Under Test/System Under Test) are connected to routers and
   switches, which will reduce the number of entries in the MAC or ARP
   tables of the Device Under Test/System Under Test (DUT/SUT).  If
   MAC or ARP tables have many entries, this may impact the actual
   DUT/SUT performance due to MAC and ARP/ND table lookup processes.
   Therefore, it is RECOMMENDED to connect aggregation switches or
   routers between the test equipment and the DUT/SUT as shown in
   Figure 1.
   The aggregation switches or routers can also be used to aggregate
   the test equipment or DUT/SUT ports, if the numbers of used ports
   are mismatched between the test equipment and the DUT/SUT.

   If the test equipment is capable of emulating layer 3 routing
   functionality and there is no need for test equipment port
   aggregation, it is RECOMMENDED to configure the test setup as shown
   in Figure 2.

    +-------------------+      +-----------+      +--------------------+
    |Aggregation Switch/|      |           |      | Aggregation Switch/|
    | Router            +------+  DUT/SUT  +------+ Router             |
    |                   |      |           |      |                    |
    +----------+--------+      +-----------+      +--------+-----------+
               |                                           |
               |                                           |
   +-----------+-----------+                   +-----------+-----------+
   |                       |                   |                       |
   | +-------------------+ |                   | +-------------------+ |
   | | Emulated Router(s)| |                   | | Emulated Router(s)| |
   | |     (Optional)    | |                   | |     (Optional)    | |
   | +-------------------+ |                   | +-------------------+ |
   | +-------------------+ |                   | +-------------------+ |
   | |      Clients      | |                   | |      Servers      | |
   | +-------------------+ |                   | +-------------------+ |
   |                       |                   |                       |
   |    Test Equipment     |                   |    Test Equipment     |
   +-----------------------+                   +-----------------------+

                    Figure 1: Testbed Setup - Option 1

   +-----------------------+   +-----------+   +-----------------------+
   | +-------------------+ |   |           |   | +-------------------+ |
   | | Emulated Router(s)| |   |           |   | | Emulated Router(s)| |
   | |     (Optional)    | +---+  DUT/SUT  +---+ |     (Optional)    | |
   | +-------------------+ |   |           |   | +-------------------+ |
   | +-------------------+ |   +-----------+   | +-------------------+ |
   | |      Clients      | |                   | |      Servers      | |
   | +-------------------+ |                   | +-------------------+ |
   |                       |                   |                       |
   |    Test Equipment     |                   |    Test Equipment     |
   +-----------------------+                   +-----------------------+

                    Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.
   Since each DUT/SUT will have its own unique configuration, users
   SHOULD configure their device with the same parameters and security
   features that would be used in the actual deployment of the device,
   or in a typical deployment, in order to achieve maximum security
   coverage.

   This document attempts to define the recommended security features
   which SHOULD be consistently enabled for all the benchmarking tests
   described in Section 7.  Table 1 below describes the RECOMMENDED
   set of features which SHOULD be configured on the DUT/SUT.

   Based on the customer use case, users MAY enable or disable the SSL
   inspection feature for the "Throughput Performance with NetSecOPEN
   Traffic Mix" test scenario described in Section 7.1.

   To improve repeatability, a summary of the DUT configuration,
   including a description of all enabled DUT/SUT features, MUST be
   published with the benchmarking results.

                  +----------------------------------------+
                  |                  NGFW                  |
                  +---------------+-------------+----------+
                  | DUT Features  | RECOMMENDED | OPTIONAL |
                  +---------------+-------------+----------+
                  |SSL Inspection |      x      |          |
                  +---------------+-------------+----------+
                  |IDS/IPS        |      x      |          |
                  +---------------+-------------+----------+
                  |Web Filtering  |             |    x     |
                  +---------------+-------------+----------+
                  |Antivirus      |      x      |          |
                  +---------------+-------------+----------+
                  |Anti Spyware   |      x      |          |
                  +---------------+-------------+----------+
                  |Anti Botnet    |      x      |          |
                  +---------------+-------------+----------+
                  |DLP            |             |    x     |
                  +---------------+-------------+----------+
                  |DDoS           |             |    x     |
                  +---------------+-------------+----------+
                  |Certificate    |             |    x     |
                  |Validation     |             |          |
                  +---------------+-------------+----------+
                  |Logging and    |      x      |          |
                  |Reporting      |             |          |
                  +---------------+-------------+----------+
                  |Application    |      x      |          |
                  |Identification |             |          |
                  +---------------+-------------+----------+

                      Table 1: DUT/SUT Feature List

   In summary, the DUT/SUT SHOULD be configured as follows:

   o  All security inspection enabled

   o  Disposition of all traffic is logged - logging to an external
      device is permissible

   o  Detection of Common Vulnerabilities and Exposures (CVE) matching
      the following characteristics when searching the National
      Vulnerability Database (NVD)

      *  Common Vulnerability Scoring System (CVSS) Version: 2

      *  CVSS V2 Metrics: AV:N/Au:N/I:C/A:C

      *  AV=Attack Vector, Au=Authentication, I=Integrity and
         A=Availability

      *  CVSS V2 Severity: High (7-10)

      *  If doing a group test, the published start date and published
         end date SHOULD be the same

   o  Geographical location filtering and Application Identification
      and Control configured to be triggered based on a site or
      application from the defined traffic mix

   In addition, it is also RECOMMENDED to configure a realistic number
   of access policy rules on the DUT/SUT.  This document determines
   the number of access policy rules for four different classes of
   DUT/SUT.  The classification of the DUT/SUT MAY be based on its
   maximum supported firewall throughput performance number defined in
   the vendor data sheet.  This document classifies the DUT/SUT in
   four different categories, namely Extra Small, Small, Medium, and
   Large.

   The RECOMMENDED throughput values for these classes are:

   Extra Small (XS) - supported throughput less than 1 Gbit/s

   Small (S) - supported throughput less than 5 Gbit/s

   Medium (M) - supported throughput greater than 5 Gbit/s and less
   than 10 Gbit/s

   Large (L) - supported throughput greater than 10 Gbit/s

   The Access Control Rules (ACL) defined in Table 2 SHOULD be
   configured from top to bottom in the correct order as shown in the
   table.  (Note: There will be differences between how security
   vendors implement ACL decision making.)
   The configured ACL MUST NOT block the test traffic used for the
   benchmarking test scenarios.

   +-----------+-----------+------------------+--------+---------------+
   |           |           |                  |        |    DUT/SUT    |
   |           |   Match   |                  |        | Classification|
   | Rules Type|  Criteria |   Description    | Action |    #rules     |
   |           |           |                  |        +---+---+---+---+
   |           |           |                  |        | XS| S | M | L |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Any application  | block  | 5 | 10| 20| 50|
   |layer      |           | traffic NOT      |        |   |   |   |   |
   |           |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Any src IP subnet| block  | 25| 50|100|250|
   |layer      |TCP/UDP    | used in the test |        |   |   |   |   |
   |           |Dst ports  | AND any dst ports|        |   |   |   |   |
   |           |           | NOT used in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src/Dst IP | Any src/dst IP   | block  | 25| 50|100|250|
   |           |           | subnet NOT used  |        |   |   |   |   |
   |           |           | in the test      |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Application|Application| Applications     | allow  | 10| 10| 10| 10|
   |layer      |           | included in the  |        |   |   |   |   |
   |           |           | test traffic     |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |Transport  |Src IP and | Half of the src  | allow  | 1 | 1 | 1 | 1 |
   |layer      |TCP/UDP    | IP used in the   |        |   |   |   |   |
   |           |Dst ports  | test AND any dst |        |   |   |   |   |
   |           |           | ports used in the|        |   |   |   |   |
   |           |           | test traffic.    |        |   |   |   |   |
   |           |           | One rule per     |        |   |   |   |   |
   |           |           | subnet           |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+
   |IP layer   |Src IP     | The rest of the  | allow  | 1 | 1 | 1 | 1 |
   |           |           | src IP subnet    |        |   |   |   |   |
   |           |           | range used in the|        |   |   |   |   |
   |           |           | test.  One rule  |        |   |   |   |   |
   |           |           | per subnet       |        |   |   |   |   |
   +-----------+-----------+------------------+--------+---+---+---+---+

                      Table 2: DUT/SUT Access List

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters thereby influence the
   traffic flows which will be offered and impact performance
   measurements.

   This document specifies common test equipment configuration
   parameters applicable to all test scenarios defined in Section 7.
   Any test scenario specific parameters are described individually
   under the test setup section of each test scenario.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  It also specifies the
   recommended values for certain parameters.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno [RFC5681] variant, which
   includes congestion avoidance, back off and windowing, fast
   retransmission, and fast recovery on every TCP connection between
   client and server endpoints.  The default IPv4 and IPv6 MSS values
   MUST be set to 1460 bytes and 1440 bytes respectively, with TX and
   RX receive windows of 64 KByte.  The client initial congestion
   window MUST NOT exceed 10 times the MSS.  Delayed ACKs are
   permitted, and the maximum client delayed ACK MUST NOT exceed 10
   times the MSS before a forced ACK.  Up to 3 retries SHOULD be
   allowed before a timeout event is declared.  All traffic MUST set
   the TCP PSH flag to high.  The source port range SHOULD be
   1024 - 65535.  Internal timeouts SHOULD be dynamically scalable per
   RFC 793.  The client SHOULD initiate and close TCP connections.
   TCP connections MUST be closed via FIN.

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.
   The traffic blocks SHOULD consist of multiple unique, discontinuous
   static address blocks.  A default gateway is permitted.  The IPv4
   ToS byte or IPv6 traffic class should be set to '00' or '000000'
   respectively.

   The following equation can be used to determine the required total
   number of client IP addresses:

      Desired total number of client IPs =
         Target throughput [Mbit/s] / Throughput per IP address [Mbit/s]

   Based on the deployment and use case scenario, the value for
   "Throughput per IP address" can be varied.

   (Option 1)  DUT/SUT deployment scenario 1: 6-7 Mbit/s per IP (e.g.
               1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2)  DUT/SUT deployment scenario 2: 0.1-0.2 Mbit/s per IP
               (e.g. 50,000-100,000 IPs per 10 Gbit/s throughput)

   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6 types.  The following
   options can be considered for selecting the traffic mix ratio.

   (Option 1)  100% IPv4, no IPv6

   (Option 2)  80% IPv4, 20% IPv6

   (Option 3)  50% IPv4, 50% IPv6

   (Option 4)  20% IPv4, 80% IPv6

   (Option 5)  no IPv4, 100% IPv6

4.3.1.3.  Emulated Web Browser Attributes

   The emulated web browser contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate modern,
   typical browser attributes to improve the realism of the result
   set.

   For HTTP traffic emulation, the emulated browser MUST negotiate
   HTTP 1.1.  HTTP persistence MAY be enabled depending on the test
   scenario.  The browser MAY open multiple TCP connections per server
   endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  Within a TCP connection,
   multiple transactions MAY be processed if the emulated browser has
   available connections.  The browser SHOULD advertise a User-Agent
   header.  Headers MUST be sent uncompressed.  The browser SHOULD
   enforce content length validation.
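   The client IP sizing equation in Section 4.3.1.2 above can be
   sketched as follows.  This is an illustrative sketch only, not part
   of the methodology; the function name is hypothetical, and the
   example figures simply apply the per-IP throughput values from the
   two deployment options.

   ```python
   # Sketch: required client IP count per the equation
   #   total client IPs = target throughput / throughput per IP address
   import math

   def required_client_ips(target_throughput_mbps, throughput_per_ip_mbps):
       """Round up so the target throughput is fully covered."""
       return math.ceil(target_throughput_mbps / throughput_per_ip_mbps)

   # Deployment scenario 1: 6-7 Mbit/s per IP at a 10 Gbit/s target
   print(required_client_ips(10_000, 7))    # -> 1429
   print(required_client_ips(10_000, 6))    # -> 1667
   # Deployment scenario 2: 0.2 Mbit/s per IP at a 10 Gbit/s target
   print(required_client_ips(10_000, 0.2))  # -> 50000
   ```

   The results match the ranges quoted above (roughly 1,400-1,700 IPs,
   or 50,000-100,000 IPs, per 10 Gbit/s of target throughput).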
   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use
   TLSv1.2 or higher.  The TLS record size MAY be optimized for the
   HTTPS response object size, up to a record size of 16 KByte.  The
   client endpoint MUST send the TLS Server Name Indication (SNI)
   extension when opening a security tunnel.  Each client connection
   MUST perform a full handshake with the server certificate and MUST
   NOT use session reuse or resumption.  Cipher suite and key size
   should be defined in the parameter section of each test scenario.

4.3.2.  Backend Server Configuration

   This document specifies which parameters should be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack on the server side SHOULD be configured similarly to
   the client side configuration described in Section 4.3.1.1.  In
   addition, the server initial congestion window MUST NOT exceed 10
   times the MSS.  Delayed ACKs are permitted, and the maximum server
   delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

   The server IP blocks SHOULD consist of unique, discontinuous static
   address blocks with one IP per server Fully Qualified Domain Name
   (FQDN) endpoint per test port.  The IPv4 ToS byte and IPv6 traffic
   class bytes should be set to '00' and '000000' respectively.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   HTTP version 1.1 with persistence.  The server MUST advertise the
   server type in the Server response header [RFC2616].  For an HTTPS
   server, TLS 1.2 or higher MUST be used with a maximum record size
   of 16 KByte, and ticket resumption or Session ID reuse MUST NOT be
   used.  The server MUST listen on TCP port 443.  The server SHALL
   serve a certificate to the client.
   It is REQUIRED that the HTTPS server also check the Host SNI
   information against the FQDN.  Cipher suite and key size should be
   defined in the parameter section of each test scenario.

4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between client and
   server endpoints.  At the beginning of the test, the server
   endpoint initializes and will be ready to accept connection states,
   including initialization of the TCP stack as well as bound HTTP and
   HTTPS servers.  When a client endpoint is needed, it will
   initialize and be given attributes such as a MAC and IP address.
   The behavior of the client is to sweep through the given server IP
   space, sequentially generating a service recognizable by the DUT.
   Thus, a balanced mesh between client endpoints and server endpoints
   will be generated in a client port - server port combination.  Each
   client endpoint performs the same actions as other endpoints, with
   the difference being the source IP of the client endpoint and the
   target server IP pool.  The client SHALL use Fully Qualified Domain
   Names (FQDN) in Host headers and for TLS Server Name Indication
   (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  This section describes how a client
   endpoint steps through different services when it initiates
   traffic.  Once the test is initialized, the client endpoints SHOULD
   randomly hold (perform no operation) for a few milliseconds to
   allow for better randomization of the start of client traffic.
   Each client will either open a new TCP connection or connect to a
   TCP persistence stack still open to that specific server.  At any
   point that the service profile may require encryption, a TLS
   encryption tunnel will form, presenting the URL request to the
   server.
   The server will then perform an SNI name check, comparing the
   proposed FQDN with the domain embedded in the certificate.  Only
   when correct will the server process the HTTPS response object.
   The initial response object MUST NOT have a fixed size; its size is
   based on the benchmarking tests described in Section 7.  Multiple
   additional sub-URLs (response objects on the service page) MAY be
   requested simultaneously.  This MAY be to the same server IP as the
   initial URL.  Each sub-object will also use a canonical FQDN and
   URL path, as observed in the traffic mix used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  A traffic
   load profile has five distinct phases: init, ramp up, sustain, ramp
   down, and collection.

   1.  During the init phase, test bed devices, including the client
       and server endpoints, should negotiate layer 2-3 connectivity
       such as MAC learning and ARP.  Only after successful MAC
       learning or ARP/ND resolution SHALL the test iteration move to
       the next phase.  No measurements are made in this phase.  The
       minimum RECOMMENDED time for the init phase is 5 seconds.
       During this phase, the emulated clients SHOULD NOT initiate any
       sessions with the DUT/SUT; in contrast, the emulated servers
       should be ready to accept requests from the DUT/SUT or from the
       emulated clients.

   2.  In the ramp up phase, the test equipment SHOULD start to
       generate the test traffic.  It SHOULD use a set approximate
       number of unique client IP addresses actively to generate
       traffic.  The traffic SHOULD ramp from zero to the desired
       target objective.  The target objective will be defined for
       each benchmarking test.
The 568 duration for the ramp up phase MUST be configured long enough, so 569 that the test equipment does not overwhelm DUT/SUT's supported 570 performance metrics namely; connections per second, concurrent 571 TCP connections, and application transactions per second. The 572 RECOMMENDED time duration for the ramp up phase is 180-300 573 seconds. No measurements are made in this phase. 575 3. In the sustain phase, the test equipment SHOULD continue 576 generating traffic to constant target value for a constant number 577 of active client IPs. The RECOMMENDED time duration for sustain 578 phase is 600 seconds. This is the phase where measurements 579 occur. 581 4. In the ramp down/close phase, no new connections are established, 582 and no measurements are made. The time duration for ramp up and 583 ramp down phase SHOULD be same. The RECOMMENDED duration of this 584 phase is between 180 to 300 seconds. 586 5. The last phase is administrative and will occur when the test 587 equipment merges and collates the report data. 589 5. Test Bed Considerations 591 This section recommends steps to control the test environment and 592 test equipment, specifically focusing on virtualized environments and 593 virtualized test equipment. 595 1. Ensure that any ancillary switching or routing functions between 596 the system under test and the test equipment do not limit the 597 performance of the traffic generator. This is specifically 598 important for virtualized components (vSwitches, vRouters). 600 2. Verify that the performance of the test equipment matches and 601 reasonably exceeds the expected maximum performance of the system 602 under test. 604 3. Assert that the test bed characteristics are stable during the 605 entire test session. 
Several factors might influence stability 606 specifically for virtualized test beds, for example additional 607 workloads in a virtualized system, load balancing and movement of 608 virtual machines during the test, or simple issues such as 609 additional heat created by high workloads leading to an emergency 610 CPU performance reduction. 612 Test bed reference pre-tests help to ensure that the maximum desired 613 traffic generator aspects such as throughput, transaction per second, 614 connection per second, concurrent connection and latency. 616 Once the desired maximum performance goals for the system under test 617 have been identified, a safety margin of 10% SHOULD be added for 618 throughput and subtracted for maximum latency and maximum packet 619 loss. 621 Test bed preparation may be performed either by configuring the DUT 622 in the most trivial setup (fast forwarding) or without presence of 623 DUT. 625 6. Reporting 627 This section describes how the final report should be formatted and 628 presented. The final test report MAY have two major sections; 629 Introduction and result sections. The following attributes SHOULD be 630 present in the introduction section of the test report. 632 1. The name of the NetSecOPEN traffic mix (see Appendix A) MUST be 633 prominent. 635 2. The time and date of the execution of the test MUST be prominent. 637 3. Summary of testbed software and Hardware details 639 A. DUT Hardware/Virtual Configuration 641 + This section SHOULD clearly identify the make and model of 642 the DUT 644 + The port interfaces, including speed and link information 645 MUST be documented. 647 + If the DUT is a virtual VNF, interface acceleration such 648 as DPDK and SR-IOV MUST be documented as well as cores 649 used, RAM used, and the pinning / resource sharing 650 configuration. The Hypervisor and version MUST be 651 documented. 653 + Any additional hardware relevant to the DUT such as 654 controllers MUST be documented 656 B. 
DUT Software

+ The operating system name MUST be documented.

+ The version MUST be documented.

+ The specific configuration MUST be documented.

C. DUT Enabled Features

+ Configured DUT/SUT features (see Table 1) MUST be documented.

+ Attributes of those features MUST be documented.

+ Any additional relevant information about features MUST be documented.

D. Test equipment hardware and software

+ Test equipment vendor name

+ Hardware details including model number, interface type

+ Test equipment firmware and test application software version

4. Results Summary / Executive Summary

1. Results SHOULD resemble a pyramid in how they are reported, with the introduction section documenting the summary of results in a prominent, easy to read block.

2. In the results section of the test report, the following attributes should be present for each test scenario.

a. KPIs MUST be documented separately for each test scenario. The format of the KPI metrics should be presented as described in Section 6.1.

b. The next level of detail SHOULD be graphs showing each of these metrics over the duration (sustain phase) of the test. This allows the user to see the measured performance stability changes over time.

6.1. Key Performance Indicators

This section lists KPIs for the overall benchmarking test scenarios. All KPIs MUST be measured during the sustain phase of the traffic load profile described in Section 4.3.4. All KPIs MUST be measured from the result output of the test equipment.

o Concurrent TCP Connections
This key performance indicator measures the average concurrent open TCP connections in the sustaining period.

o TCP Connections Per Second
This key performance indicator measures the average established TCP connections per second in the sustaining period.
For the "TCP/HTTP(S) Connections Per Second" benchmarking test scenario, the KPI is the average of established and terminated TCP connections per second, measured simultaneously.

o Application Transactions Per Second
This key performance indicator measures the average successfully completed application transactions per second in the sustaining period.

o TLS Handshake Rate
This key performance indicator measures the average TLS 1.2 or higher session formation rate within the sustaining period.

o Throughput
This key performance indicator measures the average Layer 2 throughput within the sustaining period, as well as the average packets per second within the same period. The value of throughput SHOULD be presented in Gbit/s rounded to two places of precision, with a more specific Kbit/s in parentheses. Optionally, goodput MAY also be logged as an average goodput rate measured over the same period. Goodput results SHALL also be presented in the same format as throughput.

o URL Response time / Time to Last Byte (TTLB)
This key performance indicator measures the minimum, average, and maximum per-URL response time in the sustaining period. The latency is measured at the client and in this case is the time between sending a GET request from the client and receiving the complete response from the server.

o Application Transaction Latency
This key performance indicator measures the minimum, average, and maximum amount of time to receive all objects from the server. The value of application transaction latency SHOULD be presented in milliseconds rounded to zero decimal places.

o Time to First Byte (TTFB)
This key performance indicator measures the minimum, average, and maximum time to first byte. TTFB is the elapsed time between sending the SYN packet from the client and receiving the first byte of application data from the DUT/SUT.
TTFB SHOULD be expressed in milliseconds.

7. Benchmarking Tests

7.1. Throughput Performance With NetSecOPEN Traffic Mix

7.1.1. Objective

Using the NetSecOPEN traffic mix, determine the maximum sustainable throughput performance supported by the DUT/SUT. (See Appendix A for details about the traffic mix.)

It is RECOMMENDED to perform this test scenario twice: once with the SSL inspection feature enabled and once with the SSL inspection feature disabled on the DUT/SUT.

7.1.2. Test Setup

The test bed setup MUST be configured as defined in Section 4. Any test scenario specific test bed configuration changes MUST be documented.

7.1.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.1.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific test scenario MUST be documented.

7.1.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be noted for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target throughput: can be defined based on requirements; otherwise it represents the aggregated line rate of the interface(s) used in the DUT/SUT

Initial throughput: 10% of the "Target throughput"

One of the following ciphers and keys is RECOMMENDED for this test scenario.

1. ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)

2. ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

3.
ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash Algorithm: ecdsa_secp384r1_sha384 and Supported group: secp521r1)

4. ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

7.1.3.3. Traffic Profile

Traffic profile: The test scenario MUST be run with a single application traffic mix profile (see Appendix A for details about the traffic mix). The name of the NetSecOPEN traffic mix MUST be documented.

7.1.3.4. Test Results Validation Criteria

The following criteria are defined as the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. The number of terminated TCP connections due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. The maximum deviation (max. dev) of application transaction time or TTLB (Time To Last Byte) MUST be less than X. (The value for "X" will be finalized and updated after completion of PoC tests.)

The following equation MUST be used to calculate the deviation of application transaction latency or TTLB:

max. dev = max((avg_latency - min_latency), (max_latency - avg_latency)) / (Initial latency)

where the initial latency is calculated using the following equation. For this calculation, the latency values (min', avg' and max') MUST be measured during test procedure step 1 as defined in Section 7.1.4.1. The variable latency represents application transaction latency or TTLB.

Initial latency := min((avg' latency - min' latency), (max' latency - avg' latency))

d.
The maximum value of Time to First Byte (TTFB) MUST be less than X.

7.1.3.5. Measurement

The following KPI metrics MUST be reported for this test scenario.

Mandatory KPIs: average Throughput, average Concurrent TCP connections, TTLB/application transaction latency (minimum, average and maximum), and average application transactions per second.

Optional KPIs: average TCP connections per second, average TLS handshake rate, and TTFB.

7.1.4. Test Procedures and Expected Results

The test procedures are designed to measure the throughput performance of the DUT/SUT during the sustain phase of the traffic load profile. The test procedure consists of three major steps.

7.1.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to generate test traffic at the "Initial throughput" rate as described in the parameters Section 7.1.3.2. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4. The DUT/SUT SHOULD reach the "Initial throughput" during the sustain phase. Measure all KPIs as defined in Section 7.1.3.5. The measured KPIs during the sustain phase MUST meet validation criteria "a" and "b" defined in Section 7.1.3.4.

If the KPI metrics do not meet the validation criteria, the test procedure MUST NOT continue to step 2.

7.1.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to generate traffic at the "Target throughput" rate defined in the parameter table. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4. The test equipment SHOULD start to measure and record all specified KPIs. The frequency of KPI metric measurements SHOULD be 2 seconds.
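The latency-deviation criterion from Section 7.1.3.4 can be checked in post-processing of the recorded KPI samples. The sketch below is illustrative only (the threshold "X" is still open in this draft, and all latency values are example numbers, not measurements):

```python
def initial_latency(min_p, avg_p, max_p):
    # "Initial latency" per Section 7.1.3.4: the smaller of the two spreads
    # around the average, measured during step 1 (Section 7.1.4.1).
    return min(avg_p - min_p, max_p - avg_p)

def max_deviation(min_l, avg_l, max_l, init_latency):
    # max. dev = max((avg - min), (max - avg)) / (Initial latency)
    return max(avg_l - min_l, max_l - avg_l) / init_latency

# Step 1 (initialization) TTLB statistics in milliseconds (example values)
init = initial_latency(min_p=8.0, avg_p=10.0, max_p=14.0)
# Step 2 (target objective) TTLB statistics (example values)
dev = max_deviation(min_l=9.0, avg_l=12.0, max_l=20.0, init_latency=init)
print(init, dev)  # 2.0 4.0
```

The resulting max. dev would then be compared against the agreed threshold "X" before accepting the iteration.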
Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target throughput during the sustain phase. In addition, the measured KPIs MUST meet all validation criteria. Follow step 3 if the KPI metrics do not meet the validation criteria.

7.1.4.3. Step 3: Test Iteration

Determine the maximum and average achievable throughput within the validation criteria. The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.2. TCP/HTTP Connections Per Second

7.2.1. Objective

Using HTTP traffic, determine the maximum sustainable TCP connection establishment rate supported by the DUT/SUT under different throughput load conditions.

To measure connections per second, test iterations MUST use different fixed HTTP response object sizes defined in Section 7.2.3.2.

7.2.2. Test Setup

The test bed setup SHOULD be configured as defined in Section 4. Any specific test bed configuration changes, such as number of interfaces and interface type, MUST be documented.

7.2.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.2.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific test scenario MUST be documented.

7.2.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target connections per second: initial value from the product data sheet (if known)

Initial connections per second: 10% of "Target connections per second"

The client SHOULD negotiate HTTP 1.1 and close the connection with FIN immediately after completion of one transaction. In each test iteration, the client MUST send a GET request for a fixed HTTP response object size.

The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64 KByte.

7.2.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. The number of terminated TCP connections due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic should be forwarded at a constant rate.

d. Concurrent TCP connections SHOULD be constant during steady state. Any deviation of concurrent TCP connections MUST be less than 10%. This confirms the DUT opens and closes TCP connections at almost the same rate.

7.2.3.4. Measurement

The following KPI metrics MUST be reported for each test iteration.

Mandatory KPIs: average TCP connections per second, average Throughput, and average Time to First Byte (TTFB).

7.2.4.
Test Procedures and Expected Results

The test procedure is designed to measure the TCP connections per second rate of the DUT/SUT during the sustain phase of the traffic load profile. The test procedure consists of three major steps. This test procedure MAY be repeated multiple times with different IP types: IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic distribution.

7.2.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial connections per second" as defined in the parameters Section 7.2.3.2. The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before the sustain phase. The measured KPIs during the sustain phase MUST meet validation criteria a, b, c, and d defined in Section 7.2.3.3.

If the KPI metrics do not meet the validation criteria, the test procedure MUST NOT continue to "Step 2".

7.2.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target connections per second" defined in the parameters table. The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase of each test iteration, other KPIs such as throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value the DUT/SUT can support. The test results for specific test iterations SHOULD NOT be reported if the above-mentioned KPIs (especially throughput) reach the maximum value.
(Example: if the test iteration with a 64 KByte HTTP response object size reached the maximum throughput limitation of the DUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs. The frequency of measurement SHOULD be 2 seconds. Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target connections per second rate in the sustain phase. In addition, the measured KPIs MUST meet all validation criteria.

Follow step 3 if the KPI metrics do not meet the validation criteria.

7.2.4.3. Step 3: Test Iteration

Determine the maximum and average achievable connections per second within the validation criteria.

7.3. HTTP Throughput

7.3.1. Objective

Determine the throughput for HTTP transactions varying the HTTP response object size.

7.3.2. Test Setup

The test bed setup SHOULD be configured as defined in Section 4. Any specific test bed configuration changes, such as number of interfaces and interface type, MUST be documented.

7.3.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.3.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific test scenario MUST be documented.

7.3.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target Throughput: initial value from the product data sheet (if known)

Initial Throughput: 10% of "Target Throughput"

Number of HTTP response object requests (transactions) per connection: 10

RECOMMENDED HTTP response object sizes: 1 KByte, 16 KByte, 64 KByte, 256 KByte, and the mixed objects defined in Table 3.

   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   | 0.2                 | 1                   |
   +---------------------+---------------------+
   | 6                   | 1                   |
   +---------------------+---------------------+
   | 8                   | 1                   |
   +---------------------+---------------------+
   | 9                   | 1                   |
   +---------------------+---------------------+
   | 10                  | 1                   |
   +---------------------+---------------------+
   | 25                  | 1                   |
   +---------------------+---------------------+
   | 26                  | 1                   |
   +---------------------+---------------------+
   | 35                  | 1                   |
   +---------------------+---------------------+
   | 59                  | 1                   |
   +---------------------+---------------------+
   | 347                 | 1                   |
   +---------------------+---------------------+

                Table 3: Mixed Objects

7.3.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of attempted transactions.

b. Traffic should be forwarded constantly.

c.
Concurrent connections MUST be constant. The deviation of concurrent TCP connections MUST NOT increase more than 10%.

7.3.3.4. Measurement

The following KPI metrics MUST be reported for this test scenario:

average Throughput, average HTTP transactions per second, concurrent connections, and average TCP connections per second.

7.3.4. Test Procedures and Expected Results

The test procedure is designed to measure the HTTP throughput of the DUT/SUT. The test procedure consists of three major steps. This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions and HTTP response object sizes.

7.3.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial Throughput" as defined in the parameters Section 7.3.3.2.

The traffic load profile SHOULD be defined as described in Section 4.3.4. The DUT/SUT SHOULD reach the "Initial Throughput" during the sustain phase. Measure all KPIs as defined in Section 7.3.3.4.

The measured KPIs during the sustain phase MUST meet the validation criterion "a" defined in Section 7.3.3.3.

If the KPI metrics do not meet the validation criteria, the test procedure MUST NOT continue to "Step 2".

7.3.4.2. Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified KPIs. The frequency of measurement SHOULD be 2 seconds. Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired "Target Throughput" in the sustain phase. In addition, the measured KPIs MUST meet all validation criteria.

Perform the test separately for each HTTP response object size.
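For the mixed-objects iteration, the per-request response object size can be drawn from Table 3. The sketch below (illustrative; the table uses equal weights) also derives the weighted mean object size of the mix:

```python
import random

# Object sizes in KByte and their weights from Table 3 (all weights are 1)
mixed_objects = {0.2: 1, 6: 1, 8: 1, 9: 1, 10: 1,
                 25: 1, 26: 1, 35: 1, 59: 1, 347: 1}

sizes = list(mixed_objects)
weights = list(mixed_objects.values())

# Weighted mean object size of the mix, in KByte
mean_size = sum(s * w for s, w in zip(sizes, weights)) / sum(weights)
print(round(mean_size, 2))  # 52.52

# Draw the response object size for the next HTTP request
next_object_kbyte = random.choices(sizes, weights=weights, k=1)[0]
```

The mean object size is useful for sanity-checking that the measured throughput and transactions-per-second results are consistent with each other.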
Follow step 3 if the KPI metrics do not meet the validation criteria.

7.3.4.3. Step 3: Test Iteration

Determine the maximum and average achievable throughput within the validation criteria. The final test iteration MUST be performed for the test duration defined in Section 4.3.4.

7.4. TCP/HTTP Transaction Latency

7.4.1. Objective

Using HTTP traffic, determine the average HTTP transaction latency when the DUT is running with sustainable HTTP transactions per second supported by the DUT/SUT under different HTTP response object sizes.

Test iterations MUST be performed with different HTTP response object sizes in two different scenarios: one with a single transaction and the other with multiple transactions within a single TCP connection. For consistency, both the single and multiple transaction tests MUST be configured with HTTP 1.1.

Scenario 1: The client MUST negotiate HTTP 1.1 and close the connection with FIN immediately after completion of a single transaction (GET and RESPONSE).

Scenario 2: The client MUST negotiate HTTP 1.1 and close the connection with FIN immediately after completion of 10 transactions (GET and RESPONSE) within a single TCP connection.

7.4.2. Test Setup

The test bed setup SHOULD be configured as defined in Section 4. Any specific test bed configuration changes, such as number of interfaces and interface type, MUST be documented.

7.4.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.4.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2. Any configuration changes for this specific test scenario MUST be documented.

7.4.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3.
The following parameters MUST be documented for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target objective for scenario 1: 50% of the maximum connections per second measured in test scenario TCP/HTTP Connections Per Second (Section 7.2)

Target objective for scenario 2: 50% of the maximum throughput measured in test scenario HTTP Throughput (Section 7.3)

Initial objective for scenario 1: 10% of "Target objective for scenario 1"

Initial objective for scenario 2: 10% of "Target objective for scenario 2"

HTTP transactions per TCP connection: scenario 1 with a single transaction and scenario 2 with 10 transactions

HTTP 1.1 with a GET request for a single object. The RECOMMENDED object sizes are 1, 16, or 64 KByte. For each test iteration, the client MUST request a single HTTP response object size.

7.4.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation criteria. The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile. The ramp up and ramp down phases SHOULD NOT be considered.

Generic criteria:

a. The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of attempted transactions.

b. The number of terminated TCP connections due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic should be forwarded at a constant rate.

d. Concurrent TCP connections should be constant during steady state.
This confirms the DUT opens and closes TCP connections at the same rate.

e. After ramp up, the DUT MUST achieve the "Target objective" defined in the parameter Section 7.4.3.2 and remain in that state for the entire test duration (sustain phase).

7.4.3.4. Measurement

The following KPI metrics MUST be reported for each test scenario and each HTTP response object size separately:

average TCP connections per second and average application transaction latency

All KPIs are measured once the target objective reaches the steady state.

7.4.4. Test Procedures and Expected Results

The test procedure is designed to measure the average application transaction latencies or TTLB when the DUT is operating close to 50% of its maximum achievable throughput or connections per second. This test procedure MAY be repeated multiple times with different IP types (IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic distribution), HTTP response object sizes, and single and multiple transactions per connection scenarios.

7.4.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish "Initial objective" as defined in the parameters Section 7.4.3.2. The traffic load profile SHOULD be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain phase. The measured KPIs during the sustain phase MUST meet the validation criteria a, b, c, d, and e defined in Section 7.4.3.3.

If the KPI metrics do not meet the validation criteria, the test procedure MUST NOT continue to "Step 2".

7.4.4.2. Step 2: Test Run with Target Objective

Configure the test equipment to establish "Target objective" defined in the parameters table.
The test equipment SHOULD follow the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput, concurrent TCP connections, and application transactions per second MUST NOT reach the maximum value that the DUT/SUT can support. The test results for specific test iterations SHOULD NOT be reported if the above-mentioned KPIs (especially throughput) reach the maximum value. (Example: if the test iteration with a 64 KByte HTTP response object size reached the maximum throughput limitation of the DUT, the test iteration MAY be interrupted and the result for 64 KByte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified KPIs. The frequency of measurement SHOULD be 2 seconds. Continue the test until all traffic profile phases are completed. The DUT/SUT is expected to reach the desired "Target objective" in the sustain phase. In addition, the measured KPIs MUST meet all validation criteria.

Follow step 3 if the KPI metrics do not meet the validation criteria.

7.4.4.3. Step 3: Test Iteration

Determine the maximum achievable connections per second within the validation criteria and measure the latency values.

7.5. Concurrent TCP/HTTP Connection Capacity

7.5.1. Objective

Determine the maximum number of concurrent TCP connections that the DUT/SUT sustains when using HTTP traffic.

7.5.2. Test Setup

The test bed setup SHOULD be configured as defined in Section 4. Any specific test bed configuration changes, such as number of interfaces and interface type, MUST be documented.

7.5.3. Test Parameters

In this section, test scenario specific parameters SHOULD be defined.

7.5.3.1. DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in Section 4.2.
Any configuration changes for this specific test scenario MUST be documented.

7.5.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the requirements defined in Section 4.3. The following parameters MUST be noted for this test scenario:

Client IP address range defined in Section 4.3.1.2

Server IP address range defined in Section 4.3.2.2

Traffic distribution ratio between IPv4 and IPv6 defined in Section 4.3.1.2

Target concurrent connections: initial value from the product data sheet (if known)

Initial concurrent connections: 10% of "Target concurrent connections"

Maximum connections per second during ramp up phase: 50% of the maximum connections per second measured in test scenario TCP/HTTP Connections Per Second (Section 7.2)

Ramp up time (in the traffic load profile for "Target concurrent connections"): "Target concurrent connections" / "Maximum connections per second during ramp up phase"

Ramp up time (in the traffic load profile for "Initial concurrent connections"): "Initial concurrent connections" / "Maximum connections per second during ramp up phase"

The client MUST negotiate HTTP 1.1 with persistence, and each client MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET requests for a 1 KByte HTTP response object in the same TCP connection (10 transactions/TCP connection), and the delay (think time) between transactions MUST be X seconds.

X = ("Ramp up time" + "steady state time") / 10

The established connections SHOULD remain open until the ramp down phase of the test. During the ramp down phase, all connections SHOULD be successfully closed with FIN.

7.5.3.3. Test Results Validation Criteria

The following criteria are defined as the test results validation criteria.
The test results validation criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. The number of failed application transactions (receiving any HTTP response code other than 200 OK) MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. The number of terminated TCP connections due to unexpected TCP RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic should be forwarded constantly.

7.5.3.4. Measurement

The following KPI metrics MUST be reported for this test scenario:

average Throughput, Concurrent TCP connections (minimum, average and maximum), TTLB/application transaction latency (minimum, average and maximum), and average application transactions per second.

7.5.4. Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP connection capacity of the DUT/SUT during the sustain phase of the traffic load profile. The test procedure consists of three major steps. This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions.

7.5.4.1. Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the test equipment to establish "Initial concurrent TCP connections" defined in Section 7.5.3.2. Except for the ramp up time, the traffic load profile SHOULD be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial concurrent TCP connections". The measured KPIs during the sustain phase MUST meet the validation criteria "a" and "b" defined in Section 7.5.3.3.
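The ramp-up and think-time parameters for this scenario follow directly from the formulas in Section 7.5.3.2. A small sketch with illustrative numbers (the target and rate values are examples, not recommendations):

```python
def ramp_up_time(target_concurrent, max_cps):
    # Ramp up time = "Target concurrent connections" /
    #                "Maximum connections per second during ramp up phase"
    return target_concurrent / max_cps

def think_time(ramp_up, steady_state):
    # X = ("Ramp up time" + "steady state time") / 10, i.e. the delay
    # between the 10 transactions of each persistent connection
    return (ramp_up + steady_state) / 10

target_concurrent = 1_000_000   # from the product data sheet (example)
max_cps = 50_000                # 50% of max cps from Section 7.2 (example)

ramp_up = ramp_up_time(target_concurrent, max_cps)   # 20.0 seconds
x = think_time(ramp_up, steady_state=600)            # (20 + 600) / 10 = 62.0
print(ramp_up, x)
```

Spreading the 10 transactions across ramp up plus steady state in this way keeps each connection open, and thus countable as concurrent, for the whole measurement window.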
1477 If the KPI metrics do not meet the validation criteria, the test 1478 procedure MUST NOT be continued to "Step 2". 1480 7.5.4.2. Step 2: Test Run with Target Objective 1482 Configure test equipment to establish "Target concurrent TCP 1483 connections". The test equipment SHOULD follow the traffic load 1484 profile definition (except ramp up time) as described in 1485 Section 4.3.4. 1487 During the ramp up and sustain phase, the other KPIs such as 1488 throughput, TCP connections per second and application transactions 1489 per second MUST NOT reach the maximum value that the DUT/SUT can 1490 support. 1492 The test equipment SHOULD start to measure and record KPIs defined in 1493 Section 7.5.3.4. The frequency of measurement SHOULD be 2 seconds. 1494 Continue the test until all traffic profile phases are completed. 1496 The DUT/SUT is expected to reach the desired target concurrent 1497 connections during the sustain phase. In addition, the measured KPIs MUST 1498 meet all validation criteria. 1500 Follow step 3 if the KPI metrics do not meet the validation 1501 criteria. 1503 7.5.4.3. Step 3: Test Iteration 1505 Determine the maximum and average achievable concurrent TCP 1506 connection capacity within the validation criteria. 1508 7.6. TCP/HTTPS Connections per second 1510 7.6.1. Objective 1512 Using HTTPS traffic, determine the maximum sustainable SSL/TLS 1513 session establishment rate supported by the DUT/SUT under different 1514 throughput load conditions. 1516 Test iterations MUST include common cipher suites and key strengths 1517 as well as forward-looking stronger keys. Specific test iterations 1518 MUST include ciphers and keys defined in Section 7.6.3.2.
1520 For each cipher suite and key strength, test iterations MUST use a 1521 single HTTPS response object size defined in the test equipment 1522 configuration parameters Section 7.6.3.2 to measure connections per 1523 second performance under a variety of DUT security inspection load 1524 conditions. 1526 7.6.2. Test Setup 1528 Test bed setup SHOULD be configured as defined in Section 4. Any 1529 specific test bed configuration changes such as number of interfaces 1530 and interface type, etc. MUST be documented. 1532 7.6.3. Test Parameters 1534 In this section, test scenario specific parameters SHOULD be defined. 1536 7.6.3.1. DUT/SUT Configuration Parameters 1538 DUT/SUT parameters MUST conform to the requirements defined in 1539 Section 4.2. Any configuration changes for this specific test 1540 scenario MUST be documented. 1542 7.6.3.2. Test Equipment Configuration Parameters 1544 Test equipment configuration parameters MUST conform to the 1545 requirements defined in Section 4.3. The following parameters MUST be 1546 documented for this test scenario: 1548 Client IP address range defined in Section 4.3.1.2 1550 Server IP address range defined in Section 4.3.2.2 1552 Traffic distribution ratio between IPv4 and IPv6 defined in 1553 Section 4.3.1.2 1554 Target connections per second: Initial value from product data sheet 1555 (if known) 1557 Initial connections per second: 10% of "Target connections per 1558 second" 1560 RECOMMENDED ciphers and keys: 1562 1. ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash 1563 Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1) 1565 2. ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash 1566 Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1) 1568 3. ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash 1569 Algorithm: ecdsa_secp384r1_sha384 and Supported group: secp521r1) 1571 4.
ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash 1572 Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1) 1574 The client MUST negotiate HTTPS 1.1 and close the connection with FIN 1575 immediately after completion of one transaction. In each test 1576 iteration, the client MUST send a GET command requesting a fixed HTTPS 1577 response object size. The RECOMMENDED object sizes are 1, 2, 4, 16, 1578 64 KByte. 1580 7.6.3.3. Test Results Validation Criteria 1582 The following test criteria are defined as test results validation 1583 criteria: 1585 a. Number of failed Application transactions (receiving any HTTP 1586 response code other than 200 OK) MUST be less than 0.001% (1 out 1587 of 100,000 transactions) of attempted transactions 1589 b. Number of Terminated TCP connections due to unexpected TCP RST 1590 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 1591 connections) of total initiated TCP connections 1593 c. During the sustain phase, traffic SHOULD be forwarded at a 1594 constant rate 1596 d. Concurrent TCP connections SHOULD be constant during steady 1597 state. This confirms that the DUT opens and closes TCP 1598 connections at the same rate 1600 7.6.3.4. Measurement 1602 The following KPI metrics MUST be reported for this test scenario: 1604 average TCP connections per second, average throughput and average 1605 time to TCP first byte. 1607 7.6.4. Test Procedures and Expected Results 1609 The test procedure is designed to measure the TCP connections per 1610 second rate of the DUT/SUT during the sustain phase of the traffic load 1611 profile. The test procedure consists of three major steps. This 1612 test procedure MAY be repeated multiple times with different IPv4 and 1613 IPv6 traffic distribution. 1615 7.6.4.1. Step 1: Test Initialization and Qualification 1617 Verify the link status of all connected physical interfaces. All 1618 interfaces are expected to be in "UP" status.
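As part of initialization, the test client's TLS stack is configured for the cipher suite under test. A non-normative Python sketch, pinning the ECDHE-RSA-AES128-GCM-SHA256 iteration from the RECOMMENDED list (certificate material, server endpoints, and load generation are test-bed specific and omitted here):

```python
import ssl

# Non-normative sketch: restrict a client-side TLS context to one of
# the RECOMMENDED cipher suites (the ECDHE-RSA-AES128-GCM-SHA256 /
# RSA 2048 iteration).  The suite is a TLS 1.2 cipher, so the context
# is pinned to TLS 1.2 for this iteration.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")

# The enabled cipher list now contains only the suite under test
# (for the pinned protocol version).
names = [c["name"] for c in ctx.get_ciphers()]
```

Commercial test equipment exposes the same choice as a per-iteration cipher/key configuration; the point is that exactly one suite from Section 7.6.3.2 is active per iteration.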
1620 Configure traffic load profile of the test equipment to establish 1621 "Initial connections per second" as defined in Section 7.6.3.2. The 1622 traffic load profile MAY be defined as described in Section 4.3.4. 1624 The DUT/SUT SHOULD reach the "Initial connections per second" before 1625 the sustain phase. The measured KPIs during the sustain phase MUST 1626 meet the validation criteria a, b, c, and d defined in 1627 Section 7.6.3.3. 1629 If the KPI metrics do not meet the validation criteria, the test 1630 procedure MUST NOT be continued to "Step 2". 1632 7.6.4.2. Step 2: Test Run with Target Objective 1634 Configure test equipment to establish "Target connections per second" 1635 defined in the parameters table. The test equipment SHOULD follow 1636 the traffic load profile definition as described in Section 4.3.4. 1638 During the ramp up and sustain phase, other KPIs such as throughput, 1639 concurrent TCP connections and application transactions per second 1640 MUST NOT reach the maximum value that the DUT/SUT can support. The 1641 test results for a specific test iteration SHOULD NOT be reported if 1642 the above-mentioned KPIs (especially throughput) reach the maximum 1643 value. (Example: If the test iteration with a 64 KByte HTTPS 1644 response object size reached the maximum throughput limitation of the 1645 DUT, the test iteration can be interrupted and the result for 64 1646 KByte SHOULD NOT be reported). 1648 The test equipment SHOULD start to measure and record all specified 1649 KPIs. The frequency of measurement SHOULD be 2 seconds. Continue 1650 the test until all traffic profile phases are completed. 1652 The DUT/SUT is expected to reach the desired target connections per 1653 second rate during the sustain phase. In addition, the measured KPIs 1654 MUST meet all validation criteria. 1656 Follow step 3 if the KPI metrics do not meet the validation 1657 criteria. 1659 7.6.4.3.
Step 3: Test Iteration 1661 Determine the maximum and average achievable connections per second 1662 within the validation criteria. 1664 7.7. HTTPS Throughput 1666 7.7.1. Objective 1668 Determine the throughput for HTTPS transactions while varying the HTTPS 1669 response object size. 1671 Test iterations MUST include common cipher suites and key strengths 1672 as well as forward-looking stronger keys. Specific test iterations 1673 MUST include the ciphers and keys defined in the parameter 1674 Section 7.7.3.2. 1676 7.7.2. Test Setup 1678 Test bed setup SHOULD be configured as defined in Section 4. Any 1679 specific test bed configuration changes such as number of interfaces 1680 and interface type, etc. MUST be documented. 1682 7.7.3. Test Parameters 1684 In this section, test scenario specific parameters SHOULD be defined. 1686 7.7.3.1. DUT/SUT Configuration Parameters 1688 DUT/SUT parameters MUST conform to the requirements defined in 1689 Section 4.2. Any configuration changes for this specific test 1690 scenario MUST be documented. 1692 7.7.3.2. Test Equipment Configuration Parameters 1694 Test equipment configuration parameters MUST conform to the 1695 requirements defined in Section 4.3. The following parameters MUST be 1696 documented for this test scenario: 1698 Client IP address range defined in Section 4.3.1.2 1700 Server IP address range defined in Section 4.3.2.2 1702 Traffic distribution ratio between IPv4 and IPv6 defined in 1703 Section 4.3.1.2 1705 Target Throughput: Initial value from product data sheet (if known) 1707 Initial Throughput: 10% of "Target Throughput" 1709 Number of HTTPS response object requests (transactions) per 1710 connection: 10 1712 RECOMMENDED ciphers and keys: 1714 1. ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash 1715 Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1) 1717 2. ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash 1718 Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1) 1720 3.
ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash 1721 Algorithm: ecdsa_secp384r1_sha384 and Supported group: secp521r1) 1723 4. ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash 1724 Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1) 1726 RECOMMENDED HTTPS response object size: 1 KByte, 2 KByte, 4 KByte, 16 1727 KByte, 64 KByte, 256 KByte and mixed object defined in the table 1728 below. 1730 +---------------------+---------------------+ 1731 | Object size (KByte) | Number of requests/ | 1732 | | Weight | 1733 +---------------------+---------------------+ 1734 | 0.2 | 1 | 1735 +---------------------+---------------------+ 1736 | 6 | 1 | 1737 +---------------------+---------------------+ 1738 | 8 | 1 | 1739 +---------------------+---------------------+ 1740 | 9 | 1 | 1741 +---------------------+---------------------+ 1742 | 10 | 1 | 1743 +---------------------+---------------------+ 1744 | 25 | 1 | 1745 +---------------------+---------------------+ 1746 | 26 | 1 | 1747 +---------------------+---------------------+ 1748 | 35 | 1 | 1749 +---------------------+---------------------+ 1750 | 59 | 1 | 1751 +---------------------+---------------------+ 1752 | 347 | 1 | 1753 +---------------------+---------------------+ 1755 Table 4: Mixed Objects 1757 7.7.3.3. Test Results Validation Criteria 1759 The following test criteria are defined as test results validation 1760 criteria. Test results validation criteria MUST be monitored during 1761 the whole sustain phase of the traffic load profile. 1763 a. Number of failed Application transactions (receiving any HTTP 1764 response code other than 200 OK) MUST be less than 0.001% (1 out 1765 of 100,000 transactions) of attempted transactions. 1767 b. Traffic SHOULD be forwarded constantly. 1769 c. The deviation of concurrent TCP connections MUST be less than 10% 1771 7.7.3.4.
Measurement 1773 The following KPI metrics MUST be reported for this test scenario: 1775 average throughput, average transactions per second, concurrent 1776 TCP connections, and average TCP connections per second. 1778 7.7.4. Test Procedures and Expected Results 1780 The test procedure consists of three major steps. This test 1781 procedure MAY be repeated multiple times with different IPv4 and IPv6 1782 traffic distribution and HTTPS response object sizes. 1784 7.7.4.1. Step 1: Test Initialization and Qualification 1786 Verify the link status of all connected physical interfaces. All 1787 interfaces are expected to be in "UP" status. 1789 Configure traffic load profile of the test equipment to establish 1790 "initial throughput" as defined in the parameters Section 7.7.3.2. 1792 The traffic load profile SHOULD be defined as described in 1793 Section 4.3.4. The DUT/SUT SHOULD reach the "Initial Throughput" 1794 during the sustain phase. Measure all KPIs as defined in 1795 Section 7.7.3.4. 1797 The measured KPIs during the sustain phase MUST meet the validation 1798 criteria "a" defined in Section 7.7.3.3. 1800 If the KPI metrics do not meet the validation criteria, the test 1801 procedure MUST NOT be continued to "Step 2". 1803 7.7.4.2. Step 2: Test Run with Target Objective 1805 The test equipment SHOULD start to measure and record all specified 1806 KPIs. The frequency of measurement SHOULD be 2 seconds. Continue 1807 the test until all traffic profile phases are completed. 1809 The DUT/SUT is expected to reach the desired "Target Throughput" during 1810 the sustain phase. In addition, the measured KPIs MUST meet all 1811 validation criteria. 1813 Perform the test separately for each HTTPS response object size. 1815 Follow step 3 if the KPI metrics do not meet the validation 1816 criteria. 1818 7.7.4.3. Step 3: Test Iteration 1820 Determine the maximum and average achievable throughput within the 1821 validation criteria.
The final test iteration MUST be performed for the 1822 test duration defined in Section 4.3.4. 1824 7.8. HTTPS Transaction Latency 1826 7.8.1. Objective 1828 Using HTTPS traffic, determine the average HTTPS transaction latency 1829 when the DUT/SUT is running at a sustainable HTTPS transactions per second rate 1830 under different HTTPS response object sizes. 1832 Scenario 1: The client MUST negotiate HTTPS and close the connection 1833 with FIN immediately after completion of a single transaction (GET 1834 and RESPONSE). 1836 Scenario 2: The client MUST negotiate HTTPS and close the connection 1837 with FIN immediately after completion of 10 transactions (GET and 1838 RESPONSE) within a single TCP connection. 1840 7.8.2. Test Setup 1842 Test bed setup SHOULD be configured as defined in Section 4. Any 1843 specific test bed configuration changes such as number of interfaces 1844 and interface type, etc. MUST be documented. 1846 7.8.3. Test Parameters 1848 In this section, test scenario specific parameters SHOULD be defined. 1850 7.8.3.1. DUT/SUT Configuration Parameters 1852 DUT/SUT parameters MUST conform to the requirements defined in 1853 Section 4.2. Any configuration changes for this specific test 1854 scenario MUST be documented. 1856 7.8.3.2. Test Equipment Configuration Parameters 1858 Test equipment configuration parameters MUST conform to the 1859 requirements defined in Section 4.3.
The following parameters MUST be 1860 documented for this test scenario: 1862 Client IP address range defined in Section 4.3.1.2 1864 Server IP address range defined in Section 4.3.2.2 1866 Traffic distribution ratio between IPv4 and IPv6 defined in 1867 Section 4.3.1.2 1869 RECOMMENDED cipher suites and key size: ECDHE-ECDSA-AES256-GCM-SHA384 1870 with Secp521 key size (Signature Hash Algorithm: 1871 ecdsa_secp384r1_sha384 and Supported group: secp521r1) 1872 Target objective for scenario 1: 50% of the maximum connections per 1873 second measured in test scenario TCP/HTTPS Connections per second 1874 (Section 7.6) 1876 Target objective for scenario 2: 50% of the maximum throughput 1877 measured in test scenario HTTPS Throughput (Section 7.7) 1879 Initial objective for scenario 1: 10% of "Target objective for 1880 scenario 1" 1882 Initial objective for scenario 2: 10% of "Target objective for 1883 scenario 2" 1885 HTTPS transactions per TCP connection: scenario 1 with a single 1886 transaction and scenario 2 with 10 transactions 1888 HTTPS 1.1 with a GET command requesting a single 1, 16 or 64 KByte 1889 object. For each test iteration, the client MUST request a single HTTPS 1890 response object size. 1892 7.8.3.3. Test Results Validation Criteria 1894 The following test criteria are defined as test results validation 1895 criteria. Test results validation criteria MUST be monitored during 1896 the whole sustain phase of the traffic load profile. Ramp up and 1897 ramp down phase SHOULD NOT be considered. 1899 Generic criteria: 1901 a. Number of failed Application transactions (receiving any HTTP 1902 response code other than 200 OK) MUST be less than 0.001% (1 out 1903 of 100,000 transactions) of attempted transactions. 1905 b. Number of Terminated TCP connections due to unexpected TCP RST 1906 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 1907 connections) of total initiated TCP connections 1909 c.
During the sustain phase, traffic SHOULD be forwarded at a 1910 constant rate. 1912 d. Concurrent TCP connections SHOULD be constant during steady 1913 state. This confirms the DUT opens and closes TCP connections at 1914 the same rate. 1916 e. After ramp up the DUT MUST achieve the "Target objective" defined 1917 in the parameter Section 7.8.3.2 and remain in that state for the 1918 entire test duration (sustain phase). 1920 7.8.3.4. Measurement 1922 The following KPI metrics MUST be reported for each test scenario and 1923 HTTPS response object size separately: 1925 average TCP connections per second and average application 1926 transaction latency or TTLB 1928 All KPIs are measured once the target connections per second rate 1929 reaches steady state. 1931 7.8.4. Test Procedures and Expected Results 1933 The test procedure is designed to measure average application 1934 transaction latency or TTLB when the DUT is operating close to 50% of 1935 its maximum achievable connections per second. This test procedure 1936 MAY be repeated multiple times with different IP types (IPv4 only, 1937 IPv6 only and IPv4 and IPv6 mixed traffic distribution), HTTPS 1938 response object sizes and single and multiple transactions per 1939 connection scenarios. 1941 7.8.4.1. Step 1: Test Initialization and Qualification 1943 Verify the link status of all connected physical interfaces. All 1944 interfaces are expected to be in "UP" status. 1946 Configure traffic load profile of the test equipment to establish 1947 "Initial objective" as defined in the parameters Section 7.8.3.2. 1948 The traffic load profile MAY be defined as described in 1949 Section 4.3.4. 1951 The DUT/SUT SHOULD reach the "Initial objective" before the sustain 1952 phase. The measured KPIs during the sustain phase MUST meet the 1953 validation criteria a, b, c, d, and e defined in Section 7.8.3.3.
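The pass/fail arithmetic behind the generic criteria "a" and "b" (failures below 0.001%, i.e. 1 in 100,000) is a simple ratio check. A non-normative sketch, with hypothetical sustain-phase counters:

```python
# Non-normative sketch of validation criteria "a" and "b":
# failures MUST stay below 0.001% (1 in 100,000) of the totals.
MAX_FAILURE_RATIO = 0.00001  # 0.001% expressed as a fraction

def within_criteria(failed: int, total: int) -> bool:
    """True if the failure count is strictly below 0.001% of total.

    Apply once with failed application transactions vs. attempted
    transactions (criterion "a"), and once with RST-terminated
    connections vs. initiated TCP connections (criterion "b").
    """
    return total > 0 and failed / total < MAX_FAILURE_RATIO

# Hypothetical counters read from the test equipment:
assert within_criteria(failed=2, total=50_000_000)        # passes
assert not within_criteria(failed=600, total=50_000_000)  # fails
```

The check is evaluated over the whole sustain phase, consistent with the monitoring requirement above.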
1955 If the KPI metrics do not meet the validation criteria, the test 1956 procedure MUST NOT be continued to "Step 2". 1958 7.8.4.2. Step 2: Test Run with Target Objective 1960 Configure test equipment to establish "Target objective" defined in 1961 the parameters table. The test equipment SHOULD follow the traffic 1962 load profile definition as described in Section 4.3.4. 1964 During the ramp up and sustain phase, other KPIs such as throughput, 1965 concurrent TCP connections and application transactions per second 1966 MUST NOT reach the maximum value that the DUT/SUT can support. 1967 The test results for a specific test iteration SHOULD NOT be reported 1968 if the above-mentioned KPIs (especially throughput) reach the 1969 maximum value. (Example: If the test iteration with a 64 KByte HTTPS 1970 response object size reached the maximum throughput limitation of the 1971 DUT, the test iteration MAY be interrupted and the result for 64 1972 KByte SHOULD NOT be reported). 1974 The test equipment SHOULD start to measure and record all specified 1975 KPIs. The frequency of measurement SHOULD be 2 seconds. Continue 1976 the test until all traffic profile phases are completed. The DUT/SUT is 1977 expected to reach the desired "Target objective" during the sustain 1978 phase. In addition, the measured KPIs MUST meet all validation 1979 criteria. 1981 Follow step 3 if the KPI metrics do not meet the validation 1982 criteria. 1984 7.8.4.3. Step 3: Test Iteration 1986 Determine the maximum achievable connections per second within the 1987 validation criteria and measure the latency values. 1989 7.9. Concurrent TCP/HTTPS Connection Capacity 1991 7.9.1. Objective 1993 Determine the maximum number of concurrent TCP connections that the 1994 DUT/SUT sustains when using HTTPS traffic. 1996 7.9.2. Test Setup 1998 Test bed setup SHOULD be configured as defined in Section 4.
Any 1999 specific test bed configuration changes such as number of interfaces 2000 and interface type, etc. MUST be documented. 2002 7.9.3. Test Parameters 2004 In this section, test scenario specific parameters SHOULD be defined. 2006 7.9.3.1. DUT/SUT Configuration Parameters 2008 DUT/SUT parameters MUST conform to the requirements defined in 2009 Section 4.2. Any configuration changes for this specific test 2010 scenario MUST be documented. 2012 7.9.3.2. Test Equipment Configuration Parameters 2014 Test equipment configuration parameters MUST conform to the 2015 requirements defined in Section 4.3. The following parameters MUST be 2016 documented for this test scenario: 2018 Client IP address range defined in Section 4.3.1.2 2020 Server IP address range defined in Section 4.3.2.2 2022 Traffic distribution ratio between IPv4 and IPv6 defined in 2023 Section 4.3.1.2 2025 RECOMMENDED cipher suites and key size: ECDHE-ECDSA-AES256-GCM- 2026 SHA384 with Secp521 key size (Signature Hash Algorithm: 2027 ecdsa_secp384r1_sha384 and Supported group: secp521r1) 2029 Target concurrent connections: Initial value from product data 2030 sheet (if known) 2032 Initial concurrent connections: 10% of "Target concurrent 2033 connections" 2035 Connections per second during ramp up phase: 50% of maximum 2036 connections per second measured in test scenario TCP/HTTPS 2037 Connections per second (Section 7.6) 2039 Ramp up time (in traffic load profile for "Target concurrent 2040 connections"): "Target concurrent connections" / "Maximum 2041 connections per second during ramp up phase" 2043 Ramp up time (in traffic load profile for "Initial concurrent 2044 connections"): "Initial concurrent connections" / "Maximum 2045 connections per second during ramp up phase" 2047 The client MUST perform HTTPS transactions with persistence, and each 2048 client MAY open multiple concurrent TCP connections per server 2049 endpoint IP.
2051 Each client sends 10 GET commands requesting 1 KByte HTTPS response 2052 objects in the same TCP connection (10 transactions/TCP connection) 2053 and the delay (think time) between each transaction MUST be X 2054 seconds. 2056 X = ("Ramp up time" + "steady state time") /10 2057 The established connections SHOULD remain open until the ramp down 2058 phase of the test. During the ramp down phase, all connections 2059 SHOULD be successfully closed with FIN. 2061 7.9.3.3. Test Results Validation Criteria 2063 The following test criteria are defined as test results validation 2064 criteria. Test results validation criteria MUST be monitored during 2065 the whole sustain phase of the traffic load profile. 2067 a. Number of failed Application transactions (receiving any HTTP 2068 response code other than 200 OK) MUST be less than 0.001% (1 out 2069 of 100,000 transactions) of total attempted transactions 2071 b. Number of Terminated TCP connections due to unexpected TCP RST 2072 sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000 2073 connections) of total initiated TCP connections 2075 c. During the sustain phase, traffic SHOULD be forwarded constantly 2077 7.9.3.4. Measurement 2079 The following KPI metrics MUST be reported for this test scenario: 2081 average throughput; concurrent TCP connections (minimum, average and 2082 maximum); TTLB/application transaction latency; and average application transactions 2083 per second 2085 7.9.4. Test Procedures and Expected Results 2087 The test procedure is designed to measure the concurrent TCP 2088 connection capacity of the DUT/SUT during the sustain phase of the 2089 traffic load profile. The test procedure consists of three major 2090 steps. This test procedure MAY be repeated multiple times with 2091 different IPv4 and IPv6 traffic distribution. 2093 7.9.4.1. Step 1: Test Initialization and Qualification 2095 Verify the link status of all connected physical interfaces. All 2096 interfaces are expected to be in "UP" status.
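Step 3 of this and the preceding procedures determines the maximum achievable value within the validation criteria, which in practice is an iterative search over the load objective. A non-normative sketch; the `passes_validation` callback (which would run one full test iteration and report whether all criteria held) is hypothetical, as is the assumption that pass/fail is monotonic in the offered load:

```python
# Non-normative sketch of the "Step 3: Test Iteration" search.
# passes_validation(load) is a hypothetical callback that runs one
# complete iteration at the given objective and returns True if all
# validation criteria were met during the sustain phase.

def find_max_load(low: int, high: int, passes_validation) -> int:
    """Binary-search the largest load objective (e.g. concurrent TCP
    connections) that still meets all validation criteria, assuming
    pass/fail is monotonic in the offered load."""
    best = low
    while low <= high:
        mid = (low + high) // 2
        if passes_validation(mid):
            best = mid          # criteria met; try a higher objective
            low = mid + 1
        else:
            high = mid - 1      # criteria failed; back off
    return best

# Hypothetical DUT that sustains up to 850,000 concurrent connections:
result = find_max_load(100_000, 1_000_000, lambda n: n <= 850_000)
```

Real test runs often use coarser step-downs (e.g. 10% decrements) instead of a strict binary search, since each probe is a full-length iteration; the reported value is the largest objective that met every criterion.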
2098 Configure test equipment to establish "initial concurrent TCP 2099 connections" defined in Section 7.9.3.2. Except ramp up time, the 2100 traffic load profile SHOULD be defined as described in Section 4.3.4. 2102 During the sustain phase, the DUT/SUT SHOULD reach the "Initial 2103 concurrent TCP connections". The measured KPIs during the sustain 2104 phase MUST meet the validation criteria "a" and "b" defined in 2105 Section 7.9.3.3. 2107 If the KPI metrics do not meet the validation criteria, the test 2108 procedure MUST NOT be continued to "Step 2". 2110 7.9.4.2. Step 2: Test Run with Target Objective 2112 Configure test equipment to establish "Target concurrent TCP 2113 connections". The test equipment SHOULD follow the traffic load 2114 profile definition (except ramp up time) as described in 2115 Section 4.3.4. 2117 During the ramp up and sustain phase, the other KPIs such as 2118 throughput, TCP connections per second and application transactions 2119 per second MUST NOT reach the maximum value that the DUT/SUT can 2120 support. 2122 The test equipment SHOULD start to measure and record KPIs defined in 2123 Section 7.9.3.4. The frequency of measurement SHOULD be 2 seconds. 2124 Continue the test until all traffic profile phases are completed. 2126 The DUT/SUT is expected to reach the desired target concurrent 2127 connections during the sustain phase. In addition, the measured KPIs 2128 MUST meet all validation criteria. 2130 Follow step 3 if the KPI metrics do not meet the validation 2131 criteria. 2133 7.9.4.3. Step 3: Test Iteration 2135 Determine the maximum and average achievable concurrent TCP 2136 connections within the validation criteria. 2138 8. Formal Syntax 2140 9. IANA Considerations 2142 This document makes no request of IANA. 2144 Note to RFC Editor: this section may be removed on publication as an 2145 RFC. 2147 10.
Security Considerations 2149 The primary goal of this document is to provide benchmarking 2150 terminology and methodology for next-generation network security 2151 devices. However, readers should be aware that there is some overlap 2152 between performance and security issues. Specifically, the optimal 2153 configuration for network security device performance may not be the 2154 most secure, and vice-versa. The cipher suites recommended in 2155 this document are for test purposes only. Cipher suite 2156 recommendations for real deployments are outside the scope of this 2157 document. 2159 11. Acknowledgements 2161 Acknowledgements will be added in a future release. 2163 12. Contributors 2165 The authors would like to thank the many people who contributed 2166 their time and knowledge to this effort. 2168 Specifically, to the co-chairs of the NetSecOPEN Test Methodology 2169 working group and the NetSecOPEN Security Effectiveness working group 2170 - Alex Samonte, Aria Eslambolchizadeh, Carsten Rossenhoevel and David 2171 DeSanto. 2173 Additionally, the following people provided input, comments and spent 2174 time reviewing the myriad of drafts. If we have missed anyone the 2175 fault is entirely our own. Thanks to - Amritam Putatunda, Chao Guo, 2176 Chris Chapman, Chris Pearson, Chuck McAuley, David White, Jurrie Van 2177 Den Breekel, Michelle Rhines, Rob Andrews, Samaresh Nair, and Tim 2178 Winters. 2180 13. References 2182 13.1. Normative References 2184 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 2185 Requirement Levels", BCP 14, RFC 2119, 2186 DOI 10.17487/RFC2119, March 1997, 2187 . 2189 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2190 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 2191 May 2017, . 2193 13.2. Informative References 2195 [RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., 2196 Masinter, L., Leach, P., and T.
Berners-Lee, "Hypertext 2197 Transfer Protocol -- HTTP/1.1", RFC 2616, 2198 DOI 10.17487/RFC2616, June 1999, 2199 . 2201 [RFC2647] Newman, D., "Benchmarking Terminology for Firewall 2202 Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999, 2203 . 2205 [RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin, 2206 "Benchmarking Methodology for Firewall Performance", 2207 RFC 3511, DOI 10.17487/RFC3511, April 2003, 2208 . 2210 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 2211 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, 2212 . 2214 Appendix A. NetSecOPEN Basic Traffic Mix 2216 A traffic mix for testing performance of next generation firewalls 2217 MUST scale to stress the DUT based on real-world conditions. In 2218 order to achieve this the following MUST be included: 2220 o Clients connecting to multiple different server FQDNs per 2221 application 2223 o Clients loading apps and pages with connections and objects in 2224 specific orders 2226 o Multiple unique certificates for HTTPS/TLS 2228 o A wide variety of different object sizes 2230 o Different URL paths 2232 o Mix of HTTP and HTTPS 2234 A traffic mix for testing performance of next generation firewalls 2235 MUST also facilitate application identification using different 2236 detection methods, with and without decryption of the traffic, such 2237 as: 2239 o HTTP HOST based application detection 2241 o HTTPS/TLS Server Name Indication (SNI) 2243 o Certificate Subject Common Name (CN) 2245 The mix MUST be of sufficient complexity and volume to render 2246 differences in individual apps statistically insignificant. For 2247 example, like-to-like apps behave similarly: one type of video 2248 service vs. another both consist of larger objects, whereas one news 2249 site vs. another both typically have more connections than other apps 2250 because of trackers and embedded advertising content.
To achieve 2251 sufficient complexity, a mix MUST have: 2253 o Thousands of URLs each client walks through 2255 o Hundreds of FQDNs each client connects to 2257 o Hundreds of unique certificates for HTTPS/TLS 2259 o Thousands of different object sizes per client in orders matching 2260 applications 2262 The following is a description of what a popular application in an 2263 enterprise traffic mix contains. 2265 Table 5 lists the FQDNs, number of transactions and bytes transferred 2266 as an example client interacts with Office 365 Outlook, Word, Excel, 2267 PowerPoint, SharePoint and Skype. 2269 +---------------------------------+------------+-------------+ 2270 | Office365 FQDN | Bytes | Transaction | 2271 +============================================================+ 2272 | r1.res.office365.com | 14,056,960 | 192 | 2273 +---------------------------------+------------+-------------+ 2274 | s1-word-edit-15.cdn.office.net | 6,731,019 | 22 | 2275 +---------------------------------+------------+-------------+ 2276 | company1-my.sharepoint.com | 6,269,492 | 42 | 2277 +---------------------------------+------------+-------------+ 2278 | swx.cdn.skype.com | 6,100,027 | 12 | 2279 +---------------------------------+------------+-------------+ 2280 | static.sharepointonline.com | 6,036,947 | 41 | 2281 +---------------------------------+------------+-------------+ 2282 | spoprod-a.akamaihd.net | 3,904,250 | 25 | 2283 +---------------------------------+------------+-------------+ 2284 | s1-excel-15.cdn.office.net | 2,767,941 | 16 | 2285 +---------------------------------+------------+-------------+ 2286 | outlook.office365.com | 2,047,301 | 86 | 2287 +---------------------------------+------------+-------------+ 2288 | shellprod.msocdn.com | 1,008,370 | 11 | 2289 +---------------------------------+------------+-------------+ 2290 | word-edit.officeapps.live.com | 932,080 | 25 | 2291 +---------------------------------+------------+-------------+ 2292 | res.delve.office.com |
760,146 | 2 | 2293 +---------------------------------+------------+-------------+ 2294 | s1-powerpoint-15.cdn.office.net | 557,604 | 3 | 2295 +---------------------------------+------------+-------------+ 2296 | appsforoffice.microsoft.com | 511,171 | 5 | 2297 +---------------------------------+------------+-------------+ 2298 | powerpoint.officeapps.live.com | 471,625 | 14 | 2299 +---------------------------------+------------+-------------+ 2300 | excel.officeapps.live.com | 342,040 | 14 | 2301 +---------------------------------+------------+-------------+ 2302 | s1-officeapps-15.cdn.office.net | 331,343 | 5 | 2303 +---------------------------------+------------+-------------+ 2304 | webdir0a.online.lync.com | 66,930 | 15 | 2305 +---------------------------------+------------+-------------+ 2306 | portal.office.com | 13,956 | 1 | 2307 +---------------------------------+------------+-------------+ 2308 | config.edge.skype.com | 6,911 | 2 | 2309 +---------------------------------+------------+-------------+ 2310 | clientlog.portal.office.com | 6,608 | 8 | 2311 +---------------------------------+------------+-------------+ 2312 | webdir.online.lync.com | 4,343 | 5 | 2313 +---------------------------------+------------+-------------+ 2314 | graph.microsoft.com | 2,289 | 2 | 2315 +---------------------------------+------------+-------------+ 2316 | nam.loki.delve.office.com | 1,812 | 5 | 2317 +---------------------------------+------------+-------------+ 2318 | login.microsoftonline.com | 464 | 2 | 2319 +---------------------------------+------------+-------------+ 2320 | login.windows.net | 232 | 1 | 2321 +---------------------------------+------------+-------------+ 2323 Table 5: Office365 2325 Clients MUST connect to multiple server FQDNs in the same order as 2326 real applications. Connections MUST be made when the client is 2327 interacting with the application; clients MUST NOT set up all 2328 connections in advance.
Connections SHOULD stay open per client for subsequent 2329 transactions to the same FQDN, similar to how a web browser behaves. 2330 Clients MUST use different URL paths and object sizes in the orders 2331 observed in real applications. Clients MAY also set up 2332 multiple connections per FQDN to process multiple transactions in a 2333 sequence at the same time. Table 6 shows a partial example sequence of 2334 the Office 365 Word application transactions. 2336 +---------------------------------+----------------------+----------+ 2337 | FQDN | URL Path | Object | 2338 | | | size | 2339 +===================================================================+ 2340 | company1-my.sharepoint.com | /personal... | 23,132 | 2341 +---------------------------------+----------------------+----------+ 2342 | word-edit.officeapps.live.com | /we/WsaUpload.ashx | 2 | 2343 +---------------------------------+----------------------+----------+ 2344 | static.sharepointonline.com | /bld/.../blank.js | 454 | 2345 +---------------------------------+----------------------+----------+ 2346 | static.sharepointonline.com | /bld/.../ | 23,254 | 2347 | | initstrings.js | | 2348 +---------------------------------+----------------------+----------+ 2349 | static.sharepointonline.com | /bld/.../init.js | 292,740 | 2350 +---------------------------------+----------------------+----------+ 2351 | company1-my.sharepoint.com | /ScriptResource... | 102,774 | 2352 +---------------------------------+----------------------+----------+ 2353 | company1-my.sharepoint.com | /ScriptResource... | 40,329 | 2354 +---------------------------------+----------------------+----------+ 2355 | company1-my.sharepoint.com | /WebResource... | 23,063 | 2356 +---------------------------------+----------------------+----------+ 2357 | word-edit.officeapps.live.com | /we/wordeditorframe. | 60,657 | 2358 | | aspx...
| | 2359 +---------------------------------+----------------------+----------+ 2360 | static.sharepointonline.com | /bld/_layouts/.../ | 454 | 2361 | | blank.js | | 2362 +---------------------------------+----------------------+----------+ 2363 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 19,201 | 2364 | | EditSurface.css | | 2365 +---------------------------------+----------------------+----------+ 2366 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 221,397 | 2367 | | WordEditor.css | | 2368 +---------------------------------+----------------------+----------+ 2369 | s1-officeapps-15.cdn.office.net | /we/s/.../ | 107,571 | 2370 | | Microsoft | | 2371 | | Ajax.js | | 2372 +---------------------------------+----------------------+----------+ 2373 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 39,981 | 2374 | | wacbootwe.js | | 2375 +---------------------------------+----------------------+----------+ 2376 | s1-officeapps-15.cdn.office.net | /we/s/.../ | 51,749 | 2377 | | CommonIntl.js | | 2378 +---------------------------------+----------------------+----------+ 2379 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 6,050 | 2380 | | Compat.js | | 2381 +---------------------------------+----------------------+----------+ 2382 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 54,158 | 2383 | | Box4Intl.js | | 2384 +---------------------------------+----------------------+----------+ 2385 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 24,946 | 2386 | | WoncaIntl.js | | 2387 +---------------------------------+----------------------+----------+ 2388 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 53,515 | 2389 | | WordEditorIntl.js | | 2390 +---------------------------------+----------------------+----------+ 2391 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 1,978,712| 2392 | | WordEditorExp.js | | 2393 +---------------------------------+----------------------+----------+ 2394 | s1-word-edit-15.cdn.office.net | /we/s/.../jSanity.js | 10,912 | 2395 
+---------------------------------+----------------------+----------+ 2396 | word-edit.officeapps.live.com | /we/OneNote.ashx | 145,708 | 2397 +---------------------------------+----------------------+----------+ 2399 Table 6: Office365 Word Transactions 2401 For application identification, the HTTPS/TLS traffic MUST include 2402 realistic Certificate Subject Common Name (CN) data as well as Server 2403 Name Indication (SNI) values. For example, a DUT MAY detect Facebook Chat 2404 traffic by inspecting the certificate, detecting *.facebook.com in 2405 the certificate subject CN, subsequently detecting the word "chat" in 2406 the FQDN 5-edge-chat.facebook.com, and identifying the traffic on the 2407 connection as Facebook Chat. 2409 Table 7 includes further example SNI and CN pairs for several 2410 FQDNs of Office 365. 2412 +------------------------------+----------------------------------+ 2413 |Server Name Indication (SNI) | Certificate Subject | 2414 | | Common Name (CN) | 2415 +=================================================================+ 2416 | r1.res.office365.com | *.res.outlook.com | 2417 +------------------------------+----------------------------------+ 2418 | login.windows.net | graph.windows.net | 2419 +------------------------------+----------------------------------+ 2420 | webdir0a.online.lync.com | *.online.lync.com | 2421 +------------------------------+----------------------------------+ 2422 | login.microsoftonline.com | stamp2.login.microsoftonline.com | 2423 +------------------------------+----------------------------------+ 2424 | webdir.online.lync.com | *.online.lync.com | 2425 +------------------------------+----------------------------------+ 2426 | graph.microsoft.com | graph.microsoft.com | 2427 +------------------------------+----------------------------------+ 2428 | outlook.office365.com | outlook.com | 2429 +------------------------------+----------------------------------+ 2430 | appsforoffice.microsoft.com |
appsforoffice.microsoft.com | 2431 +------------------------------+----------------------------------+ 2433 Table 7: Office365 SNI and CN Pair Examples 2435 NetSecOPEN has provided a reference enterprise perimeter traffic mix 2436 with dozens of applications, hundreds of connections, and thousands 2437 of transactions. 2439 The enterprise perimeter traffic mix consists of 70% HTTPS and 30% 2440 HTTP by bytes, and 58% HTTPS and 42% HTTP by transactions. By 2441 connections, with a single connection per FQDN, the mix consists of 43% 2442 HTTPS and 57% HTTP. With multiple connections per FQDN, the HTTPS 2443 percentage is higher. 2445 Table 8 summarizes the NetSecOPEN enterprise perimeter traffic 2446 mix, sorted by bytes, with unique FQDNs and transactions per 2447 application. 2449 +------------------+-------+--------------+-------------+ 2450 | Application | FQDNs | Transactions | Bytes | 2451 +=======================================================+ 2452 | Office365 | 26 | 558 | 52,931,947 | 2453 +------------------+-------+--------------+-------------+ 2454 | Box | 4 | 90 | 23,276,089 | 2455 +------------------+-------+--------------+-------------+ 2456 | Salesforce | 6 | 365 | 23,137,548 | 2457 +------------------+-------+--------------+-------------+ 2458 | Gmail | 13 | 139 | 16,399,289 | 2459 +------------------+-------+--------------+-------------+ 2460 | Linkedin | 10 | 206 | 15,040,918 | 2461 +------------------+-------+--------------+-------------+ 2462 | DailyMotion | 8 | 77 | 14,751,514 | 2463 +------------------+-------+--------------+-------------+ 2464 | GoogleDocs | 2 | 71 | 14,205,476 | 2465 +------------------+-------+--------------+-------------+ 2466 | Wikia | 15 | 159 | 13,909,777 | 2467 +------------------+-------+--------------+-------------+ 2468 | Foxnews | 82 | 499 | 13,758,899 | 2469 +------------------+-------+--------------+-------------+ 2470 | Yahoo Finance | 33 | 254 | 13,134,011 | 2471
+------------------+-------+--------------+-------------+ 2472 | Youtube | 8 | 97 | 13,056,216 | 2473 +------------------+-------+--------------+-------------+ 2474 | Facebook | 4 | 207 | 12,726,231 | 2475 +------------------+-------+--------------+-------------+ 2476 | CNBC | 77 | 275 | 11,939,566 | 2477 +------------------+-------+--------------+-------------+ 2478 | Lightreading | 27 | 304 | 11,200,864 | 2479 +------------------+-------+--------------+-------------+ 2480 | BusinessInsider | 16 | 142 | 11,001,575 | 2481 +------------------+-------+--------------+-------------+ 2482 | Alexa | 5 | 153 | 10,475,151 | 2483 +------------------+-------+--------------+-------------+ 2484 | CNN | 41 | 206 | 10,423,740 | 2485 +------------------+-------+--------------+-------------+ 2486 | Twitter Video | 2 | 72 | 10,112,820 | 2487 +------------------+-------+--------------+-------------+ 2488 | Cisco Webex | 1 | 213 | 9,988,417 | 2489 +------------------+-------+--------------+-------------+ 2490 | Slack | 3 | 40 | 9,938,686 | 2491 +------------------+-------+--------------+-------------+ 2492 | Google Maps | 5 | 191 | 8,771,873 | 2493 +------------------+-------+--------------+-------------+ 2494 | SpectrumIEEE | 7 | 145 | 8,682,629 | 2495 +------------------+-------+--------------+-------------+ 2496 | Yelp | 9 | 146 | 8,607,645 | 2497 +------------------+-------+--------------+-------------+ 2498 | Vimeo | 12 | 74 | 8,555,960 | 2499 +------------------+-------+--------------+-------------+ 2500 | Wikihow | 11 | 140 | 8,042,314 | 2501 +------------------+-------+--------------+-------------+ 2502 | Netflix | 3 | 31 | 7,839,256 | 2503 +------------------+-------+--------------+-------------+ 2504 | Instagram | 3 | 114 | 7,230,883 | 2505 +------------------+-------+--------------+-------------+ 2506 | Morningstar | 30 | 150 | 7,220,121 | 2507 +------------------+-------+--------------+-------------+ 2508 | Docusign | 5 | 68 | 6,972,738 | 2509 
+------------------+-------+--------------+-------------+ 2510 | Twitter | 1 | 100 | 6,939,150 | 2511 +------------------+-------+--------------+-------------+ 2512 | Tumblr | 11 | 70 | 6,877,200 | 2513 +------------------+-------+--------------+-------------+ 2514 | Whatsapp | 3 | 46 | 6,829,848 | 2515 +------------------+-------+--------------+-------------+ 2516 | Imdb | 16 | 251 | 6,505,227 | 2517 +------------------+-------+--------------+-------------+ 2518 | NOAAgov | 1 | 44 | 6,316,283 | 2519 +------------------+-------+--------------+-------------+ 2520 | IndustryWeek | 23 | 192 | 6,242,403 | 2521 +------------------+-------+--------------+-------------+ 2522 | Spotify | 18 | 119 | 6,231,013 | 2523 +------------------+-------+--------------+-------------+ 2524 | AutoNews | 16 | 165 | 6,115,354 | 2525 +------------------+-------+--------------+-------------+ 2526 | Evernote | 3 | 47 | 6,063,168 | 2527 +------------------+-------+--------------+-------------+ 2528 | NatGeo | 34 | 104 | 6,026,344 | 2529 +------------------+-------+--------------+-------------+ 2530 | BBC News | 18 | 156 | 5,898,572 | 2531 +------------------+-------+--------------+-------------+ 2532 | Investopedia | 38 | 241 | 5,792,038 | 2533 +------------------+-------+--------------+-------------+ 2534 | Pinterest | 8 | 102 | 5,658,994 | 2535 +------------------+-------+--------------+-------------+ 2536 | Succesfactors | 2 | 112 | 5,049,001 | 2537 +------------------+-------+--------------+-------------+ 2538 | AbaJournal | 6 | 93 | 4,985,626 | 2539 +------------------+-------+--------------+-------------+ 2540 | Pbworks | 4 | 78 | 4,670,980 | 2541 +------------------+-------+--------------+-------------+ 2542 | NetworkWorld | 42 | 153 | 4,651,354 | 2543 +------------------+-------+--------------+-------------+ 2544 | WebMD | 24 | 280 | 4,416,736 | 2545 +------------------+-------+--------------+-------------+ 2546 | OilGasJournal | 14 | 105 | 4,095,255 | 2547 
+------------------+-------+--------------+-------------+ 2548 | Trello | 5 | 39 | 4,080,182 | 2549 +------------------+-------+--------------+-------------+ 2550 | BusinessWire | 5 | 109 | 4,055,331 | 2551 +------------------+-------+--------------+-------------+ 2552 | Dropbox | 5 | 17 | 4,023,469 | 2553 +------------------+-------+--------------+-------------+ 2554 | Nejm | 20 | 190 | 4,003,657 | 2555 +------------------+-------+--------------+-------------+ 2556 | OilGasDaily | 7 | 199 | 3,970,498 | 2557 +------------------+-------+--------------+-------------+ 2558 | Chase | 6 | 52 | 3,719,232 | 2559 +------------------+-------+--------------+-------------+ 2560 | MedicalNews | 6 | 117 | 3,634,187 | 2561 +------------------+-------+--------------+-------------+ 2562 | Marketwatch | 25 | 142 | 3,291,226 | 2563 +------------------+-------+--------------+-------------+ 2564 | Imgur | 5 | 48 | 3,189,919 | 2565 +------------------+-------+--------------+-------------+ 2566 | NPR | 9 | 83 | 3,184,303 | 2567 +------------------+-------+--------------+-------------+ 2568 | Onelogin | 2 | 31 | 3,132,707 | 2569 +------------------+-------+--------------+-------------+ 2570 | Concur | 2 | 50 | 3,066,326 | 2571 +------------------+-------+--------------+-------------+ 2572 | Service-now | 1 | 37 | 2,985,329 | 2573 +------------------+-------+--------------+-------------+ 2574 | Apple itunes | 14 | 80 | 2,843,744 | 2575 +------------------+-------+--------------+-------------+ 2576 | BerkeleyEdu | 3 | 69 | 2,622,009 | 2577 +------------------+-------+--------------+-------------+ 2578 | MSN | 39 | 203 | 2,532,972 | 2579 +------------------+-------+--------------+-------------+ 2580 | Indeed | 3 | 47 | 2,325,197 | 2581 +------------------+-------+--------------+-------------+ 2582 | MayoClinic | 6 | 56 | 2,269,085 | 2583 +------------------+-------+--------------+-------------+ 2584 | Ebay | 9 | 164 | 2,219,223 | 2585 
+------------------+-------+--------------+-------------+ 2586 | UCLAedu | 3 | 42 | 1,991,311 | 2587 +------------------+-------+--------------+-------------+ 2588 | ConstructionDive | 5 | 125 | 1,828,428 | 2589 +------------------+-------+--------------+-------------+ 2590 | EducationNews | 4 | 78 | 1,605,427 | 2591 +------------------+-------+--------------+-------------+ 2592 | BofA | 12 | 68 | 1,584,851 | 2593 +------------------+-------+--------------+-------------+ 2594 | ScienceDirect | 7 | 26 | 1,463,951 | 2595 +------------------+-------+--------------+-------------+ 2596 | Reddit | 8 | 55 | 1,441,909 | 2597 +------------------+-------+--------------+-------------+ 2598 | FoodBusinessNews | 5 | 49 | 1,378,298 | 2599 +------------------+-------+--------------+-------------+ 2600 | Amex | 8 | 42 | 1,270,696 | 2601 +------------------+-------+--------------+-------------+ 2602 | Weather | 4 | 50 | 1,243,826 | 2603 +------------------+-------+--------------+-------------+ 2604 | Wikipedia | 3 | 27 | 958,935 | 2605 +------------------+-------+--------------+-------------+ 2606 | Bing | 1 | 52 | 697,514 | 2607 +------------------+-------+--------------+-------------+ 2608 | ADP | 1 | 30 | 508,654 | 2609 +------------------+-------+--------------+-------------+ 2610 | | | | | 2611 +------------------+-------+--------------+-------------+ 2612 | Grand Total | 983 | 10021 | 569,819,095 | 2613 +------------------+-------+--------------+-------------+ 2615 Table 8: Summary of NetSecOPEN Enterprise Perimeter Traffic Mix 2617 Authors' Addresses 2619 Balamuhunthan Balarajah 2621 Email: bm.balarajah@gmail.com 2623 Carsten Rossenhoevel 2624 EANTC AG 2625 Salzufer 14 2626 Berlin 10587 2627 Germany 2629 Email: cross@eantc.de 2630 Brian Monkman 2631 NetSecOPEN 2633 Email: bmonkman@netsecopen.org