Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                           C. Rossenhoevel
Expires: September 6, 2019                                      EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                           March 5, 2019

    Benchmarking Methodology for Network Security Device Performance
                   draft-ietf-bmwg-ngfw-performance-00

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), intrusion detection and prevention solutions
   (IDS/IPS), and unified threat management (UTM) implementations.
   This document aims to improve the applicability, reproducibility,
   and transparency of benchmarks and to align the test methodology
   with today's increasingly complex layer 7 application use cases.
   The main areas covered are test terminology, traffic profiles, and
   benchmarking methodology for NGFWs.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 6, 2019.
Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
       4.1.  Testbed Configuration
       4.2.  DUT/SUT Configuration
       4.3.  Test Equipment Configuration
             4.3.1.  Client Configuration
             4.3.2.  Backend Server Configuration
             4.3.3.  Traffic Flow Definition
             4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
       6.1.  Key Performance Indicators
   7.  Benchmarking Tests
       7.1.  Throughput Performance With NetSecOPEN Traffic Mix
             7.1.1.  Objective
             7.1.2.  Test Setup
             7.1.3.  Test Parameters
             7.1.4.  Test Procedures and Expected Results
       7.2.  TCP/HTTP Connections Per Second
             7.2.1.  Objective
             7.2.2.  Test Setup
             7.2.3.  Test Parameters
             7.2.4.  Test Procedures and Expected Results
       7.3.  HTTP Throughput
             7.3.1.  Objective
             7.3.2.  Test Setup
             7.3.3.  Test Parameters
             7.3.4.  Test Procedures and Expected Results
       7.4.  TCP/HTTP Transaction Latency
             7.4.1.  Objective
             7.4.2.  Test Setup
             7.4.3.  Test Parameters
             7.4.4.  Test Procedures and Expected Results
       7.5.  Concurrent TCP/HTTP Connection Capacity
             7.5.1.  Objective
             7.5.2.  Test Setup
             7.5.3.  Test Parameters
             7.5.4.  Test Procedures and Expected Results
       7.6.  TCP/HTTPS Connections Per Second
             7.6.1.  Objective
             7.6.2.  Test Setup
             7.6.3.  Test Parameters
             7.6.4.  Test Procedures and Expected Results
       7.7.  HTTPS Throughput
             7.7.1.  Objective
             7.7.2.  Test Setup
             7.7.3.  Test Parameters
             7.7.4.  Test Procedures and Expected Results
       7.8.  HTTPS Transaction Latency
             7.8.1.  Objective
             7.8.2.  Test Setup
             7.8.3.  Test Parameters
             7.8.4.  Test Procedures and Expected Results
       7.9.  Concurrent TCP/HTTPS Connection Capacity
             7.9.1.  Objective
             7.9.2.  Test Setup
             7.9.3.  Test Parameters
             7.9.4.  Test Procedures and Expected Results
   8.  Formal Syntax
   9.  IANA Considerations
   10. Acknowledgements
   11. Contributors
   12. References
       12.1.  Normative References
       12.2.  Informative References
   Appendix A.  NetSecOPEN Basic Traffic Mix
   Authors' Addresses

1.  Introduction

   Fifteen years have passed since the IETF first recommended test
   methodology and terminology for firewalls ([RFC2647], [RFC3511]).
   The requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved to more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, and more.  In an
   industry of growing importance, well-defined and reproducible key
   performance indicators (KPIs) are increasingly needed: they enable
   fair and reasonable comparison of network security functions.  All
   these reasons have led to the creation of a new next-generation
   firewall benchmarking document.

2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119], [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Scope

   This document provides testing terminology and testing methodology
   for next-generation firewalls and related security functions.  It
   covers two main areas: performance benchmarks and security
   effectiveness testing.  This document focuses on advanced,
   realistic, and reproducible testing methods.  Additionally, it
   describes test bed environments, test tool requirements, and test
   result formats.
4.  Test Setup

   The test setup defined in this document is applicable to all
   benchmarking test scenarios described in Section 7.

4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not due to
   inherent physical network limitations such as the number of
   physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.
   For this reason, this document recommends avoiding external devices
   such as switches and routers in the testbed wherever possible.

   However, in a typical deployment, the security devices (DUT/SUT)
   are connected to routers and switches, which will reduce the number
   of entries in the MAC or ARP tables of the DUT/SUT.  If MAC or ARP
   tables have many entries, this may impact the actual DUT/SUT
   performance due to MAC and ARP/ND table lookup processes.
   Therefore, it is RECOMMENDED to connect aggregation switches or
   routers between the test equipment and the DUT/SUT as shown in
   Figure 1.  The aggregation switches or routers can also be used to
   aggregate the test equipment or DUT/SUT ports, if the number of
   ports used on the test equipment and the DUT/SUT differ.

   If the test equipment is capable of emulating layer 3 routing
   functionality and there is no need for test equipment port
   aggregation, it is RECOMMENDED to configure the test setup as shown
   in Figure 2.

   +-------------------+      +-----------+      +--------------------+
   |Aggregation Switch/|      |           |      | Aggregation Switch/|
   | Router            +------+  DUT/SUT  +------+ Router             |
   |                   |      |           |      |                    |
   +----------+--------+      +-----------+      +--------+-----------+
              |                                           |
              |                                           |
   +----------+------------+                  +-----------+-----------+
   |                       |                  |                       |
   | +-------------------+ |                  | +-------------------+ |
   | | Emulated Router(s)| |                  | | Emulated Router(s)| |
   | |     (Optional)    | |                  | |     (Optional)    | |
   | +-------------------+ |                  | +-------------------+ |
   | +-------------------+ |                  | +-------------------+ |
   | |      Clients      | |                  | |      Servers      | |
   | +-------------------+ |                  | +-------------------+ |
   |                       |                  |                       |
   |    Test Equipment     |                  |    Test Equipment     |
   +-----------------------+                  +-----------------------+

                    Figure 1: Testbed Setup - Option 1

   +-----------------------+  +-----------+  +-----------------------+
   | +-------------------+ |  |           |  | +-------------------+ |
   | | Emulated Router(s)| |  |           |  | | Emulated Router(s)| |
   | |     (Optional)    | +--+  DUT/SUT  +--+ |     (Optional)    | |
   | +-------------------+ |  |           |  | +-------------------+ |
   | +-------------------+ |  +-----------+  | +-------------------+ |
   | |      Clients      | |                 | |      Servers      | |
   | +-------------------+ |                 | +-------------------+ |
   |                       |                 |                       |
   |    Test Equipment     |                 |    Test Equipment     |
   +-----------------------+                 +-----------------------+

                    Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all benchmarking
   tests described in Section 7.  Since each DUT/SUT will have its own
   unique configuration, users SHOULD configure their device with the
   same parameters that would be used in the actual deployment of the
   device or a typical deployment.  Users MUST enable security
   features on the DUT/SUT to achieve maximum security coverage for a
   specific deployment scenario.
   This document attempts to define the recommended security features
   which SHOULD be consistently enabled for all the benchmarking tests
   described in Section 7.  Table 1 below describes the RECOMMENDED
   set of features which SHOULD be configured on the DUT/SUT.

   Based on the customer use case, users MAY enable or disable the SSL
   inspection feature for the "Throughput Performance with NetSecOPEN
   Traffic Mix" test scenario described in Section 7.1.

   To improve repeatability, a summary of the DUT configuration
   including a description of all enabled DUT/SUT features MUST be
   published with the benchmarking results.

                  +------------------------------------+
                  |                NGFW                 |
   +----------------+-----------+----------+
   | DUT Features   | Mandatory | Optional |
   +----------------+-----------+----------+
   | SSL Inspection |     x     |          |
   +----------------+-----------+----------+
   | IDS/IPS        |     x     |          |
   +----------------+-----------+----------+
   | Web Filtering  |           |     x    |
   +----------------+-----------+----------+
   | Antivirus      |     x     |          |
   +----------------+-----------+----------+
   | Anti Spyware   |     x     |          |
   +----------------+-----------+----------+
   | Anti Botnet    |     x     |          |
   +----------------+-----------+----------+
   | DLP            |           |     x    |
   +----------------+-----------+----------+
   | DDoS           |           |     x    |
   +----------------+-----------+----------+
   | Certificate    |           |     x    |
   | Validation     |           |          |
   +----------------+-----------+----------+
   | Logging and    |     x     |          |
   | Reporting      |           |          |
   +----------------+-----------+----------+
   | Application    |     x     |          |
   | Identification |           |          |
   +----------------+-----------+----------+

                     Table 1: DUT/SUT Feature List

   In summary, the DUT/SUT SHOULD be configured as follows:

   o  All security inspection enabled

   o  Disposition of all traffic is logged - logging to an external
      device is permissible

   o  Detection of CVEs matching the following characteristics when
      searching the National Vulnerability Database (NVD); see the
      example query after this list

      *  CVSS Version: 2

      *  CVSS V2 Metrics: AV:N/Au:N/I:C/A:C

      *  AV=Attack Vector, Au=Authentication, I=Integrity, and
         A=Availability

      *  CVSS V2 Severity: High (7-10)

      *  If doing a group test, the published start date and published
         end date SHOULD be the same

   o  Geographical location filtering and Application Identification
      and Control configured to be triggered based on a site or
      application from the defined traffic mix
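   The CVE selection above can be reproduced against the NVD.  The
   following minimal sketch illustrates such a query, assuming the NVD
   2.0 REST API and its "cvssV2Metrics", "cvssV2Severity",
   "pubStartDate", and "pubEndDate" parameters; the date range shown
   is a hypothetical example, and the endpoint and parameter names
   should be verified against the current NVD documentation.

   <CODE BEGINS>
   # Sketch: query the NVD for CVEs matching the criteria above.
   import requests

   NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

   params = {
       "cvssV2Metrics": "AV:N/Au:N/I:C/A:C",  # vector/auth/impact
       "cvssV2Severity": "HIGH",              # CVSS v2 severity 7-10
       # For a group test, all testers SHOULD use identical dates
       # (hypothetical example range):
       "pubStartDate": "2016-01-01T00:00:00.000",
       "pubEndDate": "2019-01-01T00:00:00.000",
   }

   response = requests.get(NVD_URL, params=params, timeout=30)
   response.raise_for_status()
   cves = response.json().get("vulnerabilities", [])
   print(f"{len(cves)} CVEs returned in this page")
   <CODE ENDS>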
   In addition, it is RECOMMENDED to configure a realistic number of
   access policy rules on the DUT/SUT.  This document defines the
   number of access policy rules for four different classes of
   DUT/SUT.  The classification of the DUT/SUT MAY be based on its
   maximum supported firewall throughput performance number defined in
   the vendor data sheet.  This document classifies the DUT/SUT into
   four categories: extra small, small, medium, and large.

   The RECOMMENDED throughput values for these classes are:

   Extra Small (XS) - supported throughput less than 1 Gbit/s

   Small (S) - supported throughput between 1 Gbit/s and 5 Gbit/s

   Medium (M) - supported throughput greater than 5 Gbit/s and less
   than 10 Gbit/s

   Large (L) - supported throughput greater than 10 Gbit/s

   The Access Control Rules (ACL) defined in Table 2 SHOULD be
   configured from top to bottom in the order shown in the table.
   (Note: There will be differences in how security vendors implement
   ACL decision making.)  The configured ACL MUST NOT block the test
   traffic used for the benchmarking test scenarios.

   +-----------+-----------+------------------+--------+--------------+
   |           |           |                  |        |   DUT/SUT    |
   |           |           |                  |        |Classification|
   |           |   Match   |                  |        |   #rules     |
   | Rules Type| Criteria  | Description      | Action | XS| S| M | L |
   +-----------+-----------+------------------+--------+---+--+---+---+
   |Application|Application| Any application  | block  | 5 |10| 20| 50|
   |layer      |           | traffic NOT      |        |   |  |   |   |
   |           |           | included in the  |        |   |  |   |   |
   |           |           | test traffic     |        |   |  |   |   |
   +-----------+-----------+------------------+--------+---+--+---+---+
   |Transport  |Src IP and | Any src IP subnet| block  | 25|50|100|250|
   |layer      |TCP/UDP    | used in the test |        |   |  |   |   |
   |           |Dst ports  | AND any dst ports|        |   |  |   |   |
   |           |           | NOT used in the  |        |   |  |   |   |
   |           |           | test traffic     |        |   |  |   |   |
   +-----------+-----------+------------------+--------+---+--+---+---+
   |IP layer   |Src/Dst IP | Any src/dst IP   | block  | 25|50|100|250|
   |           |           | subnet NOT used  |        |   |  |   |   |
   |           |           | in the test      |        |   |  |   |   |
   +-----------+-----------+------------------+--------+---+--+---+---+
   |Application|Application| Applications     | allow  | 10|10| 10| 10|
   |layer      |           | included in the  |        |   |  |   |   |
   |           |           | test traffic     |        |   |  |   |   |
   +-----------+-----------+------------------+--------+---+--+---+---+
   |Transport  |Src IP and | Half of the src  | allow  | 1 | 1| 1 | 1 |
   |layer      |TCP/UDP    | IP used in the   |        |   |  |   |   |
   |           |Dst ports  | test AND any dst |        |   |  |   |   |
   |           |           | ports used in the|        |   |  |   |   |
   |           |           | test traffic. One|        |   |  |   |   |
   |           |           | rule per subnet  |        |   |  |   |   |
   +-----------+-----------+------------------+--------+---+--+---+---+
   |IP layer   |Src IP     | The rest of the  | allow  | 1 | 1| 1 | 1 |
   |           |           | src IP subnet    |        |   |  |   |   |
   |           |           | range used in the|        |   |  |   |   |
   |           |           | test. One rule   |        |   |  |   |   |
   |           |           | per subnet       |        |   |  |   |   |
   +-----------+-----------+------------------+--------+---+--+---+---+

                     Table 2: DUT/SUT Access List

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters in
   different protocol layers.  These parameters thereby influence the
   traffic flows which will be offered and impact performance
   measurements.

   This document specifies common test equipment configuration
   parameters applicable for all test scenarios defined in Section 7.
   Any test scenario specific parameters are described under the test
   setup section of each test scenario individually.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring clients using test equipment.  Also, this section
   specifies the recommended values for certain parameters.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno [RFC5681] variant, which
   includes congestion avoidance, back off and windowing, fast
   retransmission, and fast recovery on every TCP connection between
   client and server endpoints.  The default IPv4 and IPv6 MSS MUST be
   set to 1460 bytes and 1440 bytes, respectively, and the TX and RX
   receive windows to 65536 bytes.  The client initial congestion
   window MUST NOT exceed 10 times the MSS.  Delayed ACKs are
   permitted, and the maximum client delayed ACK MUST NOT exceed 10
   times the MSS before a forced ACK.  Up to 3 retries SHOULD be
   allowed before a timeout event is declared.  All traffic MUST set
   the TCP PSH flag to high.  The source port range SHOULD be
   1024 - 65535.  Internal timeouts SHOULD be dynamically scalable per
   RFC 793.  The client SHOULD initiate and close TCP connections.
   TCP connections MUST be closed via FIN.
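   On Linux-based traffic generators, several of these stack
   attributes map to standard socket options.  The sketch below
   illustrates this mapping, assuming a Linux host; TCP_CONGESTION and
   TCP_MAXSEG are Linux-specific, the address shown is a documentation
   example, and the PSH flag is set implicitly by the stack when the
   send buffer is flushed.

   <CODE BEGINS>
   # Sketch (Linux assumed): approximate the Section 4.3.1.1 client
   # TCP stack attributes with socket options.
   import socket

   sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

   # TCP Reno variant for congestion control (Linux-specific option).
   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"reno")

   # IPv4 MSS of 1460 bytes (use 1440 for an AF_INET6 socket).
   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1460)

   # TX and RX windows of 65536 bytes.
   sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)
   sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)

   # The client initiates and closes the connection; shutdown()
   # followed by close() yields the FIN-based teardown required here.
   sock.connect(("198.51.100.10", 80))  # documentation address
   sock.shutdown(socket.SHUT_RDWR)
   sock.close()
   <CODE ENDS>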
4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.  The traffic blocks SHOULD consist of multiple unique,
   discontinuous static address blocks.  A default gateway is
   permitted.  The IPv4 ToS byte or IPv6 traffic class should be set
   to '00' or '000000', respectively.

   The following equation can be used to determine the required total
   number of client IP addresses:

      Desired total number of client IPs =
        Target throughput [Mbit/s] / Throughput per IP address [Mbit/s]

   Based on the deployment and use case scenario, the value for
   "Throughput per IP address" can be varied.

   (Option 1) Enterprise customer use case: 6-7 Mbit/s per IP (e.g.,
   1,400-1,700 IPs per 10 Gbit/s throughput)

   (Option 2) Mobile ISP use case: 0.1-0.2 Mbit/s per IP (e.g.,
   50,000-100,000 IPs per 10 Gbit/s throughput)
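   The following short calculation illustrates the equation above for
   both use cases; the 10 Gbit/s target is an example value.

   <CODE BEGINS>
   # Sketch: required number of client IP addresses for a given
   # target throughput (example values).
   def required_client_ips(target_mbps: float, mbps_per_ip: float) -> int:
       # Desired total number of client IPs =
       #   target throughput [Mbit/s] / throughput per IP [Mbit/s]
       return round(target_mbps / mbps_per_ip)

   target = 10_000  # 10 Gbit/s expressed in Mbit/s

   # Option 1, enterprise: 6-7 Mbit/s per IP -> about 1,400-1,700 IPs
   print(required_client_ips(target, 7), "-",
         required_client_ips(target, 6))

   # Option 2, mobile ISP: 0.1-0.2 Mbit/s per IP -> 50,000-100,000 IPs
   print(required_client_ips(target, 0.2), "-",
         required_client_ips(target, 0.1))
   <CODE ENDS>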
   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6.  The following options
   can be considered for the selection of a traffic mix ratio:

   (Option 1) 100% IPv4, no IPv6

   (Option 2) 80% IPv4, 20% IPv6

   (Option 3) 50% IPv4, 50% IPv6

   (Option 4) 20% IPv4, 80% IPv6

   (Option 5) no IPv4, 100% IPv6

4.3.1.3.  Emulated Web Browser Attributes

   The emulated web browser contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate modern,
   typical browser attributes to improve the realism of the result
   set.

   For HTTP traffic emulation, the emulated browser MUST negotiate
   HTTP 1.1.  HTTP persistency MAY be enabled depending on the test
   scenario.  The browser MAY open multiple TCP connections per server
   endpoint IP at any time depending on how many sequential
   transactions need to be processed.  Within the TCP connection,
   multiple transactions MAY be processed if the emulated browser has
   available connections.  The browser SHOULD advertise a User-Agent
   header.  Headers MUST be sent uncompressed.  The browser SHOULD
   enforce content length validation.

   For encrypted traffic, the following attributes SHALL define the
   negotiated encryption parameters.  The test clients MUST use
   TLSv1.2 or higher.  The TLS record size MAY be optimized for the
   HTTPS response object size up to a record size of 16 KByte.  The
   client endpoint MUST send the TLS Extension Server Name Indication
   (SNI) information when opening a security tunnel.  Each client
   connection MUST perform a full handshake with the server
   certificate and MUST NOT use session reuse or resumption.  Cipher
   suite and key size should be defined in the parameter section of
   each test scenario.

4.3.2.  Backend Server Configuration

   This section specifies which parameters should be considered while
   configuring emulated backend servers using test equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack on the server side SHOULD be configured similar to
   the client side configuration described in Section 4.3.1.1.  In
   addition, the server initial congestion window MUST NOT exceed 10
   times the MSS.  Delayed ACKs are permitted, and the maximum server
   delayed ACK MUST NOT exceed 10 times the MSS before a forced ACK.

4.3.2.2.  Server Endpoint IP Addressing

   The server IP blocks SHOULD consist of unique, discontinuous static
   address blocks with one IP per server Fully Qualified Domain Name
   (FQDN) endpoint per test port.  The IPv4 ToS byte and IPv6 traffic
   class bytes should be set to '00' and '000000', respectively.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The server pool for HTTP SHOULD listen on TCP port 80 and emulate
   HTTP version 1.1 with persistence.  The server MUST advertise the
   server type in the Server response header [RFC2616].  For the HTTPS
   server, TLS 1.2 or higher MUST be used with a maximum record size
   of 16 KByte and MUST NOT use ticket resumption or Session ID reuse.
   The server MUST listen on TCP port 443.  The server SHALL serve a
   certificate to the client.  It is REQUIRED that the HTTPS server
   also check Host SNI information with the FQDN.  Cipher suite and
   key size should be defined in the parameter section of each test
   scenario.
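   A minimal sketch of an HTTPS server endpoint configured per these
   rules follows, assuming Python's standard ssl module; the
   certificate paths and FQDN are placeholders, and OP_NO_TICKET
   covers ticket resumption (stateful session-ID caching is
   implementation-specific).

   <CODE BEGINS>
   # Sketch: HTTPS server endpoint per Section 4.3.2.3.
   # "server.crt"/"server.key" are placeholder file names.
   import socket
   import ssl

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
   ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.2 or higher
   ctx.options |= ssl.OP_NO_TICKET               # no ticket resumption
   ctx.load_cert_chain("server.crt", "server.key")

   # Reject clients whose SNI does not match the served FQDN.
   def check_sni(ssl_sock, server_name, context):
       if server_name != "www.example.com":      # placeholder FQDN
           return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
   ctx.sni_callback = check_sni

   listener = socket.create_server(("", 443))    # listen on TCP 443
   with ctx.wrap_socket(listener, server_side=True) as tls_listener:
       conn, addr = tls_listener.accept()        # serve HTTP/1.1 here
   <CODE ENDS>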
4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between client and
   server endpoints.  At the beginning of the test, the server
   endpoint initializes and will be ready to accept connection states
   including initialization of the TCP stack as well as bound HTTP and
   HTTPS servers.  When a client endpoint is needed, it will
   initialize and be given attributes such as a MAC and IP address.
   The behavior of the client is to sweep through the given server IP
   space, sequentially generating a recognizable service by the DUT.
   Thus, a balanced mesh between client endpoints and server endpoints
   will be generated in a client port server port combination.  Each
   client endpoint performs the same actions as other endpoints, with
   the difference being the source IP of the client endpoint and the
   target server IP pool.  The client SHALL use Fully Qualified Domain
   Names (FQDN) in Host headers and for TLS Server Name Indication
   (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  When a client endpoint initiates traffic,
   this section describes how the client steps through different
   services.  Once the test is initialized, the client endpoints
   SHOULD randomly hold (perform no operation) for a few milliseconds
   to allow for better randomization of the start of client traffic.
   Each client will either open a new TCP connection or connect to a
   TCP persistence stack still open to that specific server.  At any
   point that the service profile may require encryption, a TLS
   encryption tunnel will form, presenting the URL request to the
   server.  The server will then perform an SNI name check with the
   proposed FQDN compared to the domain embedded in the certificate.
   Only when correct will the server process the HTTPS response
   object.  The initial response object from the server MUST NOT have
   a fixed size; its size is based on the benchmarking tests described
   in Section 7.  Multiple additional sub-URLs (response objects on
   the service page) MAY be requested simultaneously.  This MAY be to
   the same server IP as the initial URL.  Each sub-object will also
   use a canonical FQDN and URL path, as observed in the traffic mix
   used.

4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  The loading
   of a traffic load profile has five distinct phases: init, ramp up,
   sustain, ramp down, and collection.  A reference sketch of this
   phase structure follows the list below.

   1.  During the init phase, test bed devices including the client
       and server endpoints should negotiate layer 2-3 connectivity
       such as MAC learning and ARP.  Only after successful MAC
       learning or ARP/ND resolution SHALL the test iteration move to
       the next phase.  No measurements are made in this phase.  The
       minimum RECOMMENDED time for the init phase is 5 seconds.
       During this phase, the emulated clients SHOULD NOT initiate any
       sessions with the DUT/SUT; in contrast, the emulated servers
       should be ready to accept requests from the DUT/SUT or from
       emulated clients.

   2.  In the ramp up phase, the test equipment SHOULD start to
       generate the test traffic.  It SHOULD use a set approximate
       number of unique client IP addresses actively to generate
       traffic.  The traffic SHOULD ramp from zero to the desired
       target objective.  The target objective will be defined for
       each benchmarking test.  The duration of the ramp up phase MUST
       be configured long enough that the test equipment does not
       overwhelm the DUT/SUT's supported performance metrics, namely
       connections per second, concurrent TCP connections, and
       application transactions per second.  The RECOMMENDED time
       duration for the ramp up phase is 180-300 seconds.  No
       measurements are made in this phase.

   3.  In the sustain phase, the test equipment SHOULD continue
       generating traffic at a constant target value for a constant
       number of active client IPs.  The RECOMMENDED time duration for
       the sustain phase is 600 seconds.  This is the phase where
       measurements occur.

   4.  In the ramp down/close phase, no new connections are
       established, and no measurements are made.  The time durations
       for the ramp up and ramp down phases SHOULD be the same.  The
       RECOMMENDED duration of this phase is between 180 and 300
       seconds.

   5.  The last phase is administrative and will occur when the test
       equipment merges and collates the report data.
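   For reference, the phase structure and the RECOMMENDED durations
   can be captured as a simple table, as sketched below (durations in
   seconds; ramp values here use the upper end of the 180-300 s
   range).

   <CODE BEGINS>
   # Sketch: traffic load profile phases with RECOMMENDED durations.
   # Ramp up and ramp down SHOULD use the same duration (180-300 s).
   LOAD_PROFILE = [
       {"phase": "init",       "duration": 5,    "measure": False},
       {"phase": "ramp up",    "duration": 300,  "measure": False},
       {"phase": "sustain",    "duration": 600,  "measure": True},
       {"phase": "ramp down",  "duration": 300,  "measure": False},
       {"phase": "collection", "duration": None, "measure": False},
   ]

   total = sum(p["duration"] for p in LOAD_PROFILE if p["duration"])
   print(f"Scheduled test time: {total} s")  # 1205 s in this example
   <CODE ENDS>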
5.  Test Bed Considerations

   This section recommends steps to control the test environment and
   test equipment, specifically focusing on virtualized environments
   and virtualized test equipment.

   1.  Ensure that any ancillary switching or routing functions
       between the system under test and the test equipment do not
       limit the performance of the traffic generator.  This is
       specifically important for virtualized components (vSwitches,
       vRouters).

   2.  Verify that the performance of the test equipment matches and
       reasonably exceeds the expected maximum performance of the
       system under test.

   3.  Assert that the test bed characteristics are stable during the
       entire test session.  Several factors might influence
       stability, specifically for virtualized test beds, for example,
       additional workloads in a virtualized system, load balancing
       and movement of virtual machines during the test, or simple
       issues such as additional heat created by high workloads
       leading to an emergency CPU performance reduction.

   Test bed reference pre-tests help to ensure that the desired
   traffic generator aspects such as maximum throughput and the
   network performance metrics such as maximum latency and maximum
   packet loss are met.

   Once the desired maximum performance goals for the system under
   test have been identified, a safety margin of 10% SHOULD be added
   for throughput and subtracted for maximum latency and maximum
   packet loss.

   Test bed preparation may be performed either by configuring the DUT
   in the most trivial setup (fast forwarding) or without the presence
   of the DUT.

6.  Reporting

   This section describes how the final report should be formatted and
   presented.  The final test report MAY have two major sections: an
   introduction and a results section.  The following attributes
   SHOULD be present in the introduction section of the test report.

   1.  The name of the NetSecOPEN traffic mix (see Appendix A) MUST be
       prominent.

   2.  The time and date of the execution of the test MUST be
       prominent.

   3.  Summary of test bed software and hardware details

       A.  DUT Hardware/Virtual Configuration

           +  This section SHOULD clearly identify the make and model
              of the DUT.

           +  The port interfaces, including speed and link
              information, MUST be documented.

           +  If the DUT is a virtual VNF, interface acceleration such
              as DPDK and SR-IOV MUST be documented, as well as cores
              used, RAM used, and the pinning / resource sharing
              configuration.  The hypervisor and version MUST be
              documented.

           +  Any additional hardware relevant to the DUT such as
              controllers MUST be documented.

       B.  DUT Software

           +  The operating system name MUST be documented.

           +  The version MUST be documented.

           +  The specific configuration MUST be documented.

       C.  DUT Enabled Features

           +  Specific features, such as logging, NGFW, and DPI, MUST
              be documented.

           +  Attributes of those features MUST be documented.

           +  Any additional relevant information about features MUST
              be documented.

       D.  Test equipment hardware and software

           +  Test equipment vendor name

           +  Hardware details including model number and interface
              type

           +  Test equipment firmware and test application software
              version

   4.  Results Summary / Executive Summary

       1.  Results SHOULD resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy to read block.

       2.  In the results section of the test report, the following
           attributes should be present for each test scenario.

           a.  KPIs MUST be documented separately for each test
               scenario.  The format of the KPI metrics should be
               presented as described in Section 6.1.

           b.  The next level of details SHOULD be graphs showing each
               of these metrics over the duration (sustain phase) of
               the test.  This allows the user to see the measured
               performance stability changes over time.

6.1.  Key Performance Indicators

   This section lists KPIs for the overall benchmarking test
   scenarios.  All KPIs MUST be measured during the sustain phase of
   the traffic load profile described in Section 4.3.4.  All KPIs MUST
   be measured from the result output of the test equipment.

   o  Concurrent TCP Connections
      This key performance indicator measures the average concurrent
      open TCP connections in the sustaining period.

   o  TCP Connections Per Second
      This key performance indicator measures the average number of
      established TCP connections per second in the sustaining period.
      For the "TCP/HTTP(S) Connections Per Second" benchmarking test
      scenario, the KPI is the average number of TCP connections
      established and terminated per second, measured simultaneously.

   o  Application Transactions Per Second
      This key performance indicator measures the average number of
      successfully completed application transactions per second in
      the sustaining period.

   o  TLS Handshake Rate
      This key performance indicator measures the average TLS 1.2 or
      higher session formation rate within the sustaining period.

   o  Throughput
      This key performance indicator measures the average layer 2
      throughput within the sustaining period as well as the average
      packets per second within the same period.  The value of
      throughput SHOULD be presented in Gbit/s rounded to two places
      of precision with a more specific kbit/s in parentheses.
      Optionally, goodput MAY also be logged as an average goodput
      rate measured over the same period.  The goodput result SHALL
      also be presented in the same format as throughput.

   o  URL Response Time / Time to Last Byte (TTLB)
      This key performance indicator measures the minimum, average,
      and maximum per-URL response time in the sustaining period.  The
      latency is measured at the client and in this case would be the
      time duration between sending a GET request from the client and
      receipt of the complete response from the server.

   o  Application Transaction Latency
      This key performance indicator measures the minimum, average,
      and maximum amount of time to receive all objects from the
      server.  The value of application transaction latency SHOULD be
      presented in milliseconds rounded to zero decimal places.

   o  Time to First Byte (TTFB)
      This key performance indicator measures the minimum, average,
      and maximum time to first byte.  TTFB is the elapsed time
      between sending the SYN packet from the client and receiving the
      first byte of application data from the DUT/SUT.  TTFB SHOULD be
      expressed in milliseconds.
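   As an illustration of the formatting rules above, the sketch below
   summarizes throughput and TTLB KPIs from sustain-phase samples; the
   sample values are hypothetical measurement outputs.

   <CODE BEGINS>
   # Sketch: summarize sustain-phase KPI samples per Section 6.1.
   throughput_gbps = [9.42, 9.47, 9.45, 9.44]  # sampled every < 5 s
   ttlb_ms = [12.0, 15.5, 11.8, 13.1]

   avg_tp = sum(throughput_gbps) / len(throughput_gbps)
   # Throughput in Gbit/s, two decimal places, kbit/s in parentheses.
   print(f"Throughput: {avg_tp:.2f} Gbit/s ({avg_tp * 1e6:.0f} kbps)")

   # URL response time / TTLB: minimum, average, and maximum,
   # rounded to zero decimal places (milliseconds).
   print(f"TTLB min/avg/max: {min(ttlb_ms):.0f}/"
         f"{sum(ttlb_ms) / len(ttlb_ms):.0f}/{max(ttlb_ms):.0f} ms")
   <CODE ENDS>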
7.  Benchmarking Tests

7.1.  Throughput Performance With NetSecOPEN Traffic Mix

7.1.1.  Objective

   Using the NetSecOPEN traffic mix, determine the maximum sustainable
   throughput performance supported by the DUT/SUT.  (See Appendix A
   for details about the traffic mix.)

   It is RECOMMENDED to perform this test scenario twice: once with
   the SSL inspection feature enabled and once with the SSL inspection
   feature disabled on the DUT/SUT.

7.1.2.  Test Setup

   The test bed setup MUST be configured as defined in Section 4.  Any
   test scenario specific test bed configuration changes MUST be
   documented.

7.1.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be noted for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target throughput: It can be defined based on requirements.
      Otherwise, it represents the aggregated line rate of the
      interface(s) used in the DUT/SUT.

      Initial throughput: 10% of the "Target throughput"

   One of the following cipher suites and keys is RECOMMENDED for this
   test scenario:
   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256r1)

7.1.3.3.  Traffic Profile

   Traffic profile: The test scenario MUST be run with a single
   application traffic mix profile (see Appendix A for details about
   the traffic mix).  The name of the NetSecOPEN traffic mix MUST be
   documented.

7.1.3.4.  Test Results Acceptance Criteria

   The following criteria are defined as test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions MUST be less than
       0.001% (1 out of 100,000 transactions) of total attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections.

   c.  The maximum deviation (max. dev) of application transaction
       time or TTLB (Time To Last Byte) MUST be less than X.  (The
       value for "X" will be finalized and updated after completion of
       the PoC test.)

       The following equation MUST be used to calculate the deviation
       of application transaction latency or TTLB (a worked sketch
       follows this list):

          max. dev = max((avg_latency - min_latency),
                         (max_latency - avg_latency)) / initial_latency

       Here, the initial latency is calculated using the following
       equation.  For this calculation, the latency values (min',
       avg', and max') MUST be measured during test procedure step 1
       as defined in Section 7.1.4.1.  The variable "latency"
       represents application transaction latency or TTLB.

          initial_latency = min((avg' latency - min' latency),
                                (max' latency - avg' latency))

   d.  The maximum value of Time to First Byte (TTFB) MUST be less
       than X.
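   A worked sketch of the deviation check in criterion "c" follows;
   the numeric values are hypothetical.

   <CODE BEGINS>
   # Sketch: latency deviation for acceptance criterion "c".
   def initial_latency(min_l: float, avg_l: float, max_l: float) -> float:
       # min', avg', max' are measured in test procedure step 1
       # (Section 7.1.4.1).
       return min(avg_l - min_l, max_l - avg_l)

   def max_deviation(min_l, avg_l, max_l, init_latency):
       # Largest spread around the average, normalized to the
       # initial latency measured in step 1.
       return max(avg_l - min_l, max_l - avg_l) / init_latency

   # Hypothetical numbers: step-1 latencies 10/12/15 ms,
   # target-load latencies 11/14/19 ms.
   init = initial_latency(10.0, 12.0, 15.0)     # min(2, 3) = 2 ms
   dev = max_deviation(11.0, 14.0, 19.0, init)  # max(3, 5) / 2 = 2.5
   print(dev)  # must stay below the threshold "X"
   <CODE ENDS>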
The measured 887 KPIs during the sustain phase MUST meet acceptance criteria "a" and 888 "b" defined in Section 7.1.3.4. 890 If the KPI metrics do not meet the acceptance criteria, the test 891 procedure MUST NOT be continued to step 2. 893 7.1.4.2. Step 2: Test Run with Target Objective 895 Configure test equipment to generate traffic at the "Target 896 throughput" rate defined in the parameter table. The test equipment 897 SHOULD follow the traffic load profile definition as described in 898 Section 4.3.4. The test equipment SHOULD start to measure and record 899 all specified KPIs. The frequency of KPI metric measurements MUST be 900 less than 5 seconds. Continue the test until all traffic profile 901 phases are completed. 903 The DUT/SUT is expected to reach the desired target throughput during 904 the sustain phase. In addition, the measured KPIs MUST meet all 905 acceptance criteria. Follow step 3, if the KPI metrics do not meet 906 the acceptance criteria. 908 7.1.4.3. Step 3: Test Iteration 910 Determine the maximum and average achievable throughput within the 911 acceptance criteria. Final test iteration MUST be performed for the 912 test duration defined in Section 4.3.4. 914 7.2. TCP/HTTP Connections Per Second 916 7.2.1. Objective 918 Using HTTP traffic, determine the maximum sustainable TCP connection 919 establishment rate supported by the DUT/SUT under different 920 throughput load conditions. 922 To measure connections per second, test iterations MUST use different 923 fixed HTTP response object sizes defined in Section 7.2.3.2. 925 7.2.2. Test Setup 927 Test bed setup SHOULD be configured as defined in Section 4. Any 928 specific test bed configuration changes such as number of interfaces 929 and interface type, etc. MUST be documented. 931 7.2.3. Test Parameters 933 In this section, test scenario specific parameters SHOULD be defined. 935 7.2.3.1. DUT/SUT Configuration Parameters 937 DUT/SUT parameters MUST conform to the requirements defined in 938 Section 4.2. Any configuration changes for this specific test 939 scenario MUST be documented. 941 7.2.3.2. Test Equipment Configuration Parameters 943 Test equipment configuration parameters MUST conform to the 944 requirements defined in Section 4.3. Following parameters MUST be 945 documented for this test scenario: 947 Client IP address range defined in Section 4.3.1.2 948 Server IP address range defined in Section 4.3.2.2 950 Traffic distribution ratio between IPv4 and IPv6 defined in 951 Section 4.3.1.2 953 Target connections per second: Initial value from product data sheet 954 (if known) 956 Initial connections per second: 10% of "Target connections per 957 second" 959 The client SHOULD negotiate HTTP 1.1 and close the connection with 960 FIN immediately after completion of one transaction. In each test 961 iteration, client MUST send GET command requesting a fixed HTTP 962 response object size. 964 The RECOMMENDED response object sizes are 1, 2, 4, 16, 64 KByte 966 7.2.3.3. Test Results Acceptance Criteria 968 The following test Criteria is defined as test results acceptance 969 criteria. Test results acceptance criteria MUST be monitored during 970 the whole sustain phase of the traffic load profile. 972 a. Number of failed Application transactions MUST be less than 973 0.001% (1 out of 100,000 transactions) of total attempt 974 transactions 976 b. 
7.2.3.3.  Test Results Acceptance Criteria

   The following criteria are defined as test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions MUST be less than
       0.001% (1 out of 100,000 transactions) of total attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  Concurrent TCP connections SHOULD be constant during steady
       state.  Any deviation of concurrent TCP connections MUST be
       less than 10%.  This confirms that the DUT opens and closes TCP
       connections at almost the same rate.

7.2.3.4.  Measurement

   The following KPI metrics MUST be reported for each test iteration.

   Mandatory KPIs: average TCP connections per second, average
   throughput, and average Time to First Byte (TTFB).

7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT at the sustaining period of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IP types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
   distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish "Initial connections per second" as defined in the
   parameters Section 7.2.3.2.  The traffic load profile SHOULD be
   defined as described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "Initial connections per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet acceptance criteria a, b, c, and d defined in
   Section 7.2.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to step 2.

7.2.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish "Target connections per
   second" defined in the parameters table.  The test equipment SHOULD
   follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases of each test iteration, other
   KPIs such as throughput, concurrent TCP connections, and
   application transactions per second MUST NOT reach the maximum
   value the DUT/SUT can support.  The test results for specific test
   iterations SHOULD NOT be reported, if the above mentioned KPIs
   (especially throughput) reach the maximum value.  (Example: If the
   test iteration with a 64 KByte HTTP response object size reached
   the maximum throughput limitation of the DUT, the test iteration
   MAY be interrupted and the result for 64 KByte SHOULD NOT be
   reported.)

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement MUST be less than 5 seconds.
   Continue the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target connections per
   second rate at the sustain phase.  In addition, the measured KPIs
   MUST meet all acceptance criteria.

   Follow step 3, if the KPI metrics do not meet the acceptance
   criteria.

7.2.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable connections per second
   within the acceptance criteria.

7.3.  HTTP Throughput

7.3.1.  Objective

   Determine the throughput for HTTP transactions varying the HTTP
   response object size.
7.3.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes such as number of
   interfaces and interface type, etc. MUST be documented.

7.3.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.3.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target throughput: Initial value from the product data sheet
      (if known)

      Initial throughput: 10% of "Target throughput"

      Number of HTTP response object requests (transactions) per
      connection: 10

      RECOMMENDED HTTP response object sizes: 1 KByte, 16 KByte, 64
      KByte, 256 KByte, and the mixed objects defined in Table 3

                +---------------------+---------------------+
                | Object size (KByte) | Number of requests/ |
                |                     | Weight              |
                +---------------------+---------------------+
                |         0.2         |          1          |
                +---------------------+---------------------+
                |          6          |          1          |
                +---------------------+---------------------+
                |          8          |          1          |
                +---------------------+---------------------+
                |          9          |          1          |
                +---------------------+---------------------+
                |         10          |          1          |
                +---------------------+---------------------+
                |         25          |          1          |
                +---------------------+---------------------+
                |         26          |          1          |
                +---------------------+---------------------+
                |         35          |          1          |
                +---------------------+---------------------+
                |         59          |          1          |
                +---------------------+---------------------+
                |        347          |          1          |
                +---------------------+---------------------+

                        Table 3: Mixed Objects
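   Where the mixed-object iteration is used, the emulated client can
   draw response object sizes from Table 3 according to their weights,
   as sketched below.

   <CODE BEGINS>
   # Sketch: draw HTTP response object sizes from the Table 3 mix.
   import random

   object_sizes_kbyte = [0.2, 6, 8, 9, 10, 25, 26, 35, 59, 347]
   weights = [1] * len(object_sizes_kbyte)  # equal weight per Table 3

   # Size to request for the next transaction:
   next_size = random.choices(object_sizes_kbyte,
                              weights=weights, k=1)[0]
   print(next_size)
   <CODE ENDS>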
7.3.3.3.  Test Results Acceptance Criteria

   The following criteria are defined as test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.

   a.  The number of failed application transactions MUST be less than
       0.001% (1 out of 100,000 transactions) of attempted
       transactions.

   b.  Traffic should be forwarded constantly.

   c.  Concurrent connections MUST be constant.  The deviation of
       concurrent TCP connections MUST NOT increase by more than 10%.

7.3.3.4.  Measurement

   The following KPI metrics MUST be reported for this test scenario:

   average throughput, average HTTP transactions per second, concurrent
   connections, and average TCP connections per second.

7.3.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the HTTP throughput of
   the DUT/SUT.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IPv4 and IPv6 traffic distributions and HTTP response object sizes.

7.3.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish "Initial throughput" as defined in the parameters
   Section 7.3.3.2.

   The traffic load profile SHOULD be defined as described in
   Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial throughput"
   during the sustain phase.  Measure all KPIs as defined in
   Section 7.3.3.4.

   The measured KPIs during the sustain phase MUST meet the acceptance
   criterion "a" defined in Section 7.3.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to step 2.

7.3.4.2.  Step 2: Test Run with Target Objective

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement MUST be less than 5 seconds.
   Continue the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired "Target throughput" at
   the sustain phase.  In addition, the measured KPIs MUST meet all
   acceptance criteria.

   Perform the test separately for each HTTP response object size.

   Follow step 3, if the KPI metrics do not meet the acceptance
   criteria.

7.3.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable throughput within the
   acceptance criteria.  The final test iteration MUST be performed
   for the test duration defined in Section 4.3.4.

7.4.  TCP/HTTP Transaction Latency

7.4.1.  Objective

   Using HTTP traffic, determine the average HTTP transaction latency
   when the DUT is running with sustainable HTTP transactions per
   second supported by the DUT/SUT under different HTTP response
   object sizes.

   Test iterations MUST be performed with different HTTP response
   object sizes in two different scenarios: one with a single
   transaction and the other with multiple transactions within a
   single TCP connection.  For consistency, both the single and
   multiple transaction tests MUST be configured with HTTP 1.1.

   Scenario 1: The client MUST negotiate HTTP 1.1 and close the
   connection with FIN immediately after completion of a single
   transaction (GET and RESPONSE).

   Scenario 2: The client MUST negotiate HTTP 1.1 and close the
   connection with FIN immediately after completion of 10 transactions
   (GET and RESPONSE) within a single TCP connection.

7.4.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes such as number of
   interfaces and interface type, etc. MUST be documented.

7.4.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.4.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.4.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
   The following parameters MUST be documented for this test scenario:

      Client IP address range defined in Section 4.3.1.2

      Server IP address range defined in Section 4.3.2.2

      Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

      Target objective for scenario 1: 50% of the maximum connections
      per second measured in test scenario TCP/HTTP Connections Per
      Second (Section 7.2)

      Target objective for scenario 2: 50% of the maximum throughput
      measured in test scenario HTTP Throughput (Section 7.3)

      Initial objective for scenario 1: 10% of "Target objective for
      scenario 1"

      Initial objective for scenario 2: 10% of "Target objective for
      scenario 2"

      HTTP transactions per TCP connection: test scenario 1 with a
      single transaction and test scenario 2 with 10 transactions

      HTTP 1.1 with GET requests for a single object.  The RECOMMENDED
      object sizes are 1, 16, or 64 KByte.  For each test iteration,
      the client MUST request a single HTTP response object size.

7.4.3.3.  Test Results Acceptance Criteria

   The following criteria are defined as test results acceptance
   criteria.  Test results acceptance criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.  The
   ramp up and ramp down phases SHOULD NOT be considered.

   Generic criteria:

   a.  The number of failed application transactions MUST be less than
       0.001% (1 out of 100,000 transactions) of attempted
       transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
       100,000 connections) of total initiated TCP connections.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  Concurrent TCP connections should be constant during steady
       state.  This confirms that the DUT opens and closes TCP
       connections at the same rate.

   e.  After ramp up, the DUT MUST achieve the "Target objective"
       defined in the parameter Section 7.4.3.2 and remain in that
       state for the entire test duration (sustain phase).

7.4.3.4.  Measurement

   The following KPI metrics MUST be reported for each test scenario
   and HTTP response object size separately:

   average TCP connections per second and average application
   transaction latency

   All KPIs are measured once the target throughput achieves the
   steady state.
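   For clarity, the sketch below shows how TTFB and TTLB relate to the
   packet and byte events used in this scenario; it is a simplified
   single-connection illustration with placeholder addresses.

   <CODE BEGINS>
   # Sketch: TTFB/TTLB measurement points for one HTTP transaction.
   import socket
   import time

   start_syn = time.monotonic()             # client sends SYN
   sock = socket.create_connection(("198.51.100.10", 80))
   sock.sendall(b"GET /objects/16k.bin HTTP/1.1\r\n"
                b"Host: www.example.com\r\nConnection: close\r\n\r\n")
   start_get = time.monotonic()             # GET request sent

   first = sock.recv(65536)
   ttfb = time.monotonic() - start_syn      # first byte of app data

   while sock.recv(65536):                  # drain to the last byte
       pass
   ttlb = time.monotonic() - start_get      # complete response

   print(f"TTFB {ttfb * 1000:.0f} ms, TTLB {ttlb * 1000:.0f} ms")
   <CODE ENDS>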
The measured KPIs during the sustain phase MUST meet the acceptance
criteria a, b, c, d, and e defined in Section 7.4.3.3.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

7.4.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target objective" defined in
the parameters table.  The test equipment SHOULD follow the traffic
load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput,
concurrent TCP connections and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for specific test iterations SHOULD NOT be reported, if
the above mentioned KPIs (especially throughput) reach the maximum
value.  (Example: If the test iteration with 64 Kbyte of HTTP
response object size reached the maximum throughput limitation of
the DUT, the test iteration MAY be interrupted and the result for
64 Kbyte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement MUST be less than 5 seconds.
Continue the test until all traffic profile phases are completed.
The DUT/SUT is expected to reach the desired "Target objective" at
the sustain phase.  In addition, the measured KPIs MUST meet all
acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

7.4.4.3.  Step 3: Test Iteration

Determine the maximum achievable connections per second within the
acceptance criteria and measure the latency values.
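As an informal illustration of how the latency KPI from
Section 7.4.3.4 can be aggregated, the sketch below averages
per-transaction TTLB samples collected during the sustain phase.
The timestamps and values are assumptions for illustration only and
are not part of the methodology; test equipment reports these KPIs
natively.

   # Python sketch: averaging per-transaction TTLB samples.
   from statistics import mean

   def ttlb(get_sent_ts, last_byte_ts):
       """Time To Last Byte of one HTTP transaction, in seconds."""
       return last_byte_ts - get_sent_ts

   # (start, end) timestamps captured during the sustain phase only
   sustain_samples = [(10.000, 10.012), (10.050, 10.061),
                      (10.100, 10.115)]

   avg_latency = mean(ttlb(s, e) for s, e in sustain_samples)
   print("average transaction latency: %.1f ms" % (avg_latency * 1e3))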
7.5.  Concurrent TCP/HTTP Connection Capacity

7.5.1.  Objective

Determine the maximum number of concurrent TCP connections that the
DUT/SUT sustains when using HTTP traffic.

7.5.2.  Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes such as number of interfaces
and interface type, etc. MUST be documented.

7.5.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be
defined.

7.5.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.5.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target concurrent connections: initial value from the product
   data sheet (if known)

   Initial concurrent connections: 10% of "Target concurrent
   connections"

   Maximum connections per second during ramp up phase: 50% of the
   maximum connections per second measured in test scenario TCP/HTTP
   Connections Per Second (Section 7.2)

   Ramp up time (in traffic load profile for "Target concurrent
   connections"): "Target concurrent connections" / "Maximum
   connections per second during ramp up phase"

   Ramp up time (in traffic load profile for "Initial concurrent
   connections"): "Initial concurrent connections" / "Maximum
   connections per second during ramp up phase"

The client MUST negotiate HTTP 1.1 with persistence and each client
MAY open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting a 1 Kbyte HTTP response
object in the same TCP connection (10 transactions/TCP connection)
and the delay (think time) between transactions MUST be X seconds.

   X = ("Ramp up time" + "Steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.

7.5.3.3.  Test Results Acceptance Criteria

The following criteria are defined as the test results acceptance
criteria.  Test results acceptance criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed application transactions MUST be less than
    0.001% (1 out of 100,000 transactions) of total attempted
    transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  During the sustain phase, the maximum deviation (max. dev) of
    application transaction latency or TTLB (Time To Last Byte) MUST
    be less than 10%.

7.5.3.4.  Measurement

Following KPI metrics MUST be reported for this test scenario:

   average throughput, concurrent TCP connections (minimum, average
   and maximum), TTLB/application transaction latency (minimum,
   average and maximum) and average application transactions per
   second.

7.5.4.  Test Procedures and expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distribution.

7.5.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure test equipment to establish "Initial concurrent TCP
connections" defined in Section 7.5.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.
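To make the ramp up arithmetic of Section 7.5.3.2 concrete, here is
a minimal sketch with purely illustrative numbers; the data sheet
value, the measured connection rate and the steady state time are
assumptions, not recommendations:

   # Python sketch: deriving ramp up time and think time (X).
   target_concurrent = 1_000_000    # from product data sheet
   initial_concurrent = target_concurrent // 10  # 10% of target
   ramp_up_cps = 50_000             # 50% of max CPS (Section 7.2)
   steady_state_time = 300          # seconds, per Section 4.3.4

   ramp_up_time = target_concurrent / ramp_up_cps        # 20 s
   think_time = (ramp_up_time + steady_state_time) / 10  # X = 32 s

   print(ramp_up_time, think_time)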
During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the acceptance criteria "a" and "b" defined in
Section 7.5.3.3.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

7.5.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target concurrent TCP
connections".  The test equipment SHOULD follow the traffic load
profile definition (except ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as
throughput, TCP connections per second and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.

The test equipment SHOULD start to measure and record KPIs defined
in Section 7.5.3.4.  The frequency of measurement MUST be less than
5 seconds.  Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target concurrent
connections at the sustain phase.  In addition, the measured KPIs
MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

7.5.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP
connection capacity within the acceptance criteria.

7.6.  TCP/HTTPS Connections per second

7.6.1.  Objective

Using HTTPS traffic, determine the maximum sustainable SSL/TLS
session establishment rate supported by the DUT/SUT under different
throughput load conditions.

Test iterations MUST include common cipher suites and key strengths
as well as forward looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in Section 7.6.3.2.

For each cipher suite and key strength, test iterations MUST use a
single HTTPS response object size defined in the test equipment
configuration parameters Section 7.6.3.2 to measure connections per
second performance under a variety of DUT security inspection load
conditions.

7.6.2.  Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes such as number of interfaces
and interface type, etc. MUST be documented.

7.6.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be
defined.

7.6.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.6.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target connections per second: initial value from the product
   data sheet (if known)

   Initial connections per second: 10% of "Target connections per
   second"

   RECOMMENDED ciphers and keys:
   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

The client MUST negotiate HTTPS 1.1 and close the connection with
FIN immediately after completion of one transaction.  In each test
iteration, the client MUST send a GET command requesting a fixed
HTTPS response object size.  The RECOMMENDED object sizes are 1, 2,
4, 16 and 64 Kbyte.

7.6.3.3.  Test Results Acceptance Criteria

The following criteria are defined as the test results acceptance
criteria:

a.  Number of failed application transactions MUST be less than
    0.001% (1 out of 100,000 transactions) of attempted
    transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections SHOULD be constant during steady
    state.  This confirms that the DUT opens and closes TCP
    connections at the same rate.

7.6.3.4.  Measurement

Following KPI metrics MUST be reported for this test scenario:

   average TCP connections per second, average throughput and
   average Time to TCP First Byte.

7.6.4.  Test Procedures and expected Results

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT during the sustain phase of the traffic
load profile.  The test procedure consists of three major steps.
This test procedure MAY be repeated multiple times with different
IPv4 and IPv6 traffic distribution.

7.6.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial connections per second" as defined in
Section 7.6.3.2.  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet the acceptance criteria a, b, c, and d defined in
Section 7.6.3.3.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

7.6.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target connections per
second" defined in the parameters table.  The test equipment SHOULD
follow the traffic load profile definition as described in
Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput,
concurrent TCP connections and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for specific test iterations SHOULD NOT be reported, if
the above mentioned KPIs (especially throughput) reach the maximum
value.  (Example: If the test iteration with 64 Kbyte of HTTPS
response object size reached the maximum throughput limitation of
the DUT, the test iteration MAY be interrupted and the result for
64 Kbyte SHOULD NOT be reported.)
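The RECOMMENDED cipher suites of Section 7.6.3.2 determine the
cryptographic load placed on the DUT/SUT in this step.  Purely as an
illustration of pinning a client-side TLS stack to one of them, here
is a sketch using Python's ssl module; actual test equipment exposes
its own cipher configuration, and the server name below is a
placeholder:

   # Python sketch: restrict a TLS 1.2 client context to one
   # RECOMMENDED cipher suite and report what was negotiated.
   import socket
   import ssl

   ctx = ssl.create_default_context()
   ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.2 names below
   ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256")

   with socket.create_connection(("server.example.com", 443)) as raw:
       with ctx.wrap_socket(
               raw, server_hostname="server.example.com") as tls:
           print(tls.cipher())  # (name, protocol, secret bits)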
The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement MUST be less than 5 seconds.
Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target connections per
second rate at the sustain phase.  In addition, the measured KPIs
MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

7.6.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable connections per second
within the acceptance criteria.

7.7.  HTTPS Throughput

7.7.1.  Objective

Determine the throughput for HTTPS transactions varying the HTTPS
response object size.

Test iterations MUST include common cipher suites and key strengths
as well as forward looking stronger keys.  Specific test iterations
MUST include the ciphers and keys defined in the parameter
Section 7.7.3.2.

7.7.2.  Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes such as number of interfaces
and interface type, etc. MUST be documented.

7.7.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be
defined.

7.7.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.7.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   Target Throughput: initial value from the product data sheet (if
   known)

   Initial Throughput: 10% of "Target Throughput"

   Number of HTTPS response object requests (transactions) per
   connection: 10

   RECOMMENDED ciphers and keys:

   1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
       Algorithm: ecdsa_secp256r1_sha256 and Supported group:
       secp256r1)

   2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
       Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

   3.  ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
       Algorithm: ecdsa_secp384r1_sha384 and Supported group:
       secp521r1)

   4.  ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
       Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

   RECOMMENDED HTTPS response object sizes: 1, 2, 4, 16, 64 and
   256 Kbyte, plus the mixed objects defined in the table below.
   +---------------------+---------------------+
   | Object size (KByte) | Number of requests/ |
   |                     | Weight              |
   +---------------------+---------------------+
   | 0.2                 | 1                   |
   +---------------------+---------------------+
   | 6                   | 1                   |
   +---------------------+---------------------+
   | 8                   | 1                   |
   +---------------------+---------------------+
   | 9                   | 1                   |
   +---------------------+---------------------+
   | 10                  | 1                   |
   +---------------------+---------------------+
   | 25                  | 1                   |
   +---------------------+---------------------+
   | 26                  | 1                   |
   +---------------------+---------------------+
   | 35                  | 1                   |
   +---------------------+---------------------+
   | 59                  | 1                   |
   +---------------------+---------------------+
   | 347                 | 1                   |
   +---------------------+---------------------+

                Table 4: Mixed Objects

7.7.3.3.  Test Results Acceptance Criteria

The following criteria are defined as the test results acceptance
criteria.  Test results acceptance criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed application transactions MUST be less than
    0.001% (1 out of 100,000 transactions) of attempted
    transactions.

b.  Traffic SHOULD be forwarded at a constant rate.

c.  The deviation of concurrent TCP connections MUST be less than
    10%.

7.7.3.4.  Measurement

Following KPI metrics MUST be reported for this test scenario:

   average throughput, average transactions per second, concurrent
   TCP connections, and average TCP connections per second.

7.7.4.  Test Procedures and Expected Results

The test procedure consists of three major steps.  This test
procedure MAY be repeated multiple times with different IPv4 and
IPv6 traffic distribution and HTTPS response object sizes.

7.7.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial Throughput" as defined in the parameters
Section 7.7.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial Throughput"
during the sustain phase.  Measure all KPIs as defined in
Section 7.7.3.4.

The measured KPIs during the sustain phase MUST meet the acceptance
criteria "a" defined in Section 7.7.3.3.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

7.7.4.2.  Step 2: Test Run with Target Objective

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement MUST be less than 5 seconds.
Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired "Target Throughput" at
the sustain phase.  In addition, the measured KPIs MUST meet all
acceptance criteria.

Perform the test separately for each HTTPS response object size.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

7.7.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable throughput within the
acceptance criteria.  The final test iteration MUST be performed for
the test duration defined in Section 4.3.4.
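When sizing the mixed-object iteration, note that the equal weights
in Table 4 imply an average HTTPS response object size of roughly
52.5 Kbyte.  A minimal sketch of that arithmetic:

   # Python sketch: average object size implied by Table 4.
   sizes_kbyte = [0.2, 6, 8, 9, 10, 25, 26, 35, 59, 347]
   weights = [1] * len(sizes_kbyte)  # "Number of requests/Weight"

   avg = (sum(s * w for s, w in zip(sizes_kbyte, weights))
          / sum(weights))
   print("average object size: %.2f KByte" % avg)  # 52.52 KByte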
7.8.  HTTPS Transaction Latency

7.8.1.  Objective

Using HTTPS traffic, determine the average HTTPS transaction latency
when the DUT/SUT is running at the sustainable HTTPS transactions
per second rate under different HTTPS response object sizes.

Scenario 1: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTPS and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.

7.8.2.  Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes such as number of interfaces
and interface type, etc. MUST be documented.

7.8.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be
defined.

7.8.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.8.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   RECOMMENDED cipher suite and key size: ECDHE-ECDSA-AES256-GCM-
   SHA384 with Secp521 key size (Signature Hash Algorithm:
   ecdsa_secp384r1_sha384 and Supported group: secp521r1)

   Target objective for scenario 1: 50% of the maximum connections
   per second measured in test scenario TCP/HTTPS Connections per
   second (Section 7.6)

   Target objective for scenario 2: 50% of the maximum throughput
   measured in test scenario HTTPS Throughput (Section 7.7)

   Initial objective for scenario 1: 10% of "Target objective for
   scenario 1"

   Initial objective for scenario 2: 10% of "Target objective for
   scenario 2"

   HTTPS transactions per TCP connection: a single transaction in
   scenario 1 and 10 transactions in scenario 2

   HTTPS 1.1 with GET command requesting a single 1, 16 or 64 Kbyte
   object.  For each test iteration, the client MUST request a
   single HTTPS response object size.

7.8.3.3.  Test Results Acceptance Criteria

The following criteria are defined as the test results acceptance
criteria.  Test results acceptance criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  The ramp up
and ramp down phases SHOULD NOT be considered.

Generic criteria:

a.  Number of failed application transactions MUST be less than
    0.001% (1 out of 100,000 transactions) of attempted
    transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  Concurrent TCP connections SHOULD be constant during steady
    state.  This confirms that the DUT opens and closes TCP
    connections at the same rate.
e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameter Section 7.8.3.2 and remain in that
    state for the entire test duration (sustain phase).

7.8.3.4.  Measurement

Following KPI metrics MUST be reported for each test scenario and
each HTTPS response object size separately:

   average TCP connections per second and average application
   transaction latency or TTLB

All KPIs are measured once the target objective reaches the steady
state.

7.8.4.  Test Procedures and Expected Results

The test procedure is designed to measure the average application
transaction latency or TTLB when the DUT is operating close to 50%
of its maximum achievable connections per second.  This test
procedure MAY be repeated multiple times with different IP types
(IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic distribution),
HTTPS response object sizes and single and multiple transactions per
connection scenarios.

7.8.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial objective" as defined in the parameters
Section 7.8.3.2.  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet the
acceptance criteria a, b, c, d, and e defined in Section 7.8.3.3.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

7.8.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target objective" defined in
the parameters table.  The test equipment SHOULD follow the traffic
load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as throughput,
concurrent TCP connections and application transactions per second
MUST NOT reach the maximum value that the DUT/SUT can support.  The
test results for specific test iterations SHOULD NOT be reported, if
the above mentioned KPIs (especially throughput) reach the maximum
value.  (Example: If the test iteration with 64 Kbyte of HTTPS
response object size reached the maximum throughput limitation of
the DUT, the test iteration MAY be interrupted and the result for
64 Kbyte SHOULD NOT be reported.)

The test equipment SHOULD start to measure and record all specified
KPIs.  The frequency of measurement MUST be less than 5 seconds.
Continue the test until all traffic profile phases are completed.
The DUT/SUT is expected to reach the desired "Target objective" at
the sustain phase.  In addition, the measured KPIs MUST meet all
acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

7.8.4.3.  Step 3: Test Iteration

Determine the maximum achievable connections per second within the
acceptance criteria and measure the latency values.

7.9.  Concurrent TCP/HTTPS Connection Capacity

7.9.1.  Objective

Determine the maximum number of concurrent TCP connections that the
DUT/SUT sustains when using HTTPS traffic.

7.9.2.  Test Setup

Test bed setup SHOULD be configured as defined in Section 4.
Any specific test bed configuration changes such as number of
interfaces and interface type, etc. MUST be documented.

7.9.3.  Test Parameters

In this section, test scenario specific parameters SHOULD be
defined.

7.9.3.1.  DUT/SUT Configuration Parameters

DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2.  Any configuration changes for this specific test
scenario MUST be documented.

7.9.3.2.  Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The following parameters MUST
be documented for this test scenario:

   Client IP address range defined in Section 4.3.1.2

   Server IP address range defined in Section 4.3.2.2

   Traffic distribution ratio between IPv4 and IPv6 defined in
   Section 4.3.1.2

   RECOMMENDED cipher suite and key size: ECDHE-ECDSA-AES256-GCM-
   SHA384 with Secp521 key size (Signature Hash Algorithm:
   ecdsa_secp384r1_sha384 and Supported group: secp521r1)

   Target concurrent connections: initial value from the product
   data sheet (if known)

   Initial concurrent connections: 10% of "Target concurrent
   connections"

   Maximum connections per second during ramp up phase: 50% of the
   maximum connections per second measured in test scenario
   TCP/HTTPS Connections per second (Section 7.6)

   Ramp up time (in traffic load profile for "Target concurrent
   connections"): "Target concurrent connections" / "Maximum
   connections per second during ramp up phase"

   Ramp up time (in traffic load profile for "Initial concurrent
   connections"): "Initial concurrent connections" / "Maximum
   connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence and each
client MAY open multiple concurrent TCP connections per server
endpoint IP.

Each client sends 10 GET commands requesting a 1 Kbyte HTTPS
response object in the same TCP connection (10 transactions/TCP
connection) and the delay (think time) between each transaction MUST
be X seconds.

   X = ("Ramp up time" + "Steady state time") / 10

The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections
SHOULD be successfully closed with FIN.

7.9.3.3.  Test Results Acceptance Criteria

The following criteria are defined as the test results acceptance
criteria.  Test results acceptance criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed application transactions MUST be less than
    0.001% (1 out of 100,000 transactions) of total attempted
    transactions.

b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate.

d.  During the sustain phase, the maximum deviation (max. dev) of
    application transaction latency or TTLB (Time To Last Byte) MUST
    be less than 10%.

7.9.3.4.  Measurement

Following KPI metrics MUST be reported for this test scenario:

   average throughput, concurrent TCP connections (minimum, average
   and maximum), TTLB/application transaction latency and average
   application transactions per second.
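Criterion "d" above does not spell out how the maximum deviation is
computed.  One plausible reading, sketched below with purely
illustrative samples, compares each sustain-phase latency sample
against the sustain-phase mean:

   # Python sketch: one interpretation of "max. dev" of TTLB.
   from statistics import mean

   def max_deviation(samples):
       m = mean(samples)
       return max(abs(x - m) for x in samples) / m

   ttlb_ms = [41.0, 39.5, 40.2, 42.1, 40.8]  # sustain-phase samples
   assert max_deviation(ttlb_ms) < 0.10      # criterion "d" holds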
7.9.4.  Test Procedures and expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distribution.

7.9.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure test equipment to establish "Initial concurrent TCP
connections" defined in Section 7.9.3.2.  Except for the ramp up
time, the traffic load profile SHOULD be defined as described in
Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the acceptance criteria "a" and "b" defined in
Section 7.9.3.3.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT be continued to "Step 2".

7.9.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target concurrent TCP
connections".  The test equipment SHOULD follow the traffic load
profile definition (except ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as
throughput, TCP connections per second and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.

The test equipment SHOULD start to measure and record KPIs defined
in Section 7.9.3.4.  The frequency of measurement MUST be less than
5 seconds.  Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target concurrent
connections at the sustain phase.  In addition, the measured KPIs
MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

7.9.4.3.  Step 3: Test Iteration

Determine the maximum and average achievable concurrent TCP
connections within the acceptance criteria.

8.  Formal Syntax

9.  IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an
RFC.

10.  Acknowledgements

Acknowledgements will be added in a future release.

11.  Contributors

The authors would like to thank the many people who contributed
their time and knowledge to this effort.

Specifically, to the co-chairs of the NetSecOPEN Test Methodology
working group and the NetSecOPEN Security Effectiveness working
group - Alex Samonte, Aria Eslambolchizadeh, Carsten Rossenhoevel
and David DeSanto.

Additionally, the following people provided input, comments and
spent time reviewing the myriad of drafts.  If we have missed anyone
the fault is entirely our own.  Thanks to - Amritam Putatunda, Chao
Guo, Chris Chapman, Chris Pearson, Chuck McAuley, David White,
Jurrie Van Den Breekel, Michelle Rhines, Rob Andrews, Samaresh Nair,
and Tim Winters.

12.  References

12.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.
[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
           2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
           May 2017, <https://www.rfc-editor.org/info/rfc8174>.

12.2.  Informative References

[RFC2616]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
           Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
           Transfer Protocol -- HTTP/1.1", RFC 2616,
           DOI 10.17487/RFC2616, June 1999,
           <https://www.rfc-editor.org/info/rfc2616>.

[RFC2647]  Newman, D., "Benchmarking Terminology for Firewall
           Performance", RFC 2647, DOI 10.17487/RFC2647, August
           1999, <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
           "Benchmarking Methodology for Firewall Performance",
           RFC 3511, DOI 10.17487/RFC3511, April 2003,
           <https://www.rfc-editor.org/info/rfc3511>.

[RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
           Control", RFC 5681, DOI 10.17487/RFC5681, September 2009,
           <https://www.rfc-editor.org/info/rfc5681>.

Appendix A.  NetSecOPEN Basic Traffic Mix

A traffic mix for testing performance of next generation firewalls
MUST scale to stress the DUT based on real-world conditions.  In
order to achieve this, the following MUST be included:

o  Clients connecting to multiple different server FQDNs per
   application

o  Clients loading apps and pages with connections and objects in
   specific orders

o  Multiple unique certificates for HTTPS/TLS

o  A wide variety of different object sizes

o  Different URL paths

o  Mix of HTTP and HTTPS

A traffic mix for testing performance of next generation firewalls
MUST also facilitate application identification using different
detection methods, with and without decryption of the traffic, such
as:

o  HTTP HOST based application detection

o  HTTPS/TLS Server Name Indication (SNI)

o  Certificate Subject Common Name (CN)

The mix MUST be of sufficient complexity and volume to render
differences in individual apps statistically insignificant.  For
example, like-to-like apps behave similarly: one type of video
service vs. another both consist of larger objects, whereas one news
site vs. another both typically have more connections than other
apps because of trackers and embedded advertising content.  To
achieve sufficient complexity, a mix MUST have:

o  Thousands of URLs each client walks through

o  Hundreds of FQDNs each client connects to

o  Hundreds of unique certificates for HTTPS/TLS

o  Thousands of different object sizes per client in orders matching
   applications

The following is a description of what a popular application in an
enterprise traffic mix contains.

Table 5 lists the FQDNs, number of transactions and bytes
transferred as an example client interacts with Office 365 Outlook,
Word, Excel, PowerPoint, SharePoint and Skype.
   +---------------------------------+------------+-------------+
   | Office365 FQDN                  | Bytes      | Transaction |
   +============================================================+
   | r1.res.office365.com            | 14,056,960 | 192         |
   +---------------------------------+------------+-------------+
   | s1-word-edit-15.cdn.office.net  | 6,731,019  | 22          |
   +---------------------------------+------------+-------------+
   | company1-my.sharepoint.com      | 6,269,492  | 42          |
   +---------------------------------+------------+-------------+
   | swx.cdn.skype.com               | 6,100,027  | 12          |
   +---------------------------------+------------+-------------+
   | static.sharepointonline.com     | 6,036,947  | 41          |
   +---------------------------------+------------+-------------+
   | spoprod-a.akamaihd.net          | 3,904,250  | 25          |
   +---------------------------------+------------+-------------+
   | s1-excel-15.cdn.office.net      | 2,767,941  | 16          |
   +---------------------------------+------------+-------------+
   | outlook.office365.com           | 2,047,301  | 86          |
   +---------------------------------+------------+-------------+
   | shellprod.msocdn.com            | 1,008,370  | 11          |
   +---------------------------------+------------+-------------+
   | word-edit.officeapps.live.com   | 932,080    | 25          |
   +---------------------------------+------------+-------------+
   | res.delve.office.com            | 760,146    | 2           |
   +---------------------------------+------------+-------------+
   | s1-powerpoint-15.cdn.office.net | 557,604    | 3           |
   +---------------------------------+------------+-------------+
   | appsforoffice.microsoft.com     | 511,171    | 5           |
   +---------------------------------+------------+-------------+
   | powerpoint.officeapps.live.com  | 471,625    | 14          |
   +---------------------------------+------------+-------------+
   | excel.officeapps.live.com       | 342,040    | 14          |
   +---------------------------------+------------+-------------+
   | s1-officeapps-15.cdn.office.net | 331,343    | 5           |
   +---------------------------------+------------+-------------+
   | webdir0a.online.lync.com        | 66,930     | 15          |
   +---------------------------------+------------+-------------+
   | portal.office.com               | 13,956     | 1           |
   +---------------------------------+------------+-------------+
   | config.edge.skype.com           | 6,911      | 2           |
   +---------------------------------+------------+-------------+
   | clientlog.portal.office.com     | 6,608      | 8           |
   +---------------------------------+------------+-------------+
   | webdir.online.lync.com          | 4,343      | 5           |
   +---------------------------------+------------+-------------+
   | graph.microsoft.com             | 2,289      | 2           |
   +---------------------------------+------------+-------------+
   | nam.loki.delve.office.com       | 1,812      | 5           |
   +---------------------------------+------------+-------------+
   | login.microsoftonline.com       | 464        | 2           |
   +---------------------------------+------------+-------------+
   | login.windows.net               | 232        | 1           |
   +---------------------------------+------------+-------------+

                        Table 5: Office365

Clients MUST connect to multiple server FQDNs in the same order as
real applications.  Connections MUST be made when the client is
interacting with the application and MUST NOT first set up all
connections.  Connections SHOULD stay open per client for subsequent
transactions to the same FQDN, similar to how a web browser behaves.
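A browser-like client keeps one open connection per FQDN and reuses
it for subsequent transactions instead of opening all connections up
front.  The sketch below shows that behavior in miniature; the host
names and paths are placeholders, not part of the traffic mix:

   # Python sketch: per-FQDN connection reuse, browser style.
   import http.client

   pool = {}  # FQDN -> open HTTP/1.1 connection, reused

   def get(fqdn, path):
       conn = pool.get(fqdn)
       if conn is None:
           conn = http.client.HTTPSConnection(fqdn, timeout=10)
           pool[fqdn] = conn          # opened on first use only
       conn.request("GET", path)
       resp = conn.getresponse()
       return len(resp.read())        # drain so it can be reused

   # transactions in application order; repeat calls reuse sockets
   get("app1.example.net", "/page")
   get("cdn.example.net", "/asset.js")
   get("app1.example.net", "/api/data")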
Clients MUST use different URL paths and object sizes in the order
observed in real applications.  Clients MAY also set up multiple
connections per FQDN to process multiple transactions in a sequence
at the same time.  Table 6 has a partial example sequence of the
Office 365 Word application transactions.

+---------------------------------+----------------------+----------+
| FQDN                            | URL Path             | Object   |
|                                 |                      | size     |
+===================================================================+
| company1-my.sharepoint.com      | /personal...         | 23,132   |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com   | /we/WsaUpload.ashx   | 2        |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/.../blank.js    | 454      |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/.../            | 23,254   |
|                                 | initstrings.js       |          |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/.../init.js     | 292,740  |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com      | /ScriptResource...   | 102,774  |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com      | /ScriptResource...   | 40,329   |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com      | /WebResource...      | 23,063   |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com   | /we/wordeditorframe. | 60,657   |
|                                 | aspx...              |          |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com     | /bld/_layouts/.../   | 454      |
|                                 | blank.js             |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 19,201   |
|                                 | EditSurface.css      |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 221,397  |
|                                 | WordEditor.css       |          |
+---------------------------------+----------------------+----------+
| s1-officeapps-15.cdn.office.net | /we/s/.../           | 107,571  |
|                                 | Microsoft            |          |
|                                 | Ajax.js              |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 39,981   |
|                                 | wacbootwe.js         |          |
+---------------------------------+----------------------+----------+
| s1-officeapps-15.cdn.office.net | /we/s/.../           | 51,749   |
|                                 | CommonIntl.js        |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 6,050    |
|                                 | Compat.js            |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 54,158   |
|                                 | Box4Intl.js          |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 24,946   |
|                                 | WoncaIntl.js         |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 53,515   |
|                                 | WordEditorIntl.js    |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../           | 1,978,712|
|                                 | WordEditorExp.js     |          |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net  | /we/s/.../jSanity.js | 10,912   |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com   | /we/OneNote.ashx     | 145,708  |
+---------------------------------+----------------------+----------+

              Table 6: Office365 Word Transactions

For application identification, the HTTPS/TLS traffic MUST include
realistic Certificate Subject Common Name (CN) data as well as
Server Name Indications (SNI).  For example, a DUT MAY detect
Facebook Chat traffic by inspecting the certificate, detecting
*.facebook.com in the certificate subject CN, subsequently detecting
the word chat in the FQDN 5-edge-chat.facebook.com, and identifying
traffic on the connection to be Facebook Chat.

Table 7 includes further examples in SNI and CN pairs for several
FQDNs of Office 365.

+------------------------------+----------------------------------+
| Server Name Indication (SNI) | Certificate Subject              |
|                              | Common Name (CN)                 |
+=================================================================+
| r1.res.office365.com         | *.res.outlook.com                |
+------------------------------+----------------------------------+
| login.windows.net            | graph.windows.net                |
+------------------------------+----------------------------------+
| webdir0a.online.lync.com     | *.online.lync.com                |
+------------------------------+----------------------------------+
| login.microsoftonline.com    | stamp2.login.microsoftonline.com |
+------------------------------+----------------------------------+
| webdir.online.lync.com       | *.online.lync.com                |
+------------------------------+----------------------------------+
| graph.microsoft.com          | graph.microsoft.com              |
+------------------------------+----------------------------------+
| outlook.office365.com        | outlook.com                      |
+------------------------------+----------------------------------+
| appsforoffice.microsoft.com  | appsforoffice.microsoft.com      |
+------------------------------+----------------------------------+

           Table 7: Office365 SNI and CN Pairs Examples

NetSecOPEN has provided a reference enterprise perimeter traffic mix
with dozens of applications, hundreds of connections, and thousands
of transactions.

The enterprise perimeter traffic mix consists of 70% HTTPS and 30%
HTTP by bytes, and 58% HTTPS and 42% HTTP by transactions.  By
connections, with a single connection per FQDN, the mix consists of
43% HTTPS and 57% HTTP.  With multiple connections per FQDN, the
HTTPS percentage is higher.

Table 8 is a summary of the NetSecOPEN enterprise perimeter traffic
mix sorted by bytes, with unique FQDNs and transactions per
application.
2439 +------------------+-------+--------------+-------------+ 2440 | Application | FQDNs | Transactions | Bytes | 2441 +=======================================================+ 2442 | Office365 | 26 | 558 | 52,931,947 | 2443 +------------------+-------+--------------+-------------+ 2444 | Box | 4 | 90 | 23,276,089 | 2445 +------------------+-------+--------------+-------------+ 2446 | Salesforce | 6 | 365 | 23,137,548 | 2447 +------------------+-------+--------------+-------------+ 2448 | Gmail | 13 | 139 | 16,399,289 | 2449 +------------------+-------+--------------+-------------+ 2450 | Linkedin | 10 | 206 | 15,040,918 | 2451 +------------------+-------+--------------+-------------+ 2452 | DailyMotion | 8 | 77 | 14,751,514 | 2453 +------------------+-------+--------------+-------------+ 2454 | GoogleDocs | 2 | 71 | 14,205,476 | 2455 +------------------+-------+--------------+-------------+ 2456 | Wikia | 15 | 159 | 13,909,777 | 2457 +------------------+-------+--------------+-------------+ 2458 | Foxnews | 82 | 499 | 13,758,899 | 2459 +------------------+-------+--------------+-------------+ 2460 | Yahoo Finance | 33 | 254 | 13,134,011 | 2461 +------------------+-------+--------------+-------------+ 2462 | Youtube | 8 | 97 | 13,056,216 | 2463 +------------------+-------+--------------+-------------+ 2464 | Facebook | 4 | 207 | 12,726,231 | 2465 +------------------+-------+--------------+-------------+ 2466 | CNBC | 77 | 275 | 11,939,566 | 2467 +------------------+-------+--------------+-------------+ 2468 | Lightreading | 27 | 304 | 11,200,864 | 2469 +------------------+-------+--------------+-------------+ 2470 | BusinessInsider | 16 | 142 | 11,001,575 | 2471 +------------------+-------+--------------+-------------+ 2472 | Alexa | 5 | 153 | 10,475,151 | 2473 +------------------+-------+--------------+-------------+ 2474 | CNN | 41 | 206 | 10,423,740 | 2475 +------------------+-------+--------------+-------------+ 2476 | Twitter Video | 2 | 72 | 10,112,820 | 2477 +------------------+-------+--------------+-------------+ 2478 | Cisco Webex | 1 | 213 | 9,988,417 | 2479 +------------------+-------+--------------+-------------+ 2480 | Slack | 3 | 40 | 9,938,686 | 2481 +------------------+-------+--------------+-------------+ 2482 | Google Maps | 5 | 191 | 8,771,873 | 2483 +------------------+-------+--------------+-------------+ 2484 | SpectrumIEEE | 7 | 145 | 8,682,629 | 2485 +------------------+-------+--------------+-------------+ 2486 | Yelp | 9 | 146 | 8,607,645 | 2487 +------------------+-------+--------------+-------------+ 2488 | Vimeo | 12 | 74 | 8,555,960 | 2489 +------------------+-------+--------------+-------------+ 2490 | Wikihow | 11 | 140 | 8,042,314 | 2491 +------------------+-------+--------------+-------------+ 2492 | Netflix | 3 | 31 | 7,839,256 | 2493 +------------------+-------+--------------+-------------+ 2494 | Instagram | 3 | 114 | 7,230,883 | 2495 +------------------+-------+--------------+-------------+ 2496 | Morningstar | 30 | 150 | 7,220,121 | 2497 +------------------+-------+--------------+-------------+ 2498 | Docusign | 5 | 68 | 6,972,738 | 2499 +------------------+-------+--------------+-------------+ 2500 | Twitter | 1 | 100 | 6,939,150 | 2501 +------------------+-------+--------------+-------------+ 2502 | Tumblr | 11 | 70 | 6,877,200 | 2503 +------------------+-------+--------------+-------------+ 2504 | Whatsapp | 3 | 46 | 6,829,848 | 2505 +------------------+-------+--------------+-------------+ 2506 | Imdb | 16 | 251 | 6,505,227 | 2507 
+------------------+-------+--------------+-------------+ 2508 | NOAAgov | 1 | 44 | 6,316,283 | 2509 +------------------+-------+--------------+-------------+ 2510 | IndustryWeek | 23 | 192 | 6,242,403 | 2511 +------------------+-------+--------------+-------------+ 2512 | Spotify | 18 | 119 | 6,231,013 | 2513 +------------------+-------+--------------+-------------+ 2514 | AutoNews | 16 | 165 | 6,115,354 | 2515 +------------------+-------+--------------+-------------+ 2516 | Evernote | 3 | 47 | 6,063,168 | 2517 +------------------+-------+--------------+-------------+ 2518 | NatGeo | 34 | 104 | 6,026,344 | 2519 +------------------+-------+--------------+-------------+ 2520 | BBC News | 18 | 156 | 5,898,572 | 2521 +------------------+-------+--------------+-------------+ 2522 | Investopedia | 38 | 241 | 5,792,038 | 2523 +------------------+-------+--------------+-------------+ 2524 | Pinterest | 8 | 102 | 5,658,994 | 2525 +------------------+-------+--------------+-------------+ 2526 | Succesfactors | 2 | 112 | 5,049,001 | 2527 +------------------+-------+--------------+-------------+ 2528 | AbaJournal | 6 | 93 | 4,985,626 | 2529 +------------------+-------+--------------+-------------+ 2530 | Pbworks | 4 | 78 | 4,670,980 | 2531 +------------------+-------+--------------+-------------+ 2532 | NetworkWorld | 42 | 153 | 4,651,354 | 2533 +------------------+-------+--------------+-------------+ 2534 | WebMD | 24 | 280 | 4,416,736 | 2535 +------------------+-------+--------------+-------------+ 2536 | OilGasJournal | 14 | 105 | 4,095,255 | 2537 +------------------+-------+--------------+-------------+ 2538 | Trello | 5 | 39 | 4,080,182 | 2539 +------------------+-------+--------------+-------------+ 2540 | BusinessWire | 5 | 109 | 4,055,331 | 2541 +------------------+-------+--------------+-------------+ 2542 | Dropbox | 5 | 17 | 4,023,469 | 2543 +------------------+-------+--------------+-------------+ 2544 | Nejm | 20 | 190 | 4,003,657 | 2545 +------------------+-------+--------------+-------------+ 2546 | OilGasDaily | 7 | 199 | 3,970,498 | 2547 +------------------+-------+--------------+-------------+ 2548 | Chase | 6 | 52 | 3,719,232 | 2549 +------------------+-------+--------------+-------------+ 2550 | MedicalNews | 6 | 117 | 3,634,187 | 2551 +------------------+-------+--------------+-------------+ 2552 | Marketwatch | 25 | 142 | 3,291,226 | 2553 +------------------+-------+--------------+-------------+ 2554 | Imgur | 5 | 48 | 3,189,919 | 2555 +------------------+-------+--------------+-------------+ 2556 | NPR | 9 | 83 | 3,184,303 | 2557 +------------------+-------+--------------+-------------+ 2558 | Onelogin | 2 | 31 | 3,132,707 | 2559 +------------------+-------+--------------+-------------+ 2560 | Concur | 2 | 50 | 3,066,326 | 2561 +------------------+-------+--------------+-------------+ 2562 | Service-now | 1 | 37 | 2,985,329 | 2563 +------------------+-------+--------------+-------------+ 2564 | Apple itunes | 14 | 80 | 2,843,744 | 2565 +------------------+-------+--------------+-------------+ 2566 | BerkeleyEdu | 3 | 69 | 2,622,009 | 2567 +------------------+-------+--------------+-------------+ 2568 | MSN | 39 | 203 | 2,532,972 | 2569 +------------------+-------+--------------+-------------+ 2570 | Indeed | 3 | 47 | 2,325,197 | 2571 +------------------+-------+--------------+-------------+ 2572 | MayoClinic | 6 | 56 | 2,269,085 | 2573 +------------------+-------+--------------+-------------+ 2574 | Ebay | 9 | 164 | 2,219,223 | 2575 
+------------------+-------+--------------+-------------+
| UCLAedu          | 3     | 42           | 1,991,311   |
+------------------+-------+--------------+-------------+
| ConstructionDive | 5     | 125          | 1,828,428   |
+------------------+-------+--------------+-------------+
| EducationNews    | 4     | 78           | 1,605,427   |
+------------------+-------+--------------+-------------+
| BofA             | 12    | 68           | 1,584,851   |
+------------------+-------+--------------+-------------+
| ScienceDirect    | 7     | 26           | 1,463,951   |
+------------------+-------+--------------+-------------+
| Reddit           | 8     | 55           | 1,441,909   |
+------------------+-------+--------------+-------------+
| FoodBusinessNews | 5     | 49           | 1,378,298   |
+------------------+-------+--------------+-------------+
| Amex             | 8     | 42           | 1,270,696   |
+------------------+-------+--------------+-------------+
| Weather          | 4     | 50           | 1,243,826   |
+------------------+-------+--------------+-------------+
| Wikipedia        | 3     | 27           | 958,935     |
+------------------+-------+--------------+-------------+
| Bing             | 1     | 52           | 697,514     |
+------------------+-------+--------------+-------------+
| ADP              | 1     | 30           | 508,654     |
+------------------+-------+--------------+-------------+
| Grand Total      | 983   | 10,021       | 569,819,095 |
+------------------+-------+--------------+-------------+

   Table 8: Summary of NetSecOPEN Enterprise Perimeter Traffic Mix

Authors' Addresses

Balamuhunthan Balarajah

Email: bm.balarajah@gmail.com

Carsten Rossenhoevel
EANTC AG
Salzufer 14
Berlin 10587
Germany

Email: cross@eantc.de

Brian Monkman
NetSecOPEN

Email: bmonkman@netsecopen.org