Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft                                           C. Rossenhoevel
Intended status: Informational                                  EANTC AG
Expires: January 3, 2019                                    July 2, 2018

    Benchmarking Methodology for Network Security Device Performance
                draft-balarajah-bmwg-ngfw-performance-04

Abstract

   This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), intrusion detection and prevention solutions
   (IDS/IPS), and unified threat management (UTM) implementations.  The
   document aims to strongly improve the applicability,
   reproducibility, and transparency of benchmarks and to align the
   test methodology with today's increasingly complex layer 7
   application use cases.  The main areas covered in this document are
   test terminology, traffic profiles, and benchmarking methodology for
   NGFWs to start with.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 3, 2019.
Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
     4.1.  Testbed Configuration
     4.2.  DUT/SUT Configuration
     4.3.  Test Equipment Configuration
       4.3.1.  Client Configuration
       4.3.2.  Backend Server Configuration
       4.3.3.  Traffic Flow Definition
       4.3.4.  Traffic Load Profile
   5.  Test Bed Considerations
   6.  Reporting
     6.1.  Key Performance Indicators
   7.  Benchmarking Tests
     7.1.  Throughput Performance With NetSecOPEN Traffic Mix
       7.1.1.  Objective
       7.1.2.  Test Setup
       7.1.3.  Test Parameters
       7.1.4.  Test Procedures and Expected Results
     7.2.  TCP/HTTP Connections Per Second
       7.2.1.  Objective
       7.2.2.  Test Setup
       7.2.3.  Test Parameters
       7.2.4.  Test Procedures and Expected Results
     7.3.  HTTP Transaction per Second
       7.3.1.  Objective
       7.3.2.  Test Setup
       7.3.3.  Test Parameters
       7.3.4.  Test Procedures and Expected Results
     7.4.  TCP/HTTP Transaction Latency
       7.4.1.  Objective
       7.4.2.  Test Setup
       7.4.3.  Test Parameters
       7.4.4.  Test Procedures and Expected Results
     7.5.  HTTP Throughput
       7.5.1.  Objective
       7.5.2.  Test Setup
       7.5.3.  Test Parameters
       7.5.4.  Test Procedures and Expected Results
     7.6.  Concurrent TCP/HTTP Connection Capacity
       7.6.1.  Objective
       7.6.2.  Test Setup
       7.6.3.  Test Parameters
       7.6.4.  Test Procedures and Expected Results
     7.7.  TCP/HTTPS Connections per second
       7.7.1.  Objective
       7.7.2.  Test Setup
       7.7.3.  Test Parameters
       7.7.4.  Test Procedures and Expected Results
     7.8.  HTTPS Transaction per Second
       7.8.1.  Objective
       7.8.2.  Test Setup
       7.8.3.  Test Parameters
       7.8.4.  Test Procedures and Expected Results
     7.9.  HTTPS Transaction Latency
       7.9.1.  Objective
     7.10. HTTPS Throughput
       7.10.1. Objective
       7.10.2. Test Setup
       7.10.3. Test Parameters
       7.10.4. Test Procedures and Expected Results
     7.11. Concurrent TCP/HTTPS Connection Capacity
       7.11.1. Objective
   8.  Formal Syntax
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgements
   12. Contributors
   13. Normative References
   Appendix A.  NetSecOPEN Basic Traffic Mix
   Authors' Addresses

1.  Introduction

   Fifteen years have passed since the IETF first recommended test
   methodology and terminology for firewalls (RFC 2647, RFC 3511).
   The requirements for network security element performance and
   effectiveness have increased tremendously since then.  Security
   function implementations have evolved to more advanced areas and
   have diversified into intrusion detection and prevention, threat
   management, analysis of encrypted traffic, etc.  In an industry of
   growing importance, well-defined and reproducible key performance
   indicators (KPIs) are increasingly needed: they enable fair and
   reasonable comparison of network security functions.  All these
   reasons have led to the creation of a new next-generation firewall
   benchmarking document.

2.  Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

3.  Scope

   This document provides testing terminology and testing methodology
   for next-generation firewalls and related security functions.  It
   covers two main areas: performance benchmarks and security
   effectiveness testing.  The document focuses on advanced,
   realistic, and reproducible testing methods.  Additionally, it
   describes test bed environments, test tool requirements, and test
   result formats.
4.  Test Setup

   The test setup defined in this document is applicable to all of the
   benchmarking test scenarios described in Section 7.

4.1.  Testbed Configuration

   The testbed configuration MUST ensure that any performance
   implications discovered during the benchmark testing are not due to
   inherent physical network limitations such as the number of
   physical links and the forwarding performance capabilities
   (throughput and latency) of the network devices in the testbed.
   For this reason, this document recommends avoiding external devices
   such as switches and routers in the testbed where possible.

   In a typical deployment, the security devices (DUT/SUT) will not
   have a large number of entries in their MAC or ARP tables, which
   would impact the actual DUT/SUT performance due to MAC and ARP/ND
   table lookup processes.  Therefore, depending on the number of IP
   addresses used on the client and server side, it is recommended to
   connect Layer 3 device(s) between the test equipment and the
   DUT/SUT as shown in Figure 1.

   If the test equipment is capable of emulating Layer 3 routing
   functionality and there is no need for test equipment port
   aggregation, it is recommended to configure the test setup as shown
   in Figure 2.

   +-------------------+      +-----------+      +--------------------+
   |Aggregation Switch/|      |           |      | Aggregation Switch/|
   | Router            +------+  DUT/SUT  +------+ Router             |
   |                   |      |           |      |                    |
   +----------+--------+      +-----------+      +--------+-----------+
              |                                           |
              |                                           |
  +-----------+-----------+                   +-----------+-----------+
  |                       |                   |                       |
  | +-------------------+ |                   | +-------------------+ |
  | | Emulated Router(s)| |                   | | Emulated Router(s)| |
  | |    (Optional)     | |                   | |    (Optional)     | |
  | +-------------------+ |                   | +-------------------+ |
  | +-------------------+ |                   | +-------------------+ |
  | |      Clients      | |                   | |      Servers      | |
  | +-------------------+ |                   | +-------------------+ |
  |                       |                   |                       |
  |    Test Equipment     |                   |    Test Equipment     |
  +-----------------------+                   +-----------------------+

                   Figure 1: Testbed Setup - Option 1

  +-----------------------+                   +-----------------------+
  | +-------------------+ |   +-----------+   | +-------------------+ |
  | | Emulated Router(s)| |   |           |   | | Emulated Router(s)| |
  | |    (Optional)     | +---+  DUT/SUT  +---+ |    (Optional)     | |
  | +-------------------+ |   |           |   | +-------------------+ |
  | +-------------------+ |   +-----------+   | +-------------------+ |
  | |      Clients      | |                   | |      Servers      | |
  | +-------------------+ |                   | +-------------------+ |
  |                       |                   |                       |
  |    Test Equipment     |                   |    Test Equipment     |
  +-----------------------+                   +-----------------------+

                   Figure 2: Testbed Setup - Option 2

4.2.  DUT/SUT Configuration

   A unique DUT/SUT configuration MUST be used for all of the
   benchmarking tests described in Section 7.  Since each DUT/SUT will
   have its own unique configuration, users SHOULD configure their
   device with the same parameters that would be used in the actual
   deployment of the device or a typical deployment.  Also, it is
   mandatory to enable security features on the DUT/SUT in order to
   achieve maximum security coverage for a specific deployment
   scenario.

   This document attempts to define the recommended security features
   which SHOULD be consistently enabled for all of the benchmarking
   tests described in Section 7.  The table below describes the
   recommended set of features which SHOULD be configured on the
   DUT/SUT.
   In order to improve repeatability, a summary of the DUT
   configuration, including a description of all enabled DUT/SUT
   features, MUST be published with the benchmarking results.

   +----------------------------+--------------+--------------+
   |                            | Included in  |   Added to   |
   | DUT Features (NGFW)        | initial      |   future     |
   |                            | Scope        |   Scope      |
   +----------------------------+--------------+--------------+
   | SSL Inspection             |              |      x       |
   | IDS/IPS                    |      x       |              |
   | Web Filtering              |              |      x       |
   | Antivirus                  |      x       |              |
   | Anti Spyware               |      x       |              |
   | Anti Botnet                |      x       |              |
   | DLP                        |              |      x       |
   | DDoS                       |              |      x       |
   | Certificate Validation     |              |      x       |
   | Logging and Reporting      |      x       |              |
   | Application Identification |      x       |              |
   +----------------------------+--------------+--------------+

                   Table 1: DUT/SUT Feature List

   (Test standards for further device types -- NGIPS, ADC, WAF, BPS,
   and SSL Broker -- are to be developed in the future.)

   In addition, it is also recommended to configure a realistic number
   of access policy rules on the DUT/SUT.  This document determines
   the number of access policy rules for three different classes of
   DUT/SUT.  The classification of the DUT/SUT MAY be based on the
   maximum supported throughput performance number defined in the
   vendor data sheet.  This document classifies the DUT/SUT in three
   different categories, namely small, medium, and large.

   The recommended throughput values for these classes are:

   Small  - supported throughput less than 5 Gbit/s

   Medium - supported throughput greater than 5 Gbit/s and less than
            10 Gbit/s

   Large  - supported throughput greater than 10 Gbit/s
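   As a minimal illustration of this classification, the following
   Python sketch maps a vendor-reported maximum throughput to the
   class names used above.  The function name and the handling of the
   boundary values at exactly 5 and 10 Gbit/s are assumptions; the
   document only defines the open ranges.

      def classify_dut(max_throughput_gbps: float) -> str:
          """Map data-sheet throughput to the DUT/SUT classes above."""
          # Boundary handling at exactly 5 and 10 Gbit/s is assumed;
          # the document defines <5, 5-10, and >10 Gbit/s ranges only.
          if max_throughput_gbps < 5:
              return "small"
          if max_throughput_gbps <= 10:
              return "medium"
          return "large"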
   The access rules defined in Table 2 MUST be configured from top to
   bottom in the order shown in the table.  The configured access
   policy rules MUST NOT block the test traffic used for the
   benchmarking test scenarios.

   +-----------+-----------+------------------+------+-----------------+
   |           |           |                  |      |     # Rules     |
   |   Rules   |   Match   |   Description    |Action+-----+-----+-----+
   |   Type    |  Criteria |                  |      |Small|Med. |Large|
   +-----------+-----------+------------------+------+-----+-----+-----+
   |Application|Application|Any application   |block |  10 |  20 |  50 |
   |layer      |           |traffic NOT       |      |     |     |     |
   |           |           |included in the   |      |     |     |     |
   |           |           |test traffic      |      |     |     |     |
   +-----------+-----------+------------------+------+-----+-----+-----+
   |Transport  |Src IP and |Any src IP used in|block |  50 | 100 | 250 |
   |layer      |TCP/UDP    |the test AND any  |      |     |     |     |
   |           |Dst ports  |dst ports NOT used|      |     |     |     |
   |           |           |in the test       |      |     |     |     |
   |           |           |traffic           |      |     |     |     |
   +-----------+-----------+------------------+------+-----+-----+-----+
   |IP layer   |Src/Dst IP |Any src/dst IP NOT|block |  50 | 100 | 250 |
   |           |           |used in the test  |      |     |     |     |
   +-----------+-----------+------------------+------+-----+-----+-----+
   |Application|Application|Applications      |allow |  10 |  10 |  10 |
   |layer      |           |included in the   |      |     |     |     |
   |           |           |test traffic      |      |     |     |     |
   +-----------+-----------+------------------+------+-----+-----+-----+
   |Transport  |Src IP and |Half of the src IP|allow |   1 |   1 |   1 |
   |layer      |TCP/UDP    |used in the test  |      |     |     |     |
   |           |Dst ports  |AND any dst ports |      |     |     |     |
   |           |           |used in the test  |      |     |     |     |
   |           |           |traffic.  One rule|      |     |     |     |
   |           |           |per subnet        |      |     |     |     |
   +-----------+-----------+------------------+------+-----+-----+-----+
   |IP layer   |Src IP     |The rest of the   |allow |   1 |   1 |   1 |
   |           |           |src IP subnet     |      |     |     |     |
   |           |           |range used in the |      |     |     |     |
   |           |           |test.  One rule   |      |     |     |     |
   |           |           |per subnet        |      |     |     |     |
   +-----------+-----------+------------------+------+-----+-----+-----+

                     Table 2: DUT/SUT Access List

4.3.  Test Equipment Configuration

   In general, test equipment allows configuring parameters at
   different protocol levels.  These parameters influence the traffic
   flows which will be offered and impact the performance
   measurements.

   This document attempts to explicitly specify which test equipment
   parameters SHOULD be configurable; any such parameter(s) MUST be
   noted in the test report.

4.3.1.  Client Configuration

   This section specifies which parameters SHOULD be considered while
   configuring emulated clients using test equipment.  It also
   specifies the recommended values for certain parameters.

4.3.1.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno variant, which includes
   congestion avoidance, back off and windowing, retransmission, and
   recovery on every TCP connection between client and server
   endpoints.  The default IPv4 and IPv6 MSS segment sizes MUST be set
   to 1460 bytes and 1440 bytes, respectively, with TX and RX receive
   windows of 32768 bytes.  Delayed ACKs are permitted, but SHOULD be
   limited to either a 200 msec delay timeout or 3000 bytes before a
   forced ACK.  Up to 3 retries SHOULD be allowed before a timeout
   event is declared.  All traffic MUST set the TCP PSH flag to high.
   The source port range SHOULD be in the range of 1024 - 65535.
   Internal timeouts SHOULD be dynamically scalable per RFC 793.

4.3.1.2.  Client IP Address Space

   The sum of the client IP space SHOULD contain the following
   attributes.  The traffic blocks SHOULD consist of multiple unique,
   continuous static address blocks.  A default gateway is permitted.
   The IPv4 ToS byte should be set to '00'.

   The following equation can be used to determine the required total
   number of client IP addresses:

      Desired total number of client IPs =
         Target throughput [Mbit/s] / Throughput per IP address [Mbit/s]

   (Idea 1) 6-7 Mbps per IP (e.g., 1,400-1,700 IPs per 10 Gbit/s
   throughput)

   (Idea 2) 0.1-0.2 Mbps per IP (e.g., 50,000-100,000 IPs per
   10 Gbit/s throughput)

   Based on the deployment and use case scenario, client IP addresses
   SHOULD be distributed between IPv4 and IPv6 types.  This document
   recommends using the following ratio(s) between IPv4 and IPv6:

   (Idea 1) 100 % IPv4, no IPv6

   (Idea 2) 80 % IPv4, 20 % IPv6

   (Idea 3) 50 % IPv4, 50 % IPv6

   (Idea 4) 0 % IPv4, 100 % IPv6
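   As an arithmetic illustration of the equation above, the following
   Python sketch computes the client IP count for Idea 1 (the function
   name and the rounding up to whole addresses are assumptions):

      import math

      def required_client_ips(target_throughput_mbps: float,
                              throughput_per_ip_mbps: float) -> int:
          """Client IP count per the equation in Section 4.3.1.2."""
          return math.ceil(target_throughput_mbps / throughput_per_ip_mbps)

      # Idea 1: ~6 Mbit/s per IP at a 10 Gbit/s target -> 1,667 IPs,
      # within the 1,400-1,700 range quoted above.
      print(required_client_ips(10_000, 6))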
4.3.1.3.  Emulated Web Browser Attributes

   The emulated web browser contains attributes that will materially
   affect how traffic is loaded.  The objective is to emulate modern,
   typical browser attributes to improve the realism of the result
   set.

   For HTTP traffic emulation, the emulated browser must negotiate
   HTTP 1.1.  HTTP persistence MAY be enabled depending on the test
   scenario.  The browser can open multiple TCP connections per server
   endpoint IP at any time, depending on how many sequential
   transactions need to be processed.  Within a TCP connection,
   multiple transactions can be processed if the emulated browser has
   available connections.  The browser MUST advertise a User-Agent
   header.  Headers will be sent uncompressed.  The browser should
   enforce content length validation.

   For encrypted traffic, the following attributes shall define the
   negotiated encryption parameters.  The tests MUST use TLSv1.2 or
   higher with a record size of 16383, a commonly used cipher suite,
   and key strength.  Session reuse or ticket resumption may be used
   for subsequent connections to the same server endpoint IP.  The
   client endpoint must send the TLS Server Name Indication (SNI)
   extension when opening a security tunnel.  Server certificate
   validation should be disabled.  Cipher suite and certificate size
   should be defined in the parameters section of the benchmarking
   tests.
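   A minimal sketch of these client-side TLS attributes, using
   Python's standard ssl module, is shown below.  The host name and
   port are placeholders; the methodology itself does not prescribe a
   particular implementation.

      import socket
      import ssl

      # TLS 1.2 or higher, SNI sent, server certificate validation
      # disabled, per Section 4.3.1.3.
      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2
      ctx.check_hostname = False        # certificate validation disabled,
      ctx.verify_mode = ssl.CERT_NONE   # as required for the emulated client

      with socket.create_connection(("www.example.com", 443)) as raw:
          with ctx.wrap_socket(raw, server_hostname="www.example.com") as tls:
              # server_hostname populates the TLS SNI extension
              print(tls.version(), tls.cipher())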
4.3.2.  Backend Server Configuration

   This document attempts to specify which parameters should be
   considered while configuring emulated backend servers using test
   equipment.

4.3.2.1.  TCP Stack Attributes

   The TCP stack SHOULD use a TCP Reno variant, which includes
   congestion avoidance, back off and windowing, retransmission, and
   recovery on every TCP connection between client and server
   endpoints.  The default IPv4 MSS segment size MUST be set to 1460
   bytes, with TX and RX receive windows of at least 32768 bytes.
   Delayed ACKs are permitted but SHOULD be limited to either a 200
   msec delay timeout or 3000 bytes before a forced ACK.  Up to 3
   retries SHOULD be allowed before a timeout event is declared.  All
   traffic MUST set the TCP PSH flag to high.  The source port range
   SHOULD be in the range of 1024 - 65535.  Internal timeouts should
   be dynamically scalable per RFC 793.

4.3.2.2.  Server Endpoint IP Addressing

   The server IP blocks should consist of unique, continuous static
   address blocks with one IP per server FQDN endpoint per test port.
   The IPv4 ToS byte should be set to '00'.  The source MAC address of
   the server endpoints shall be the same, emulating routed behavior.
   Each server FQDN should have its own unique IP address.  The server
   IP addressing should be fixed to the same number of FQDN entries.

4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

   The emulated server pool for HTTP should listen on TCP port 80 and
   emulate HTTP version 1.1 with persistence.  An HTTPS server pool
   must have the same basic attributes as an HTTP server pool, plus
   attributes for SSL/TLS.  The server must advertise a server type.
   For the HTTPS server, TLS 1.2 or higher must be used with a record
   size of 16383 bytes and ticket resumption or Session ID reuse
   enabled.  The server must listen on TCP port 443.  The server shall
   serve a certificate to the client.  It is required that the HTTPS
   server also check the Host SNI information against the Fully
   Qualified Domain Name (FQDN).  Client certificate validation should
   be disabled.  Cipher suite and certificate size should be defined
   in the parameters section of the benchmarking tests.

4.3.3.  Traffic Flow Definition

   This section describes the traffic pattern between the client and
   server endpoints.  At the beginning of the test, the server
   endpoint initializes and will be in a state ready to accept
   connections, including initialization of the TCP stack as well as
   bound HTTP and HTTPS servers.  When a client endpoint is needed, it
   will initialize and be given attributes such as the MAC and IP
   address.  The behavior of the client is to sweep through the given
   server IP space, sequentially generating a service recognizable by
   the DUT.  Thus, a balanced mesh between client endpoints and server
   endpoints will be generated in a client port / server port
   combination.  Each client endpoint performs the same actions as
   other endpoints, with the difference being the source IP of the
   client endpoint and the target server IP pool.  The client shall
   use Fully Qualified Domain Names in Host headers and for TLS 1.2
   Server Name Indication (SNI).

4.3.3.1.  Description of Intra-Client Behavior

   Client endpoints are independent of other clients that are
   concurrently executing.  This section describes how a client
   endpoint steps through the different services when it initiates
   traffic.  Once initialized, the client should randomly hold
   (perform no operation) for a few milliseconds to allow for better
   randomization of the start of client traffic.  The client will then
   either open a new TCP connection or connect to a TCP persistence
   stack still open to that specific server.  At any point that the
   service profile may require encryption, a TLS 1.2 encryption tunnel
   will form, presenting the URL request to the server.  The server
   will then perform an SNI name check with the proposed FQDN compared
   to the domain embedded in the certificate.  Only when correct will
   the server process the object.  The initial object to the server
   may not have a fixed size; its size is based on the benchmarking
   tests described in Section 7.  Multiple additional sub-URLs
   (objects on the service page) may be requested simultaneously.
   This may or may not be to the same server IP as the initial URL.
   Each sub-object will also use a canonical FQDN and URL path, as
   observed in the traffic mix used.
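   The following Python sketch condenses this intra-client behavior
   into a few lines.  The server pool structure, the object path, and
   the hold-time range are illustrative assumptions, not values taken
   from this document.

      import http.client
      import random
      import time

      def run_client(server_pool):
          """Sketch of the intra-client behavior of Section 4.3.3.1.

          server_pool is an assumed list of (fqdn, ip) tuples.
          """
          time.sleep(random.uniform(0.001, 0.010))  # random hold at start
          for fqdn, ip in server_pool:              # sweep the server IP space
              conn = http.client.HTTPConnection(ip, 80)
              # The FQDN goes into the Host header (and into SNI for HTTPS)
              conn.request("GET", "/", headers={"Host": fqdn})
              conn.getresponse().read()             # initial object; size per Section 7
              conn.close()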
4.3.4.  Traffic Load Profile

   The loading of traffic is described in this section.  A traffic
   load profile has five distinct phases: init, ramp up, sustain, ramp
   down/close, and collection.

   Within the init phase, test bed devices including the client and
   server endpoints should negotiate layer 2-3 connectivity such as
   MAC learning and ARP.  Only after successful MAC learning or ARP/ND
   resolution shall the test iteration move to the next phase.  No
   measurements are made in this phase.  The minimum recommended time
   for the init phase is 5 seconds.  During this phase, the emulated
   clients SHOULD NOT initiate any sessions with the DUT/SUT; in
   contrast, the emulated servers should be ready to accept requests
   from the DUT/SUT or from emulated clients.

   In the ramp up phase, the test equipment should start to generate
   the test traffic.  It should use a set approximate number of unique
   client IP addresses actively to generate traffic.  The traffic
   should ramp from zero to the desired target objective.  The target
   objective will be defined for each benchmarking test.  The duration
   of the ramp up phase must be configured long enough that the test
   equipment does not overwhelm the DUT/SUT's supported performance
   metrics, namely connection setup rate, concurrent connections, and
   application transactions.  The recommended time duration for the
   ramp up phase is 180 to 300 seconds.  No measurements are made in
   this phase.

   In the sustain phase, the test equipment should continue generating
   traffic at a constant target value with a constant number of active
   client IPs.  The recommended time duration for the sustain phase is
   600 seconds.  This is the phase where measurements occur.

   In the ramp down/close phase, no new connections are established
   and no measurements are made.  The recommended duration of this
   phase is between 180 and 300 seconds.

   The last phase is administrative and will be when the tester merges
   and collates the report data.
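   A hypothetical encoding of this load profile, using the recommended
   phase durations in seconds, might look as follows.  The data
   structure and the choice of 240 s within the 180-300 s ranges are
   assumptions for illustration only.

      # (phase name, duration in seconds, measurements taken?)
      LOAD_PROFILE = (
          ("init",        5, False),  # L2/L3 setup (MAC, ARP/ND), >= 5 s
          ("ramp_up",   240, False),  # 180-300 s recommended
          ("sustain",   600, True),   # the only KPI measurement window
          ("ramp_down", 240, False),  # 180-300 s, no new connections
      )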
5.  Test Bed Considerations

   This section recommends steps to control the test environment and
   test equipment, specifically focusing on virtualized environments
   and virtualized test equipment.

   1.  Ensure that any ancillary switching or routing functions
       between the system under test and the test equipment do not
       limit the performance of the traffic generator.  This is
       specifically important for virtualized components (vSwitches,
       vRouters).

   2.  Verify that the performance of the test equipment matches and
       reasonably exceeds the expected maximum performance of the
       system under test.

   3.  Assert that the test bed characteristics are stable during the
       whole test session.  A number of factors might influence
       stability, specifically for virtualized test beds, for example
       additional workloads in a virtualized system, load balancing
       and movement of virtual machines during the test, or simple
       issues such as additional heat created by high workloads
       leading to an emergency CPU performance reduction.

   Test bed reference pre-tests help to ensure that the desired
   traffic generator aspects such as maximum throughput and the
   network performance metrics such as maximum latency and maximum
   packet loss are met.

   Once the desired maximum performance goals for the system under
   test have been identified, a safety margin of 10% SHOULD be added
   for throughput and subtracted for maximum latency and maximum
   packet loss.

   Test bed preparation may be performed either by configuring the DUT
   in the most trivial setup (fast forwarding) or without the presence
   of the DUT.

6.  Reporting

   This section describes how the final report should be formatted and
   presented.  The final test report may have two major sections:
   introduction and results.  The following attributes should be
   present in the introduction section of the test report.

   1.  The name of the NetSecOPEN traffic mix (see Appendix A) must be
       prominent.

   2.  The time and date of the execution of the test must be
       prominent.

   3.  Summary of test bed software and hardware details

       A.  DUT Hardware/Virtual Configuration

           +  This section should clearly identify the make and model
              of the DUT

           +  The port interfaces, including speed and link
              information, must be documented.

           +  If the DUT is a virtual VNF, interface acceleration such
              as DPDK and SR-IOV must be documented, as well as cores
              used, RAM used, and the pinning / resource sharing
              configuration.  The hypervisor and version must be
              documented.

           +  Any additional hardware relevant to the DUT such as
              controllers must be documented

       B.  DUT Software

           +  The operating system name must be documented

           +  The version must be documented

           +  The specific configuration must be documented

       C.  DUT Enabled Features

           +  Specific features, such as logging, NGFW, DPI, must be
              documented

           +  Attributes of those features must be documented

           +  Any additional relevant information about features must
              be documented

       D.  Test equipment hardware and software

           +  Test equipment vendor name

           +  Hardware details including model number, interface type

           +  Test equipment firmware and test application software
              version

   4.  Results Summary / Executive Summary

       1.  Results should resemble a pyramid in how they are reported,
           with the introduction section documenting the summary of
           results in a prominent, easy to read block.

       2.  In the results section of the test report, the following
           attributes should be present for each test scenario.

           a.  KPIs must be documented separately for each test
               scenario.  The format of the KPI metrics should be
               presented as described in Section 6.1.

           b.  The next level of detail should be graphs showing each
               of these metrics over the duration (sustain phase) of
               the test.  This allows the user to see the measured
               performance stability changes over time.

6.1.  Key Performance Indicators

   This section lists KPIs for the overall benchmarking test
   scenarios.  All KPIs MUST be measured during the whole sustain
   phase as described in Section 4.3.4.  All KPIs MUST be measured
   from the test equipment's result output.

   o  TCP Concurrent Connections
      This key performance indicator measures the average number of
      concurrent open TCP connections in the sustaining period.

   o  TCP Connection Setup Rate
      This key performance indicator measures the average number of
      established TCP connections per second in the sustaining period.
      For the session setup rate benchmarking test scenario, the KPI
      measures the average number of established and terminated TCP
      connections per second simultaneously.

   o  Application Transaction Rate
      This key performance indicator measures the average number of
      successful transactions per second in the sustaining period.
   o  TLS Handshake Rate
      This key performance indicator measures the average TLS 1.2 or
      higher session formation rate within the sustaining period.

   o  Throughput
      This key performance indicator measures the average Layer 1
      throughput within the sustaining period, as well as the average
      packets per second within the same period.  The value of
      throughput should be presented in Gbps, rounded to two places of
      precision, with a more specific kbps value in parentheses.
      Optionally, goodput may also be logged as an average goodput
      rate measured over the same period.  Goodput results shall also
      be presented in the same format as throughput.

   o  URL Response time / Time to Last Byte (TTLB)
      This key performance indicator measures the minimum, average,
      and maximum per-URL response time in the sustaining period.  The
      latency is measured at the client and in this case would be the
      time duration between sending a GET request from the client and
      the receipt of the response from the server.

   o  Application Transaction Time
      This key performance indicator measures the minimum, average,
      and maximum amount of time needed to receive all objects from
      the server.

   o  Time to First Byte (TTFB)
      This key performance indicator measures the minimum, average,
      and maximum time to first byte.  TTFB is the elapsed time
      between sending the SYN packet from the client and receiving the
      first byte of application data from the DUT/SUT.  TTFB SHOULD be
      expressed in milliseconds.

   o  TCP Connect Time
      This key performance indicator measures the minimum, average,
      and maximum TCP connect time.  It is the time elapsed between
      the client sending a SYN packet and receiving the SYN/ACK.  TCP
      connect time SHOULD be expressed in milliseconds.

7.  Benchmarking Tests

7.1.  Throughput Performance With NetSecOPEN Traffic Mix

7.1.1.  Objective

   Using the NetSecOPEN traffic mix, determine the maximum sustainable
   throughput performance supported by the DUT/SUT (see Appendix A for
   details about the traffic mix).

7.1.2.  Test Setup

   The test bed setup MUST be configured as defined in Section 4.  Any
   test scenario specific test bed configuration changes must be
   documented.

7.1.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.1.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.1.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be noted for this test scenario:

      Client IP address range

      Server IP address range

      Traffic distribution ratio between IPv4 and IPv6

      Traffic load objective or specification type (e.g., throughput,
      SimUsers, etc.)

      Target throughput: It MAY be defined based on requirements.
      Otherwise it represents the aggregated line rate of the
      interface(s) used in the DUT/SUT

      Initial throughput: Initial throughput MAY be up to 10% of the
      "Target throughput"

7.1.3.3.  Traffic Profile

   The test scenario MUST be run with a single application traffic mix
   profile (see Appendix A for details about the traffic mix).  The
   name of the NetSecOPEN traffic mix MUST be documented.
7.1.3.4.  Test Results Acceptance Criteria

   The following test criteria are defined as test results acceptance
   criteria:

   a.  The number of failed application transactions MUST be less than
       0.01%.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.01%.

   c.  The maximum deviation (max. dev) of the application transaction
       time / TTLB (Time To Last Byte) MUST be less than X (the value
       for "X" will be finalized and updated in a future draft
       release).
       The following equation MUST be used to calculate the deviation
       of the application transaction time or TTLB (a worked sketch
       follows this list):

          max. dev = max((avg_latency - min_latency),
                         (max_latency - avg_latency)) / initial latency

       Where the initial latency is calculated using the following
       equation.  For this calculation, the latency values (min', avg'
       and max') MUST be measured during test procedure step 1 as
       defined in Section 7.1.4.1.  The variable latency represents
       the application transaction time or TTLB.

          initial latency := min((avg' latency - min' latency),
                                 (max' latency - avg' latency))

   d.  The maximum value of the TCP connect time must be less than
       X ms (the value for "X" will be finalized and updated in a
       future draft release).  The definition of TCP connect time is
       found in Section 6.1.

   e.  The maximum value of the Time to First Byte must be less than
       2 * TCP connect time.

   The test acceptance criteria for this test scenario MUST be
   monitored during the sustain phase of the traffic load profile
   only.
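   The following Python sketch restates the two equations of criterion
   "c"; the function and parameter names are assumptions chosen for
   readability.

      def max_deviation(lat_min, lat_avg, lat_max,
                        init_min, init_avg, init_max):
          """max. dev of acceptance criterion "c" in Section 7.1.3.4.

          init_* are the latency values (min', avg', max') measured in
          test procedure step 1 (Section 7.1.4.1); lat_* are measured
          in the current run.  latency is the application transaction
          time or TTLB.
          """
          initial_latency = min(init_avg - init_min, init_max - init_avg)
          return max(lat_avg - lat_min, lat_max - lat_avg) / initial_latency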
7.1.3.5.  Measurement

   The following KPI metrics MUST be reported for this test scenario.

   Mandatory KPIs: average throughput, maximum concurrent TCP
   connections, TTLB/application transaction time (minimum, average,
   and maximum), and average application transaction rate.

   Optional KPIs: average TCP connection setup rate, average TLS
   handshake rate, TCP connect time, and TTFB.

7.1.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the throughput
   performance of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   generate test traffic at the "initial throughput" rate as described
   in the parameters section.  The DUT/SUT SHOULD reach the "initial
   throughput" during the sustain phase.  Measure all KPIs as defined
   in Section 7.1.3.5.  The measured KPIs during the sustain phase
   MUST meet the acceptance criteria "a" and "b" defined in
   Section 7.1.3.4.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to step 2.

7.1.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to generate traffic at the "Target
   throughput" rate defined in the parameter table.  The test
   equipment SHOULD follow the traffic load profile definition as
   described in Section 4.3.4.  The test equipment SHOULD start to
   measure and record all specified KPIs.  The frequency of KPI metric
   measurements MUST be less than 5 seconds.  Continue the test until
   all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target throughput
   during the sustain phase.  In addition, the measured KPIs must meet
   all acceptance criteria.  Follow step 3 if the KPI metrics do not
   meet the acceptance criteria.

7.1.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable throughput within the
   acceptance criteria.  The final test iteration MUST be performed
   for the test duration defined in Section 4.3.4.

7.2.  TCP/HTTP Connections Per Second

7.2.1.  Objective

   Using HTTP traffic, determine the maximum sustainable TCP session
   establishment rate supported by the DUT/SUT under different
   throughput load conditions.

   Test iterations MUST use HTTP transaction object sizes of 1 KB,
   16 KB, and 64 KB to measure connections per second performance.

7.2.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as number of
   interfaces and interface type, etc., must be documented.

7.2.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.2.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.2.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

   -  Client IP address range defined in Section 4.3.1.2

   -  Server IP address range defined in Section 4.3.2.2

   -  Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

   -  Target connections per second: initial value from the product
      data sheet (if known)

   -  Initial connections per second: 10% of "Target connections per
      second"

   The client MUST negotiate HTTP 1.1 and close the connection
   immediately after completion of the transaction.

   The test scenario SHOULD be run with a single traffic profile with
   the following attributes:

   HTTP 1.1 with a GET command requesting 1, 16, and 64 Kbyte objects
   with a random MIME type.  One transaction per TCP connection.
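   A minimal sketch of this client behavior (one HTTP/1.1 GET per TCP
   connection, closed immediately afterwards) is shown below.  The
   host and object path are placeholders, not values from this
   document.

      import http.client

      def one_transaction(host, path="/object_1kb"):
          """One GET per TCP connection, per Section 7.2.3.2."""
          conn = http.client.HTTPConnection(host, 80)
          conn.request("GET", path, headers={"Connection": "close"})
          body = conn.getresponse().read()
          conn.close()
          return len(body)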
7.2.3.3.  Test Results Acceptance Criteria

   The following test criteria are defined as test results acceptance
   criteria:

   a.  The number of failed application transactions MUST be less than
       0.01% of attempted transactions.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.01% of total
       initiated TCP sessions.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  During the sustain phase, the average transaction latency MUST
       be constant and not increase by more than 10%.

   e.  Concurrent TCP connections should be constant during steady
       state.  This confirms that the DUT opens and closes sessions at
       almost the same rate.

7.2.3.4.  Measurement

   The following KPI metrics MUST be reported for this test scenario.

   Mandatory KPIs: average TCP connections per second, average
   throughput, and average Time to TCP First Byte.

7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IPv4 and IPv6 traffic distributions.

7.2.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "initial connections per second" as defined in the
   parameters section.  The traffic load profile can be defined as
   described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "initial connections per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet the acceptance criteria a, b, c, and d defined in
   Section 7.2.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.2.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target connections
   per second" defined in the parameters table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases, other KPIs such as
   throughput, concurrent TCP connections, and application
   transactions MUST NOT reach the maximum value that the DUT/SUT can
   support.

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement MUST be less than 5 seconds.
   Continue the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target connections per
   second rate in the sustain phase.  In addition, the measured KPIs
   must meet all acceptance criteria.

   Follow step 3 if the KPI metrics do not meet the acceptance
   criteria.

7.2.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable connections per second
   within the acceptance criteria.

7.3.  HTTP Transaction per Second

7.3.1.  Objective

   Using HTTP 1.1 traffic, determine the maximum sustainable HTTP
   transactions per second supported by the DUT/SUT under different
   throughput load conditions.

   Test iterations MUST use HTTP transaction object sizes of 1 KB,
   16 KB, and 64 KB to measure transactions per second performance
   under a variety of DUT security inspection load conditions.  Each
   HTTP connection MUST have 1 HTTP GET session.

7.3.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as number of
   interfaces and interface type, etc., must be documented.

7.3.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.3.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.3.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.
   The following parameters MUST be documented for this test scenario:

   -  Client IP address range defined in Section 4.3.1.2

   -  Server IP address range defined in Section 4.3.2.2

   -  Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

   -  Target transactions per second: initial value from the product
      data sheet (if known)

   -  Initial transactions per second: 10% of "Target transactions per
      second"

   The test scenario SHOULD be run with a single traffic profile with
   the following attributes:

   The client MUST negotiate a TCP connection and close the connection
   immediately after completion of X transactions (the number of
   transactions per connection needs to be defined and will be updated
   in the next release).

   HTTP 1.1 with a GET command requesting a single 1, 16, or 64 Kbyte
   object.

7.3.3.3.  Test Results Acceptance Criteria

   The following test criteria are defined as test results acceptance
   criteria.  The test results acceptance criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.  The
   ramp up and ramp down phases SHOULD NOT be considered.

   a.  The number of failed application transactions MUST be zero.

   b.  The number of terminated HTTP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be less than 0.01% of total
       initiated HTTP sessions.

   c.  The average Time to TCP First Byte MUST be constant and not
       increase by more than 10%.

   d.  The deviation of concurrent TCP connections MUST be less than
       10%.

7.3.3.4.  Measurement

   The following KPI metrics MUST be reported for this test scenario:

   average TCP transactions per second, average throughput, average
   Time to TCP First Byte, and average transaction latency.

7.3.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the HTTP transactions per
   second rate of the DUT/SUT during the sustain phase of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IPv4 and IPv6 traffic distributions.

7.3.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "initial transactions per second" as defined in the
   parameters section.  The traffic load profile can be defined as
   described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "initial transactions per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet the acceptance criteria a, b, c, and d defined in
   Section 7.3.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.3.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target transactions
   per second" defined in the parameters table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases, other KPIs such as
   throughput, concurrent TCP connections, and TCP connection rate
   MUST NOT reach the maximum value that the DUT/SUT can support.

   The test equipment SHOULD start to measure and record all specified
   KPIs.
   The frequency of measurement MUST be less than 5 seconds.  Continue
   the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target transactions
   per second rate in the sustain phase.  In addition, the measured
   KPIs must meet all acceptance criteria.

   Follow step 3 if the KPI metrics do not meet the acceptance
   criteria.

7.3.4.3.  Step 3: Test Iteration

   Determine the maximum and average achievable transactions per
   second within the acceptance criteria.  The final test iteration
   MUST be performed for the test duration defined in Section 4.3.4.

7.4.  TCP/HTTP Transaction Latency

7.4.1.  Objective

   Using HTTP traffic, determine the average TCP connect time and the
   average HTTP transaction latency when the DUT is running at the
   sustainable HTTP session establishment rate supported by the
   DUT/SUT with different HTTP object sizes.

   Test iterations MUST be performed with different object sizes
   twice: once with a single transaction and once with multiple
   transactions within a single TCP session.  For consistency, both
   the single and multiple transaction tests need to be configured
   with HTTP 1.1.

7.4.2.  Test Setup

   The test bed setup SHOULD be configured as defined in Section 4.
   Any specific test bed configuration changes, such as number of
   interfaces and interface type, etc., must be documented.

7.4.3.  Test Parameters

   In this section, test scenario specific parameters SHOULD be
   defined.

7.4.3.1.  DUT/SUT Configuration Parameters

   DUT/SUT parameters MUST conform to the requirements defined in
   Section 4.2.  Any configuration changes for this specific test
   scenario MUST be documented.

7.4.3.2.  Test Equipment Configuration Parameters

   Test equipment configuration parameters MUST conform to the
   requirements defined in Section 4.3.  The following parameters MUST
   be documented for this test scenario:

   -  Client IP address range defined in Section 4.3.1.2

   -  Server IP address range defined in Section 4.3.2.2

   -  Traffic distribution ratio between IPv4 and IPv6 defined in
      Section 4.3.1.2

   -  Target connections per second: 50% of the value measured in test
      scenario 7.2

   -  Initial connections per second: 10% of "Target connections per
      second"

   -  HTTP transactions per connection: one test scenario with a
      single transaction and another scenario with up to 10
      transactions (the recommended value is 10)

   The test scenario SHOULD be run with a single traffic profile with
   the following attributes:

   To observe the transaction latency with a single transaction per
   connection and with multiple transactions per connection, the test
   should run twice:

   1st test run: the client MUST negotiate HTTP 1.1 and close the
   connection immediately after completion of the transaction.

   2nd test run: the client MUST negotiate HTTP 1.1 and close the
   connection after 10 transactions (GET and RESPONSE) within a single
   TCP connection.

   HTTP 1.1 with a GET command requesting a single 1, 16, or 64 Kbyte
   object.  For each test iteration, the client MUST request a single
   object size.
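   A sketch of the second test run (ten GET/RESPONSE transactions over
   one persistent HTTP/1.1 connection, then close) is shown below; the
   host and object path are placeholders.

      import http.client

      def ten_transactions(host, path="/object_16kb"):
          """2nd test run of Section 7.4.3.2: 10 transactions, 1 connection."""
          conn = http.client.HTTPConnection(host, 80)  # HTTP/1.1, persistent
          for _ in range(10):
              conn.request("GET", path)
              conn.getresponse().read()  # connection stays open between GETs
          conn.close()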
7.4.3.3.  Test Results Acceptance Criteria

   The following test criteria are defined as test results acceptance
   criteria.  The test results acceptance criteria MUST be monitored
   during the whole sustain phase of the traffic load profile.  The
   ramp up and ramp down phases SHOULD NOT be considered.

   Generic criteria:

   a.  The number of failed application transactions MUST be zero.

   b.  The number of terminated TCP connections due to unexpected TCP
       RST sent by the DUT/SUT MUST be zero.

   c.  During the sustain phase, traffic should be forwarded at a
       constant rate.

   d.  During the sustain phase, the average connect time and average
       transaction time MUST be constant and the latency deviation
       SHOULD NOT increase by more than 10%.

   e.  Concurrent TCP connections should be constant during steady
       state.  This confirms that the DUT opens and closes sessions at
       the same rate.  After ramp up, the DUT must achieve the target
       connections per second objective defined in the parameters
       section and remain in that state for the entire test duration
       (steady state).

7.4.3.4.  Measurement

   The following KPI metrics MUST be reported for each test scenario
   and object size separately:

   average TCP connections per second, average transaction latency,
   and average TCP connect time.

   All KPIs are measured once the target connections per second rate
   reaches the steady state.

7.4.4.  Test Procedures and Expected Results

   The test procedure is designed to measure latency statistics,
   namely the average connect time latency and the average transaction
   latencies, when the DUT is operating close to 50% of its maximum
   achievable connections per second.  This test procedure can be
   repeated multiple times with different IPv4 and IPv6 traffic
   distributions, object sizes, and single and multiple transactions
   per connection scenarios.

7.4.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all connected physical interfaces.  All
   interfaces are expected to be in "UP" status.

   Configure the traffic load profile of the test equipment to
   establish the "initial connections per second" as defined in the
   parameters section.  The traffic load profile can be defined as
   described in Section 4.3.4.

   The DUT/SUT SHOULD reach the "initial connections per second"
   before the sustain phase.  The measured KPIs during the sustain
   phase MUST meet the acceptance criteria a, b, c, d, and e defined
   in Section 7.4.3.3.

   If the KPI metrics do not meet the acceptance criteria, the test
   procedure MUST NOT be continued to "Step 2".

7.4.4.2.  Step 2: Test Run with Target Objective

   Configure the test equipment to establish the "Target connections
   per second" defined in the parameters table.  The test equipment
   SHOULD follow the traffic load profile definition as described in
   Section 4.3.4.

   During the ramp up and sustain phases, other KPIs such as
   throughput, concurrent TCP connections, and application
   transactions MUST NOT reach the maximum value that the DUT/SUT can
   support.  The number of transactions per HTTP connection may have
   to be reduced below 10 so that the maximum throughput or
   transaction rate of the DUT/SUT is not reached.

   The test equipment SHOULD start to measure and record all specified
   KPIs.  The frequency of measurement MUST be less than 5 seconds.
   Continue the test until all traffic profile phases are completed.

   The DUT/SUT is expected to reach the desired target connections per
   second rate in the sustain phase.  In addition, the measured KPIs
   must meet all acceptance criteria.

   Follow step 3 if the KPI metrics do not meet the acceptance
   criteria.
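   For reference, the TCP connect time KPI measured in this test is
   defined in Section 6.1 as the time from SYN to SYN/ACK.  The sketch
   below approximates it from user space as the duration of a
   connect() call; this is an approximation, not the test equipment's
   measurement method, and the host and port are placeholders.

      import socket
      import time

      def tcp_connect_time_ms(host, port=80):
          """Approximate the TCP connect time KPI of Section 6.1."""
          t0 = time.perf_counter()
          s = socket.create_connection((host, port), timeout=5)
          elapsed_ms = (time.perf_counter() - t0) * 1000.0
          s.close()
          return elapsed_ms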
1305 7.4.4.3. Step 3: Test Iteration 1307 Determine the maximum achievable connections per second within the 1308 acceptance criteria and measure the latency values. 1310 7.5. HTTP Throughput 1312 7.5.1. Objective 1314 Determine the HTTP throughput of the DUT/SUT while varying the object 1315 size. 1317 7.5.2. Test Setup 1319 Test bed setup SHOULD be configured as defined in Section 4. Any 1320 specific test bed configuration changes such as number of interfaces 1321 and interface type, etc. MUST be documented. 1323 7.5.3. Test Parameters 1325 In this section, test scenario specific parameters SHOULD be defined. 1327 7.5.3.1. DUT/SUT Configuration Parameters 1329 DUT/SUT parameters MUST conform to the requirements defined in 1330 Section 4.2. Any configuration changes for this specific test 1331 scenario MUST be documented. 1333 7.5.3.2. Test Equipment Configuration Parameters 1335 Test equipment configuration parameters MUST conform to the 1336 requirements defined in Section 4.3. The following parameters MUST 1337 be documented for this test scenario: 1339 - Client IP address range defined in 4.3.1.2 1341 - Server IP address range defined in 4.3.2.2 1343 - Target Throughput: Initial value from product data sheet (if known) 1345 - Number of object requests per connection: 10 1346 - HTTP Response Object Size: 16KB, 64KB, 256KB, and mixed objects (a selection sketch follows Table 3) 1348 +---------------------+---------------------+ 1349 | Object size (KByte) | Number of requests/ | 1350 | | Weight | 1351 +---------------------+---------------------+ 1352 | 0.2 | 1 | 1353 +---------------------+---------------------+ 1354 | 6 | 1 | 1355 +---------------------+---------------------+ 1356 | 8 | 1 | 1357 +---------------------+---------------------+ 1358 | 9 | 1 | 1359 +---------------------+---------------------+ 1360 | 10 | 1 | 1361 +---------------------+---------------------+ 1362 | 25 | 1 | 1363 +---------------------+---------------------+ 1364 | 26 | 1 | 1365 +---------------------+---------------------+ 1366 | 35 | 1 | 1367 +---------------------+---------------------+ 1368 | 59 | 1 | 1369 +---------------------+---------------------+ 1370 | 347 | 1 | 1371 +---------------------+---------------------+ 1373 Table 3: Mixed Objects
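The mixed-objects distribution of Table 3 can be realized in test tooling with a weighted random choice. The sketch below keeps explicit weights even though Table 3 weights every size equally, so that other mixes can be substituted; it is an illustrative assumption, not a mandated implementation.

   import random

   # Object sizes in KByte and request weights from Table 3.
   MIXED_OBJECTS = [(0.2, 1), (6, 1), (8, 1), (9, 1), (10, 1),
                    (25, 1), (26, 1), (35, 1), (59, 1), (347, 1)]

   SIZES = [size for size, _ in MIXED_OBJECTS]
   WEIGHTS = [weight for _, weight in MIXED_OBJECTS]

   def next_object_size_kbytes():
       # Pick the response object size for the next request,
       # honoring the per-size weights of the mix.
       return random.choices(SIZES, weights=WEIGHTS, k=1)[0]

   # Example: object sizes for the 10 requests of one connection.
   print([next_object_size_kbytes() for _ in range(10)])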
1375 7.5.3.3. Test Results Acceptance Criteria 1377 The following test criteria are defined as test results acceptance 1378 criteria. Test results acceptance criteria MUST be monitored during 1379 the whole sustain phase of the traffic load profile. The ramp up and 1380 ramp down phases SHOULD NOT be considered. 1382 a. Number of failed application transactions MUST be less than 0.01% 1383 of attempted transactions. 1385 b. Traffic SHOULD be forwarded at a constant rate. 1387 c. The deviation of concurrent TCP connections MUST be less than 10%. 1389 d. The deviation of average HTTP transaction latency MUST be less 1390 than 10%. 1392 7.5.3.4. Measurement 1394 The following KPI metric MUST be reported for this test scenario: 1396 average throughput. 1398 7.5.4. Test Procedures and Expected Results 1400 The test procedure is designed to measure the HTTP throughput of the 1401 DUT/SUT. The test procedure consists of three major steps. This test 1402 procedure MAY be repeated multiple times with different IPv4 and IPv6 1403 traffic distributions and object sizes. 1405 7.5.4.1. Step 1: Test Initialization and Qualification 1407 Verify the link status of all connected physical interfaces. All 1408 interfaces are expected to be in "UP" status. 1410 Configure the traffic load profile of the test equipment to establish 1411 "initial throughput" as defined in the parameters section. 1413 The traffic load profile SHOULD be defined as described in 1414 Section 4.3.4. The DUT/SUT SHOULD reach the "initial throughput" 1415 during the sustain phase. Measure all KPIs as defined in 1416 Section 7.5.3.4. 1418 The measured KPIs during the sustain phase MUST meet the acceptance 1419 criterion "a" defined in Section 7.5.3.3. 1421 If the KPI metrics do not meet the acceptance criteria, the test 1422 procedure MUST NOT continue to "Step 2". 1424 7.5.4.2. Step 2: Test Run with Target Objective 1426 The test equipment SHOULD start to measure and record all specified 1427 KPIs. The frequency of measurement MUST be less than 5 seconds. 1428 Continue the test until all traffic profile phases are completed. 1430 The DUT/SUT is expected to reach the desired target throughput 1431 at the sustain phase. In addition, the measured KPIs MUST meet all 1432 acceptance criteria. 1434 Perform the test separately for each object size (16KB, 64KB, 256KB, 1435 and mixed objects). 1437 Proceed to step 3 if the KPI metrics do not meet the acceptance 1438 criteria. 1440 7.5.4.3. Step 3: Test Iteration 1442 Determine the maximum and average achievable throughput within the 1443 acceptance criteria. The final test iteration MUST be performed for 1444 the test duration defined in Section 4.3.4. 1446 7.6. Concurrent TCP/HTTP Connection Capacity 1448 7.6.1. Objective 1450 Determine the maximum number of concurrent TCP connections that the 1451 DUT/SUT can sustain when using HTTP traffic. 1453 7.6.2. Test Setup 1455 Test bed setup SHOULD be configured as defined in Section 4. Any 1456 specific test bed configuration changes such as number of interfaces 1457 and interface type, etc. MUST be documented. 1459 7.6.3. Test Parameters 1461 In this section, test scenario specific parameters SHOULD be defined. 1463 7.6.3.1. DUT/SUT Configuration Parameters 1465 DUT/SUT parameters MUST conform to the requirements defined in 1466 Section 4.2. Any configuration changes for this specific test 1467 scenario MUST be documented. 1469 7.6.3.2. Test Equipment Configuration Parameters 1471 Test equipment configuration parameters MUST conform to the 1472 requirements defined in Section 4.3. The following parameters MUST be 1473 documented for this test scenario: 1475 Client IP address range defined in 4.3.1.2 1477 Server IP address range defined in 4.3.2.2 1479 Traffic distribution ratio between IPv4 and IPv6 defined in 1480 4.3.1.2 1482 Target concurrent connections: Initial value from product data 1483 sheet (if known) 1485 Initial concurrent connections: 10% of "Target concurrent 1486 connections" 1488 The client MUST negotiate HTTP 1.1 with persistence, and each client 1489 MAY open multiple concurrent TCP connections per server endpoint IP. 1491 Test scenario SHOULD be run with a single traffic profile with the 1492 following attributes: 1494 HTTP 1.1 with GET command requesting 10 Kbyte objects with random 1495 MIME type. 1497 The test equipment SHOULD perform HTTP transactions within each TCP 1498 connection sequentially (see the sketch after this subsection). The 1499 frequency of transactions MUST be defined to achieve X% of the total 1500 throughput that the DUT can support. The suggested value of X is 25. 1501 It will be finalized and updated in the next draft version. 1503 During the sustain phase of concurrent connections and traffic load, 1504 a minimal percentage of the TCP connections SHOULD be closed and re-opened.
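As referenced above, the sustain-phase behavior combines a large stable connection pool, paced transactions, and a small churn of closed and re-opened connections. The sketch below illustrates one way tooling might do this; the endpoint names, pool size, 1% churn share, and pacing are illustrative assumptions rather than requirements.

   import random
   import time
   from http.client import HTTPConnection

   HOST, PORT, PATH = "server.example.test", 80, "/10kb.bin"
   TARGET_CONNECTIONS = 1000   # "Target concurrent connections"
   CHURN_PERCENT = 1           # minimal share closed/re-opened per cycle

   def open_connection():
       conn = HTTPConnection(HOST, PORT)
       conn.connect()
       return conn

   pool = [open_connection() for _ in range(TARGET_CONNECTIONS)]
   sustain_until = time.monotonic() + 300          # sustain phase
   while time.monotonic() < sustain_until:
       # Perform transactions sequentially on a sample of connections,
       # paced so the aggregate load stays near X% of DUT capacity.
       for conn in random.sample(pool, k=len(pool) // 100):
           conn.request("GET", PATH)
           conn.getresponse().read()
       # Close and re-open a minimal percentage of the connections.
       for _ in range(len(pool) * CHURN_PERCENT // 100):
           victim = pool.pop(random.randrange(len(pool)))
           victim.close()
           pool.append(open_connection())
       time.sleep(1)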
1506 7.6.3.3. Test Results Acceptance Criteria 1508 To be done 1510 7.6.3.4. Measurement 1512 The following KPI metrics MUST be reported for this test scenario: 1514 average throughput; concurrent TCP connections (minimum, average, 1515 and maximum); TTLB/application transaction time (minimum, average, 1516 and maximum); and average application transaction rate. 1518 7.6.4. Test Procedures and Expected Results 1520 The test procedure is designed to measure the concurrent TCP 1521 connection capacity of the DUT/SUT during the sustain phase of the 1522 traffic load profile. The test procedure consists of three major 1523 steps. This test procedure MAY be repeated multiple times with 1524 different IPv4 and IPv6 traffic distributions. 1526 7.6.4.1. Step 1: Test Initialization and Qualification 1528 Verify the link status of all connected physical interfaces. All 1529 interfaces are expected to be in "UP" status. 1531 Configure the traffic load profile of the test equipment to establish 1532 "initial concurrent connections" as defined in the parameters section. 1533 The traffic load profile SHOULD be defined as described in 1534 Section 4.3.4. 1536 The DUT/SUT SHOULD reach the "initial concurrent connections" during 1537 the sustain phase. The measured KPIs during the sustain phase MUST 1538 meet the acceptance criteria "a" and "b" defined in Section 7.6.3.3. 1540 If the KPI metrics do not meet the acceptance criteria, the test 1541 procedure MUST NOT continue to "Step 2". 1543 7.6.4.2. Step 2: Test Run with Target Objective 1545 Configure test equipment to establish "Target concurrent connections" 1546 defined in the parameters table. The test equipment SHOULD follow 1547 the traffic load profile definition as described in Section 4.3.4. 1549 During the ramp up and sustain phase, the other KPIs such as 1550 throughput, TCP connection rate, and application transactions MUST 1551 NOT reach the maximum value that the DUT/SUT can support; they 1552 SHOULD NOT exceed X% of the maximum value that the DUT can 1553 support. The suggested 1554 value of X is 25. It will be finalized and updated in the next draft 1555 version. 1557 The test equipment SHOULD start to measure and record all specified 1558 KPIs. The frequency of measurement MUST be less than 5 seconds. 1559 Continue the test until all traffic profile phases are completed. 1561 The DUT/SUT is expected to reach the desired target concurrent 1562 connections at the sustain phase. In addition, the measured KPIs MUST 1563 meet all acceptance criteria. 1565 Proceed to step 3 if the KPI metrics do not meet the acceptance 1566 criteria. 1568 7.6.4.3. Step 3: Test Iteration 1570 Determine the maximum and average achievable concurrent connection 1571 capacity within the acceptance criteria. 1573 7.7. TCP/HTTPS Connections Per Second 1575 7.7.1. Objective 1577 Using HTTPS traffic, determine the maximum sustainable SSL/TLS 1578 session establishment rate supported by the DUT/SUT under different 1579 throughput load conditions. 1581 Test iterations MUST include common cipher suites and key strengths 1582 as well as forward-looking stronger keys. Specific test iterations 1583 MUST include the following ciphers and keys (a client configuration 1584 sketch follows the list): 1585 1. ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 1587 2. ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 1589 3. ECDHE-ECDSA-AES256-GCM-SHA384 with Secp384 1591 4. ECDHE-RSA-AES256-GCM-SHA384 with RSA 3072
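In client tooling, each iteration can pin exactly one of the cipher suites above. The fragment below restricts a client TLS context to cipher 2 of the list using OpenSSL cipher names via Python's ssl module; the endpoint name and the relaxed certificate checks (appropriate only for an isolated test bed with emulated servers) are illustrative assumptions.

   import socket
   import ssl

   # OpenSSL name for cipher 2 of the list above.
   CIPHER = "ECDHE-RSA-AES128-GCM-SHA256"

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # names above are TLS 1.2 suites
   ctx.set_ciphers(CIPHER)                       # refuse anything else
   ctx.check_hostname = False                    # isolated test bed only
   ctx.verify_mode = ssl.CERT_NONE

   # Hypothetical emulated server endpoint.
   with socket.create_connection(("server.example.test", 443)) as raw:
       with ctx.wrap_socket(raw, server_hostname="server.example.test") as tls:
           print("negotiated:", tls.cipher())    # (name, protocol, bits)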
1593 For each cipher suite and key strength, test iterations MUST use a 1594 single HTTP transaction object size of 1KB, 16KB, and 64KB to measure 1595 connections per second performance under a variety of DUT security 1596 inspection load conditions. 1598 7.7.2. Test Setup 1600 Test bed setup SHOULD be configured as defined in Section 4. Any 1601 specific test bed configuration changes such as number of interfaces 1602 and interface type, etc. MUST be documented. 1604 7.7.3. Test Parameters 1606 In this section, test scenario specific parameters SHOULD be defined. 1608 7.7.3.1. DUT/SUT Configuration Parameters 1610 DUT/SUT parameters MUST conform to the requirements defined in 1611 Section 4.2. Any configuration changes for this specific test 1612 scenario MUST be documented. 1614 7.7.3.2. Test Equipment Configuration Parameters 1616 Test equipment configuration parameters MUST conform to the 1617 requirements defined in Section 4.3. The following parameters MUST 1618 be documented for this test scenario: 1620 - Client IP address range defined in 4.3.1.2 1622 - Server IP address range defined in 4.3.2.2 1624 - Traffic distribution ratio between IPv4 and IPv6 defined in 4.3.1.2 1626 - Target connections per second: Initial value from product data 1627 sheet (if known) 1629 - Initial connections per second: 10% of "Target connections per 1630 second" 1631 The client MUST negotiate HTTPS 1.1 and close the connection 1632 immediately after completion of the transaction. 1634 Test scenario SHOULD be run with a single traffic profile with the 1635 following attributes: 1637 HTTPS 1.1 with GET command requesting 1, 16, and 64 Kbyte objects with 1638 random MIME type; one transaction per TCP connection. 1640 Each client connection MUST perform a full handshake with server 1641 certificate (no certificate on the client side) and MUST NOT use 1642 session reuse or resumption (see the sketch after this subsection). 1644 TLS record size MAY be optimized for the object size up to a record 1645 size of 16K.
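The "full handshake, no resumption" requirement can be approximated in client tooling by never carrying a TLS session object across connections, so every connection pays the complete handshake cost. The sketch below counts such connections per second; the endpoint, object path, measurement window, and relaxed certificate checks are illustrative assumptions for an isolated test bed.

   import socket
   import ssl
   import time

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.check_hostname = False            # isolated test bed only
   ctx.verify_mode = ssl.CERT_NONE

   def one_full_handshake(host="server.example.test", port=443):
       # A brand-new connection with no saved session passed in,
       # so the handshake is always a full handshake.
       with socket.create_connection((host, port)) as raw:
           with ctx.wrap_socket(raw, server_hostname=host) as tls:
               assert not tls.session_reused
               tls.sendall(b"GET /1kb.bin HTTP/1.1\r\n"
                           b"Host: server.example.test\r\n"
                           b"Connection: close\r\n\r\n")
               while tls.recv(65536):
                   pass                  # drain the response

   start, count = time.monotonic(), 0
   while time.monotonic() - start < 10:  # 10-second measurement window
       one_full_handshake()
       count += 1
   print("connections per second:", count / 10)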
1647 7.7.3.3. Test Results Acceptance Criteria 1649 The following test criteria are defined as test results acceptance 1650 criteria. 1652 a. Number of failed application transactions MUST be less than 0.01% 1653 of attempted transactions. 1655 b. Number of TCP connections terminated due to unexpected TCP RST 1656 sent by the DUT/SUT MUST be less than 0.01% of total initiated TCP 1657 sessions. 1659 c. During the sustain phase, traffic SHOULD be forwarded at a 1660 constant rate. 1662 d. During the sustain phase, Average Time to TCP First Byte MUST be 1663 constant and the deviation of latency MUST NOT increase by more than 1664 10%. 1666 e. The number of concurrent TCP connections SHOULD be constant during 1667 the steady state. This confirms that the DUT opens and closes 1668 sessions at the same rate. 1670 7.7.3.4. Measurement 1672 The following KPI metrics MUST be reported for this test scenario. 1674 Mandatory KPIs: average TCP connections per second, average 1675 throughput, and Average Time to TCP First Byte. 1677 7.7.4. Test Procedures and Expected Results 1679 The test procedure is designed to measure the TCP connections per 1680 second rate of the DUT/SUT during the sustain phase of the traffic 1681 load profile. The test procedure consists of three major steps. This 1682 test procedure MAY be repeated multiple times with different IPv4 and 1683 IPv6 traffic distributions. 1685 7.7.4.1. Step 1: Test Initialization and Qualification 1687 Verify the link status of all connected physical interfaces. All 1688 interfaces are expected to be in "UP" status. 1690 Configure the traffic load profile of the test equipment to establish 1691 "initial connections per second" as defined in the parameters 1692 section. The traffic load profile MAY be defined as described in 1693 Section 4.3.4. 1695 The DUT/SUT SHOULD reach the "initial connections per second" before 1696 the sustain phase. The measured KPIs during the sustain phase MUST 1697 meet the acceptance criteria a, b, c, and d defined in Section 1698 7.7.3.3. 1700 If the KPI metrics do not meet the acceptance criteria, the test 1701 procedure MUST NOT continue to "Step 2". 1703 7.7.4.2. Step 2: Test Run with Target Objective 1705 Configure test equipment to establish "Target connections per second" 1706 defined in the parameters table. The test equipment SHOULD follow 1707 the traffic load profile definition as described in 1708 Section 4.3.4. 1710 During the ramp up and sustain phase, other KPIs such as throughput, 1711 TCP concurrent connections, and application transactions MUST NOT 1712 reach the maximum value the DUT/SUT can support. 1714 The test equipment SHOULD start to measure and record all specified 1715 KPIs. The frequency of measurement MUST be less than 5 seconds. 1716 Continue the test until all traffic profile phases are completed. 1718 The DUT/SUT is expected to reach the desired target connections per 1719 second rate at the sustain phase. In addition, the measured KPIs 1720 MUST meet all acceptance criteria. 1722 Proceed to step 3 if the KPI metrics do not meet the acceptance 1723 criteria. 1725 7.7.4.3. Step 3: Test Iteration 1727 Determine the maximum and average achievable connections per second 1728 within the acceptance criteria. 1730 7.8. HTTPS Transaction per Second 1732 7.8.1. Objective 1734 Using HTTPS traffic, determine the maximum sustainable Transactions 1735 per second rate supported by the DUT/SUT under different throughput 1736 load conditions. 1738 Test iterations MUST include common cipher suites and key strengths 1739 as well as forward-looking stronger keys. Specific test iterations 1740 MUST include the following ciphers and keys: 1742 1. ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 1744 2. ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 1746 3. ECDHE-ECDSA-AES256-GCM-SHA384 with Secp384 1748 4. ECDHE-RSA-AES256-GCM-SHA384 with RSA 3072 1750 7.8.2. Test Setup 1752 Test bed setup SHOULD be configured as defined in Section 4. Any 1753 specific test bed configuration changes such as number of interfaces 1754 and interface type, etc. MUST be documented. 1756 7.8.3. Test Parameters 1758 In this section, test scenario specific parameters SHOULD be defined. 1760 7.8.3.1. DUT/SUT Configuration Parameters 1762 DUT/SUT parameters MUST conform to the requirements defined in 1763 Section 4.2. Any configuration changes for this specific test 1764 scenario MUST be documented. 1766 7.8.3.2. Test Equipment Configuration Parameters 1768 Test equipment configuration parameters MUST conform to the 1769 requirements defined in Section 4.3.
The following parameters MUST 1770 be documented for this test scenario: 1772 - Client IP address range defined in 4.3.1.2 1773 - Server IP address range defined in 4.3.2.2 1775 - Traffic distribution ratio between IPv4 and IPv6 defined in 4.3.1.2 1777 - Target Transactions per second: Initial value from product data 1778 sheet (if known) 1780 - Initial Transactions per second: 10% of "Target Transactions per 1781 second" 1783 Test scenario SHOULD be run with a single traffic profile with the 1784 following attributes: 1786 The client MUST negotiate HTTPS 1.1 and close the connection 1787 immediately after completion of X transactions (the number of 1788 transactions per connection needs to be defined and will be updated 1789 in the next release). 1791 HTTPS 1.1 with GET command requesting a single 1, 16, or 64 Kbyte 1792 object. 1794 Each client connection MUST perform a full handshake with server 1795 certificate and SHOULD NOT use session reuse or resumption. 1797 TLS record size MAY be optimized for the object size up to a record 1798 size of 16K. 1800 7.8.3.3. Test Results Acceptance Criteria 1802 The following test criteria are defined as test results acceptance 1803 criteria. Test results acceptance criteria MUST be monitored during 1804 the whole sustain phase of the traffic load profile. The ramp up and 1805 ramp down phases SHOULD NOT be considered. 1807 a. Number of failed application transactions MUST be zero. 1809 b. Number of terminated HTTP connections due to unexpected TCP RST 1810 sent by the DUT/SUT MUST be less than 0.01% of total initiated HTTP 1811 sessions. 1813 c. Average Time to TCP First Byte MUST be constant and MUST NOT 1815 increase by more than 10%. 1818 d. The deviation of concurrent TCP connections MUST be less than 10%. 1820 7.8.3.4. Measurement 1822 The following KPI metrics MUST be reported for this test scenario: 1824 average TCP transactions per second, average throughput, Average Time 1825 to TCP First Byte, and average transaction latency. 1827 7.8.4. Test Procedures and Expected Results 1829 The test procedure is designed to measure the HTTPS transactions per 1830 second rate of the DUT/SUT during the sustain phase of the traffic 1831 load profile. The test procedure consists of three major steps. This 1832 test procedure MAY be repeated multiple times with different IPv4 and 1833 IPv6 traffic distributions, object sizes, and ciphers and keys. 1835 7.8.4.1. Step 1: Test Initialization and Qualification 1837 Verify the link status of all connected physical interfaces. All 1838 interfaces are expected to be in "UP" status. 1840 Configure the traffic load profile of the test equipment to establish 1841 "initial Transactions per second" as defined in the parameters 1842 section. The traffic load profile MAY be defined as described in 1843 Section 4.3.4. 1845 The DUT/SUT SHOULD reach the "initial Transactions per second" before 1846 the sustain phase. The measured KPIs during the sustain phase MUST 1847 meet the acceptance criteria a, b, c, and d defined in Section 1848 7.8.3.3. 1850 If the KPI metrics do not meet the acceptance criteria, the test 1851 procedure MUST NOT continue to "Step 2". 1853 7.8.4.2. Step 2: Test Run with Target Objective 1855 Configure test equipment to establish "Target Transactions per 1856 second" defined in the parameters table. The test equipment SHOULD 1857 follow the traffic load profile definition as described in 1858 Section 4.3.4.
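That profile consists of ramp up, sustain, and ramp down phases. A sketch of the phase logic used to pace the offered load is shown below; the phase durations and the linear ramps are illustrative assumptions, not values mandated by this document.

   def target_load(elapsed, target, ramp_up=60, sustain=300, ramp_down=60):
       # Offered load (e.g., transactions per second) at a given
       # elapsed time for a ramp up / sustain / ramp down profile.
       if elapsed < ramp_up:                        # linear ramp up
           return target * elapsed / ramp_up
       if elapsed < ramp_up + sustain:              # hold at target
           return target
       if elapsed < ramp_up + sustain + ramp_down:  # linear ramp down
           done = elapsed - ramp_up - sustain
           return target * (1 - done / ramp_down)
       return 0

   # Sample the profile at 5-second steps, the maximum measurement
   # interval allowed by the procedures in this section.
   profile = [target_load(t, target=1000) for t in range(0, 421, 5)]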
1860 During the ramp up and sustain phase, other KPIs such as throughput, 1861 TCP concurrent connections, and TCP connection rate MUST NOT reach 1862 the maximum value the DUT/SUT can support. 1864 The test equipment SHOULD start to measure and record all specified 1865 KPIs. The frequency of measurement MUST be less than 5 seconds. 1866 Continue the test until all traffic profile phases are completed. 1868 The DUT/SUT is expected to reach the desired target transactions per 1869 second rate at the sustain phase. In addition, the measured KPIs 1870 MUST meet all acceptance criteria. 1872 Proceed to step 3 if the KPI metrics do not meet the acceptance 1873 criteria. 1875 7.8.4.3. Step 3: Test Iteration 1877 Determine the maximum and average achievable Transactions per second 1878 within the acceptance criteria. The final test iteration MUST be 1879 performed for the test duration defined in Section 4.3.4. 1881 7.9. HTTPS Transaction Latency 1883 7.9.1. Objective 1885 Using HTTPS traffic, determine the average TCP connect time and the 1886 average HTTPS transaction latency when the DUT/SUT is running at the 1887 sustainable HTTPS session establishment rate it supports, under 1888 different object sizes. 1890 Test parameters and test procedures will be added in a future 1891 release. 1893 7.10. HTTPS Throughput 1895 7.10.1. Objective 1897 Determine the HTTPS throughput of the DUT/SUT while varying the 1898 object size. 1900 Test iterations MUST include common cipher suites and key strengths 1901 as well as forward-looking stronger keys. Specific test iterations 1902 MUST include the following ciphers and keys: 1904 1. ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 1906 2. ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 1908 3. ECDHE-ECDSA-AES256-GCM-SHA384 with Secp384 1910 4. ECDHE-RSA-AES256-GCM-SHA384 with RSA 3072 1912 7.10.2. Test Setup 1914 Test bed setup SHOULD be configured as defined in Section 4. Any 1915 specific test bed configuration changes such as number of interfaces 1916 and interface type, etc. MUST be documented. 1918 7.10.3. Test Parameters 1920 In this section, test scenario specific parameters SHOULD be defined. 1922 7.10.3.1. DUT/SUT Configuration Parameters 1924 DUT/SUT parameters MUST conform to the requirements defined in 1925 Section 4.2. Any configuration changes for this specific test 1926 scenario MUST be documented. 1928 7.10.3.2. Test Equipment Configuration Parameters 1930 Test equipment configuration parameters MUST conform to the 1931 requirements defined in Section 4.3.
The following parameters MUST 1932 be documented for this test scenario: 1934 - Client IP address range defined in 4.3.1.2 1936 - Server IP address range defined in 4.3.2.2 1938 - Target Throughput: Initial value from product data sheet (if known) 1940 - Number of object requests per connection: 10 1942 - HTTPS Response Object Size: 16KB, 64KB, 256KB, and mixed objects 1943 +---------------------+---------------------+ 1944 | Object size (KByte) | Number of requests/ | 1945 | | Weight | 1946 +---------------------+---------------------+ 1947 | 0.2 | 1 | 1948 +---------------------+---------------------+ 1949 | 6 | 1 | 1950 +---------------------+---------------------+ 1951 | 8 | 1 | 1952 +---------------------+---------------------+ 1953 | 9 | 1 | 1954 +---------------------+---------------------+ 1955 | 10 | 1 | 1956 +---------------------+---------------------+ 1957 | 25 | 1 | 1958 +---------------------+---------------------+ 1959 | 26 | 1 | 1960 +---------------------+---------------------+ 1961 | 35 | 1 | 1962 +---------------------+---------------------+ 1963 | 59 | 1 | 1964 +---------------------+---------------------+ 1965 | 347 | 1 | 1966 +---------------------+---------------------+ 1968 Table 4: Mixed Objects 1970 Each client connection MUST perform a full handshake with server 1971 certificate (no certificate on the client side), and 50% of the 1972 connections SHOULD use session reuse or resumption (see the sketch 1973 after this subsection). 1974 TLS record size MAY be optimized for the object size up to a record 1975 size of 16K.
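The 50% resumption requirement can be approximated in client tooling by saving the TLS session of a previous connection and passing it back in on roughly every second connection. The sketch below shows the mechanism; the endpoint name, the alternation scheme, and the relaxed certificate checks are illustrative assumptions for an isolated test bed.

   import socket
   import ssl

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.check_hostname = False             # isolated test bed only
   ctx.verify_mode = ssl.CERT_NONE

   HOST, PORT = "server.example.test", 443
   saved_session = None

   def connect(reuse):
       # Full handshake when reuse is False; attempt session
       # resumption with the previously saved session when True.
       global saved_session
       raw = socket.create_connection((HOST, PORT))
       session = saved_session if reuse else None
       tls = ctx.wrap_socket(raw, server_hostname=HOST, session=session)
       saved_session = tls.session        # save for later resumption
       return tls

   # Alternate so roughly 50% of connections resume a session.
   for i in range(100):
       tls = connect(reuse=(i % 2 == 1 and saved_session is not None))
       tls.close()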
1977 7.10.3.3. Test Results Acceptance Criteria 1979 The following test criteria are defined as test results acceptance 1980 criteria. Test results acceptance criteria MUST be monitored during 1981 the whole sustain phase of the traffic load profile. The ramp up and 1982 ramp down phases SHOULD NOT be considered. 1984 a. Number of failed application transactions MUST be less than 0.01% 1985 of attempted transactions. 1987 b. Traffic SHOULD be forwarded at a constant rate. 1989 c. The deviation of concurrent TCP connections MUST be less than 10%. 1990 d. The deviation of average HTTP transaction latency MUST be less 1991 than 10%. 1993 7.10.3.4. Measurement 1995 The following KPI metric MUST be reported for this test scenario: 1997 average throughput. 1999 7.10.4. Test Procedures and Expected Results 2001 The test procedure consists of three major steps. This test 2002 procedure MAY be repeated multiple times with different IPv4 and IPv6 2003 traffic distributions and object sizes. 2005 7.10.4.1. Step 1: Test Initialization and Qualification 2007 Verify the link status of all connected physical interfaces. All 2008 interfaces are expected to be in "UP" status. 2010 Configure the traffic load profile of the test equipment to establish 2011 "initial throughput" as defined in the parameters section. 2013 The traffic load profile SHOULD be defined as described in 2014 Section 4.3.4. The DUT/SUT SHOULD reach the "initial throughput" 2015 during the sustain phase. Measure all KPIs as defined in 2016 Section 7.10.3.4. 2018 The measured KPIs during the sustain phase MUST meet the acceptance 2019 criterion "a" defined in Section 7.10.3.3. 2021 If the KPI metrics do not meet the acceptance criteria, the test 2022 procedure MUST NOT continue to "Step 2". 2024 7.10.4.2. Step 2: Test Run with Target Objective 2026 The test equipment SHOULD start to measure and record all specified 2027 KPIs. The frequency of measurement MUST be less than 5 seconds. 2028 Continue the test until all traffic profile phases are completed. 2030 The DUT/SUT is expected to reach the desired target throughput 2031 at the sustain phase. In addition, the measured KPIs MUST meet all 2032 acceptance criteria. 2034 Perform the test separately for each object size (16KB, 64KB, 256KB, 2035 and mixed objects). 2037 Proceed to step 3 if the KPI metrics do not meet the acceptance 2038 criteria. 2040 7.10.4.3. Step 3: Test Iteration 2042 Determine the maximum and average achievable throughput within the 2043 acceptance criteria. The final test iteration MUST be performed for 2044 the test duration defined in Section 4.3.4. 2046 7.11. Concurrent TCP/HTTPS Connection Capacity 2048 7.11.1. Objective 2050 Using encrypted traffic (HTTPS), determine the maximum number of 2051 concurrent TCP connections that the DUT/SUT can sustain. 2053 Test parameters and test procedures will be added in a future 2054 release. 2056 8. Formal Syntax 2058 9. IANA Considerations 2060 This document makes no request of IANA. 2062 Note to RFC Editor: this section may be removed on publication as an 2063 RFC. 2065 10. Security Considerations 2067 Security considerations will be added in a future release. 2069 11. Acknowledgements 2071 Acknowledgements will be added in a future release. 2073 12. Contributors 2075 The authors would like to thank the many people that contributed 2076 their time and knowledge to this effort. 2078 Special thanks go to the co-chairs of the NetSecOPEN Test Methodology 2079 working group and the NetSecOPEN Security Effectiveness working group 2080 - Alex Samonte, Aria Eslambolchizadeh, Carsten Rossenhoevel and David 2081 DeSanto. 2083 Additionally, the following people provided input, comments, and 2084 spent time reviewing the myriad of drafts. If we have missed anyone, 2085 the fault is entirely our own. Thanks to - Amritam Putatunda, 2086 Balamuhunthan Balarajah, Brian Monkman, Chris Chapman, Chris Pearson, 2087 Chuck McAuley, David White, Jurrie Van Den Breekel, Michelle Rhines, 2088 Rob Andrews, Samaresh Nadir, Shay Filosof, and Tim Winters. 2090 13. Normative References 2092 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 2093 Requirement Levels", BCP 14, RFC 2119, 2094 DOI 10.17487/RFC2119, March 1997, 2095 <https://www.rfc-editor.org/info/rfc2119>. 2097 Appendix A. NetSecOPEN Basic Traffic Mix 2099 A traffic mix for testing performance of next-generation firewalls 2100 MUST scale to stress the DUT based on real-world conditions. In 2101 order to achieve this, the following MUST be included: 2103 o Clients connecting to multiple different server FQDNs per 2104 application 2106 o Clients loading apps and pages with connections and objects in 2107 specific orders 2109 o Multiple unique certificates for HTTPS/TLS 2111 o A wide variety of different object sizes 2113 o Different URL paths 2115 o Mix of HTTP and HTTPS 2117 A traffic mix for testing performance of next-generation firewalls 2118 MUST also facilitate application identification using different 2119 detection methods with and without decryption of the traffic, such 2120 as: 2122 o HTTP HOST based application detection 2124 o HTTPS/TLS Server Name Indication (SNI) 2126 o Certificate Subject Common Name (CN) 2128 The mix MUST be of sufficient complexity and volume to render 2129 differences in individual apps statistically insignificant. For 2130 example, like-to-like app substitutions - such as one type of video 2131 service vs. another - both consist of larger objects, whereas one news 2132 site vs. another both typically have more connections than other apps 2133 because of trackers and embedded advertising content.
To achieve 2134 sufficient complexity, a mix MUST have: 2136 o Thousands of URLs each client walks through 2138 o Hundreds of FQDNs each client connects to 2140 o Hundreds of unique certificates for HTTPS/TLS 2142 o Thousands of different object sizes per client in orders matching 2143 applications 2145 The following is a description of what a popular application in an 2146 enterprise traffic mix contains. 2148 Table 5 lists the FQDNs, number of transactions, and bytes transferred 2149 as an example client interacts with Office 365 Outlook, Word, Excel, 2150 Powerpoint, Sharepoint and Skype. 2152 +---------------------------------+------------+-------------+ 2153 | Office365 FQDN | Bytes | Transaction | 2154 +============================================================+ 2155 | r1.res.office365.com | 14,056,960 | 192 | 2156 +---------------------------------+------------+-------------+ 2157 | s1-word-edit-15.cdn.office.net | 6,731,019 | 22 | 2158 +---------------------------------+------------+-------------+ 2159 | company1-my.sharepoint.com | 6,269,492 | 42 | 2160 +---------------------------------+------------+-------------+ 2161 | swx.cdn.skype.com | 6,100,027 | 12 | 2162 +---------------------------------+------------+-------------+ 2163 | static.sharepointonline.com | 6,036,947 | 41 | 2164 +---------------------------------+------------+-------------+ 2165 | spoprod-a.akamaihd.net | 3,904,250 | 25 | 2166 +---------------------------------+------------+-------------+ 2167 | s1-excel-15.cdn.office.net | 2,767,941 | 16 | 2168 +---------------------------------+------------+-------------+ 2169 | outlook.office365.com | 2,047,301 | 86 | 2170 +---------------------------------+------------+-------------+ 2171 | shellprod.msocdn.com | 1,008,370 | 11 | 2172 +---------------------------------+------------+-------------+ 2173 | word-edit.officeapps.live.com | 932,080 | 25 | 2174 +---------------------------------+------------+-------------+ 2175 | res.delve.office.com | 760,146 | 2 | 2176 +---------------------------------+------------+-------------+ 2177 | s1-powerpoint-15.cdn.office.net | 557,604 | 3 | 2178 +---------------------------------+------------+-------------+ 2179 | appsforoffice.microsoft.com | 511,171 | 5 | 2180 +---------------------------------+------------+-------------+ 2181 | powerpoint.officeapps.live.com | 471,625 | 14 | 2182 +---------------------------------+------------+-------------+ 2183 | excel.officeapps.live.com | 342,040 | 14 | 2184 +---------------------------------+------------+-------------+ 2185 | s1-officeapps-15.cdn.office.net | 331,343 | 5 | 2186 +---------------------------------+------------+-------------+ 2187 | webdir0a.online.lync.com | 66,930 | 15 | 2188 +---------------------------------+------------+-------------+ 2189 | portal.office.com | 13,956 | 1 | 2190 +---------------------------------+------------+-------------+ 2191 | config.edge.skype.com | 6,911 | 2 | 2192 +---------------------------------+------------+-------------+ 2193 | clientlog.portal.office.com | 6,608 | 8 | 2194 +---------------------------------+------------+-------------+ 2195 | webdir.online.lync.com | 4,343 | 5 | 2196 +---------------------------------+------------+-------------+ 2197 | graph.microsoft.com | 2,289 | 2 | 2198 +---------------------------------+------------+-------------+ 2199 | nam.loki.delve.office.com | 1,812 | 5 | 2200 +---------------------------------+------------+-------------+ 2201 | login.microsoftonline.com | 464 | 2 | 2202
+---------------------------------+------------+-------------+ 2203 | login.windows.net | 232 | 1 | 2204 +---------------------------------+------------+-------------+ 2206 Table 5: Office365 2208 Clients MUST connect to multiple server FQDNs in the same order as 2209 real applications. Connections MUST be made while the client is 2210 interacting with the application and MUST NOT all be set up in 2211 advance. Connections SHOULD stay open per client for subsequent 2212 transactions to the same FQDN, similar to how a web browser behaves. 2213 Clients MUST use different URL paths and object sizes in the orders 2214 observed in real applications. Clients MAY also set up 2215 multiple connections per FQDN to process multiple transactions in a 2216 sequence at the same time. Table 6 has a partial example sequence of 2217 the Office 365 Word application transactions. 2219 +---------------------------------+----------------------+----------+ 2220 | FQDN | URL Path | Object | 2221 | | | size | 2222 +===================================================================+ 2223 | company1-my.sharepoint.com | /personal... | 23,132 | 2224 +---------------------------------+----------------------+----------+ 2225 | word-edit.officeapps.live.com | /we/WsaUpload.ashx | 2 | 2226 +---------------------------------+----------------------+----------+ 2227 | static.sharepointonline.com | /bld/.../blank.js | 454 | 2228 +---------------------------------+----------------------+----------+ 2229 | static.sharepointonline.com | /bld/.../ | 23,254 | 2230 | | initstrings.js | | 2231 +---------------------------------+----------------------+----------+ 2232 | static.sharepointonline.com | /bld/.../init.js | 292,740 | 2233 +---------------------------------+----------------------+----------+ 2234 | company1-my.sharepoint.com | /ScriptResource... | 102,774 | 2235 +---------------------------------+----------------------+----------+ 2236 | company1-my.sharepoint.com | /ScriptResource... | 40,329 | 2237 +---------------------------------+----------------------+----------+ 2238 | company1-my.sharepoint.com | /WebResource... | 23,063 | 2239 +---------------------------------+----------------------+----------+ 2240 | word-edit.officeapps.live.com | /we/wordeditorframe. | 60,657 | 2241 | | aspx...
| | 2242 +---------------------------------+----------------------+----------+ 2243 | static.sharepointonline.com | /bld/_layouts/.../ | 454 | 2244 | | blank.js | | 2245 +---------------------------------+----------------------+----------+ 2246 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 19,201 | 2247 | | EditSurface.css | | 2248 +---------------------------------+----------------------+----------+ 2249 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 221,397 | 2250 | | WordEditor.css | | 2251 +---------------------------------+----------------------+----------+ 2252 | s1-officeapps-15.cdn.office.net | /we/s/.../ | 107,571 | 2253 | | Microsoft | | 2254 | | Ajax.js | | 2255 +---------------------------------+----------------------+----------+ 2256 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 39,981 | 2257 | | wacbootwe.js | | 2258 +---------------------------------+----------------------+----------+ 2259 | s1-officeapps-15.cdn.office.net | /we/s/.../ | 51,749 | 2260 | | CommonIntl.js | | 2261 +---------------------------------+----------------------+----------+ 2262 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 6,050 | 2263 | | Compat.js | | 2264 +---------------------------------+----------------------+----------+ 2265 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 54,158 | 2266 | | Box4Intl.js | | 2267 +---------------------------------+----------------------+----------+ 2268 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 24,946 | 2269 | | WoncaIntl.js | | 2270 +---------------------------------+----------------------+----------+ 2271 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 53,515 | 2272 | | WordEditorIntl.js | | 2273 +---------------------------------+----------------------+----------+ 2274 | s1-word-edit-15.cdn.office.net | /we/s/.../ | 1,978,712| 2275 | | WordEditorExp.js | | 2276 +---------------------------------+----------------------+----------+ 2277 | s1-word-edit-15.cdn.office.net | /we/s/.../jSanity.js | 10,912 | 2278 +---------------------------------+----------------------+----------+ 2279 | word-edit.officeapps.live.com | /we/OneNote.ashx | 145,708 | 2280 +---------------------------------+----------------------+----------+ 2282 Table 6: Office365 Word Transactions 2284 For application identification, the HTTPS/TLS traffic MUST include 2285 realistic Certificate Subject Common Name (CN) data as well as Server 2286 Name Indication (SNI) values. For example, a DUT may detect Facebook 2287 Chat traffic by inspecting the certificate, detecting *.facebook.com 2288 in the certificate subject CN, subsequently detecting the word "chat" 2289 in the FQDN 5-edge-chat.facebook.com, and identifying the traffic on 2290 that connection as Facebook Chat.
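A toy version of this CN-plus-FQDN matching logic is sketched below. The rule table holds only the documented Facebook Chat example; the function name and structure are illustrative assumptions about how a DUT might classify a connection, not a description of any particular implementation.

   RULES = [
       # (certificate subject CN, FQDN keyword, application)
       ("*.facebook.com", "chat", "Facebook Chat"),
   ]

   def identify(cn, fqdn):
       # Classify a connection from its certificate subject CN and
       # the FQDN observed in the SNI.
       for rule_cn, keyword, app in RULES:
           if cn == rule_cn and keyword in fqdn:
               return app
       return "unknown"

   print(identify("*.facebook.com", "5-edge-chat.facebook.com"))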
2292 Table 7 includes further examples of SNI and CN pairs for several 2293 FQDNs of Office 365. 2295 +------------------------------+----------------------------------+ 2296 |Server Name Indication (SNI) | Certificate Subject | 2297 | | Common Name (CN) | 2298 +=================================================================+ 2299 | r1.res.office365.com | *.res.outlook.com | 2300 +------------------------------+----------------------------------+ 2301 | login.windows.net | graph.windows.net | 2302 +------------------------------+----------------------------------+ 2303 | webdir0a.online.lync.com | *.online.lync.com | 2304 +------------------------------+----------------------------------+ 2305 | login.microsoftonline.com | stamp2.login.microsoftonline.com | 2306 +------------------------------+----------------------------------+ 2307 | webdir.online.lync.com | *.online.lync.com | 2308 +------------------------------+----------------------------------+ 2309 | graph.microsoft.com | graph.microsoft.com | 2310 +------------------------------+----------------------------------+ 2311 | outlook.office365.com | outlook.com | 2312 +------------------------------+----------------------------------+ 2313 | appsforoffice.microsoft.com | appsforoffice.microsoft.com | 2314 +------------------------------+----------------------------------+ 2316 Table 7: Office365 SNI and CN Pairs Examples 2318 NetSecOPEN has provided a reference enterprise perimeter traffic mix 2319 with dozens of applications, hundreds of connections, and thousands 2320 of transactions. (link to spreadsheet with details) 2325 The enterprise perimeter traffic mix consists of 70% HTTPS and 30% 2326 HTTP by bytes, and 58% HTTPS and 42% HTTP by transactions. By 2327 connections, with a single connection per FQDN, the mix consists of 43% 2328 HTTPS and 57% HTTP. With multiple connections per FQDN, the HTTPS 2329 percentage is higher. 2331 Table 8 is a summary of the NetSecOPEN enterprise perimeter traffic 2332 mix sorted by bytes, with unique FQDNs and transactions per 2333 application.
2335 +------------------+-------+--------------+-------------+ 2336 | Application | FQDNs | Transactions | Bytes | 2337 +=======================================================+ 2338 | Office365 | 26 | 558 | 52,931,947 | 2339 +------------------+-------+--------------+-------------+ 2340 | Box | 4 | 90 | 23,276,089 | 2341 +------------------+-------+--------------+-------------+ 2342 | Salesforce | 6 | 365 | 23,137,548 | 2343 +------------------+-------+--------------+-------------+ 2344 | Gmail | 13 | 139 | 16,399,289 | 2345 +------------------+-------+--------------+-------------+ 2346 | Linkedin | 10 | 206 | 15,040,918 | 2347 +------------------+-------+--------------+-------------+ 2348 | DailyMotion | 8 | 77 | 14,751,514 | 2349 +------------------+-------+--------------+-------------+ 2350 | GoogleDocs | 2 | 71 | 14,205,476 | 2351 +------------------+-------+--------------+-------------+ 2352 | Wikia | 15 | 159 | 13,909,777 | 2353 +------------------+-------+--------------+-------------+ 2354 | Foxnews | 82 | 499 | 13,758,899 | 2355 +------------------+-------+--------------+-------------+ 2356 | Yahoo Finance | 33 | 254 | 13,134,011 | 2357 +------------------+-------+--------------+-------------+ 2358 | Youtube | 8 | 97 | 13,056,216 | 2359 +------------------+-------+--------------+-------------+ 2360 | Facebook | 4 | 207 | 12,726,231 | 2361 +------------------+-------+--------------+-------------+ 2362 | CNBC | 77 | 275 | 11,939,566 | 2363 +------------------+-------+--------------+-------------+ 2364 | Lightreading | 27 | 304 | 11,200,864 | 2365 +------------------+-------+--------------+-------------+ 2366 | BusinessInsider | 16 | 142 | 11,001,575 | 2367 +------------------+-------+--------------+-------------+ 2368 | Alexa | 5 | 153 | 10,475,151 | 2369 +------------------+-------+--------------+-------------+ 2370 | CNN | 41 | 206 | 10,423,740 | 2371 +------------------+-------+--------------+-------------+ 2372 | Twitter Video | 2 | 72 | 10,112,820 | 2373 +------------------+-------+--------------+-------------+ 2374 | Cisco Webex | 1 | 213 | 9,988,417 | 2375 +------------------+-------+--------------+-------------+ 2376 | Slack | 3 | 40 | 9,938,686 | 2377 +------------------+-------+--------------+-------------+ 2378 | Google Maps | 5 | 191 | 8,771,873 | 2379 +------------------+-------+--------------+-------------+ 2380 | SpectrumIEEE | 7 | 145 | 8,682,629 | 2381 +------------------+-------+--------------+-------------+ 2382 | Yelp | 9 | 146 | 8,607,645 | 2383 +------------------+-------+--------------+-------------+ 2384 | Vimeo | 12 | 74 | 8,555,960 | 2385 +------------------+-------+--------------+-------------+ 2386 | Wikihow | 11 | 140 | 8,042,314 | 2387 +------------------+-------+--------------+-------------+ 2388 | Netflix | 3 | 31 | 7,839,256 | 2389 +------------------+-------+--------------+-------------+ 2390 | Instagram | 3 | 114 | 7,230,883 | 2391 +------------------+-------+--------------+-------------+ 2392 | Morningstar | 30 | 150 | 7,220,121 | 2393 +------------------+-------+--------------+-------------+ 2394 | Docusign | 5 | 68 | 6,972,738 | 2395 +------------------+-------+--------------+-------------+ 2396 | Twitter | 1 | 100 | 6,939,150 | 2397 +------------------+-------+--------------+-------------+ 2398 | Tumblr | 11 | 70 | 6,877,200 | 2399 +------------------+-------+--------------+-------------+ 2400 | Whatsapp | 3 | 46 | 6,829,848 | 2401 +------------------+-------+--------------+-------------+ 2402 | Imdb | 16 | 251 | 6,505,227 | 2403 
+------------------+-------+--------------+-------------+ 2404 | NOAAgov | 1 | 44 | 6,316,283 | 2405 +------------------+-------+--------------+-------------+ 2406 | IndustryWeek | 23 | 192 | 6,242,403 | 2407 +------------------+-------+--------------+-------------+ 2408 | Spotify | 18 | 119 | 6,231,013 | 2409 +------------------+-------+--------------+-------------+ 2410 | AutoNews | 16 | 165 | 6,115,354 | 2411 +------------------+-------+--------------+-------------+ 2412 | Evernote | 3 | 47 | 6,063,168 | 2413 +------------------+-------+--------------+-------------+ 2414 | NatGeo | 34 | 104 | 6,026,344 | 2415 +------------------+-------+--------------+-------------+ 2416 | BBC News | 18 | 156 | 5,898,572 | 2417 +------------------+-------+--------------+-------------+ 2418 | Investopedia | 38 | 241 | 5,792,038 | 2419 +------------------+-------+--------------+-------------+ 2420 | Pinterest | 8 | 102 | 5,658,994 | 2421 +------------------+-------+--------------+-------------+ 2422 | Succesfactors | 2 | 112 | 5,049,001 | 2423 +------------------+-------+--------------+-------------+ 2424 | AbaJournal | 6 | 93 | 4,985,626 | 2425 +------------------+-------+--------------+-------------+ 2426 | Pbworks | 4 | 78 | 4,670,980 | 2427 +------------------+-------+--------------+-------------+ 2428 | NetworkWorld | 42 | 153 | 4,651,354 | 2429 +------------------+-------+--------------+-------------+ 2430 | WebMD | 24 | 280 | 4,416,736 | 2431 +------------------+-------+--------------+-------------+ 2432 | OilGasJournal | 14 | 105 | 4,095,255 | 2433 +------------------+-------+--------------+-------------+ 2434 | Trello | 5 | 39 | 4,080,182 | 2435 +------------------+-------+--------------+-------------+ 2436 | BusinessWire | 5 | 109 | 4,055,331 | 2437 +------------------+-------+--------------+-------------+ 2438 | Dropbox | 5 | 17 | 4,023,469 | 2439 +------------------+-------+--------------+-------------+ 2440 | Nejm | 20 | 190 | 4,003,657 | 2441 +------------------+-------+--------------+-------------+ 2442 | OilGasDaily | 7 | 199 | 3,970,498 | 2443 +------------------+-------+--------------+-------------+ 2444 | Chase | 6 | 52 | 3,719,232 | 2445 +------------------+-------+--------------+-------------+ 2446 | MedicalNews | 6 | 117 | 3,634,187 | 2447 +------------------+-------+--------------+-------------+ 2448 | Marketwatch | 25 | 142 | 3,291,226 | 2449 +------------------+-------+--------------+-------------+ 2450 | Imgur | 5 | 48 | 3,189,919 | 2451 +------------------+-------+--------------+-------------+ 2452 | NPR | 9 | 83 | 3,184,303 | 2453 +------------------+-------+--------------+-------------+ 2454 | Onelogin | 2 | 31 | 3,132,707 | 2455 +------------------+-------+--------------+-------------+ 2456 | Concur | 2 | 50 | 3,066,326 | 2457 +------------------+-------+--------------+-------------+ 2458 | Service-now | 1 | 37 | 2,985,329 | 2459 +------------------+-------+--------------+-------------+ 2460 | Apple itunes | 14 | 80 | 2,843,744 | 2461 +------------------+-------+--------------+-------------+ 2462 | BerkeleyEdu | 3 | 69 | 2,622,009 | 2463 +------------------+-------+--------------+-------------+ 2464 | MSN | 39 | 203 | 2,532,972 | 2465 +------------------+-------+--------------+-------------+ 2466 | Indeed | 3 | 47 | 2,325,197 | 2467 +------------------+-------+--------------+-------------+ 2468 | MayoClinic | 6 | 56 | 2,269,085 | 2469 +------------------+-------+--------------+-------------+ 2470 | Ebay | 9 | 164 | 2,219,223 | 2471 
+------------------+-------+--------------+-------------+ 2472 | UCLAedu | 3 | 42 | 1,991,311 | 2473 +------------------+-------+--------------+-------------+ 2474 | ConstructionDive | 5 | 125 | 1,828,428 | 2475 +------------------+-------+--------------+-------------+ 2476 | EducationNews | 4 | 78 | 1,605,427 | 2477 +------------------+-------+--------------+-------------+ 2478 | BofA | 12 | 68 | 1,584,851 | 2479 +------------------+-------+--------------+-------------+ 2480 | ScienceDirect | 7 | 26 | 1,463,951 | 2481 +------------------+-------+--------------+-------------+ 2482 | Reddit | 8 | 55 | 1,441,909 | 2483 +------------------+-------+--------------+-------------+ 2484 | FoodBusinessNews | 5 | 49 | 1,378,298 | 2485 +------------------+-------+--------------+-------------+ 2486 | Amex | 8 | 42 | 1,270,696 | 2487 +------------------+-------+--------------+-------------+ 2488 | Weather | 4 | 50 | 1,243,826 | 2489 +------------------+-------+--------------+-------------+ 2490 | Wikipedia | 3 | 27 | 958,935 | 2491 +------------------+-------+--------------+-------------+ 2492 | Bing | 1 | 52 | 697,514 | 2493 +------------------+-------+--------------+-------------+ 2494 | ADP | 1 | 30 | 508,654 | 2495 +------------------+-------+--------------+-------------+ 2496 | | | | | 2497 +------------------+-------+--------------+-------------+ 2498 | Grand Total | 983 | 10021 | 569,819,095 | 2499 +------------------+-------+--------------+-------------+ 2501 Table 8: Summary of NetSecOPEN Enterprise Perimeter Traffic Mix 2503 Authors' Addresses 2505 Balamuhunthan Balarajah 2506 EANTC AG 2507 Salzufer 14 2508 Berlin 10587 2509 Germany 2511 Email: balarajah@eantc.de 2512 Carsten Rossenhoevel 2513 EANTC AG 2514 Salzufer 14 2515 Berlin 10587 2516 Germany 2518 Email: cross@eantc.de