<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="info" docName="draft-ietf-bmwg-ngfw-performance-06"
     ipr="trust200902">
  <front>
    <title abbrev="Benchmarking Network Security Devices">Benchmarking
    Methodology for Network Security Device Performance</title>

    <author fullname="Balamuhunthan Balarajah" initials="B"
            surname="Balarajah">
      <organization/>

      <address>
        <postal>
          <street/>

          <city>Berlin</city>

          <code/>

          <region/>

          <country>Germany</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>bm.balarajah@gmail.com</email>
      </address>
    </author>

    <author fullname="Carsten Rossenhoevel" initials="C"
            surname="Rossenhoevel">
      <organization>EANTC AG</organization>

      <address>
        <postal>
          <street>Salzufer 14</street>

          <city>Berlin</city>

          <code>10587</code>

          <region/>

          <country>Germany</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>cross@eantc.de</email>
      </address>
    </author>

    <author fullname="Brian Monkman" initials="B" surname="Monkman">
      <organization>NetSecOPEN</organization>

      <address>
        <postal>
          <street>417 Independence Court</street>

          <city>Mechanicsburg</city>

          <code>17050</code>

          <region>PA</region>

          <country>USA</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>bmonkman@netsecopen.org</email>
      </address>
    </author>

    <date month="February" year="2021"/>

    <area/>

    <workgroup>Benchmarking Methodology Working Group</workgroup>

    <keyword/>

    <keyword/>

    <abstract>
      <t>This document provides benchmarking terminology and methodology for
      next-generation network security devices including next-generation
      firewalls (NGFW), next-generation intrusion detection and prevention
      systems (NGIDS/NGIPS) and unified threat management (UTM)
      implementations. This document aims to improve the applicability,
      reproducibility, and transparency of benchmarks and to align the test
      methodology with today's increasingly complex layer 7 security centric
      network application use cases. The main areas covered in this document
      are test terminology, test configuration parameters, and benchmarking
      methodology for NGFW and NGIDS/NGIPS.</t>
    </abstract>
  </front>

  <middle>
    <section title="Introduction">
      <t>15 years have passed since the IETF initially recommended test
      methodology and terminology for firewalls (<xref target="RFC3511"/>).
      The requirements for network security element performance and
      effectiveness have increased tremendously since then. Security function
      implementations have evolved to more advanced areas and have
      diversified into intrusion detection and prevention, threat management,
      analysis of encrypted traffic, etc. In an industry of growing
      importance, well-defined and reproducible key performance indicators
      (KPIs) are increasingly needed, as they enable fair and reasonable
      comparison of network security functions. All these reasons have led to
      the creation of a new next-generation network security device
      benchmarking document, and this document supersedes <xref
      target="RFC3511"/>.</t>
    </section>

    <section title="Requirements">
      <t>The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
      "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
      "OPTIONAL" in this document are to be interpreted as described in BCP 14
      <xref target="RFC2119"/>, <xref target="RFC8174"/> when, and only when,
      they appear in all capitals, as shown here.</t>
    </section>

    <section title="Scope">
      <t>This document provides testing terminology and testing methodology
      for modern and next-generation network security devices. It covers the
      validation of security effectiveness configurations of network security
      devices, followed by performance benchmark testing. This document
      focuses on advanced, realistic, and reproducible testing methods.
      Additionally, it describes test bed environments, test tool
      requirements, and test result formats.</t>
    </section>

    <section anchor="Test_Setup" title="Test Setup">
      <t>Test setup defined in this document is applicable to all benchmarking
      tests described in <xref pageno="false" target="Benchmarking"/>. The
      test setup MUST be contained within an Isolated Test Environment (see
      Section 3 of <xref target="RFC6815"/>).</t>

      <section anchor="Testbed_Configuration" title="Test Bed Configuration">
        <t>The test bed configuration MUST ensure that any performance
        implications discovered during the benchmark testing are not due to
        inherent physical network limitations such as the number of physical
        links and the forwarding performance capabilities (throughput and
        latency) of the network devices in the test bed. For this reason,
        this document recommends avoiding external devices such as switches
        and routers in the test bed wherever possible.</t>

        <t>In some deployment scenarios, the network security devices (Device
        Under Test/System Under Test) are connected to routers and switches
        which will reduce the number of entries in MAC or ARP tables of the
        Device Under Test/System Under Test (DUT/SUT). If MAC or ARP tables
        have many entries, this may impact the actual DUT/SUT performance due
        to MAC and ARP/ND (Neighbor Discovery) table lookup processes. This
        document also recommends using test equipment with the capability of
        emulating layer 3 routing functionality instead of adding external
        routers in the test bed.</t>

        <t>The test bed setup Option 1 (<xref target="figure1"/>) is the
        RECOMMENDED test bed setup for the benchmarking test.</t>

        <figure alt="figure1" anchor="figure1"
                title="Test Bed Setup - Option 1">
          <artwork>+-----------------------+                   +-----------------------+
| +-------------------+ |   +-----------+   | +-------------------+ |
| | Emulated Router(s)| |   |           |   | | Emulated Router(s)| |
| |    (Optional)     | +----- DUT/SUT  +-----+    (Optional)     | |
| +-------------------+ |   |           |   | +-------------------+ |
| +-------------------+ |   +-----------+   | +-------------------+ |
| |     Clients       | |                   | |      Servers      | |
| +-------------------+ |                   | +-------------------+ |
|                       |                   |                       |
|   Test Equipment      |                   |   Test Equipment      |
+-----------------------+                   +-----------------------+</artwork>
        </figure>

        <t>If the test equipment used is not capable of emulating layer 3
        routing functionality, or if the number of ports used on the test
        equipment and the DUT/SUT do not match (requiring aggregation of
        test equipment ports), the test setup can be configured as shown in
        <xref target="figure2"/>.</t>

        <figure anchor="figure2" suppress-title="false"
                title="Test Bed Setup - Option 2">
          <artwork> +-------------------+      +-----------+      +--------------------+
 |Aggregation Switch/|      |           |      | Aggregation Switch/|
 | Router            +------+  DUT/SUT  +------+ Router             |
 |                   |      |           |      |                    |
 +----------+--------+      +-----------+      +--------+-----------+
            |                                           |
            |                                           |
+-----------+-----------+                   +-----------+-----------+
|                       |                   |                       |
| +-------------------+ |                   | +-------------------+ |
| | Emulated Router(s)| |                   | | Emulated Router(s)| |
| |     (Optional)    | |                   | |     (Optional)    | |
| +-------------------+ |                   | +-------------------+ |
| +-------------------+ |                   | +-------------------+ |
| |      Clients      | |                   | |      Servers      | |
| +-------------------+ |                   | +-------------------+ |
|                       |                   |                       |
|    Test Equipment     |                   |    Test Equipment     |
+-----------------------+                   +-----------------------+</artwork>
        </figure>
      </section>

      <section anchor="DUT-SUT_Configuration" title="DUT/SUT Configuration">
        <t>A unique DUT/SUT configuration MUST be used for all benchmarking
        tests described in <xref target="Benchmarking"/>. Since each DUT/SUT
        will have their own unique configuration, users SHOULD configure their
        device with the same parameters and security features that would be
        used in the actual deployment of the device or a typical deployment in
        order to achieve maximum network security coverage.</t>

        <t>Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL
        sets of network security features for NGFW and NGIDS/NGIPS,
        respectively. The selected security features SHOULD be consistently
        enabled on the DUT/SUT for all the benchmarking tests described in
        <xref target="Benchmarking"/>.</t>

        <t>To improve repeatability, a summary of the DUT/SUT configuration
        including a description of all enabled DUT/SUT features MUST be
        published with the benchmarking results.</t>

        <figure align="left" alt="" title="Table 1: NGFW Security Features ">
          <artwork align="center">                 +------------------------+
                 |           NGFW         |
+----------------+-------------+----------+
|                |             |          |
|DUT/SUT Features| RECOMMENDED | OPTIONAL |
|                |             |          |
+----------------+-------------+----------+
|SSL Inspection  |     x       |          |
+----------------+-------------+----------+
|IDS/IPS         |     x       |          |
+----------------+-------------+----------+
|Anti-Spyware    |     x       |          |
+----------------+-------------+----------+
|Anti-Virus      |     x       |          |
+----------------+-------------+----------+
|Anti-Botnet     |     x       |          |
+----------------+-------------+----------+
|Web Filtering   |             |    x     |
+----------------+-------------+----------+
|Data Loss       |             |          |  
|Protection (DLP)|             |    x     |
+----------------+-------------+----------+
|DDoS            |             |    x     |
+----------------+-------------+----------+
|Certificate     |             |    x     |
|Validation      |             |          |
+----------------+-------------+----------+
|Logging and     |     x       |          |
|Reporting       |             |          |
+----------------+-------------+----------+
|Application     |     x       |          |
|Identification  |             |          |
+----------------+-------------+----------+   
</artwork>
        </figure>

        <figure align="left" alt=""
                title="Table 2: NGIDS/NGIPS Security Features">
          <artwork align="center">                 +------------------------+
                 |       NGIDS/NGIPS      |
+----------------+-------------+----------+
|                |             |          |
|DUT/SUT Features| RECOMMENDED | OPTIONAL |
|                |             |          |
+----------------+-------------+----------+
|SSL Inspection  |     x       |          |
+----------------+-------------+----------+
|Anti-Malware    |     x       |          |
+----------------+-------------+----------+
|Anti-Spyware    |     x       |          |
+----------------+-------------+----------+
|Anti-Botnet     |     x       |          |
+----------------+-------------+----------+
|Logging and     |     x       |          |
|Reporting       |             |          |
+----------------+-------------+----------+
|Application     |     x       |          |
|Identification  |             |          |
+----------------+-------------+----------+
|Deep Packet     |     x       |          |
|Inspection      |             |          |
+----------------+-------------+----------+
|Anti-Evasion    |     x       |          |
+----------------+-------------+----------+</artwork>
        </figure>

        <t>The following table provides a brief description of the security
        features.</t>

        <figure align="left" alt="Table 3"
                title="Table 3: Security Feature Description">
          <artwork>+------------------+------------------------------------------------+
| DUT/SUT Features | Description                                    |
+------------------+------------------------------------------------+
| SSL Inspection   | DUT/SUT intercepts and decrypts inbound HTTPS  |
|                  | traffic between servers and clients. Once the  |
|                  | content inspection has been completed, DUT/SUT |
|                  | encrypts the HTTPS traffic with ciphers        |
|                  | and keys used by the clients and servers.      |
+------------------+------------------------------------------------+
| IDS/IPS          | DUT/SUT detects and blocks exploits            |
|                  | targeting known and unknown vulnerabilities    |
|                  | across the monitored network.                  |
+------------------+------------------------------------------------+
| Anti-Malware     | DUT/SUT detects and prevents the transmission  |
|                  | of malicious executable code and any associated|
|                  | communications across the monitored network.   |
|                  | This includes data exfiltration as well as     |
|                  | command and control channels.                  |
+------------------+------------------------------------------------+
| Anti-Spyware     | Anti-Spyware is a subcategory of Anti-Malware. |
|                  | Spyware transmits information without the      |
|                  | user's knowledge or permission. DUT/SUT detects|
|                  | and blocks initial infection or transmission of|
|                  | data.                                          |
+------------------+------------------------------------------------+
| Anti-Botnet      | DUT/SUT detects traffic to or from botnets.    |
+------------------+------------------------------------------------+
| Anti-Evasion     | DUT/SUT detects and mitigates attacks that have|
|                  | been obfuscated in some manner.                |
+------------------+------------------------------------------------+
| Web Filtering    | DUT/SUT detects and blocks malicious websites  |
|                  | including defined classifications of websites  |
|                  | across the monitored network.                  |
+------------------+------------------------------------------------+
| DLP              | DUT/SUT detects and blocks the transmission    |
|                  | of Personally Identifiable Information (PII)   |
|                  | and specific files across the monitored network|
+------------------+------------------------------------------------+
| Certificate      | DUT/SUT validates certificates used in         |
| Validation       | encrypted communications across the monitored  |
|                  | network.                                       |
+------------------+------------------------------------------------+
| Logging and      | DUT/SUT logs and reports all traffic at the    |
| Reporting        | flow level across the monitored network.       |
+------------------+------------------------------------------------+
| Application      | DUT/SUT detects known applications as defined  |
| Identification   | within the traffic mix selected across         |
|                  | the monitored network.                         |
+------------------+------------------------------------------------+</artwork>
        </figure>

        <t>In summary, a DUT/SUT SHOULD be configured as follows:</t>

        <t><list style="symbols">
            <t>All RECOMMENDED security features enabled<vspace/></t>

            <t>Disposition of all flows of traffic is logged - Logging to an
            external device is permissible<vspace/></t>

            <t>Geographical location filtering and Application Identification
            and Control configured to be triggered based on a site or
            application from the defined traffic mix<vspace/></t>
          </list></t>

        <t>In addition, a realistic number of access control list (ACL)
        rules SHOULD be configured on the DUT/SUT where ACLs are
        configurable and also reasonable based on the deployment scenario.
        This document determines the number of access policy rules for four
        different classes of DUT/SUT, namely Extra Small (XS), Small (S),
        Medium (M), and Large (L). A sample DUT/SUT classification is
        described in <xref target="DUT-Classification"/>.</t>

        <t>The ACL rules defined in Table 4 MUST be configured from top to
        bottom in the order shown in the table. The rule types are listed in
        decreasing order of specificity, with "block" rules first, followed
        by "allow" rules, representing a typical ACL-based security policy.
        The ACL entries SHOULD be configured with IP subnets that are
        routable by the DUT/SUT. (Note: There will be differences between
        how security vendors implement ACL decision making.) The configured
        ACL MUST NOT block the security and measurement traffic used for the
        benchmarking tests.</t>

        <figure title="Table 4: DUT/SUT Access List">
          <artwork>                                                    +---------------+
                                                    | DUT/SUT       |
                                                    | Classification|
                                                    | # Rules       |
+-----------+-----------+--------------------+------+---+---+---+---+
|           | Match     |                    |      |   |   |   |   |
| Rules Type| Criteria  |   Description      |Action| XS| S | M | L |
+-------------------------------------------------------------------+
|Application|Application| Any application    | block| 5 | 10| 20| 50|
|layer      |           | not included in    |      |   |   |   |   |
|           |           | the measurement    |      |   |   |   |   |
|           |           | traffic            |      |   |   |   |   |
+-------------------------------------------------------------------+
|Transport  |Src IP and | Any src IP subnet  | block| 25| 50|100|250|
|layer      |TCP/UDP    | used and any dst   |      |   |   |   |   |
|           |Dst ports  | ports not used in  |      |   |   |   |   |
|           |           | the measurement    |      |   |   |   |   |
|           |           | traffic            |      |   |   |   |   |
+-------------------------------------------------------------------+
|IP layer   |Src/Dst IP | Any src/dst IP     | block| 25| 50|100|250|
|           |           | subnet not used    |      |   |   |   |   |
|           |           | in the measurement |      |   |   |   |   |
|           |           | traffic            |      |   |   |   |   |
+-------------------------------------------------------------------+
|Application|Application| Half of the        | allow| 10| 10| 10| 10|
|layer      |           | applications       |      |   |   |   |   |
|           |           | included in the    |      |   |   |   |   |
|           |           | measurement traffic|      |   |   |   |   |
|           |           |(see the note below)|      |   |   |   |   |
+-------------------------------------------------------------------+
|Transport  |Src IP and | Half of the src    | allow| &gt;1| &gt;1| &gt;1| &gt;1|
|layer      |TCP/UDP    | IP used and any    |      |   |   |   |   |
|           |Dst ports  | dst ports used in  |      |   |   |   |   |
|           |           | the measurement    |      |   |   |   |   |
|           |           | traffic            |      |   |   |   |   |
|           |           | (one rule per      |      |   |   |   |   | 
|           |           | subnet)            |      |   |   |   |   |
+-------------------------------------------------------------------+
|IP layer   |Src IP     | The rest of the    | allow| &gt;1| &gt;1| &gt;1| &gt;1|
|           |           | src IP subnet      |      |   |   |   |   |
|           |           | range used in the  |      |   |   |   |   |
|           |           | measurement        |      |   |   |   |   |
|           |           | traffic            |      |   |   |   |   |  
|           |           | (one rule per      |      |   |   |   |   |
|           |           | subnet)            |      |   |   |   |   |
+-----------+-----------+--------------------+------+---+---+---+---+</artwork>
        </figure>

        <t>Note: If half of the applications included in the measurement
        traffic amounts to fewer than 10, the missing number of ACL entries
        (dummy rules) can be configured for any application traffic not
        included in the measurement traffic.</t>
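
        <t>The sketch below is purely illustrative and not part of the
        methodology: it summarizes the Table 4 "block" rule counts per
        DUT/SUT class in a machine-readable form that could be used when
        scripting the DUT/SUT configuration. The "allow" rule counts depend
        on the traffic profile (one rule per subnet or application
        used).</t>

        <figure>
          <artwork># Illustrative summary of the Table 4 "block" rule counts; rules are
# applied in the table order ("block" rules before "allow" rules).
BLOCK_RULE_COUNTS = {
    # rule type:          (XS, S, M, L)
    "application_block": (5, 10, 20, 50),
    "transport_block": (25, 50, 100, 250),
    "ip_block": (25, 50, 100, 250),
}
APPLICATION_ALLOW_RULES = 10  # same for all DUT/SUT classes</artwork>
        </figure>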

        <section anchor="security_effectiveness"
                 title="Security Effectiveness Configuration">
          <t>The security features (defined in Table 1 and Table 2) of the
          DUT/SUT MUST be configured effectively in such a way as to detect,
          prevent, and report the defined security vulnerability sets. This
          section defines the selection of the security vulnerability sets
          from the Common Vulnerabilities and Exposures (CVE) list for the
          testing. The vulnerability set MUST reflect a minimum of 500 CVEs
          from no older than 10 calendar years to the current year. These
          CVEs SHOULD be selected with a focus on in-use software commonly
          found in business applications, with a Common Vulnerability
          Scoring System (CVSS) severity of High (7-10).</t>
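
          <t>As a purely illustrative sketch, the selection criteria above
          can be expressed as a simple filter; the input catalog and its
          field names are hypothetical placeholders, not a defined data
          format.</t>

          <figure>
            <artwork># Illustrative filter implementing the CVE selection criteria above.
from datetime import date

def select_vulnerability_set(cve_catalog):
    # cve_catalog: iterable of dicts with hypothetical "cvss" and
    # "year" fields
    current_year = date.today().year
    selected = [
        cve for cve in cve_catalog
        if cve["cvss"] &gt;= 7.0                    # CVSS High (7-10)
        and cve["year"] &gt;= current_year - 10     # last 10 calendar years
    ]
    if len(selected) &lt; 500:
        raise ValueError("vulnerability set needs at least 500 CVEs")
    return selected</artwork>
          </figure>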

          <t>This document is primarily focused on performance benchmarking.
          However, it is RECOMMENDED to validate the security features
          configuration of the DUT/SUT by evaluating the security
          effectiveness as a prerequisite for the performance benchmarking
          tests defined in <xref target="Benchmarking"/>. In case the
          benchmarking tests are performed without evaluating security
          effectiveness, the test report MUST explain the implications of
          this. The methodology for evaluating security effectiveness is
          defined in <xref
          target="Test-Methodology-Security-Effectiveness-Evaluation"/>.</t>
        </section>
      </section>

      <section anchor="Test_Equipment_Configuration"
               title="Test Equipment Configuration">
        <t>In general, test equipment allows configuring parameters at
        different protocol layers. These parameters influence the traffic
        flows that will be offered and thereby impact the performance
        measurements.</t>

        <t>This section specifies common test equipment configuration
        parameters applicable for all benchmarking tests defined in <xref
        pageno="false" target="Benchmarking"/>. Any benchmarking test specific
        parameters are described under the test setup section of each
        benchmarking test individually.</t>

        <section title="Client Configuration">
          <t>This section specifies which parameters SHOULD be considered
          while configuring clients using test equipment. Also, this section
          specifies the RECOMMENDED values for certain parameters.</t>

          <section anchor="TCP_Stack_client" title="TCP Stack Attributes">
            <t>The TCP stack SHOULD use a congestion control algorithm at
            the client and server endpoints. The default IPv4 and IPv6 MSS
            SHOULD be set to 1460 bytes and 1440 bytes, respectively, and
            the TX and RX initial receive windows SHOULD be set to 64 KByte.
            The client initial congestion window SHOULD NOT exceed 10 times
            the MSS. Delayed ACKs are permitted, and the maximum client
            delayed ACK SHOULD NOT exceed 10 times the MSS before a forced
            ACK. Up to 3 retries SHOULD be allowed before a timeout event is
            declared. All traffic MUST set the TCP PSH flag to high. The
            source port range SHOULD be in the range of 1024 - 65535. The
            internal timeout SHOULD be dynamically scalable per RFC 793. The
            client SHOULD initiate and close TCP connections. TCP
            connections MUST be closed via FIN.</t>
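
            <t>The sketch below is purely illustrative: it captures the
            client TCP stack attributes above as a configuration object. The
            parameter names are hypothetical and will differ between test
            tools.</t>

            <figure>
              <artwork># Illustrative only: hypothetical test-tool parameter names for the
# client TCP stack attributes described above.
client_tcp_stack = {
    "congestion_control": "cubic",   # any congestion control algorithm
    "mss_ipv4_bytes": 1460,
    "mss_ipv6_bytes": 1440,
    "initial_rx_window_bytes": 64 * 1024,
    "initial_cwnd_max_mss": 10,      # cwnd SHOULD NOT exceed 10 x MSS
    "delayed_ack_max_mss": 10,       # forced ACK after at most 10 x MSS
    "max_retries": 3,
    "tcp_psh_flag": True,            # PSH flag set on all traffic
    "source_port_range": (1024, 65535),
    "close_with_fin": True,          # close via FIN, not RST
}</artwork>
            </figure>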
          </section>

          <section anchor="Client_IP" title="Client IP Address Space">
            <t>The sum of the client IP space SHOULD contain the following
            attributes.</t>

            <t><list style="symbols">
                <t>The IP blocks SHOULD consist of multiple unique,
                discontinuous static address blocks.</t>

                <t>A default gateway is permitted.</t>

                <t>The IPv4 Type of Service (ToS) byte or IPv6 traffic class
                should be set to '00' or '000000' respectively.</t>
              </list></t>

            <t>The following equation can be used to define the total number
            of client IP addresses that will be configured on the test
            equipment.</t>

            <t>Desired total number of client IP = Target throughput [Mbit/s]
            / Average throughput per IP address [Mbit/s]</t>

            <t>As shown in the example list below, the value for "Average
            throughput per IP address" can be varied depending on the
            deployment and use case scenario.</t>

            <t><list style="format (Option %d)">
                <t>DUT/SUT deployment scenario 1 : 6-7 Mbit/s per IP (e.g.
                1,400-1,700 IPs per 10Gbit/s throughput)</t>

                <t>DUT/SUT deployment scenario 2 : 0.1-0.2 Mbit/s per IP (e.g.
                50,000-100,000 IPs per 10Gbit/s throughput)</t>
              </list></t>
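
            <t>The following worked example is purely illustrative: it
            applies the equation above to deployment scenario 1 with example
            values.</t>

            <figure>
              <artwork># Illustrative calculation using the equation above.
target_throughput_mbps = 10_000      # 10 Gbit/s target throughput
avg_throughput_per_ip_mbps = 6.5     # deployment scenario 1 (6-7 Mbit/s)

client_ips = target_throughput_mbps / avg_throughput_per_ip_mbps
# client_ips is roughly 1,538 addresses, i.e. within the
# 1,400-1,700 IPs per 10 Gbit/s range cited for scenario 1</artwork>
            </figure>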

            <t>Based on the deployment and use case scenario, client IP
            addresses SHOULD be distributed between IPv4 and IPv6 types. The
            following options can be considered for the selection of a
            traffic mix ratio.</t>

            <t><list style="format (Option %d)">
                <t>100% IPv4, no IPv6</t>

                <t>80% IPv4, 20% IPv6</t>

                <t>50% IPv4, 50% IPv6</t>

                <t>20% IPv4, 80% IPv6</t>

                <t>no IPv4, 100% IPv6</t>
              </list></t>

            <t>Note: IANA has assigned IP address ranges for testing
            purposes, as described in <xref target="IANA"/>.</t>
          </section>

          <section anchor="Emulated_web_Browser_attributes"
                   title="Emulated Web Browser Attributes">
            <t>The emulated web client contains attributes that will
            materially affect how traffic is loaded. The objective is to
            emulate modern, typical browser attributes to improve realism of
            the result set.</t>

            <t>For HTTP traffic emulation, the emulated browser MUST
            negotiate HTTP 1.1. HTTP persistence MAY be enabled depending on
            the test scenario. The browser MAY open multiple TCP connections
            per server endpoint IP at any time depending on how many
            sequential transactions need to be processed. Within a TCP
            connection, multiple transactions MAY be processed if the
            emulated browser has available connections. The browser SHOULD
            advertise a User-Agent header. Headers MUST be sent
            uncompressed. The browser SHOULD enforce content length
            validation.</t>

            <t>For encrypted traffic, the following attributes SHALL define
            the negotiated encryption parameters. The test clients MUST use
            TLSv1.2 or higher. TLS record size MAY be optimized for the HTTPS
            response object size up to a record size of 16 KByte. The client
            endpoint SHOULD send TLS Extension Server Name Indication (SNI)
            information when opening a security tunnel. Each client connection
            MUST perform a full handshake with server certificate and MUST NOT
            use session reuse or resumption.</t>

            <t>The following ciphers and keys are RECOMMENDED to use for HTTPS
            based benchmarking tests defined in <xref pageno="false"
            target="Benchmarking"/>.<list style="numbers">
                <t>ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature
                Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group:
                secp256r1)</t>

                <t>ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
                Algorithm: rsa_pkcs1_sha256 and Supported group:
                secp256r1)</t>

                <t>ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature
                Hash Algorithm: ecdsa_secp384r1_sha384 and Supported group:
                secp521r1)</t>

                <t>ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
                Algorithm: rsa_pkcs1_sha384 and Supported group:
                secp256r1)</t>
              </list></t>

            <t>Note: The above ciphers and keys are commonly used
            enterprise-grade encryption cipher suites. It is recognized that
            these will evolve over time. Individual certification bodies
            SHOULD use ciphers and keys that reflect evolving use cases.
            These choices MUST be documented in the resulting test reports
            with detailed information on the ciphers and keys used, along
            with reasons for the choices.</t>
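
            <t>As a purely illustrative sketch (not a requirement of this
            methodology), the client-side TLS attributes above can be
            expressed with Python's standard ssl module as follows; the host
            name and CA file are hypothetical placeholders.</t>

            <figure>
              <artwork># Illustrative TLS client configuration: TLS 1.2 minimum, one of the
# RECOMMENDED cipher suites, SNI sent, no session ticket resumption.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256")
ctx.options |= ssl.OP_NO_TICKET        # no session ticket resumption
ctx.load_verify_locations("test-ca.pem")  # hypothetical CA bundle

sock = socket.create_connection(("server.example.test", 443))
# server_hostname sends the SNI extension in the ClientHello
tls = ctx.wrap_socket(sock, server_hostname="server.example.test")</artwork>
            </figure>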
          </section>
        </section>

        <section title="Backend Server Configuration">
          <t>This section specifies which parameters should be considered
          while configuring emulated backend servers using test equipment.</t>

          <section title="TCP Stack Attributes">
            <t>The TCP stack on the server side SHOULD be configured
            similarly to the client side configuration described in <xref
            pageno="false" target="TCP_Stack_client"/>. In addition, the
            server initial congestion window MUST NOT exceed 10 times the
            MSS. Delayed ACKs are permitted, and the maximum server delayed
            ACK MUST NOT exceed 10 times the MSS before a forced ACK.</t>
          </section>

          <section anchor="Server_IP" title="Server Endpoint IP Addressing">
            <t>The sum of the server IP space SHOULD contain the following
            attributes.</t>

            <t><list style="symbols">
                <t>The server IP blocks SHOULD consist of unique,
                discontinuous static address blocks with one IP per Server
                Fully Qualified Domain Name (FQDN) endpoint per test port.</t>

                <t>A default gateway is permitted. The IPv4 ToS byte and
                IPv6 traffic class bytes should be set to '00' and '000000'
                respectively.</t>

                <t>The server IP addresses SHOULD be distributed between
                IPv4 and IPv6 with a ratio identical to the clients'
                distribution ratio.</t>
              </list></t>

            <t>Note: IANA has assigned IP address ranges for testing
            purposes, as described in <xref target="IANA"/>.</t>
          </section>

          <section title="HTTP / HTTPS Server Pool Endpoint Attributes">
            <t>The server pool for HTTP SHOULD listen on TCP port 80 and
            emulate HTTP version 1.1 with persistence. The server MUST
            advertise the server type in the Server response header <xref
            target="RFC2616"/>. For the HTTPS server, TLS 1.2 or higher MUST
            be used with a maximum record size of 16 KByte and MUST NOT use
            ticket resumption or Session ID reuse. The server MUST listen on
            TCP port 443. The server SHALL serve a certificate to the
            client. The HTTPS server MUST check the Host SNI information
            against the FQDN if the SNI is in use. The cipher suite and key
            size on the server side MUST be configured similarly to the
            client side configuration described in <xref pageno="false"
            target="Emulated_web_Browser_attributes"/>.</t>
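
            <t>As a purely illustrative sketch, the HTTPS server attributes
            above can be approximated with Python's standard library; the
            certificate and key file names are hypothetical, and real test
            equipment exposes equivalent settings through its own
            interface.</t>

            <figure>
              <artwork># Illustrative HTTPS server endpoint: TLS 1.2 minimum, listening on
# TCP port 443, session tickets disabled.
import http.server
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.options |= ssl.OP_NO_TICKET  # no ticket resumption
ctx.load_cert_chain("server.example.test.crt",
                    "server.example.test.key")

httpd = http.server.HTTPServer(
    ("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
# httpd.serve_forever()</artwork>
            </figure>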
          </section>
        </section>

        <section title="Traffic Flow Definition">
          <t>This section describes the traffic pattern between the client
          and server endpoints. At the beginning of the test, the server
          endpoint initializes and will be ready to accept connection
          states, including initialization of the TCP stack as well as
          bound HTTP and HTTPS servers. When a client endpoint is needed, it
          will initialize and be given attributes such as a MAC and IP
          address. The behavior of the client is to sweep through the given
          server IP space, sequentially generating a recognizable service by
          the DUT. Thus, a balanced mesh between client endpoints and server
          endpoints will be generated in a client port and server port
          combination. Each client endpoint performs the same actions as
          other endpoints, with the difference being the source IP of the
          client endpoint and the target server IP pool. The client MUST use
          the server's IP address or Fully Qualified Domain Name (FQDN) in
          the Host header <xref target="RFC2616"/>. For TLS, the client MAY
          use the Server Name Indication (SNI) extension.</t>
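
          <t>The sketch below is purely illustrative of the sweep behavior
          described above; the server pool addresses and FQDNs are
          hypothetical placeholders taken from address space reserved for
          benchmarking.</t>

          <figure>
            <artwork># Illustrative client sweep: each client steps sequentially through
# the server pool, sending the server FQDN in the Host header.
import http.client

SERVER_POOL = [
    ("198.18.0.10", "www1.example.test"),
    ("198.18.0.11", "www2.example.test"),
]

def sweep_once(client_source_ip):
    for server_ip, server_fqdn in SERVER_POOL:
        conn = http.client.HTTPConnection(
            server_ip, 80, source_address=(client_source_ip, 0))
        conn.request("GET", "/", headers={"Host": server_fqdn})
        conn.getresponse().read()
        conn.close()</artwork>
          </figure>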

          <section title="Description of Intra-Client Behavior">
            <t>Client endpoints are independent of other clients that are
            concurrently executing. When a client endpoint initiates
            traffic, this section describes how the client steps through
            different services. Once the test is initialized, the client
            endpoints SHOULD randomly hold (perform no operation) for a few
            milliseconds to allow for better randomization of the start of
            client traffic. Each client will either open a new TCP
            connection or reuse a TCP connection that is still open
            (persistence) to that specific server. At any point that the
            service profile may require encryption, a TLS encryption tunnel
            will form, presenting the URL or IP address request to the
            server. If using SNI, the server will then perform an SNI name
            check with the proposed FQDN compared to the domain embedded in
            the certificate. Only when correct will the server process the
            HTTPS response object. The initial response object to the server
            MUST NOT have a fixed size; its size is based on the
            benchmarking tests described in <xref pageno="false"
            target="Benchmarking"/>. Multiple additional sub-URLs (response
            objects on the service page) MAY be requested simultaneously.
            This MAY be to the same server IP as the initial URL. Each
            sub-object will also use a canonical FQDN and URL path, as
            observed in the traffic mix used.</t>
          </section>
        </section>

        <section anchor="Traffic_Load_Profile" title="Traffic Load Profile">
          <t>This section describes how the traffic load is applied. A
          traffic load profile has five distinct phases: Init, ramp up,
          sustain, ramp down, and collection. An illustrative timeline
          sketch follows the phase descriptions below.</t>

          <t><list style="numbers">
              <t>During the Init phase, test bed devices, including the
              client and server endpoints, should negotiate layer 2-3
              connectivity such as MAC learning and ARP. Only after
              successful MAC learning or ARP/ND resolution SHALL the test
              iteration move to the next phase. No measurements are made in
              this phase. The minimum RECOMMENDED time for the Init phase is
              5 seconds. During this phase, the emulated clients SHOULD NOT
              initiate any sessions with the DUT/SUT; in contrast, the
              emulated servers should be ready to accept requests from the
              DUT/SUT or from the emulated clients.</t>

              <t>In the ramp up phase, the test equipment SHOULD start to
              generate the test traffic. It SHOULD use an approximately
              fixed number of unique client IP addresses actively to
              generate traffic. The traffic SHOULD ramp from zero to the
              desired target objective. The target objective will be defined
              for each benchmarking test. The duration of the ramp up phase
              MUST be configured long enough that the test equipment does
              not overwhelm the DUT/SUT's stated performance metrics,
              namely: connections per second, throughput, concurrent TCP
              connections, and application transactions per second. No
              measurements are made in this phase.</t>

              <t>The sustain phase starts when all required clients
              (connections) are active and operating at their desired load
              condition. In the sustain phase, the test equipment SHOULD
              continue generating traffic at a constant target value with a
              constant number of active clients. The minimum RECOMMENDED
              time duration for the sustain phase is 300 seconds. This is
              the phase where measurements occur.</t>

              <t>In the ramp down/close phase, no new connections are
              established, and no measurements are made. The time durations
              for the ramp up and ramp down phases SHOULD be the same.</t>

              <t>The last phase is administrative and will occur when the test
              equipment merges and collates the report data.</t>
            </list></t>
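
          <t>The following timeline sketch is purely illustrative; the ramp
          durations shown are example values only.</t>

          <figure>
            <artwork># Illustrative traffic load profile timeline (durations in seconds
# are example values; measurements occur only in the sustain phase).
LOAD_PROFILE = [
    {"phase": "init",       "duration": 5,    "measure": False},
    {"phase": "ramp up",    "duration": 180,  "measure": False},
    {"phase": "sustain",    "duration": 300,  "measure": True},
    {"phase": "ramp down",  "duration": 180,  "measure": False},
    {"phase": "collection", "duration": None, "measure": False},
]
# ramp up and ramp down SHOULD use the same duration</artwork>
          </figure>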
        </section>
      </section>
    </section>

    <section title="Test Bed Considerations">
      <t>This section recommends steps to control the test environment and
      test equipment, specifically focusing on virtualized environments and
      virtualized test equipment.</t>

      <t><list style="numbers">
          <t>Ensure that any ancillary switching or routing functions between
          the system under test and the test equipment do not limit the
          performance of the traffic generator. This is specifically important
          for virtualized components (vSwitches, vRouters).</t>

          <t>Verify that the performance of the test equipment matches and
          reasonably exceeds the expected maximum performance of the system
          under test.</t>

          <t>Assert that the test bed characteristics are stable during the
          entire test session. Several factors might influence stability,
          specifically for virtualized test beds, for example, additional
          workloads in a virtualized system, load balancing and movement of
          virtual machines during the test, or simple issues such as
          additional heat created by high workloads leading to an emergency
          CPU performance reduction.</t>
        </list></t>

      <t>Test bed reference pre-tests help to ensure that the test equipment
      can achieve the maximum desired traffic generator aspects such as
      throughput, transactions per second, connections per second,
      concurrent connections, and latency.</t>

      <t>Test bed preparation may be performed either by configuring the DUT
      in the most trivial setup (fast forwarding) or without the presence of
      the DUT.</t>
    </section>

    <section title="Reporting">
      <t>This section describes how the final report should be formatted and
      presented. The final test report MAY have two major sections: an
      introduction and a detailed test results section.</t>

      <section title="Introduction">
        <t>The following attributes SHOULD be present in the introduction
        section of the test report.</t>

        <t><list style="numbers">
            <t>The time and date of the execution of the test MUST be
            prominent.</t>

            <t>Summary of test bed software and hardware details<list
                style="letters">
                <t>DUT/SUT Hardware/Virtual Configuration<list style="symbols">
                    <t>This section SHOULD clearly identify the make and model
                    of the DUT/SUT</t>

                    <t>The port interfaces, including speed and link
                    information MUST be documented.</t>

                    <t>If the DUT/SUT is a Virtual Network Function (VNF),
                    host (server) hardware and software details, interface
                    acceleration type such as DPDK and SR-IOV, used CPU
                    cores, used RAM, and the resource sharing (e.g., pinning
                    details and NUMA node) configuration MUST be documented.
                    The virtual components such as hypervisor and virtual
                    switch versions MUST also be documented.</t>

                    <t>Any additional hardware relevant to the DUT/SUT such
                    as controllers MUST be documented.</t>
                  </list></t>

                <t>DUT/SUT Software<list style="symbols">
                    <t>The operating system name MUST be documented</t>

                    <t>The version MUST be documented</t>

                    <t>The specific configuration MUST be documented</t>
                  </list></t>

                <t>DUT/SUT Enabled Features<list style="symbols">
                    <t>Configured DUT/SUT features (see Table 1 and Table 2)
                    MUST be documented</t>

                    <t>Attributes of those features MUST be documented</t>

                    <t>Any additional relevant information about features MUST
                    be documented</t>
                  </list></t>

                <t>Test equipment hardware and software <list style="symbols">
                    <t>Test equipment vendor name</t>

                    <t>Hardware details including model number, interface
                    type</t>

                    <t>Test equipment firmware and test application software
                    version</t>
                  </list></t>

                <t>Key test parameters<list style="symbols">
                    <t>Used cipher suites and keys</t>

                    <t>IPv4 and IPv6 traffic distribution</t>

                    <t>Number of configured ACL rules</t>
                  </list></t>

                <t>Details of application traffic mix used in the benchmarking
                test <xref format="default"
                target="Throughput_Performance_With_Traffic_Mix">Throughput
                Performance with Application Traffic Mix</xref><list
                    style="symbols">
                    <t>Name of applications and layer 7 protocols</t>

                    <t>Percentage of emulated traffic for each application and
                    layer 7 protocols</t>

                    <t>Percentage of encrypted traffic and used cipher suites
                    and keys (The RECOMMENDED ciphers and keys are defined in
                    <xref pageno="false"
                    target="Emulated_web_Browser_attributes"/>)</t>

                    <t>Used object sizes for each application and layer 7
                    protocols</t>
                  </list></t>
              </list></t>

            <t>Results Summary / Executive Summary<list style="letters">
                <t>Results SHOULD resemble a pyramid in how they are
                reported, with the introduction section documenting the
                summary of results in a prominent, easy to read block.</t>
              </list></t>
          </list></t>
      </section>

      <section title="Detailed Test Results">
        <t>In the result section of the test report, the following attributes
        should be present for each benchmarking test.<list style="letters">
            <t>KPIs MUST be documented separately for each benchmarking test.
            The format of the KPI metrics should be presented as described in
            <xref target="Key_Performance_Indicators"/>.</t>

            <t>The next level of details SHOULD be graphs showing each of
            these metrics over the duration (sustain phase) of the test.
            This allows the user to see the stability of the measured
            performance over time.</t>
          </list></t>
      </section>

      <section anchor="Key_Performance_Indicators"
               title="Benchmarks and Key Performance Indicators">
        <t>This section lists key performance indicators (KPIs) for overall
        benchmarking tests. All KPIs MUST be measured during the sustain phase
        of the traffic load profile described in <xref
        target="Traffic_Load_Profile"/>. All KPIs MUST be measured from the
        result output of test equipment.</t>

        <t><list style="symbols">
            <t>Concurrent TCP Connections<vspace/>The aggregate number of
            simultaneous connections between hosts across the DUT/SUT, or
            between hosts and the DUT/SUT (defined in <xref
            target="RFC2647"/>).</t>

            <t>TCP Connections Per Second<vspace/>The average number of
            successfully established TCP connections per second between hosts
            across the DUT/SUT, or between hosts and the DUT/SUT. The TCP
            connection must be initiated via a TCP 3 way handshake (SYN,
            SYN/ACK, ACK). Then the TCP session data is sent. The TCP session
            MUST be closed via either a TCP 3 way close (FIN, FIN/ACK, ACK),
            or a TCP 4 way close (FIN, ACK, FIN, ACK), and not by a RST.</t>

            <t>Application Transactions Per Second<vspace/>The average
            number of successfully completed transactions per second. For a
            particular transaction to be considered successful, all data
            must have been transferred in its entirety. In the case of an
            HTTP(S) transaction, it must have a valid status code, and the
            appropriate FIN, FIN/ACK sequence must have been completed.</t>

            <t>TLS Handshake Rate<vspace/>The average number of successfully
            established TLS connections per second between hosts across the
            DUT/SUT, or between hosts and the DUT/SUT.</t>

            <t>Throughput<vspace/>The number of bits per second of allowed
            traffic a DUT/SUT can be observed to transmit to the correct
            destination interface(s) in response to a specified offered load
            (defined in <xref target="RFC2647"/>). The throughput
            benchmarking tests defined in <xref pageno="false"
            target="Benchmarking"/> SHOULD measure the average throughput
            value. This document recommends presenting the throughput value
            in Gbit/s rounded to two places of precision with a more
            specific Kbit/s in parentheses.</t>

            <t>Time to First Byte (TTFB)<vspace/>TTFB is the elapsed time
            between the start of sending the TCP SYN packet from the client
            and the client receiving the first packet of application data from
            the server or DUT/SUT. The benchmarking tests <xref pageno="false"
            target="HTTP-Latency">HTTP Transaction Latency</xref> and <xref
            pageno="false" target="HTTPS-Latency">HTTPS Transaction
            Latency</xref> measure the minimum, average, and maximum TTFB.
            The value SHOULD be expressed in milliseconds.</t>

            <t>URL Response time / Time to Last Byte (TTLB)<vspace/>URL
            Response time / TTLB is the elapsed time between the start of
            sending the TCP SYN packet from the client and the client
            receiving the last packet of application data from the server or
            DUT/SUT. The benchmarking tests <xref pageno="false"
            target="HTTP-Latency">HTTP Transaction Latency</xref> and <xref
            pageno="false" target="HTTPS-Latency">HTTPS Transaction
            Latency</xref> measure the minimum, average, and maximum TTLB.
            The value SHOULD be expressed in milliseconds.</t>
          </list></t>
      </section>
    </section>

    <section anchor="Benchmarking" title="Benchmarking Tests">
      <section anchor="Throughput_Performance_With_Traffic_Mix"
               title="Throughput Performance with Application Traffic Mix">
        <section title="Objective">
          <t>Using a relevant application traffic mix, determine the
          sustainable throughput performance supported by the DUT/SUT.</t>

          <t>Based on customer use case, users can choose the application
          traffic mix for this test. The details about the traffic mix MUST be
          documented in the report. At least the following traffic mix details
          MUST be documented and reported together with the test results:</t>

          <t><list>
              <t>Name of applications and layer 7 protocols</t>

              <t>Percentage of emulated traffic for each application and layer
              7 protocols</t>

              <t>Percentage of encrypted traffic and used cipher suites and
              keys (The RECOMMENDED ciphers and keys are defined in <xref
              pageno="false" target="Emulated_web_Browser_attributes"/>.)</t>

              <t>Used object sizes for each application and layer 7
              protocols</t>
            </list></t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup MUST be configured as defined in <xref
          target="Test_Setup"/>. Any benchmarking test specific test bed
          configuration changes MUST be documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, the benchmarking test specific parameters SHOULD
          be defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented. In case
            the DUT is configured without SSL inspection feature, the test
            report MUST explain the implications of this to the relevant
            application traffic mix encrypted traffic.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_TC_7_1"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be noted for this benchmarking test:</t>

            <t><list>
                <t>Client IP address range defined in <xref
                target="Client_IP"/></t>

                <t>Server IP address range defined in <xref
                target="Server_IP"/></t>

                <t>Traffic distribution ratio between IPv4 and IPv6 defined in
                <xref target="Client_IP"/></t>

                <t>Target throughput: Aggregated line rate of interface(s)
                used in the DUT/SUT or the value defined based on requirement
                for a specific deployment scenario</t>

                <t>Initial throughput: 10% of the "Target throughput"</t>

                <t>One of the ciphers and keys defined in <xref pageno="false"
                target="Emulated_web_Browser_attributes"/> are RECOMMENDED to
                use for this benchmarking test.</t>
              </list></t>
          </section>

          <section anchor="Traffic_Profile" title="Traffic Profile">
            <t>Traffic profile: This test MUST be run with a relevant
            application traffic mix profile.</t>
          </section>

          <section anchor="Test_Results_Validation_Criteria_7_1"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as the test results
            validation criteria. The test results validation criteria MUST
            be monitored during the whole sustain phase of the traffic load
            profile.<list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than
                0.001% (1 out of 100,000 transactions) of the total
                attempted transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>
              </list></t>

            <t>Note: Criteria a. and b. above are synonymous with the
            zero-packet loss criteria for <xref target="RFC2544"/> Throughput,
            and recognize the additional complexity of application layer
            performance.</t>
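
            <t>As a purely illustrative reading of the thresholds above,
            0.001% corresponds to 1 failure per 100,000 attempts:</t>

            <figure>
              <artwork># Illustrative check of validation criteria "a" and "b" above.
def meets_validation_criteria(failed, attempted):
    # 0.001% = 1 failure per 100,000 attempts
    return attempted &gt; 0 and failed / attempted &lt; 0.00001

# e.g. 2 failures out of 1,000,000 attempts is 0.0002%, which passes
print(meets_validation_criteria(2, 1_000_000))  # True</artwork>
            </figure>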
          </section>

          <section anchor="Measurement_7_1" title="Measurement">
            <t>Following KPI metrics MUST be reported for this benchmarking
            test:</t>

            <t>Mandatory KPIs (benchmarks): Throughput, TTFB (minimum,
            average, and maximum), TTLB (minimum, average, and maximum) and
            Application Transactions Per Second</t>

            <t>Note: TTLB MUST be reported along with the object size used in
            the traffic profile.</t>

            <t>Optional KPIs: TCP Connections Per Second and TLS Handshake
            Rate</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedures are designed to measure the throughput
          performance of the DUT/SUT during the sustain phase of the traffic
          load profile. The test procedure consists of three major steps.
          This test procedure MAY be repeated multiple times with different
          IP types: IPv4 only, IPv6 only, and a mixed IPv4 and IPv6 traffic
          distribution.</t>

          <section anchor="Step1_Test_Initialization"
                   title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure traffic load profile of the test equipment to
            generate test traffic at the "Initial throughput" rate as
            described in the parameters <xref
            target="Test_Equipment_Configuration_Parameters_TC_7_1"/>. The
            test equipment SHOULD follow the traffic load profile definition
            as described in <xref target="Traffic_Load_Profile"/>. The DUT/SUT
            SHOULD reach the "Initial throughput" during the sustain phase.
            Measure all KPIs as defined in <xref pageno="false"
            target="Measurement_7_1"/>. The measured KPIs during the sustain
            phase MUST meet the test results validation criteria "a" and "b"
            defined in <xref pageno="false"
            target="Test_Results_Validation_Criteria_7_1"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT continue to step 2.</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to generate traffic at the "Target
            throughput" rate defined in the parameter table. The test
            equipment SHOULD follow the traffic load profile definition as
            described in <xref target="Traffic_Load_Profile"/>. The test
            equipment SHOULD start to measure and record all specified KPIs
            and the frequency of measurements SHOULD be less than 2 seconds.
            Continue the test until all traffic profile phases are
            completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective
            ("Target throughput") in the sustain phase. Follow step 3, if the
            measured value does not meet the target value or does not fulfill
            the test results validation criteria.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable throughput within the test results
            validation criteria. Final test iteration MUST be performed for
            the test duration defined in <xref
            target="Traffic_Load_Profile"/>.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTP_CPS" title="TCP/HTTP Connections Per Second">
        <section title="Objective">
          <t>Using HTTP traffic, determine the sustainable TCP connection
          establishment rate supported by the DUT/SUT under different
          throughput load conditions.</t>

          <t>To measure connections per second, test iterations MUST use the
          different fixed HTTP response object sizes (the different load
          conditions) defined in <xref
          target="Test_Equipment_Configuration_Parameters_HTTP_CPS"/>.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTP_CPS"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t>Client IP address range defined in <xref
            target="Client_IP"/></t>

            <t>Server IP address range defined in <xref
            target="Server_IP"/></t>

            <t>Traffic distribution ratio between IPv4 and IPv6 defined in
            <xref target="Client_IP"/></t>

            <t>Target connections per second: Initial value from product
            datasheet or the value defined based on requirement for a specific
            deployment scenario</t>

            <t>Initial connections per second: 10% of “Target connections per
            second” (an optional parameter for documentation)</t>

            <t>The client SHOULD negotiate HTTP 1.1 and close the connection
            with FIN immediately after completion of one transaction. In each
            test iteration, the client MUST send a GET command requesting a
            fixed HTTP response object size.</t>

            <t>The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64
            KByte.</t>
          </section>

          <section anchor="Validation_Criteria_HTTP_CPS"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile.<list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>

                <t>During the sustain phase, traffic should be forwarded at a
                constant rate.</t>

                <t>Concurrent TCP connections MUST be constant during steady
                state and any deviation of concurrent TCP connections SHOULD
                be less than 10%. This confirms the DUT opens and closes TCP
                connections almost at the same rate.</t>
              </list></t>
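
            <t>Note: Criterion d. can be evaluated from periodic samples of
            the concurrent TCP connection count taken during the sustain
            phase. The following non-normative Python sketch shows one
            possible interpretation in which the deviation is computed
            relative to the mean of the samples; this document does not
            mandate a particular reference value.</t>

            <figure>
              <artwork><![CDATA[
# Non-normative sketch for criterion d.: deviation of concurrent TCP
# connections during steady state, measured here against the mean of
# the sampled values (one possible interpretation).

def concurrent_connections_stable(samples, max_deviation=0.10):
    """samples: concurrent connection counts taken during steady state."""
    mean = sum(samples) / len(samples)
    worst = max(abs(sample - mean) for sample in samples)
    return worst <= max_deviation * mean
]]></artwork>
            </figure>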
          </section>

          <section title="Measurement">
            <t>TCP Connections Per Second MUST be reported for each test
            iteration (for each object size).</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure the TCP connections per
          second rate of the DUT/SUT during the sustain phase of the traffic
          load profile. The test procedure consists of three major steps. This
          test procedure MAY be repeated multiple times with different IP
          types: IPv4 only, IPv6 only, and mixed IPv4 and IPv6 traffic
          distribution.</t>

          <section title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure the traffic load profile of the test equipment to
            establish "initial connections per second" as defined in the
            parameters <xref
            target="Test_Equipment_Configuration_Parameters_HTTP_CPS"/>. The
            traffic load profile SHOULD be defined as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>The DUT/SUT SHOULD reach the "Initial connections per second"
            before the sustain phase. The measured KPIs during the sustain
            phase MUST meet the test results validation criteria a, b, c, and
            d defined in <xref target="Validation_Criteria_HTTP_CPS"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish the target objective
            ("Target connections per second") defined in the parameters table.
            The test equipment SHOULD follow the traffic load profile
            definition as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>During the ramp up and sustain phase of each test iteration,
            other KPIs such as throughput, concurrent TCP connections and
            application transactions per second MUST NOT reach the maximum
            value the DUT/SUT can support. The test results for specific test
            iterations SHOULD NOT be reported, if any of the above mentioned
            KPIs (especially throughput) reaches the maximum value. (Example:
            If the test iteration with 64 KByte of HTTP response object size
            reached the maximum throughput limitation of the DUT, the test
            iteration MAY be interrupted and the result for 64 KByte SHOULD
            NOT be reported).</t>

            <t>The test equipment SHOULD start to measure and record all
            specified KPIs and the frequency of measurements SHOULD be less
            than 2 seconds. Continue the test until all traffic profile phases
            are completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective
            ("Target connections per second") in the sustain phase. Follow
            step 3, if the measured value does not meet the target value or
            does not fulfill the test results validation criteria.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable TCP connections per second within the
            test results validation criteria.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTP_TP" title="HTTP Throughput">
        <section title="Objective">
          <t>Determine the sustainable throughput of the DUT/SUT for HTTP
          transactions varying the HTTP response object size.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTP_TP"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t>Client IP address range defined in <xref
            target="Client_IP"/></t>

            <t>Server IP address range defined in <xref
            target="Server_IP"/></t>

            <t>Traffic distribution ratio between IPv4 and IPv6 defined in
            <xref target="Client_IP"/></t>

            <t>Target Throughput: Aggregated line rate of interface(s) used in
            the DUT/SUT or the value defined based on requirement for a
            specific deployment scenario</t>

            <t>Initial Throughput: 10% of "Target Throughput" (an optional
            parameter for documentation)</t>

            <t>Number of HTTP response object requests (transactions) per
            connection: 10</t>

            <t>RECOMMENDED HTTP response object size: 1, 16, 64, 256 KByte,
            and mixed objects defined in the table below.</t>

            <figure title="Table 4: Mixed Objects">
              <artwork>+---------------------+---------------------+
| Object size (KByte) | Number of requests/ |
|                     | Weight              |
+---------------------+---------------------+
| 0.2                 | 1                   |
+---------------------+---------------------+
| 6                   | 1                   |
+---------------------+---------------------+
| 8                   | 1                   |
+---------------------+---------------------+
| 9                   | 1                   |
+---------------------+---------------------+
| 10                  | 1                   |
+---------------------+---------------------+
| 25                  | 1                   |
+---------------------+---------------------+
| 26                  | 1                   |
+---------------------+---------------------+
| 35                  | 1                   |
+---------------------+---------------------+
| 59                  | 1                   |
+---------------------+---------------------+
| 347                 | 1                   |
+---------------------+---------------------+</artwork>
            </figure>
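
            <t>Note: With the equal weights shown in Table 4, the weighted
            mean object size of the mix is approximately 52.5 KByte. The
            following non-normative Python sketch shows this calculation:</t>

            <figure>
              <artwork><![CDATA[
# Non-normative sketch: weighted mean object size of the Table 4 mix.
objects = {0.2: 1, 6: 1, 8: 1, 9: 1, 10: 1,
           25: 1, 26: 1, 35: 1, 59: 1, 347: 1}   # KByte -> weight

total_weight = sum(objects.values())
mean_size = sum(size * weight for size, weight in objects.items())
mean_size = mean_size / total_weight
print(round(mean_size, 2))   # 52.52 (KByte)
]]></artwork>
            </figure>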
          </section>

          <section anchor="Validation_Criteria_HTTP_TP"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile.<list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Traffic should be forwarded constantly.</t>

                <t>Concurrent TCP connections MUST be constant during steady
                state and any deviation of concurrent TCP connections SHOULD
                be less than 10%. This confirms the DUT opens and closes TCP
                connections almost at the same rate.</t>
              </list></t>

            <t>Note: Criterion a. above is analogous to the zero-packet loss
            criteria for <xref target="RFC2544"/> Throughput, and recognizes
            the additional complexity of application layer performance.</t>
          </section>

          <section anchor="Measurement_TP" title="Measurement">
            <t>Throughput and HTTP Transactions per Second MUST be reported
            for each object size.</t>
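
            <t>Note: For a fixed response object size, the reported throughput
            and transaction rate are directly related. The following
            non-normative Python sketch estimates the transaction rate implied
            by a measured throughput; it deliberately ignores HTTP header and
            TCP/IP overhead, which is an illustrative simplification only.</t>

            <figure>
              <artwork><![CDATA[
# Non-normative sketch: estimate HTTP transactions per second implied
# by a measured throughput and a fixed response object size. HTTP
# header and TCP/IP overhead are ignored (illustrative simplification).

def estimated_transactions_per_second(throughput_gbps, object_size_kbyte):
    object_size_bits = object_size_kbyte * 1024 * 8
    return (throughput_gbps * 1e9) / object_size_bits

# Example: 10 Gbit/s with 64 KByte objects -> roughly 19,073 transactions/s
print(round(estimated_transactions_per_second(10, 64)))
]]></artwork>
            </figure>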

          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure HTTP throughput of the
          DUT/SUT. The test procedure consists of three major steps. This
          test procedure MAY be repeated multiple times with different IPv4
          and IPv6 traffic distribution and HTTP response object sizes.</t>

          <section title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure traffic load profile of the test equipment to
            establish "Initial Throughput" as defined in the parameters <xref
            target="Test_Equipment_Configuration_Parameters_HTTP_TP"/>.</t>

            <t>The traffic load profile SHOULD be defined as described in
            <xref target="Traffic_Load_Profile"/>. The DUT/SUT SHOULD reach
            the "Initial Throughput" during the sustain phase. Measure all KPI
            as defined in <xref target="Measurement_TP"/>.</t>

            <t>The measured KPIs during the sustain phase MUST meet the test
            results validation criteria "a" defined in <xref
            target="Validation_Criteria_HTTP_TP"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish the target objective
            ("Target throughput") defined in the parameters table. The test
            equipment SHOULD start to measure and record all specified KPIs
            and the frequency of measurements SHOULD be less than 2 seconds.
            Continue the test until all traffic profile phases are
            completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective in the
            sustain phase. Follow step 3, if the measured value does not meet
            the target value or does not fulfill the test results validation
            criteria.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable throughput within the test results
            validation criteria and measure the KPI metric Transactions per
            Second. Final test iteration MUST be performed for the test
            duration defined in <xref target="Traffic_Load_Profile"/>.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTP-Latency" title="HTTP Transaction Latency">
        <section title="Objective">
          <t>Using HTTP traffic, determine the HTTP transaction latency when
          the DUT/SUT is running with a sustainable HTTP transactions per
          second rate, under different HTTP response object sizes.</t>

          <t>Test iterations MUST be performed with different HTTP response
          object sizes in two different scenarios. One with a single
          transaction and the other with multiple transactions within a single
          TCP connection. For consistency, both the single and multiple
          transaction tests MUST be configured with HTTP 1.1.</t>

          <t>Scenario 1: The client MUST negotiate HTTP 1.1 and close the
          connection with FIN immediately after completion of a single
          transaction (GET and RESPONSE).</t>

          <t>Scenario 2: The client MUST negotiate HTTP 1.1 and close the
          connection with FIN immediately after completion of 10 transactions
          (GET and RESPONSE) within a single TCP connection.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTP_latency"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t>Client IP address range defined in <xref
            target="Client_IP"/></t>

            <t>Server IP address range defined in <xref
            target="Server_IP"/></t>

            <t>Traffic distribution ratio between IPv4 and IPv6 defined in
            <xref target="Client_IP"/></t>


            <t>Target objective for scenario 1: 50% of the maximum connections
            per second measured in benchmarking test <xref format="default"
            target="HTTP_CPS">TCP/HTTP Connections Per Second</xref></t>

            <t>Target objective for scenario 2: 50% of the maximum throughput
            measured in benchmarking test <xref format="default"
            target="HTTP_TP">HTTP Throughput</xref></t>

            <t>Initial objective for scenario 1: 10% of "Target objective for
            scenario 1" (an optional parameter for documentation)</t>

            <t>Initial objective for scenario 2: 10% of "Target objective for
            scenario 2" (an optional parameter for documentation)</t>

            <t>HTTP transactions per TCP connection: scenario 1 with a
            single transaction and scenario 2 with 10 transactions</t>

            <t>HTTP 1.1 with a GET command requesting a single object. The
            RECOMMENDED object sizes are 1, 16, and 64 KByte. For each test
            iteration, the client MUST request a single HTTP response object
            size.</t>
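
            <t>Note: The target and initial objectives for both scenarios are
            derived from the results of the two preceding benchmarking tests.
            The following non-normative Python sketch shows this derivation
            using placeholder result values:</t>

            <figure>
              <artwork><![CDATA[
# Non-normative sketch: derive the latency test objectives from the
# results of earlier benchmarking tests (placeholder values only).

max_connections_per_second = 200000   # from "TCP/HTTP Connections Per Second"
max_throughput_gbps = 40              # from "HTTP Throughput"

target_scenario_1 = 0.5 * max_connections_per_second   # connections/s
target_scenario_2 = 0.5 * max_throughput_gbps          # Gbit/s

initial_scenario_1 = 0.1 * target_scenario_1   # optional, for documentation
initial_scenario_2 = 0.1 * target_scenario_2   # optional, for documentation
]]></artwork>
            </figure>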
          </section>

          <section anchor="Validation_Criteria_HTTP_Latency"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile. The ramp up and ramp down phases SHOULD NOT be
            considered.</t>

            <t><list style="letters">
                <t>Number of failed Application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>

                <t>During the sustain phase, traffic should be forwarded at a
                constant rate.</t>

                <t>Concurrent TCP connections MUST be constant during steady
                state and any deviation of concurrent TCP connections SHOULD
                be less than 10%. This confirms the DUT opens and closes TCP
                connections almost at the same rate.</t>

                <t>After ramp up the DUT MUST achieve the "Target objective"
                defined in the parameter <xref
                target="Test_Equipment_Configuration_Parameters_HTTP_latency"/>
                and remain in that state for the entire test duration (sustain
                phase).</t>
              </list></t>
          </section>

          <section title="Measurement">
            <t>TTFB (minimum, average and maximum) and TTLB (minimum, average
            and maximum) MUST be reported for each object size.</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure TTFB or TTLB when the
          DUT/SUT is operating close to 50% of its maximum achievable
          connections per second or throughput. This test procedure MAY be
          repeated multiple times with different IP types (IPv4 only, IPv6
          only and IPv4 and IPv6 mixed traffic distribution), HTTP response
          object sizes and single and multiple transactions per connection
          scenarios.</t>

          <section title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure traffic load profile of the test equipment to
            establish "Initial objective" as defined in the parameters <xref
            target="Test_Equipment_Configuration_Parameters_HTTP_latency"/>.
            The traffic load profile can be defined as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>The DUT/SUT SHOULD reach the "Initial objective" before the
            sustain phase. The measured KPIs during the sustain phase MUST
            meet the test results validation criteria a, b, c, d, and e
            defined in <xref target="Validation_Criteria_HTTP_Latency"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish "Target objective"
            defined in the parameters table. The test equipment SHOULD follow
            the traffic load profile definition as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>The test equipment SHOULD start to measure and record all
            specified KPIs and the frequency of measurement SHOULD be less
            than 2 seconds. Continue the test until all traffic profile phases
            are completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT MUST
            reach the desired value of the target objective in the sustain
            phase.</t>

            <t>Measure the minimum, average and maximum values of TTFB and
            TTLB.</t>
          </section>
        </section>
      </section>

      <section title="Concurrent TCP/HTTP Connection Capacity">
        <section title="Objective">
          <t>Determine the number of concurrent TCP connections that the
          DUT/SUT sustains when using HTTP traffic.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section anchor="CC_parameter" title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTP_CC"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t><list>
                <t>Client IP address range defined in <xref
                target="Client_IP"/></t>

                <t>Server IP address range defined in <xref
                target="Server_IP"/></t>

                <t>Traffic distribution ratio between IPv4 and IPv6 defined in
                <xref target="Client_IP"/></t>

                <t>Target concurrent connection: Initial value from product
                datasheet or the value defined based on requirement for a
                specific deployment scenario.</t>

                <t>Initial concurrent connection: 10% of “Target concurrent
                connection” (an optional parameter for documentation)</t>

                <t>Maximum connections per second during ramp up phase: 50% of
                maximum connections per second measured in benchmarking test
                <xref target="HTTP_CPS">TCP/HTTP Connections per
                second</xref></t>

                <t>Ramp up time (in traffic load profile for "Target
                concurrent connection"): "Target concurrent connection" /
                "Maximum connections per second during ramp up phase"</t>

                <t>Ramp up time (in traffic load profile for "Initial
                concurrent connection"): "Initial concurrent connection" /
                "Maximum connections per second during ramp up phase"</t>
              </list></t>

            <t>The client MUST negotiate HTTP 1.1 with persistence and each
            client MAY open multiple concurrent TCP connections per server
            endpoint IP.</t>

            <t>Each client sends 10 GET commands requesting a 1 KByte HTTP
            response object in the same TCP connection (10 transactions/TCP
            connection) and the delay (think time) between each transaction
            MUST be X seconds.</t>

            <t>X = ("Ramp up time" + "Steady state time") / 10</t>
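
            <t>Note: The ramp up times and the think time X follow directly
            from the parameters above. The following non-normative Python
            sketch shows the calculation with placeholder values:</t>

            <figure>
              <artwork><![CDATA[
# Non-normative sketch: ramp up time and think time X for the
# concurrent connection capacity test (placeholder values only).

target_concurrent_connections = 4000000
ramp_up_cps = 100000        # 50% of the measured maximum connections/s
steady_state_time = 300     # sustain phase duration in seconds

ramp_up_time = target_concurrent_connections / ramp_up_cps   # 40 seconds
think_time_x = (ramp_up_time + steady_state_time) / 10       # 34 seconds
]]></artwork>
            </figure>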

            <t>The established connections SHOULD remain open until the ramp
            down phase of the test. During the ramp down phase, all
            connections SHOULD be successfully closed with FIN.</t>
          </section>

          <section anchor="CC_Test_Results_Validation_Criteria"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile.<list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>

                <t>During the sustain phase, traffic SHOULD be forwarded
                constantly.</t>
              </list></t>
          </section>

          <section anchor="CC_Measurement" title="Measurement">
            <t>Average Concurrent TCP Connections MUST be reported for this
            benchmarking test.</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure the concurrent TCP
          connection capacity of the DUT/SUT during the sustain phase of the
          traffic load profile. The test procedure consists of three major
          steps. This test procedure MAY be repeated multiple times with
          different IPv4 and IPv6 traffic distribution.</t>

          <section anchor="CC_Step1_Test_Initialization"
                   title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure test equipment to establish "Initial concurrent TCP
            connections" defined in <xref
            target="Test_Equipment_Configuration_Parameters_HTTP_CC"/>. Except
            ramp up time, the traffic load profile SHOULD be defined as
            described in <xref target="Traffic_Load_Profile"/>.</t>

            <t>During the sustain phase, the DUT/SUT SHOULD reach the "Initial
            concurrent TCP connections". The measured KPIs during the sustain
            phase MUST meet the test results validation criteria "a" and "b"
            defined in <xref
            target="CC_Test_Results_Validation_Criteria"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish the target objective
            (“Target concurrent TCP connections”). The test equipment SHOULD
            follow the traffic load profile definition (except ramp up time)
            as described in <xref target="Traffic_Load_Profile"/>.</t>

            <t>During the ramp up and sustain phase, the other KPIs such as
            throughput, TCP connections per second and application
            transactions per second MUST NOT reach the maximum value that
            the DUT/SUT can support.</t>

            <t>The test equipment SHOULD start to measure and record KPIs
            defined in <xref target="CC_Measurement"/>. The frequency of
            measurement SHOULD be less than 2 seconds. Continue the test until
            all traffic profile phases are completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective in the
            sustain phase. Follow step 3, if the measured value does not meet
            the target value or does not fulfill the test results validation
            criteria.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable concurrent TCP connections capacity
            within the test results validation criteria.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTPS_CPS" title="TCP/HTTPS Connections per Second">
        <section title="Objective">
          <t>Using HTTPS traffic, determine the sustainable SSL/TLS session
          establishment rate supported by the DUT/SUT under different
          throughput load conditions.</t>

          <t>Test iterations MUST include common cipher suites and key
          strengths as well as forward looking stronger keys. Specific test
          iterations MUST include ciphers and keys defined in <xref
          target="Test_Equipment_Configuration_Parameters_HTTPS_CPS"/>.</t>

          <t>For each cipher suite and key strength, test iterations MUST use
          a single HTTPS response object size defined in the test equipment
          configuration parameters <xref
          target="Test_Equipment_Configuration_Parameters_HTTPS_CPS"/> to
          measure connections per second performance under a variety of
          DUT/SUT security inspection load conditions.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTPS_CPS"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t>Client IP address range defined in <xref
            target="Client_IP"/></t>

            <t>Server IP address range defined in <xref
            target="Server_IP"/></t>

            <t>Traffic distribution ratio between IPv4 and IPv6 defined in
            <xref target="Client_IP"/></t>

            <t>Target connections per second: Initial value from product
            datasheet or the value defined based on requirement for a specific
            deployment scenario.</t>

            <t>Initial connections per second: 10% of “Target connections per
            second” (an optional parameter for documentation)</t>

            <t>RECOMMENDED ciphers and keys defined in <xref pageno="false"
            target="Emulated_web_Browser_attributes"/></t>

            <t>The client MUST negotiate HTTPS 1.1 and close the connection
            with FIN immediately after completion of one transaction. In each
            test iteration, the client MUST send a GET command requesting a
            fixed HTTPS response object size. The RECOMMENDED object sizes
            are 1, 2, 4, 16, and 64 KByte.</t>
          </section>

          <section anchor="Validation_Criteria_HTTPS_CPS"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria:<list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>

                <t>During the sustain phase, traffic should be forwarded at a
                constant rate.</t>

                <t>Concurrent TCP connections MUST be constant during steady
                state and any deviation of concurrent TCP connections SHOULD
                be less than 10%. This confirms the DUT opens and closes TCP
                connections almost at the same rate.</t>
              </list></t>
          </section>

          <section title="Measurement">
            <t>TCP Connections Per Second MUST be reported for each test
            iteration (for each object size).</t>

            <t>The KPI metric TLS Handshake Rate can be measured in the test
            using the 1 KByte object size.</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure the TCP connections per
          second rate of the DUT/SUT during the sustain phase of the traffic
          load profile. The test procedure consists of three major steps. This
          test procedure MAY be repeated multiple times with different IPv4
          and IPv6 traffic distribution.</t>

          <section anchor="TLS_Handshake_Step1_Test_Initialization"
                   title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure traffic load profile of the test equipment to
            establish "Initial connections per second" as defined in <xref
            target="Test_Equipment_Configuration_Parameters_HTTPS_CPS"/>. The
            traffic load profile MAY be defined as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>The DUT/SUT SHOULD reach the "Initial connections per second"
            before the sustain phase. The measured KPIs during the sustain
            phase MUST meet the test results validation criteria a, b, c, and
            d defined in <xref target="Validation_Criteria_HTTPS_CPS"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish "Target connections per
            second" defined in the parameters table. The test equipment SHOULD
            follow the traffic load profile definition as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>During the ramp up and sustain phase, other KPIs such as
            throughput, concurrent TCP connections and application
            transactions per second MUST NOT reach the maximum value that the
            DUT/SUT can support. The test results for a specific test
            iteration SHOULD NOT be reported, if any of the above mentioned
            KPIs (especially throughput) reaches the maximum value. (Example:
            If the test iteration with 64 KByte of HTTPS response object size
            reached the maximum throughput limitation of the DUT, the test
            iteration can be interrupted and the result for 64 KByte SHOULD
            NOT be reported).</t>

            <t>The test equipment SHOULD start to measure and record all
            specified KPIs. The frequency of measurement SHOULD be less than 2
            seconds. Continue the test until all traffic profile phases are
            completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective
            ("Target connections per second") in the sustain phase. Follow
            step 3, if the measured value does not meet the target value or
            does not fulfill the test results validation criteria.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable connections per second within the test
            results validation criteria.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTPS_TP" title="HTTPS Throughput">
        <section title="Objective">
          <t>Determine the throughput for HTTPS transactions varying the HTTPS
          response object size.</t>

          <t>Test iterations MUST include common cipher suites and key
          strengths as well as forward looking stronger keys. Specific test
          iterations MUST include the ciphers and keys defined in the
          parameter <xref
          target="Test_Equipment_Configuration_Parameters_HTTPS_TP"/>.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTPS_TP"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t>Client IP address range defined in <xref
            target="Client_IP"/></t>

            <t>Server IP address range defined in <xref
            target="Server_IP"/></t>

            <t>Traffic distribution ratio between IPv4 and IPv6 defined in
            <xref target="Client_IP"/></t>

            <t>Target Throughput: Aggregated line rate of interface(s) used in
            the DUT/SUT or the value defined based on requirement for a
            specific deployment scenario.</t>

            <t>Initial Throughput: 10% of "Target Throughput" (an optional
            parameter for documentation)</t>

            <t>Number of HTTPS response object requests (transactions) per
            connection: 10</t>

            <t>RECOMMENDED ciphers and keys defined in <xref pageno="false"
            target="Emulated_web_Browser_attributes"/></t>

            <t>RECOMMENDED HTTPS response object size: 1, 16, 64, 256 KByte,
            and mixed objects defined in the table below.</t>

            <figure title="Table 5: Mixed Objects">
              <artwork>+---------------------+---------------------+
| Object size (KByte) | Number of requests/ |
|                     | Weight              |
+---------------------+---------------------+
| 0.2                 | 1                   |
+---------------------+---------------------+
| 6                   | 1                   |
+---------------------+---------------------+
| 8                   | 1                   |
+---------------------+---------------------+
| 9                   | 1                   |
+---------------------+---------------------+
| 10                  | 1                   |
+---------------------+---------------------+
| 25                  | 1                   |
+---------------------+---------------------+
| 26                  | 1                   |
+---------------------+---------------------+
| 35                  | 1                   |
+---------------------+---------------------+
| 59                  | 1                   |
+---------------------+---------------------+
| 347                 | 1                   |
+---------------------+---------------------+</artwork>
            </figure>
          </section>

          <section anchor="Validation_Criteria_HTTPS_TP"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile.<list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Traffic should be forwarded constantly.</t>

                <t>Concurrent TCP connections MUST be constant during steady
                state and any deviation of concurrent TCP connections SHOULD
                be less than 10%. This confirms the DUT opens and closes TCP
                connections almost at the same rate.</t>
              </list></t>

            <t>Note: Criterion a. above is analogous to the zero-packet loss
            criteria for <xref target="RFC2544"/> Throughput, and recognizes
            the additional complexity of application layer performance.</t>
          </section>

          <section anchor="Measurement_HTTPS_TP" title="Measurement">
            <t>Throughput and HTTP Transactions per Second MUST be reported
            for each object size.</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure consists of three major steps. This test
          procedure MAY be repeated multiple times with different IPv4 and
          IPv6 traffic distribution and HTTPS response object sizes.</t>

          <section title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure traffic load profile of the test equipment to
            establish "initial throughput" as defined in the parameters <xref
            target="Test_Equipment_Configuration_Parameters_HTTPS_TP"/>.</t>

            <t>The traffic load profile should be defined as described in
            <xref target="Traffic_Load_Profile"/>. The DUT/SUT SHOULD reach
            the "Initial Throughput" during the sustain phase. Measure all KPI
            as defined in <xref target="Measurement_HTTPS_TP"/>.</t>

            <t>The measured KPIs during the sustain phase MUST meet the test
            results validation criteria "a" defined in <xref
            target="Validation_Criteria_HTTPS_TP"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish the target objective
            ("Target throughput") defined in the parameters table. The test
            equipment SHOULD start to measure and record all specified KPIs.
            The frequency of measurement SHOULD be less than 2 seconds.
            Continue the test until all traffic profile phases are
            completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective in the
            sustain phase. Follow step 3, if the measured value does not meet
            the target value or does not fulfill the test results validation
            criteria.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable throughput within the test results
            validation criteria. Final test iteration MUST be performed for
            the test duration defined in <xref
            target="Traffic_Load_Profile"/>.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTPS-Latency" title="HTTPS Transaction Latency">
        <section title="Objective">
          <t>Using HTTPS traffic, determine the HTTPS transaction latency when
          the DUT/SUT is running with a sustainable HTTPS transactions per
          second rate, under different HTTPS response object sizes.</t>

          <t>Scenario 1: The client MUST negotiate HTTPS and close the
          connection with FIN immediately after completion of a single
          transaction (GET and RESPONSE).</t>

          <t>Scenario 2: The client MUST negotiate HTTPS and close the
          connection with FIN immediately after completion of 10 transactions
          (GET and RESPONSE) within a single TCP connection.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTPS_Latency"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. Following parameters MUST
            be documented for this benchmarking test:</t>

            <t>Client IP address range defined in <xref
            target="Client_IP"/></t>

            <t>Server IP address range defined in <xref
            target="Server_IP"/></t>

            <t>Traffic distribution ratio between IPv4 and IPv6 defined in
            <xref target="Client_IP"/></t>

            <t>RECOMMENDED cipher suites and key sizes defined in <xref
            pageno="false" target="Emulated_web_Browser_attributes"/></t>

            <t>Target objective for scenario 1: 50% of the maximum connections
            per second measured in benchmarking test <xref
            target="HTTPS_CPS">TCP/HTTPS Connections per second</xref></t>

            <t>Target objective for scenario 2: 50% of the maximum throughput
            measured in benchmarking test <xref format="default"
            target="HTTPS_TP">HTTPS Throughput</xref></t>

            <t>Initial objective for scenario 1: 10% of "Target objective for
            scenario 1" (an optional parameter for documentation)</t>

            <t>Initial objective for scenario 2: 10% of "Target objective for
            scenario 2" (an optional parameter for documentation)</t>

            <t>HTTPS transactions per TCP connection: scenario 1 with a
            single transaction and scenario 2 with 10 transactions</t>

            <t>HTTPS 1.1 with a GET command requesting a single object. The
            RECOMMENDED object sizes are 1, 16, and 64 KByte. For each test
            iteration, the client MUST request a single HTTPS response object
            size.</t>
          </section>

          <section anchor="Validation_Criteria_HTTPS_Latency"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile. The ramp up and ramp down phases SHOULD NOT be
            considered.</t>

            <t><list style="letters">
                <t>Number of failed application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>

                <t>During the sustain phase, traffic should be forwarded at a
                constant rate.</t>

                <t>Concurrent TCP connections MUST be constant during steady
                state and any deviation of concurrent TCP connections SHOULD
                be less than 10%. This confirms the DUT opens and closes TCP
                connections almost at the same rate.</t>

                <t>After ramp up the DUT MUST achieve the "Target objective"
                defined in the parameter <xref
                target="Test_Equipment_Configuration_Parameters_HTTPS_Latency"/>
                and remain in that state for the entire test duration (sustain
                phase).</t>
              </list></t>
          </section>

          <section title="Measurement">
            <t>TTFB (minimum, average and maximum) and TTLB (minimum, average
            and maximum) MUST be reported for each object size.</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure TTFB or TTLB when the
          DUT/SUT is operating close to 50% of its maximum achievable
          connections per second or throughput. This test procedure MAY be
          repeated multiple times with different IP types (IPv4 only, IPv6
          only and IPv4 and IPv6 mixed traffic distribution), HTTPS response
          object sizes and single and multiple transactions per connection
          scenarios.</t>

          <section title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure traffic load profile of the test equipment to
            establish "Initial objective" as defined in the parameters <xref
            target="Test_Equipment_Configuration_Parameters_HTTPS_Latency"/>.
            The traffic load profile can be defined as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>The DUT/SUT SHOULD reach the "Initial objective" before the
            sustain phase. The measured KPIs during the sustain phase MUST
            meet the test results validation criteria a, b, c, d, and e
            defined in <xref target="Validation_Criteria_HTTPS_Latency"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to "Step
            2".</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure test equipment to establish "Target objective"
            defined in the parameters table. The test equipment SHOULD follow
            the traffic load profile definition as described in <xref
            target="Traffic_Load_Profile"/>.</t>

            <t>The test equipment SHOULD start to measure and record all
            specified KPIs. The frequency of measurement SHOULD be less than 2
            seconds. Continue the test until all traffic profile phases are
            completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT MUST
            reach the desired value of the target objective in the sustain
            phase.</t>

            <t>Measure the minimum, average and maximum values of TTFB and
            TTLB.</t>
          </section>
        </section>
      </section>

      <section anchor="HTTPS_CC"
               title="Concurrent TCP/HTTPS Connection Capacity">
        <section title="Objective">
          <t>Determine the number of concurrent TCP connections that the
          DUT/SUT sustains when using HTTPS traffic.</t>
        </section>

        <section title="Test Setup">
          <t>Test bed setup SHOULD be configured as defined in <xref
          target="Test_Setup"/>. Any specific test bed configuration changes
          such as number of interfaces and interface type, etc. MUST be
          documented.</t>
        </section>

        <section anchor="HTTPS_CC_parameter" title="Test Parameters">
          <t>In this section, benchmarking test specific parameters SHOULD be
          defined.</t>

          <section title="DUT/SUT Configuration Parameters">
            <t>DUT/SUT parameters MUST conform to the requirements defined in
            <xref target="DUT-SUT_Configuration"/>. Any configuration changes
            for this specific benchmarking test MUST be documented.</t>
          </section>

          <section anchor="Test_Equipment_Configuration_Parameters_HTTPS_CC"
                   title="Test Equipment Configuration Parameters">
            <t>Test equipment configuration parameters MUST conform to the
            requirements defined in <xref
            target="Test_Equipment_Configuration"/>. The following parameters
            MUST be documented for this benchmarking test:</t>

            <t><list>
                <t>Client IP address range defined in <xref
                target="Client_IP"/></t>

                <t>Server IP address range defined in <xref
                target="Server_IP"/></t>

                <t>Traffic distribution ratio between IPv4 and IPv6 defined in
                <xref target="Client_IP"/></t>

                <t>RECOMMENDED cipher suites and key sizes defined in <xref
                pageno="false" target="Emulated_web_Browser_attributes"/></t>

                <t>Target concurrent connections: Initial value from the
                product datasheet or a value defined based on the requirements
                of a specific deployment scenario.</t>

                <t>Initial concurrent connections: 10% of "Target concurrent
                connections" (an optional parameter for documentation)</t>

                <t>Connections per second during ramp up phase: 50% of maximum
                connections per second measured in benchmarking test <xref
                target="HTTPS_CPS">TCP/HTTPS Connections per second</xref></t>

                <t>Ramp up time (in traffic load profile for "Target
                concurrent connections"): "Target concurrent connections" /
                "Maximum connections per second during ramp up phase"</t>

                <t>Ramp up time (in traffic load profile for "Initial
                concurrent connections"): "Initial concurrent connections" /
                "Maximum connections per second during ramp up phase" (a
                non-normative worked example follows this list)</t>
              </list></t>
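            <t>The following non-normative sketch illustrates how the ramp up
            times above can be derived. The datasheet value of 1,000,000
            target concurrent connections and the measured maximum of 50,000
            connections per second are assumptions chosen for illustration
            only:</t>

            <figure>
              <artwork><![CDATA[
# Non-normative example (Python): deriving the ramp up times.
# All numeric inputs are assumed values for illustration only.
target_cc = 1_000_000              # "Target concurrent connections"
initial_cc = int(0.10 * target_cc) # "Initial concurrent connections"
max_cps = 50_000                   # assumed result of the TCP/HTTPS
                                   # Connections per second benchmark
ramp_up_cps = int(0.50 * max_cps)  # 50% of the measured maximum

# Ramp up time = concurrent connections / connections per second
ramp_up_time_target = target_cc / ramp_up_cps    # 40.0 seconds
ramp_up_time_initial = initial_cc / ramp_up_cps  #  4.0 seconds
]]></artwork>
            </figure>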

            <t>The client MUST perform HTTPS transactions with persistence,
            and each client can open multiple concurrent TCP connections per
            server endpoint IP.</t>

            <t>Each client sends 10 GET commands requesting 1 KByte HTTPS
            response objects in the same TCP connection (10 transactions/TCP
            connection) and the delay (think time) between each transaction
            MUST be X seconds.</t>

            <t>X = ("Ramp up time" + "Steady state time") / 10</t>
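            <t>As a non-normative illustration, assuming the 40 second ramp up
            time from the sketch above and an assumed steady state (sustain)
            time of 300 seconds, the think time X can be computed as
            follows:</t>

            <figure>
              <artwork><![CDATA[
# Non-normative example (Python): think time between transactions.
# Both time values are assumptions for illustration only.
ramp_up_time = 40        # seconds (from the previous sketch)
steady_state_time = 300  # seconds (assumed sustain phase duration)

# 10 GET transactions are spread over ramp up + steady state time
think_time_x = (ramp_up_time + steady_state_time) / 10  # 34.0 seconds
]]></artwork>
            </figure>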

            <t>The established connections SHOULD remain open until the ramp
            down phase of the test. During the ramp down phase, all
            connections SHOULD be successfully closed with FIN.</t>
          </section>

          <section anchor="HTTPS_CC_Test_Results_Validation_Criteria"
                   title="Test Results Validation Criteria">
            <t>The following criteria are defined as the test results
            validation criteria. Test results validation criteria MUST be
            monitored during the whole sustain phase of the traffic load
            profile. A non-normative sketch illustrating how these thresholds
            can be checked follows the list.<list style="letters">
                <t>Number of failed Application transactions (receiving any
                HTTP response code other than 200 OK) MUST be less than 0.001%
                (1 out of 100,000 transactions) of total attempted
                transactions.</t>

                <t>Number of Terminated TCP connections due to unexpected TCP
                RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
                connections) of total initiated TCP connections.</t>

                <t>During the sustain phase, traffic SHOULD be forwarded
                constantly.</t>
              </list></t>
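            <t>The following non-normative sketch shows how criteria "a" and
            "b" can be checked against the counters reported by the test
            equipment; the counter names and values are illustrative
            assumptions only:</t>

            <figure>
              <artwork><![CDATA[
# Non-normative example (Python): checking criteria (a) and (b).
# Counter names and values are illustrative assumptions.
def within_limit(failures, attempts, limit_percent=0.001):
    """True if the failure ratio is below the given percentage."""
    return attempts > 0 and (failures / attempts) * 100 < limit_percent

failed_transactions = 2          # responses other than "200 OK"
total_transactions = 1_000_000   # attempted application transactions
reset_connections = 5            # unexpected TCP RST sent by DUT/SUT
total_connections = 1_000_000    # initiated TCP connections

criterion_a = within_limit(failed_transactions, total_transactions)
criterion_b = within_limit(reset_connections, total_connections)
]]></artwork>
            </figure>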
          </section>

          <section anchor="HTTPS_CC_Measurement" title="Measurement">
            <t>Average Concurrent TCP Connections MUST be reported for this
            benchmarking test.</t>
          </section>
        </section>

        <section title="Test Procedures and Expected Results">
          <t>The test procedure is designed to measure the concurrent TCP
          connection capacity of the DUT/SUT during the sustain phase of the
          traffic load profile. The test procedure consists of three major
          steps. This test procedure MAY be repeated multiple times with
          different IPv4 and IPv6 traffic distributions.</t>

          <section anchor="HTTPS_CC_Step1_Test_Initialization"
                   title="Step 1: Test Initialization and Qualification">
            <t>Verify the link status of all connected physical interfaces.
            All interfaces are expected to be in "UP" status.</t>

            <t>Configure the test equipment to establish the "Initial
            concurrent connections" defined in <xref
            target="Test_Equipment_Configuration_Parameters_HTTPS_CC"/>.
            Except for the ramp up time, the traffic load profile SHOULD be
            defined as described in <xref target="Traffic_Load_Profile"/>.</t>

            <t>During the sustain phase, the DUT/SUT SHOULD reach the "Initial
            concurrent connections". The measured KPIs during the sustain
            phase MUST meet the test results validation criteria "a" and "b"
            defined in <xref
            target="HTTPS_CC_Test_Results_Validation_Criteria"/>.</t>

            <t>If the KPI metrics do not meet the test results validation
            criteria, the test procedure MUST NOT be continued to “Step
            2”.</t>
          </section>

          <section title="Step 2: Test Run with Target Objective">
            <t>Configure the test equipment to establish the target objective
            ("Target concurrent connections"). The test equipment SHOULD
            follow the traffic load profile definition (except the ramp up
            time) as described in <xref target="Traffic_Load_Profile"/>.</t>

            <t>During the ramp up and sustain phases, the other KPIs such as
            throughput, TCP connections per second, and application
            transactions per second MUST NOT reach the maximum value that the
            DUT/SUT can support.</t>

            <t>The test equipment SHOULD start to measure and record KPIs
            defined in <xref target="HTTPS_CC_Measurement"/>. The frequency of
            measurement SHOULD be less than 2 seconds. Continue the test until
            all traffic profile phases are completed.</t>

            <t>Within the test results validation criteria, the DUT/SUT is
            expected to reach the desired value of the target objective in the
            sustain phase. If the measured value does not meet the target
            value or does not fulfill the test results validation criteria,
            follow Step 3.</t>
          </section>

          <section title="Step 3: Test Iteration">
            <t>Determine the achievable concurrent TCP connections within the
            test results validation criteria.</t>
          </section>
        </section>
      </section>
    </section>

    <section anchor="IANA" title="IANA Considerations">
      <t>The IANA has allocated 2001:0002::/48 for IPv6 testing, which is a
      48-bit prefix from the <xref target="RFC4773"/> pool. For IPv4 testing,
      the IP subnet 198.18.0.0/15 has been assigned to the BMWG by the IANA.
      This assignment was made to minimize the chance of conflict in case a
      testing device were to be accidentally connected to part of the
      Internet. The specific use of the IPv4 addresses is detailed in <xref
      target="RFC2544"/> Appendix C.</t>
    </section>

    <section anchor="Security_consieration" title="Security Considerations">
      <t>The primary goal of this document is to provide benchmarking
      terminology and methodology for next-generation network security
      devices. However, readers should be aware that there is some overlap
      between performance and security issues. Specifically, the optimal
      configuration for network security device performance may not be the
      most secure, and vice versa. The cipher suites recommended in this
      document are for test purposes only. Cipher suite recommendations for
      real deployments are outside the scope of this document.</t>
    </section>

    <section title="Contributors">
      <t>The following individuals contributed significantly to the creation
      of this document:</t>

      <t>Alex Samonte, Amritam Putatunda, Aria Eslambolchizadeh, David
      DeSanto, Jurrie Van Den Breekel, Ryan Liles, Samaresh Nair, Stephen
      Goudreault, and Tim Otto</t>
    </section>

    <section anchor="Acknowledgements" title="Acknowledgements">
      <t>The authors wish to acknowledge the members of NetSecOPEN for their
      participation in the creation of this document. Additionally, the
      following members need to be acknowledged:</t>

      <t>Anand Vijayan, Baski Mohan, Chao Guo, Chris Brown, Chris Marshall,
      Jay Lindenauer, Michael Shannon, Mike Deichman, Ray Vinson, Ryan Riese,
      Tim Carlin, and Toulnay Orkun</t>
    </section>
  </middle>

  <back>
    <references title="Normative References">
      <?rfc include="reference.RFC.2119"?>

      <?rfc include="reference.RFC.8174"?>
    </references>

    <references title="Informative References">
      <?rfc include="reference.RFC.3511"?>

      <?rfc include="reference.RFC.6815"?>

      <?rfc include="reference.RFC.2616"?>

      <?rfc include="reference.RFC.2647"?>

      <?rfc include="reference.RFC.2544"?>

      <?rfc include="reference.RFC.4733"?>
    </references>

    <section anchor="Test-Methodology-Security-Effectiveness-Evaluation"
             title="Test Methodology - Security Effectiveness Evaluation">
      <section title="Test Objective">
        <t>This test methodology verifies that the DUT/SUT is able to detect,
        prevent, and report the vulnerabilities.</t>

        <t>In this test, background test traffic will be generated in order
        to load the DUT/SUT. In parallel, the CVEs will be sent to the DUT/SUT
        in encrypted as well as clear text payload formats using a traffic
        generator. The selection of the CVEs is described in <xref
        target="security_effectiveness"/>. The following aspects are measured
        in this test:</t>

        <t><list style="symbols">
            <t>Number of blocked CVEs</t>

            <t>Number of bypassed (nonblocked) CVEs</t>

            <t>Background traffic performance (verify if the background
            traffic is impacted while sending CVEs toward the DUT/SUT)</t>

            <t>Accuracy of DUT/SUT statistics in terms of vulnerability
            reporting</t>
          </list></t>
      </section>

      <section title="Test Bed Setup">
        <t>The same test bed MUST be used for the security effectiveness test
        as well as for the benchmarking test cases defined in <xref
        target="Benchmarking"/>.</t>
      </section>

      <section title="Test Parameters">
        <t>In this section, the benchmarking test specific parameters SHOULD
        be defined.</t>

        <section title="DUT/SUT Configuration Parameters">
          <t>DUT/SUT configuration parameters MUST conform to the requirements
          defined in <xref target="DUT-SUT_Configuration"/>. The same DUT
          configuration MUST be used for the security effectiveness test as
          well as for the benchmarking test cases defined in <xref
          target="Benchmarking"/>. The DUT/SUT MUST be configured in inline
          mode, all detected attack traffic MUST be dropped, and the session
          SHOULD be reset.</t>
        </section>

        <section title="Test Equipment Configuration Parameters">
          <t>Test equipment configuration parameters MUST conform to the
          requirements defined in <xref format="default"
          target="Test_Equipment_Configuration"/>. The same client and server
          IP ranges used in the benchmarking test cases MUST be configured.
          In addition, the following parameters MUST be documented for this
          benchmarking test:</t>

          <t><list style="symbols">
              <t>Background Traffic: 45% of maximum HTTP throughput and 45% of
              maximum HTTPS throughput supported by the DUT/SUT (measured with
              object size 64 KByte in the benchmarking tests "HTTP(S)
              Throughput" defined in <xref pageno="false" target="HTTP_TP"/>
              and <xref pageno="false" target="HTTPS_TP"/>). A non-normative
              calculation example follows this list.</t>

              <t>RECOMMENDED CVE traffic transmission rate: 10 CVEs per
              second</t>

              <t>It is RECOMMENDED to generate each CVE multiple times
              (sequentially) at 10 CVEs per second</t>

              <t>Ciphers and keys for the encrypted CVE traffic MUST use the
              same cipher suites configured for the HTTPS traffic related
              benchmarking tests (<xref pageno="false" target="HTTPS_CPS"/> -
              <xref pageno="false" target="HTTPS_CC"/>)</t>
            </list></t>
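          <t>As a non-normative illustration, with assumed measured maxima of
          9 Gbit/s for HTTP throughput and 4 Gbit/s for HTTPS throughput (both
          with 64 KByte objects), the background traffic would be generated at
          the following rates:</t>

          <figure>
            <artwork><![CDATA[
# Non-normative example (Python): background traffic load.
# The measured throughput maxima are assumed values.
max_http_throughput = 9.0   # Gbit/s, from the HTTP Throughput test
max_https_throughput = 4.0  # Gbit/s, from the HTTPS Throughput test

background_http = 0.45 * max_http_throughput    # 4.05 Gbit/s
background_https = 0.45 * max_https_throughput  # 1.80 Gbit/s
]]></artwork>
          </figure>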
        </section>
      </section>

      <section anchor="CVE_Criteria" title="Test Results Validation Criteria">
        <t>The following criteria are defined as the test results validation
        criteria. Test results validation criteria MUST be monitored during
        the whole test duration.</t>

        <t><list style="letters">
            <t>Number of failed application transactions in the background
            traffic MUST be less than 0.01% of attempted transactions</t>

            <t>Number of terminated TCP connections of the background traffic
            (due to unexpected TCP RST sent by DUT/SUT) MUST be less than
            0.01% of total initiated TCP connections in the background
            traffic</t>

            <t>During the sustain phase, traffic SHOULD be forwarded at a
            constant rate</t>

            <t>False positives MUST NOT occur in the background traffic</t>
          </list></t>
      </section>

      <section title="Measurement">
        <t>The following KPI metrics MUST be reported for this test
        scenario:</t>

        <t>Mandatory KPIs:</t>

        <t><list style="symbols">
            <t>Blocked CVEs: It should be represented in the following
            ways:<list style="symbols">
                <t>Number of blocked CVEs out of total CVEs</t>

                <t>Percentage of blocked CVEs</t>
              </list></t>

            <t>Unblocked CVEs: It should be represented in the following
            ways:<list>
                <t>Number of unblocked CVEs out of total CVEs</t>

                <t>Percentage of unblocked CVEs</t>
              </list></t>

            <t>Background traffic behavior: it should be represented in one of
            the following ways (a non-normative classification sketch follows
            the KPI lists):<list style="symbols">
                <t>No impact (traffic transmission at a constant rate)</t>

                <t>Minor impact (e.g., small spikes of +/- 100 Mbit/s)</t>

                <t>Heavily impacted (e.g., large spikes and a reduction of the
                background throughput by more than 100 Mbit/s)</t>
              </list></t>

            <t>DUT/SUT reporting accuracy: DUT/SUT MUST report all detected
            vulnerabilities.</t>
          </list></t>

        <t>Optional KPIs:<list style="symbols">
            <t>List of unblocked CVEs</t>
          </list></t>
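        <t>The following non-normative sketch shows one way the background
        traffic behavior could be classified from recorded throughput samples.
        The 100 Mbit/s boundary follows the categories above; the sample
        values and the function name are assumptions for illustration
        only:</t>

        <figure>
          <artwork><![CDATA[
# Non-normative example (Python): classifying background traffic
# impact. Throughput samples (Mbit/s) are assumed values.
def classify_impact(samples, baseline):
    """Classify impact based on the largest deviation from baseline."""
    max_deviation = max(abs(s - baseline) for s in samples)
    if max_deviation == 0:
        return "No impact"        # traffic at a constant rate
    if max_deviation <= 100:
        return "Minor impact"     # spikes within +/- 100 Mbit/s
    return "Heavily impacted"     # deviation larger than 100 Mbit/s

samples = [9000, 8990, 9010, 8950]  # throughput during CVE emulation
baseline = 9000                     # sustain phase throughput
print(classify_impact(samples, baseline))  # -> "Minor impact"
]]></artwork>
        </figure>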
      </section>

      <section title="Test Procedures and Expected Results">
        <t>The test procedure is designed to measure the security
        effectiveness of the DUT/SUT during the sustain phase of the traffic
        load profile. The test procedure consists of two major steps. This
        test procedure MAY be repeated multiple times with different IPv4 and
        IPv6 traffic distributions.</t>

        <section title="Step 1: Background Traffic">
          <t>Generate the background traffic at the transmission rate defined
          in the parameter section.</t>

          <t>The DUT/SUT MUST reach the target objective (throughput) in the
          sustain phase. The measured KPIs during the sustain phase MUST meet
          the test results validation criteria a, b, c, and d defined in <xref
          pageno="false" target="CVE_Criteria"/>.</t>

          <t>If the KPI metrics do not meet the test results validation
          criteria, the test procedure MUST NOT be continued to "Step 2".</t>
        </section>

        <section title="Step 2: CVE Emulation">
          <t>While generating the background traffic (in sustain phase), send
          the CVE traffic as defined in the parameter section.</t>

          <t>The test equipment SHOULD start to measure and record all
          specified KPIs. The frequency of measurement MUST be less than 2
          seconds. Continue the test until all CVEs are sent.</t>

          <t>The measured KPIs MUST meet all the test results validation
          criteria a, b, c, and d defined in <xref pageno="false"
          target="CVE_Criteria"/>.</t>

          <t>In addition, the DUT/SUT SHOULD report the vulnerabilities
          correctly.</t>
        </section>
      </section>
    </section>

    <section anchor="DUT-Classification" title="DUT/SUT Classification">
      <t>This document attempts to classify the DUT/SUT in four different
      categories based on its maximum supported firewall throughput
      performance number defined in the vendor datasheet. This classification
      MAY help users determine the specific configuration scale (e.g., number
      of ACL entries), traffic profiles, and attack traffic profiles, scaling
      those proportionally to the DUT/SUT sizing category.</t>

      <t>The four different categories are Extra Small, Small, Medium, and
      Large. The RECOMMENDED throughput values for these categories are:</t>

      <t>Extra Small (XS) - supported throughput less than 1 Gbit/s</t>

      <t>Small (S) - supported throughput less than 5 Gbit/s</t>

      <t>Medium (M) - supported throughput greater than 5 Gbit/s and less than
      10 Gbit/s</t>

      <t>Large (L) - supported throughput greater than 10 Gbit/s</t>
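      <t>For illustration only, the classification above can be expressed as
      in the following non-normative sketch; the handling of the exact 1, 5,
      and 10 Gbit/s boundary values is an assumption, since the text does not
      specify it:</t>

      <figure>
        <artwork><![CDATA[
# Non-normative example (Python): DUT/SUT sizing category derived
# from the datasheet firewall throughput (in Gbit/s). Handling of
# the exact boundary values (1, 5, 10 Gbit/s) is an assumption.
def dut_category(throughput_gbps):
    if throughput_gbps < 1:
        return "Extra Small (XS)"
    if throughput_gbps < 5:
        return "Small (S)"
    if throughput_gbps <= 10:
        return "Medium (M)"
    return "Large (L)"

print(dut_category(3.5))  # -> "Small (S)"
]]></artwork>
      </figure>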
    </section>
  </back>
</rfc>
