2 Benchmarking Working Group M.
Kaeo 3 Internet-Draft Double Shot Security 4 Expires: October 5, 2009 T. Van Herck 5 Cisco Systems 6 April 3, 2009 8 Methodology for Benchmarking IPsec Devices 9 draft-ietf-bmwg-ipsec-meth-04 11 Status of this Memo 13 This Internet-Draft is submitted to IETF in full conformance with the 14 provisions of BCP 78 and BCP 79. This document may contain material 15 from IETF Documents or IETF Contributions published or made publicly 16 available before November 10, 2008. The person(s) controlling the 17 copyright in some of this material may not have granted the IETF 18 Trust the right to allow modifications of such material outside the 19 IETF Standards Process. Without obtaining an adequate license from 20 the person(s) controlling the copyright in such materials, this 21 document may not be modified outside the IETF Standards Process, and 22 derivative works of it may not be created outside the IETF Standards 23 Process, except to format it for publication as an RFC or to 24 translate it into languages other than English. 26 Internet-Drafts are working documents of the Internet Engineering 27 Task Force (IETF), its areas, and its working groups. Note that 28 other groups may also distribute working documents as Internet- 29 Drafts. 31 Internet-Drafts are draft documents valid for a maximum of six months 32 and may be updated, replaced, or obsoleted by other documents at any 33 time. It is inappropriate to use Internet-Drafts as reference 34 material or to cite them other than as "work in progress." 36 The list of current Internet-Drafts can be accessed at 37 http://www.ietf.org/ietf/1id-abstracts.txt. 39 The list of Internet-Draft Shadow Directories can be accessed at 40 http://www.ietf.org/shadow.html. 42 This Internet-Draft will expire on October 5, 2009. 44 Copyright Notice 46 Copyright (c) 2009 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 
49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents in effect on the date of 51 publication of this document (http://trustee.ietf.org/license-info). 52 Please review these documents carefully, as they describe your rights 53 and restrictions with respect to this document. 55 Abstract 57 The purpose of this draft is to describe methodology specific to the 58 benchmarking of IPsec IP forwarding devices. It builds upon the 59 tenets set forth in [RFC2544], [RFC2432] and other IETF Benchmarking 60 Methodology Working Group (BMWG) efforts. This document seeks to 61 extend these efforts to the IPsec paradigm. 63 The BMWG produces two major classes of documents: Benchmarking 64 Terminology documents and Benchmarking Methodology documents. The 65 Terminology documents present the benchmarks and other related terms. 66 The Methodology documents define the procedures required to collect 67 the benchmarks cited in the corresponding Terminology documents. 69 Table of Contents 71 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 72 2. Document Scope . . . . . . . . . . . . . . . . . . . . . . . . 5 73 3. Methodology Format . . . . . . . . . . . . . . . . . . . . . . 5 74 4. Key Words to Reflect Requirements . . . . . . . . . . . . . . 6 75 5. Test Considerations . . . . . . . . . . . . . . . . . . . . . 6 76 6. Test Topologies . . . . . . . . . . . . . . . . . . . . . . . 6 77 7. Test Parameters . . . . . . . . . . . . . . . . . . . . . . . 9 78 7.1. Frame Type . . . . . . . . . . . . . . . . . . . . . . . . 9 79 7.1.1. IP . . . . . . . . . . . . . . . . . . . . . . . . . . 9 80 7.1.2. UDP . . . . . . . . . . . . . . . . . . . . . . . . . 9 81 7.1.3. TCP . . . . . . . . . . . . . . . . . . . . . . . . . 9 82 7.1.4. NAT-Traversal . . . . . . . . . . . . . . . . . . . . 9 83 7.2. Frame Sizes . . . . . . . . . . . . . . . . . . . . . . . 10 84 7.3. Fragmentation and Reassembly . . . . . . . . . . . . . . . 
10 85 7.4. Time To Live . . . . . . . . . . . . . . . . . . . . . . . 11 86 7.5. Trial Duration . . . . . . . . . . . . . . . . . . . . . . 11 87 7.6. Security Context Parameters . . . . . . . . . . . . . . . 11 88 7.6.1. IPsec Transform Sets . . . . . . . . . . . . . . . . . 11 89 7.6.2. IPsec Topologies . . . . . . . . . . . . . . . . . . . 13 90 7.6.3. IKE Keepalives . . . . . . . . . . . . . . . . . . . . 14 91 7.6.4. IKE DH-group . . . . . . . . . . . . . . . . . . . . . 14 92 7.6.5. IKE SA / IPsec SA Lifetime . . . . . . . . . . . . . . 14 93 7.6.6. IPsec Selectors . . . . . . . . . . . . . . . . . . . 15 94 7.6.7. NAT-Traversal . . . . . . . . . . . . . . . . . . . . 15 95 8. Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 96 8.1. IPsec Tunnel Capacity . . . . . . . . . . . . . . . . . . 15 97 8.2. IPsec SA Capacity . . . . . . . . . . . . . . . . . . . . 16 98 9. Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . 17 99 9.1. Throughput baseline . . . . . . . . . . . . . . . . . . . 17 100 9.2. IPsec Throughput . . . . . . . . . . . . . . . . . . . . . 18 101 9.3. IPsec Encryption Throughput . . . . . . . . . . . . . . . 19 102 9.4. IPsec Decryption Throughput . . . . . . . . . . . . . . . 20 103 10. Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 104 10.1. Latency Baseline . . . . . . . . . . . . . . . . . . . . . 21 105 10.2. IPsec Latency . . . . . . . . . . . . . . . . . . . . . . 22 106 10.3. IPsec Encryption Latency . . . . . . . . . . . . . . . . . 23 107 10.4. IPsec Decryption Latency . . . . . . . . . . . . . . . . . 24 108 10.5. Time To First Packet . . . . . . . . . . . . . . . . . . . 24 109 11. Frame Loss Rate . . . . . . . . . . . . . . . . . . . . . . . 25 110 11.1. Frame Loss Baseline . . . . . . . . . . . . . . . . . . . 25 111 11.2. IPsec Frame Loss . . . . . . . . . . . . . . . . . . . . . 26 112 11.3. IPsec Encryption Frame Loss . . . . . . . . . . . . . . . 27 113 11.4. 
IPsec Decryption Frame Loss . . . . . . . . . . . . . . . 28 114 11.5. IKE Phase 2 Rekey Frame Loss . . . . . . . . . . . . . . . 28 115 12. IPsec Tunnel Setup Behavior . . . . . . . . . . . . . . . . . 29 116 12.1. IPsec Tunnel Setup Rate . . . . . . . . . . . . . . . . . 29 117 12.2. IKE Phase 1 Setup Rate . . . . . . . . . . . . . . . . . . 30 118 12.3. IKE Phase 2 Setup Rate . . . . . . . . . . . . . . . . . . 31 119 13. IPsec Rekey Behavior . . . . . . . . . . . . . . . . . . . . . 32 120 13.1. IKE Phase 1 Rekey Rate . . . . . . . . . . . . . . . . . . 32 121 13.2. IKE Phase 2 Rekey Rate . . . . . . . . . . . . . . . . . . 33 122 14. IPsec Tunnel Failover Time . . . . . . . . . . . . . . . . . . 34 123 15. DoS Attack Resiliency . . . . . . . . . . . . . . . . . . . . 36 124 15.1. Phase 1 DoS Resiliency Rate . . . . . . . . . . . . . . . 36 125 15.2. Phase 2 Hash Mismatch DoS Resiliency Rate . . . . . . . . 37 126 15.3. Phase 2 Anti Replay Attack DoS Resiliency Rate . . . . . . 37 127 16. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 39 128 17. References . . . . . . . . . . . . . . . . . . . . . . . . . . 39 129 17.1. Normative References . . . . . . . . . . . . . . . . . . . 39 130 17.2. Informative References . . . . . . . . . . . . . . . . . . 41 131 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 41 133 1. Introduction 135 This document defines a specific set of tests that can be used to 136 measure and report the performance characteristics of IPsec devices. 137 It extends the methodology already defined for benchmarking network 138 interconnecting devices in [RFC2544] to IPsec gateways and 139 additionally introduces tests which can be used to measure end-host 140 IPsec performance. 142 2. Document Scope 144 The primary focus of this document is to establish a performance 145 testing methodology for IPsec devices that support manual keying and 146 IKEv1. 
A separate document will be written specifically to address 147 testing using the updated IKEv2 specification. Both IPv4 and IPv6 148 addressing will be taken into consideration for all relevant test 149 methodologies. 151 The testing will be constrained to: 153 o Devices acting as IPsec gateways whose tests will pertain to both 154 IPsec tunnel and transport mode. 156 o Devices acting as IPsec end-hosts whose tests will pertain to both 157 IPsec tunnel and transport mode. 159 What is specifically out of scope is any testing that pertains to 160 considerations involving L2TP [RFC2661], GRE [RFC2784], BGP/MPLS 161 VPN's [RFC2547] and anything that does not specifically relate to the 162 establishment and tearing down of IPsec tunnels. 164 3. Methodology Format 166 The Methodology is described in the following format: 168 Objective: The reason for performing the test. 170 Topology: Physical test layout to be used as further clarified in 171 Section 6. 173 Procedure: Describes the method used for carrying out the test. 175 Reporting Format: Description of reporting of the test results. 177 4. Key Words to Reflect Requirements 179 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 180 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 181 document are to be interpreted as described in RFC 2119. RFC 2119 182 defines the use of these key words to help make the intent of 183 standards track documents as clear as possible. While this document 184 uses these keywords, this document is not a standards track document. 186 5. Test Considerations 188 Before any of the IPsec data plane benchmarking tests are carried 189 out, a baseline MUST be established, i.e. the particular test in 190 question must first be executed to measure its performance without 191 enabling IPsec.
Once both the Baseline clear text performance and 192 the performance using an IPsec enabled datapath have been measured, 193 the difference between the two can be discerned. 195 This document explicitly assumes that you MUST follow a logical 196 performance test methodology that includes the pre-configuration or 197 pre-population of routing protocols, ARP caches, IPv6 neighbor 198 discovery and all other extraneous IPv4 and IPv6 parameters required 199 to pass packets before the tester is ready to send IPsec protected 200 packets. IPv6 nodes that implement Path MTU Discovery [RFC1981] MUST 201 ensure that the PMTUD process has been completed before any of the 202 tests have been run. 204 For every IPsec data plane benchmarking test, the SA database (SADB) 205 MUST be created and populated with the appropriate SA's before any 206 actual test traffic is sent, i.e. the DUT/SUT MUST have Active 207 Tunnels. This may require manual commands to be executed on the DUT/ 208 SUT or the sending of appropriate learning frames to the DUT/SUT to 209 trigger IKE negotiation. This is to ensure that none of the control 210 plane parameters (such as IPsec Tunnel Setup Rates and IPsec Tunnel 211 Rekey Rates) are factored into these tests. 213 For control plane benchmarking tests (i.e. IPsec Tunnel Setup Rate 214 and IPsec Tunnel Rekey Rates), the authentication mechanism(s) used 215 for the authenticated Diffie-Hellman exchange MUST be reported. 217 6. Test Topologies 219 The tests can be performed on a DUT or a SUT. When the tests are 220 performed on a DUT, the Tester itself must be an IPsec peer. This 221 scenario is shown in Figure 1. When testing an IPsec Device as a 222 DUT, one consideration that needs to be taken into account is that 223 the Tester can introduce interoperability issues potentially limiting 224 the scope of the tests that can be executed.
On the other hand, this 225 method has the advantage that IPsec client side testing can be 226 performed, and that it is able to identify abnormalities and 227 asymmetry between the encryption and decryption behavior. 229 +------------+ 230 | | 231 +----[D] Tester [A]----+ 232 | | | | 233 | +------------+ | 234 | | 235 | +------------+ | 236 | | | | 237 +----[C] DUT [B]----+ 238 | | 239 +------------+ 241 Figure 1: Device Under Test Topology 243 The SUT scenario is depicted in Figure 2. Two identical DUTs are 244 used in this test setup, which more accurately simulates the use of 245 IPsec gateways. IPsec SA (i.e. AH/ESP transport or tunnel mode) 246 configurations can be tested using this set-up where the tester is 247 only required to send and receive cleartext traffic. 249 +------------+ 250 | | 251 +-----------------[F] Tester [A]-----------------+ 252 | | | | 253 | +------------+ | 254 | | 255 | +------------+ +------------+ | 256 | | | | | | 257 +----[E] DUTa [D]--------[C] DUTb [B]----+ 258 | | | | 259 +------------+ +------------+ 261 Figure 2: System Under Test Topology 263 When an IPsec DUT needs to be tested in a chassis failover topology, 264 a second DUT needs to be used as shown in Figure 3. This is the 265 high-availability equivalent of the topology as depicted in Figure 1. 266 Note that in this topology the Tester MUST be an IPsec peer. 268 +------------+ 269 | | 270 +---------[F] Tester [A]---------+ 271 | | | | 272 | +------------+ | 273 | | 274 | +------------+ | 275 | | | | 276 | +----[C] DUTa [B]----+ | 277 | | | | | | 278 | | +------------+ | | 279 +----+ +----+ 280 | +------------+ | 281 | | | | 282 +----[E] DUTb [D]----+ 283 | | 284 +------------+ 286 Figure 3: Redundant Device Under Test Topology 288 When no IPsec enabled Tester is available and an IPsec failover 289 scenario needs to be tested, the topology as shown in Figure 4 can be 290 used.
In this case, either the high availability pair of IPsec 291 devices can be used as an Initiator or as a Responder. The remaining 292 chassis will take the opposite role. 294 +------------+ 295 | | 296 +--------------------[H] Tester [A]----------------+ 297 | | | | 298 | +------------+ | 299 | | 300 | +------------+ | 301 | | | | 302 | +---[E] DUTa [D]---+ | 303 | | | | | +------------+ | 304 | | +------------+ | | | | 305 +---+ +----[C] DUTc [B]---+ 306 | +------------+ | | | 307 | | | | +------------+ 308 +---[G] DUTb [F]---+ 309 | | 310 +------------+ 312 Figure 4: Redundant System Under Test Topology 314 7. Test Parameters 316 For each individual test performed, all of the following parameters 317 MUST be explicitly reported in any test results. 319 7.1. Frame Type 321 7.1.1. IP 323 Both IPv4 and IPv6 frames MUST be used. The basic IPv4 header is 20 324 bytes long (which may be increased by the use of an options field). 325 The basic IPv6 header is a fixed 40 bytes and uses an extension field 326 for additional headers. Only the basic headers plus the IPsec AH 327 and/or ESP headers MUST be present. 329 It is RECOMMENDED that IPv4 and IPv6 frames be tested separately to 330 ascertain performance parameters for either IPv4 or IPv6 traffic. If 331 both IPv4 and IPv6 traffic are to be tested, the device SHOULD be 332 pre-configured for a dual-stack environment to handle both traffic 333 types. 335 It is RECOMMENDED that a test payload field is added in the payload 336 of each packet that allows flow identification and timestamping of a 337 received packet. 339 7.1.2. UDP 341 It is also RECOMMENDED that the test is executed using UDP as the L4 342 protocol. When using UDP, instrumentation data SHOULD be present in 343 the payload of the packet. It is OPTIONAL to have application 344 payload. 346 7.1.3. TCP 348 It is OPTIONAL to perform the tests with TCP as the L4 protocol but 349 in case this is considered, the TCP traffic is RECOMMENDED to be 350 stateful. 
With TCP as the L4 header it is possible that there will 351 not be enough room to add all instrumentation data to identify the 352 packets within the DUT/SUT. 354 7.1.4. NAT-Traversal 356 It is RECOMMENDED to test the scenario where IPsec protected traffic 357 must traverse network address translation (NAT) gateways. This is 358 commonly referred to as NAT-Traversal and requires UDP encapsulation. 360 7.2. Frame Sizes 362 Each test MUST be run with different frame sizes. It is RECOMMENDED 363 to use the following cleartext layer 2 frame sizes for IPv4 tests 364 over Ethernet media: 64, 128, 256, 512, 1024, 1280, and 1518 bytes, 365 per RFC2544 section 9 [RFC2544]. The four CRC bytes are included in 366 the frame size specified. 368 For Gigabit Ethernet supporting jumbo frames, the cleartext layer 2 369 frame sizes used are 64, 128, 256, 512, 1024, 1280, 1518, 2048, 3072, 370 4096, 5120, 6144, 7168, 8192, 9234 bytes. 372 For SONET these are: 47, 67, 128, 256, 512, 1024, 1280, 1518, 2048, 373 4096 bytes. 375 To accommodate IEEE 802.1q and IEEE 802.3as it is RECOMMENDED to 376 respectively include 1522 and 2000 byte frame sizes in all tests. 378 Since IPv6 requires that every link has an MTU of 1280 octets or 379 greater, it is MANDATORY to execute tests with cleartext layer 2 380 frame sizes that include 1280 and 1518 bytes. It is RECOMMENDED that 381 additional frame sizes are included in the IPv6 test execution, 382 including the maximum supported datagram size for the link type used. 384 7.3. Fragmentation and Reassembly 386 IPsec devices can and must fragment packets in specific scenarios. 387 Depending on whether the fragmentation is performed in software or 388 using specialized custom hardware, there may be a significant impact 389 on performance. 391 In IPv4, unless the DF (don't fragment) bit is set by the packet 392 source, the sender cannot guarantee that some intermediary device on 393 the way will not fragment an IPsec packet.
For transport mode IPsec, 394 the peers must be able to fragment and reassemble IPsec packets. 395 Reassembly of fragmented packets is especially important if an IPv4 396 port selector (or IPv6 transport protocol selector) is configured. 397 For tunnel mode IPsec, it is not a requirement. Note that 398 fragmentation is handled differently in IPv6 than in IPv4. In IPv6 399 networks, fragmentation is no longer done by intermediate routers in 400 the networks, but by the source node that originates the packet. The 401 path MTU discovery (PMTUD) mechanism is recommended for every IPv6 402 node to avoid fragmentation. 404 Packets generated by hosts that do not support PMTUD, and have not 405 set the DF bit in the IP header, will undergo fragmentation before 406 IPsec encapsulation. Packets generated by hosts that do support 407 PMTUD will use it locally to match the statically configured MTU on 408 the tunnel. If the MTU on the tunnel is set manually, it must be set 409 low enough to allow packets to pass through the smallest link on 410 the path. Otherwise, the packets that are too large to fit will be 411 dropped. 413 Fragmentation can occur due to encryption overhead and is closely 414 linked to the choice of transform used. Since each test SHOULD be 415 run with the maximum cleartext frame size (as per the previous section), 416 the encryption overhead will push frames past that maximum and 417 cause fragmentation to occur. All tests MUST be run with the DF bit not set. It 418 is also RECOMMENDED that all tests be run with the DF bit set. 420 7.4. Time To Live 422 The source frames should have a TTL value large enough to accommodate 423 the DUT/SUT. A minimum TTL of 64 is RECOMMENDED. 425 7.5. Trial Duration 427 The duration of the test portion of each trial SHOULD be at least 60 428 seconds. In the case of IPsec tunnel rekeying tests, the test 429 duration must be at least two times the IPsec tunnel rekey time to 430 ensure a reasonable worst case scenario test. 432 7.6.
Security Context Parameters 434 All of the security context parameters listed in section 7.13 of the 435 IPsec Benchmarking Terminology document MUST be reported. When 436 merely discussing the behavior of traffic flows through IPsec 437 devices, an IPsec context MUST be provided. In the cases where IKE 438 is configured (as opposed to using manually keyed tunnels), both an 439 IPsec and an IKE context MUST be provided. Additional considerations 440 for reporting security context parameters are detailed below. These 441 all MUST be reported. 443 7.6.1. IPsec Transform Sets 445 All tests should be done on different IPsec transform set 446 combinations. An IPsec transform specifies a single IPsec security 447 protocol (either AH or ESP) with its corresponding security 448 algorithms and mode. A transform set is a combination of individual 449 IPsec transforms designed to enact a specific security policy for 450 protecting a particular traffic flow. At a minimum, the transform set 451 must include one AH algorithm and a mode or one ESP algorithm and a 452 mode.
454 +-------------+------------------+----------------------+-----------+ 455 | ESP | Encryption | Authentication | Mode | 456 | Transform | Algorithm | Algorithm | | 457 +-------------+------------------+----------------------+-----------+ 458 | 1 | NULL | HMAC-SHA1-96 | Transport | 459 | 2 | NULL | HMAC-SHA1-96 | Tunnel | 460 | 3 | 3DES-CBC | HMAC-SHA1-96 | Transport | 461 | 4 | 3DES-CBC | HMAC-SHA1-96 | Tunnel | 462 | 5 | AES-CBC-128 | HMAC-SHA1-96 | Transport | 463 | 6 | AES-CBC-128 | HMAC-SHA1-96 | Tunnel | 464 | 7 | NULL | AES-XCBC-MAC-96 | Transport | 465 | 8 | NULL | AES-XCBC-MAC-96 | Tunnel | 466 | 9 | 3DES-CBC | AES-XCBC-MAC-96 | Transport | 467 | 10 | 3DES-CBC | AES-XCBC-MAC-96 | Tunnel | 468 | 11 | AES-CBC-128 | AES-XCBC-MAC-96 | Transport | 469 | 12 | AES-CBC-128 | AES-XCBC-MAC-96 | Tunnel | 470 +-------------+------------------+----------------------+-----------+ 472 Table 1 474 Testing of ESP Transforms 1-4 MUST be supported. Testing of ESP 475 Transforms 5-12 SHOULD be supported. 477 +--------------+--------------------------+-----------+ 478 | AH Transform | Authentication Algorithm | Mode | 479 +--------------+--------------------------+-----------+ 480 | 1 | HMAC-SHA1-96 | Transport | 481 | 2 | HMAC-SHA1-96 | Tunnel | 482 | 3 | AES-XCBC-MAC-96 | Transport | 483 | 4 | AES-XCBC-MAC-96 | Tunnel | 484 +--------------+--------------------------+-----------+ 486 Table 2 488 If AH is supported by the DUT/SUT, testing of AH Transforms 1 and 2 489 MUST be supported. Testing of AH Transforms 3 and 4 SHOULD be 490 supported. 492 Note that these tables are derived from the Cryptographic 493 Algorithms for AH and ESP requirements as described in [RFC4305]. 494 Optionally, other AH and/or ESP transforms MAY be supported.
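When automating test configuration and reporting, the ESP transform matrix of Table 1 can be captured as data. The following sketch is purely illustrative; the data structure and variable names are a hypothetical test-harness convention, not part of this methodology:

```python
# Illustrative encoding of the ESP transform matrix from Table 1.
# Algorithm names and transform numbers mirror the table; the list of
# tuples (number, encryption, authentication, mode) is an assumed format.
AUTHENTICATION = ["HMAC-SHA1-96", "AES-XCBC-MAC-96"]
ENCRYPTION = ["NULL", "3DES-CBC", "AES-CBC-128"]
MODES = ["Transport", "Tunnel"]

esp_transforms = []
number = 1
for auth in AUTHENTICATION:      # transforms 1-6 use HMAC-SHA1-96, 7-12 AES-XCBC-MAC-96
    for enc in ENCRYPTION:
        for mode in MODES:
            esp_transforms.append((number, enc, auth, mode))
            number += 1

# Per the text above: transforms 1-4 MUST be testable, 5-12 SHOULD be.
mandatory = [t for t in esp_transforms if t[0] <= 4]
```

Enumerating the matrix this way makes it easy to verify that every MUST-level combination was exercised and reported.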
496 +-----------------------+----+-----+ 497 | Transform Combination | AH | ESP | 498 +-----------------------+----+-----+ 499 | 1 | 1 | 1 | 500 | 2 | 2 | 2 | 501 | 3 | 1 | 3 | 502 | 4 | 2 | 4 | 503 +-----------------------+----+-----+ 505 Table 3 507 It is RECOMMENDED that the transforms shown in Table 3 be supported 508 for IPv6 traffic selectors since AH may be used with ESP in these 509 environments. Since AH will provide the overall authentication and 510 integrity, the ESP Authentication algorithm MUST be Null for these 511 tests. Optionally, other combined AH/ESP transform sets MAY be 512 supported. 514 7.6.2. IPsec Topologies 516 All tests should be done at various IPsec topology configurations and 517 the IPsec topology used MUST be reported. Since IPv6 requires the 518 implementation of manual keys for IPsec, both manual keying and IKE 519 configurations MUST be tested. 521 For manual keying tests, the IPsec SA's used should vary from 1 to 522 101, increasing in increments of 50. Although it is not expected 523 that manual keying (i.e. manually configuring the IPsec SA) will be 524 deployed in any operational setting with the exception of very small 525 controlled environments (i.e. less than 10 nodes), it is prudent to 526 test for potentially larger scale deployments. 528 For IKE specific tests, the following IPsec topologies MUST be 529 tested: 531 o 1 IKE SA & 2 IPsec SA (i.e. 1 IPsec Tunnel) 533 o 1 IKE SA & {max} IPsec SA's 535 o {max} IKE SA's & {max} IPsec SA's 537 It is RECOMMENDED to also test with the following IPsec topologies in 538 order to gain more datapoints: 540 o {max/2} IKE SA's & {(max/2) IKE SA's} IPsec SA's 542 o {max} IKE SA's & {(max) IKE SA's} IPsec SA's 544 7.6.3. IKE Keepalives 546 IKE keepalives track reachability of peers by sending hello packets 547 between peers. 
During the typical life of an IKE Phase 1 SA, packets 548 are only exchanged over this IKE Phase 1 SA when an IPsec IKE Quick 549 Mode (QM) negotiation is required at the expiration of the IPsec 550 Tunnel SA's. There is no standards-based mechanism for either type 551 of SA to detect the loss of a peer, except when the QM negotiation 552 fails. Most IPsec implementations use the Dead Peer Detection (i.e. 553 Keepalive) mechanism to determine whether connectivity has been lost 554 with a peer before the expiration of the IPsec Tunnel SA's. 556 All tests using IKEv1 MUST use the same IKE keepalive parameters. 558 7.6.4. IKE DH-group 560 There are 3 Diffie-Hellman groups which can be supported by 561 standards-compliant IPsec devices: 563 o DH-group 1: 768 bits 565 o DH-group 2: 1024 bits 567 o DH-group 14: 2048 bits 569 DH-group 2 MUST be tested, to support the new IKEv1 algorithm 570 requirements listed in [RFC4109]. It is recommended that the same 571 DH-group be used for both IKE Phase 1 and IKE Phase 2. All test 572 methodologies using IKE MUST report which DH-group was configured to 573 be used for IKE Phase 1 and IKE Phase 2 negotiations. 575 7.6.5. IKE SA / IPsec SA Lifetime 577 An IKE SA or IPsec SA is retained by each peer until the Tunnel 578 lifetime expires. IKE SA's and IPsec SA's have individual lifetime 579 parameters. In many real-world environments, the IPsec SA's will be 580 configured with shorter lifetimes than those of the IKE SA's. This 581 will force a rekey to happen more often for IPsec SA's. 583 When the initiator begins an IKE negotiation between itself and a 584 remote peer (the responder), an IKE policy can be selected only if 585 the lifetime of the responder's policy is shorter than or equal to 586 the lifetime of the initiator's policy. If the lifetimes are not the 587 same, the shorter lifetime will be used.
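The lifetime selection rule above can be sketched as follows; the function is purely illustrative (lifetimes in seconds), not an interface any implementation is required to expose:

```python
from typing import Optional

def negotiated_lifetime(initiator: int, responder: int) -> Optional[int]:
    """Illustrative sketch of the IKE lifetime selection rule.

    A policy can be selected only if the responder's lifetime is
    shorter than or equal to the initiator's; when the lifetimes
    differ, the shorter one is used.  Returns None when no policy
    can be selected.
    """
    if responder > initiator:
        return None  # responder's lifetime exceeds initiator's: no policy match
    return min(initiator, responder)

# e.g. an initiator offering 86400 s against a responder policy of
# 28800 s yields a negotiated lifetime of 28800 s.
```

This is why the next paragraph requires identical lifetimes on all devices in data plane tests: mismatched configurations silently shorten the effective lifetime and can trigger rekeys mid-trial.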
589 To avoid any incompatibilities in data plane benchmark testing, all 590 devices MUST have the same IKE SA lifetime as well as an identical 591 IPsec SA lifetime configured. Both SHALL be configured to a time 592 that exceeds the test duration and to a traffic volume that exceeds the total number of 593 bytes to be transmitted during the test. 595 Note that the IPsec SA lifetime MUST be equal to or less than the IKE 596 SA lifetime. Both the IKE SA lifetime and the IPsec SA lifetime used 597 MUST be reported. This parameter SHOULD be variable when testing IKE 598 rekeying performance. 600 7.6.6. IPsec Selectors 602 All tests MUST be performed using standard IPsec selectors as 603 described in [RFC2401] section 4.4.2. 605 7.6.7. NAT-Traversal 607 For any tests that include network address translation 608 considerations, the use of NAT-T in the test environment MUST be 609 recorded. 611 8. Capacity 613 8.1. IPsec Tunnel Capacity 615 Objective: Measure the maximum number of IPsec Tunnels or Active 616 Tunnels that can be sustained on an IPsec Device. 618 Topology: If no IPsec aware tester is available the test MUST be 619 conducted using a System Under Test Topology as depicted in 620 Figure 2. When an IPsec aware tester is available the test MUST 621 be executed using a Device Under Test Topology as depicted in 622 Figure 1. 624 Procedure: The IPsec Device under test initially MUST NOT have any 625 Active IPsec Tunnels. The Initiator (either a tester or an IPsec 626 peer) will start the negotiation of an IPsec Tunnel (a single 627 Phase 1 SA and a pair of Phase 2 SA's). 629 After it is detected that the tunnel is established, a limited 630 number (50 packets RECOMMENDED) SHALL be sent through the tunnel. 631 If all packets are received by the Responder (i.e. the DUT), a new 632 IPsec Tunnel may be attempted. 634 This process will be repeated until no more IPsec Tunnels can be 635 established.
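The iterative establish-and-probe loop described above lends itself to automation. The sketch below assumes a hypothetical tester interface (the `establish_tunnel` and `send_through_tunnel` names are illustrative, not a defined API), and omits the final distributed-traffic verification step:

```python
PROBE_PACKETS = 50  # RECOMMENDED number of packets sent through each new tunnel

def measure_tunnel_capacity(tester, max_tunnels=None):
    """Sketch of the iterative part of the IPsec Tunnel Capacity
    procedure: keep adding IPsec Tunnels until establishment or the
    50-packet probe fails, then report the count reached."""
    active = 0
    while max_tunnels is None or active < max_tunnels:
        if not tester.establish_tunnel():  # one Phase 1 SA + a pair of Phase 2 SA's
            break                          # no further tunnel could be set up
        received = tester.send_through_tunnel(active + 1, PROBE_PACKETS)
        if received < PROBE_PACKETS:       # probe loss: capacity reached
            break
        active += 1
    return active
```

The optional `max_tunnels` argument reflects the note below that an upper limit of Active IPsec Tunnels SHOULD be defined in the test.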
637 At the end of the test, a traffic pattern is sent to the initiator 638 that will be distributed over all Established Tunnels, where each 639 tunnel will need to propagate a fixed number of packets at a 640 minimum rate of, e.g., 5 pps. The aggregate rate of all Active 641 Tunnels SHALL NOT exceed the IPsec Throughput. When all packets 642 sent by the Initiator are received by the Responder, the test 643 has successfully determined the IKE SA Capacity. If, however, this 644 final check fails, the test needs to be re-executed with a lower 645 number of Active IPsec Tunnels. There MAY be a need to enforce a 646 lower number of Active IPsec Tunnels, i.e., an upper limit of Active 647 IPsec Tunnels SHOULD be defined in the test. 649 During the entire duration of the test rekeying of Tunnels SHALL 650 NOT be permitted. If a rekey event occurs, the test is invalid 651 and MUST be restarted. 653 Reporting Format: The reporting format should reflect the maximum 654 number of IPsec Tunnels that can be established when all packets 655 sent by the initiator are received by the responder. In addition the 656 Security Context parameters defined in Section 7.6 and utilized 657 for this test MUST be included in any statement of capacity. 659 8.2. IPsec SA Capacity 661 Objective: Measure the maximum number of IPsec SA's that can be 662 sustained on an IPsec Device. 664 Topology If no IPsec aware tester is available the test MUST be 665 conducted using a System Under Test Topology as depicted in 666 Figure 2. When an IPsec aware tester is available the test MUST 667 be executed using a Device Under Test Topology as depicted in 668 Figure 1. 670 Procedure: The IPsec Device under test initially MUST NOT have any 671 Active IPsec Tunnels. The Initiator (either a tester or an IPsec 672 peer) will start the negotiation of an IPsec Tunnel (a single 673 Phase 1 SA and a pair of Phase 2 SA's).
675 After it is detected that the tunnel is established, a limited 676 number of packets (50 RECOMMENDED) SHALL be sent through the tunnel. 677 If all packets are received by the Responder (i.e. the DUT), a new 678 pair of IPsec SA's may be attempted. This will be achieved by 679 offering a specific traffic pattern to the Initiator that matches 680 a given selector, thereby triggering the negotiation of a new 681 pair of IPsec SA's. 683 This process will be repeated until no more IPsec SA's can be 684 established. 686 At the end of the test, a traffic pattern is sent to the initiator 687 that will be distributed over all IPsec SA's, where each SA will 688 need to propagate a fixed number of packets at a minimum rate of 5 689 pps. When all packets sent by the Initiator are received by 690 the Responder, the test has successfully determined the IPsec SA 691 Capacity. If, however, this final check fails, the test needs to be 692 re-executed with a lower number of IPsec SA's. There MAY be a 693 need to enforce a lower number of IPsec SA's, i.e., an upper limit of 694 IPsec SA's SHOULD be defined in the test. 696 During the entire duration of the test rekeying of Tunnels SHALL 697 NOT be permitted. If a rekey event occurs, the test is invalid 698 and MUST be restarted. 700 Reporting Format: The reporting format SHOULD be the same as listed 701 in Section 8.1 for the maximum number of IPsec SAs. 703 9. Throughput 705 This section contains the description of the tests that are related 706 to the characterization of the packet forwarding of a DUT/SUT in an 707 IPsec environment. Some metrics extend the concept of throughput 708 presented in [RFC1242]. The notion of Forwarding Rate is cited in 709 [RFC2285]. 711 A separate test SHOULD be performed for Throughput tests using IPv4/ 712 UDP, IPv6/UDP, IPv4/TCP and IPv6/TCP traffic. 714 9.1. Throughput baseline 716 Objective: Measure the intrinsic cleartext throughput of a device 717 without the use of IPsec.
The throughput baseline methodology and 718 reporting format are derived from [RFC2544]. 720 Topology If no IPsec aware tester is available the test MUST be 721 conducted using a System Under Test Topology as depicted in 722 Figure 2. When an IPsec aware tester is available the test MUST 723 be executed using a Device Under Test Topology as depicted in 724 Figure 1. 726 Procedure: Send a specific number of frames that match the IPsec 727 SA selector(s) to be tested at a specific rate through the DUT and 728 then count the frames that are transmitted by the DUT. If the 729 count of offered frames is equal to the count of received frames, 730 the rate of the offered stream is increased and the test is rerun. 731 If fewer frames are received than were transmitted, the rate of 732 the offered stream is reduced and the test is rerun. 734 The throughput is the fastest rate at which the count of test 735 frames transmitted by the DUT is equal to the number of test 736 frames sent to it by the test equipment. 738 Note that the IPsec SA selectors refer to the IP addresses and 739 port numbers. So even though this is a test of only cleartext 740 traffic, the same type of traffic should be sent for the baseline 741 test as for tests utilizing IPsec. 743 Reporting Format: The results of the throughput test SHOULD be 744 reported in the form of a graph. If it is, the x coordinate 745 SHOULD be the frame size, the y coordinate SHOULD be the frame 746 rate. There SHOULD be at least two lines on the graph. There 747 SHOULD be one line showing the theoretical frame rate for the 748 media at the various frame sizes. The second line SHOULD be the 749 plot of the test results. Additional lines MAY be used on the 750 graph to report the results for each type of data stream tested. 751 Text accompanying the graph SHOULD indicate the protocol, data 752 stream format, and type of media used in the tests. 754 Any values for throughput rate MUST be expressed in packets per 755 second.
The rate MAY also be expressed in bits (or bytes) per 756 second if the vendor so desires. The statement of performance 757 MUST include: 759 * Measured maximum frame rate 761 * Size of the frame used 763 * Theoretical limit of the media for that frame size 765 * Type of protocol used in the test 767 9.2. IPsec Throughput 769 Objective: Measure the intrinsic throughput of a device utilizing 770 IPsec. 772 Topology If no IPsec aware tester is available the test MUST be 773 conducted using a System Under Test Topology as depicted in 774 Figure 2. When an IPsec aware tester is available the test MUST 775 be executed using a Device Under Test Topology as depicted in 776 Figure 1. 778 Procedure: Send a specific number of cleartext frames that match the 779 IPsec SA selector(s) at a specific rate through the DUT/SUT. DUTa 780 will encrypt the traffic and forward to DUTb which will in turn 781 decrypt the traffic and forward to the testing device. The 782 testing device counts the frames that are transmitted by the DUTb. 783 If the count of offered frames is equal to the count of received 784 frames, the rate of the offered stream is increased and the test 785 is rerun. If fewer frames are received than were transmitted, the 786 rate of the offered stream is reduced and the test is rerun. 788 The IPsec Throughput is the fastest rate at which the count of 789 test frames transmitted by the DUT/SUT is equal to the number of 790 test frames sent to it by the test equipment. 792 For tests using multiple IPsec SA's, the test traffic associated 793 with the individual traffic selectors defined for each IPsec SA 794 MUST be sent in a round robin type fashion to keep the test 795 balanced so as not to overload any single IPsec SA. 
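The rate-adjustment loop described in this and the preceding section converges on the throughput; one common (but not mandated) way to implement it is a binary search over the offered rate. In the non-normative sketch below, `trial(rate)` stands in for one complete test iteration and returns True when the received count equals the offered count; the bounds and resolution are arbitrary illustrative values:

```python
def throughput_search(trial, low_pps=1, high_pps=1_000_000,
                      resolution_pps=100):
    """Binary-search sketch of the throughput procedure: raise the
    rate after a loss-free trial, lower it after a lossy one, and
    report the fastest loss-free rate found."""
    best = 0
    while high_pps - low_pps > resolution_pps:
        rate = (low_pps + high_pps) // 2
        if trial(rate):          # all offered frames were received
            best = rate
            low_pps = rate       # try a higher rate next
        else:
            high_pps = rate      # loss observed: back off
    return best
```

RFC 2544 leaves the search strategy to the tester; a linear step-up/step-down search satisfies the text equally well.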
797 Reporting format: The reporting format SHALL be the same as listed 798 in Section 9.1 with the additional requirement that the Security 799 Context Parameters, as defined in Section 7.6, utilized for this 800 test MUST be included in any statement of performance. 802 9.3. IPsec Encryption Throughput 804 Objective: Measure the intrinsic DUT vendor specific IPsec 805 Encryption Throughput. 807 Topology The test MUST be conducted using a Device Under Test 808 Topology as depicted in Figure 1. 810 Procedure: Send a specific number of cleartext frames that match the 811 IPsec SA selector(s) at a specific rate to the DUT. The DUT will 812 receive the cleartext frames, perform IPsec operations and then 813 send the IPsec protected frame to the tester. Upon receipt of the 814 encrypted packet, the testing device will timestamp the packet(s) 815 and record the result. If the count of offered frames is equal to 816 the count of received frames, the rate of the offered stream is 817 increased and the test is rerun. If fewer frames are received 818 than were transmitted, the rate of the offered stream is reduced 819 and the test is rerun. 821 The IPsec Encryption Throughput is the fastest rate at which the 822 count of test frames transmitted by the DUT is equal to the number 823 of test frames sent to it by the test equipment. 825 For tests using multiple IPsec SA's, the test traffic associated 826 with the individual traffic selectors defined for each IPsec SA 827 MUST be sent in a round robin type fashion to keep the test 828 balanced so as not to overload any single IPsec SA. 830 Reporting format: The reporting format SHALL be the same as listed 831 in Section 9.1 with the additional requirement that the Security 832 Context Parameters, as defined in Section 7.6, utilized for this 833 test MUST be included in any statement of performance. 835 9.4. IPsec Decryption Throughput 837 Objective: Measure the intrinsic DUT vendor specific IPsec 838 Decryption Throughput. 
840 Topology The test MUST be conducted using a Device Under Test 841 Topology as depicted in Figure 1. 843 Procedure: Send a specific number of IPsec protected frames that 844 match the IPsec SA selector(s) at a specific rate to the DUT. The 845 DUT will receive the IPsec protected frames, perform IPsec 846 operations and then send the cleartext frame to the tester. Upon 847 receipt of the cleartext packet, the testing device will timestamp 848 the packet(s) and record the result. If the count of offered 849 frames is equal to the count of received frames, the rate of the 850 offered stream is increased and the test is rerun. If fewer 851 frames are received than were transmitted, the rate of the offered 852 stream is reduced and the test is rerun. 854 The IPsec Decryption Throughput is the fastest rate at which the 855 count of test frames transmitted by the DUT is equal to the number 856 of test frames sent to it by the test equipment. 858 For tests using multiple IPsec SA's, the test traffic associated 859 with the individual traffic selectors defined for each IPsec SA 860 MUST be sent in a round robin type fashion to keep the test 861 balanced so as not to overload any single IPsec SA. 863 Reporting format: The reporting format SHALL be the same as listed 864 in Section 9.1 with the additional requirement that the Security 865 Context Parameters, as defined in Section 7.6, utilized for this 866 test MUST be included in any statement of performance. 868 10. Latency 870 This section presents methodologies relating to the characterization 871 of the forwarding latency of a DUT/SUT. It extends the concept of 872 latency characterization presented in [RFC2544] to an IPsec 873 environment. 875 A separate test SHOULD be performed for latency tests using IPv4/ 876 UDP, IPv6/UDP, IPv4/TCP and IPv6/TCP traffic.
878 In order to lessen the effect of packet buffering in the DUT/SUT, the 879 latency tests MUST be run at the measured IPsec throughput level of 880 the DUT/SUT; IPsec latency at other offered loads is optional. 882 Lastly, [RFC1242] and [RFC2544] draw a distinction between two classes 883 of devices: "store and forward" and "bit-forwarding". Each class 884 impacts how latency is collected and subsequently presented. See the 885 related RFC's for more information. In practice, much of the test 886 equipment will collect the latency measurement for one class or the 887 other, and, if needed, mathematically derive the reported value by 888 the addition or subtraction of values accounting for medium 889 propagation delay of the packet, bit times to the timestamp trigger 890 within the packet, etc. Test equipment vendors SHOULD provide 891 documentation regarding the composition and calculation of latency 892 values being reported. The user of this data SHOULD understand the 893 nature of the latency values being reported, especially when 894 comparing results collected from multiple test vendors. (E.g., if 895 test vendor A presents a "store and forward" latency result and test 896 vendor B presents a "bit-forwarding" latency result, the user may 897 erroneously conclude the DUT has two differing sets of latency 898 values.) 900 10.1. Latency Baseline 902 Objective: Measure the intrinsic latency (min/avg/max) introduced by 903 a device without the use of IPsec. 905 Topology If no IPsec aware tester is available the test MUST be 906 conducted using a System Under Test Topology as depicted in 907 Figure 2. When an IPsec aware tester is available the test MUST 908 be executed using a Device Under Test Topology as depicted in 909 Figure 1. 911 Procedure: First determine the throughput for the DUT/SUT at each of 912 the listed frame sizes.
Send a stream of frames at a particular 913 frame size through the DUT at the determined throughput rate using 914 frames that match the IPsec SA selector(s) to be tested. The 915 stream SHOULD be at least 120 seconds in duration. An identifying 916 tag SHOULD be included in one frame after 60 seconds with the type 917 of tag being implementation dependent. The time at which this 918 frame is fully transmitted is recorded (timestamp A). The 919 receiver logic in the test equipment MUST recognize the tag 920 information in the frame stream and record the time at which the 921 tagged frame was received (timestamp B). 923 The latency is timestamp B minus timestamp A as per the relevant 924 definition from RFC 1242, namely latency as defined for store and 925 forward devices or latency as defined for bit forwarding devices. 927 The test MUST be repeated at least 20 times with the reported 928 value being the average of the recorded values. 930 Reporting Format The report MUST state which definition of latency 931 (from [RFC1242]) was used for this test. The latency results 932 SHOULD be reported in the format of a table with a row for each of 933 the tested frame sizes. There SHOULD be columns for the frame 934 size, the rate at which the latency test was run for that frame 935 size, for the media types tested, and for the resultant latency 936 values for each type of data stream tested. 938 10.2. IPsec Latency 940 Objective: Measure the intrinsic IPsec Latency (min/avg/max) 941 introduced by a device when using IPsec. 943 Topology If no IPsec aware tester is available the test MUST be 944 conducted using a System Under Test Topology as depicted in 945 Figure 2. When an IPsec aware tester is available the test MUST 946 be executed using a Device Under Test Topology as depicted in 947 Figure 1. 949 Procedure: First determine the throughput for the DUT/SUT at each of 950 the listed frame sizes. 
Send a stream of cleartext frames at a 951 particular frame size through the DUT/SUT at the determined 952 throughput rate using frames that match the IPsec SA selector(s) 953 to be tested. DUTa will encrypt the traffic and forward to DUTb 954 which will in turn decrypt the traffic and forward to the testing 955 device. 957 The stream SHOULD be at least 120 seconds in duration. An 958 identifying tag SHOULD be included in one frame after 60 seconds 959 with the type of tag being implementation dependent. The time at 960 which this frame is fully transmitted is recorded (timestamp A). 961 The receiver logic in the test equipment MUST recognize the tag 962 information in the frame stream and record the time at which the 963 tagged frame was received (timestamp B). 965 The IPsec Latency is timestamp B minus timestamp A as per the 966 relevant definition from [RFC1242], namely latency as defined for 967 store and forward devices or latency as defined for bit forwarding 968 devices. 970 The test MUST be repeated at least 20 times with the reported 971 value being the average of the recorded values. 973 Reporting format: The reporting format SHALL be the same as listed 974 in Section 10.1 with the additional requirement that the Security 975 Context Parameters, as defined in Section 7.6, utilized for this 976 test MUST be included in any statement of performance. 978 10.3. IPsec Encryption Latency 980 Objective: Measure the DUT vendor specific IPsec Encryption Latency 981 for IPsec protected traffic. 983 Topology The test MUST be conducted using a Device Under Test 984 Topology as depicted in Figure 1. 986 Procedure: Send a stream of cleartext frames at a particular frame 987 size through the DUT/SUT at the determined throughput rate using 988 frames that match the IPsec SA selector(s) to be tested. 990 The stream SHOULD be at least 120 seconds in duration. 
An 991 identifying tag SHOULD be included in one frame after 60 seconds 992 with the type of tag being implementation dependent. The time at 993 which this frame is fully transmitted is recorded (timestamp A). 994 The DUT will receive the cleartext frames, perform IPsec 995 operations and then send the IPsec protected frames to the tester. 996 Upon receipt of the encrypted frames, the receiver logic in the 997 test equipment MUST recognize the tag information in the frame 998 stream and record the time at which the tagged frame was received 999 (timestamp B). 1001 The IPsec Encryption Latency is timestamp B minus timestamp A as 1002 per the relevant definition from [RFC1242], namely latency as 1003 defined for store and forward devices or latency as defined for 1004 bit forwarding devices. 1006 The test MUST be repeated at least 20 times with the reported 1007 value being the average of the recorded values. 1009 Reporting format: The reporting format SHALL be the same as listed 1010 in Section 10.1 with the additional requirement that the Security 1011 Context Parameters, as defined in Section 7.6, utilized for this 1012 test MUST be included in any statement of performance. 1014 10.4. IPsec Decryption Latency 1016 Objective: Measure the DUT Vendor Specific IPsec Decryption Latency 1017 for IPsec protected traffic. 1019 Topology The test MUST be conducted using a Device Under Test 1020 Topology as depicted in Figure 1. 1022 Procedure: Send a stream of IPsec protected frames at a particular 1023 frame size through the DUT/SUT at the determined throughput rate 1024 using frames that match the IPsec SA selector(s) to be tested. 1026 The stream SHOULD be at least 120 seconds in duration. An 1027 identifying tag SHOULD be included in one frame after 60 seconds 1028 with the type of tag being implementation dependent. The time at 1029 which this frame is fully transmitted is recorded (timestamp A). 
1030 The DUT will receive the IPsec protected frames, perform IPsec 1031 operations and then send the cleartext frames to the tester. Upon 1032 receipt of the decrypted frames, the receiver logic in the test 1033 equipment MUST recognize the tag information in the frame stream 1034 and record the time at which the tagged frame was received 1035 (timestamp B). 1037 The IPsec Decryption Latency is timestamp B minus timestamp A as 1038 per the relevant definition from [RFC1242], namely latency as 1039 defined for store and forward devices or latency as defined for 1040 bit forwarding devices. 1042 The test MUST be repeated at least 20 times with the reported 1043 value being the average of the recorded values. 1045 Reporting format: The reporting format SHALL be the same as listed 1046 in Section 10.1 with the additional requirement that the Security 1047 Context Parameters, as defined in Section 7.6, utilized for this 1048 test MUST be included in any statement of performance. 1050 10.5. Time To First Packet 1052 Objective: Measure the time it takes to transmit a packet when no 1053 SA's have been established. 1055 Topology If no IPsec aware tester is available the test MUST be 1056 conducted using a System Under Test Topology as depicted in 1057 Figure 2. When an IPsec aware tester is available the test MUST 1058 be executed using a Device Under Test Topology as depicted in 1059 Figure 1. 1061 Procedure: Determine the IPsec throughput for the DUT/SUT at each of 1062 the listed frame sizes. Start with a DUT/SUT with Configured 1063 Tunnels. Send a stream of cleartext frames at a particular frame 1064 size through the DUT/SUT at the determined throughput rate using 1065 frames that match the IPsec SA selector(s) to be tested. 1067 The time at which the first frame is fully transmitted from the 1068 testing device is recorded as timestamp A. The time at which the 1069 testing device receives its first frame from the DUT/SUT is 1070 recorded as timestamp B. 
The Time To First Packet is the 1071 difference between timestamp B and timestamp A. 1073 Note that it is possible that packets can be lost during IPsec 1074 Tunnel establishment and that timestamps A and B are not required to 1075 be associated with a unique packet. 1077 Reporting format: The Time To First Packet results SHOULD be 1078 reported in the format of a table with a row for each of the 1079 tested frame sizes. There SHOULD be columns for the frame size, 1080 the rate at which the TTFP test was run for that frame size, for 1081 the media types tested, and for the resultant TTFP values for each 1082 type of data stream tested. The Security Context Parameters 1083 defined in Section 7.6 and utilized for this test MUST be included 1084 in any statement of performance. 1086 11. Frame Loss Rate 1088 This section presents methodologies relating to the characterization 1089 of frame loss rate, as defined in [RFC1242], in an IPsec environment. 1091 11.1. Frame Loss Baseline 1093 Objective: To determine the frame loss rate, as defined in 1094 [RFC1242], of a DUT/SUT throughout the entire range of input data 1095 rates and frame sizes without the use of IPsec. 1097 Topology If no IPsec aware tester is available the test MUST be 1098 conducted using a System Under Test Topology as depicted in 1099 Figure 2. When an IPsec aware tester is available the test MUST 1100 be executed using a Device Under Test Topology as depicted in 1101 Figure 1. 1103 Procedure: Send a specific number of frames at a specific rate 1104 through the DUT/SUT to be tested using frames that match the IPsec 1105 SA selector(s) to be tested and count the frames that are 1106 transmitted by the DUT/SUT.
The frame loss rate at each point is 1107 calculated using the following equation: 1109 ( ( input_count - output_count ) * 100 ) / input_count 1111 The first trial SHOULD be run for the frame rate that corresponds 1112 to 100% of the maximum rate for the nominal device throughput, 1113 which is the throughput that is actually supported on an interface 1114 for a specific packet size and may not be the theoretical maximum. 1115 Repeat the procedure for the rate that corresponds to 90% of the 1116 maximum rate used and then for 80% of this rate. This sequence 1117 SHOULD be continued (at reduced 10% intervals) until there are two 1118 successive trials in which no frames are lost. The maximum 1119 granularity of the trials MUST be 10% of the maximum rate; a finer 1120 granularity is encouraged. 1122 Reporting Format: The results of the frame loss rate test SHOULD be 1123 plotted as a graph. If this is done then the X axis MUST be the 1124 input frame rate as a percent of the theoretical rate for the 1125 media at the specific frame size. The Y axis MUST be the percent 1126 loss at the particular input rate. The left end of the X axis and 1127 the bottom of the Y axis MUST be 0 percent; the right end of the X 1128 axis and the top of the Y axis MUST be 100 percent. Multiple 1129 lines on the graph MAY be used to report the frame loss rate for 1130 different frame sizes, protocols, and types of data streams. 1132 11.2. IPsec Frame Loss 1134 Objective: To measure the frame loss rate of a device when using 1135 IPsec to protect the data flow. 1137 Topology When an IPsec aware tester is available the test MUST be 1138 executed using a Device Under Test Topology as depicted in 1139 Figure 1. If no IPsec aware tester is available the test MUST be 1140 conducted using a System Under Test Topology as depicted in 1141 Figure 2.
In this scenario, it is common practice to use an 1142 asymmetric topology, where a less powerful (lower throughput) DUT 1143 is used in conjunction with a much more powerful IPsec device. 1144 This topology variant can in many cases produce more accurate 1145 results than the symmetric variant depicted in the figure, since 1146 all bottlenecks are expected to be on the less powerful device. 1148 Procedure: Ensure that the DUT/SUT is in active tunnel mode. Send a 1149 specific number of cleartext frames that match the IPsec SA 1150 selector(s) to be tested at a specific rate through the DUT/SUT. 1151 DUTa will encrypt the traffic and forward to DUTb which will in 1152 turn decrypt the traffic and forward to the testing device. The 1153 testing device counts the frames that are transmitted by the DUTb. 1154 The frame loss rate at each point is calculated using the 1155 following equation: 1157 ( ( input_count - output_count ) * 100 ) / input_count 1159 The first trial SHOULD be run for the frame rate that corresponds 1160 to 100% of the maximum rate for the nominal device throughput, 1161 which is the throughput that is actually supported on an interface 1162 for a specific packet size and may not be the theoretical maximum. 1163 Repeat the procedure for the rate that corresponds to 90% of the 1164 maximum rate used and then for 80% of this rate. This sequence 1165 SHOULD be continued (at reducing 10% intervals) until there are 1166 two successive trials in which no frames are lost. The maximum 1167 granularity of the trials MUST be 10% of the maximum rate; a finer 1168 granularity is encouraged. 1170 Reporting Format: The reporting format SHALL be the same as listed 1171 in Section 11.1 with the additional requirement that the Security 1172 Context Parameters, as defined in Section 7.6, utilized for this 1173 test MUST be included in any statement of performance. 1175 11.3.
IPsec Encryption Frame Loss 1177 Objective: To measure the effect of IPsec encryption on the frame 1178 loss rate of a device. 1180 Topology The test MUST be conducted using a Device Under Test 1181 Topology as depicted in Figure 1. 1183 Procedure: Send a specific number of cleartext frames that match the 1184 IPsec SA selector(s) at a specific rate to the DUT. The DUT will 1185 receive the cleartext frames, perform IPsec operations and then 1186 send the IPsec protected frame to the tester. The testing device 1187 counts the encrypted frames that are transmitted by the DUT. The 1188 frame loss rate at each point is calculated using the following 1189 equation: 1191 ( ( input_count - output_count ) * 100 ) / input_count 1193 The first trial SHOULD be run for the frame rate that corresponds 1194 to 100% of the maximum rate for the nominal device throughput, 1195 which is the throughput that is actually supported on an interface 1196 for a specific packet size and may not be the theoretical maximum. 1197 Repeat the procedure for the rate that corresponds to 90% of the 1198 maximum rate used and then for 80% of this rate. This sequence 1199 SHOULD be continued (at reducing 10% intervals) until there are 1200 two successive trials in which no frames are lost. The maximum 1201 granularity of the trials MUST be 10% of the maximum rate; a finer 1202 granularity is encouraged. 1204 Reporting Format: The reporting format SHALL be the same as listed 1205 in Section 11.1 with the additional requirement that the Security 1206 Context Parameters, as defined in Section 7.6, utilized for this 1207 test MUST be included in any statement of performance. 1209 11.4. IPsec Decryption Frame Loss 1211 Objective: To measure the effects of IPsec decryption on the frame 1212 loss rate of a device. 1214 Topology: The test MUST be conducted using a Device Under Test 1215 Topology as depicted in Figure 1.
1217 Procedure: Send a specific number of IPsec protected frames that 1218 match the IPsec SA selector(s) at a specific rate to the DUT. The 1219 DUT will receive the IPsec protected frames, perform IPsec 1220 operations and then send the cleartext frames to the tester. The 1221 testing device counts the cleartext frames that are transmitted by 1222 the DUT. The frame loss rate at each point is calculated using 1223 the following equation: 1225 ( ( input_count - output_count ) * 100 ) / input_count 1227 The first trial SHOULD be run for the frame rate that corresponds 1228 to 100% of the maximum rate for the nominal device throughput, 1229 which is the throughput that is actually supported on an interface 1230 for a specific packet size and may not be the theoretical maximum. 1231 Repeat the procedure for the rate that corresponds to 90% of the 1232 maximum rate used and then for 80% of this rate. This sequence 1233 SHOULD be continued (at reducing 10% intervals) until there are 1234 two successive trials in which no frames are lost. The maximum 1235 granularity of the trials MUST be 10% of the maximum rate, a finer 1236 granularity is encouraged. 1238 Reporting format: The reporting format SHALL be the same as listed 1239 in Section 11.1 with the additional requirement that the Security 1240 Context Parameters, as defined in Section 7.6, utilized for this 1241 test MUST be included in any statement of performance. 1243 11.5. IKE Phase 2 Rekey Frame Loss 1245 Objective: To measure the frame loss due to an IKE Phase 2 (i.e. 1246 IPsec SA) Rekey event. 1248 Topology: The test MUST be conducted using a Device Under Test 1249 Topology as depicted in Figure 1. 1251 Procedure: The procedure is the same as in Section 11.2 with the 1252 exception that the IPsec SA lifetime MUST be configured to be one- 1253 third of the trial test duration or one-third of the total number 1254 of bytes to be transmitted during the trial duration. 
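The loss equation and the 100%/90%/80% step-down schedule used throughout this section can be sketched as follows. This is a non-normative Python illustration; `run_trial` is a placeholder for one trial returning offered and received frame counts:

```python
def frame_loss_pct(input_count, output_count):
    """The loss equation used throughout Section 11."""
    return ((input_count - output_count) * 100) / input_count

def loss_sweep(run_trial, max_rate_pps):
    """Step down from 100% of the nominal maximum rate in 10%
    intervals until two successive trials show no loss."""
    results = []
    zero_loss_streak = 0
    for pct in range(100, 0, -10):
        sent, received = run_trial(max_rate_pps * pct // 100)
        loss = frame_loss_pct(sent, received)
        results.append((pct, loss))
        zero_loss_streak = zero_loss_streak + 1 if loss == 0 else 0
        if zero_loss_streak == 2:
            break
    return results
```

A finer step than 10% is permitted (and encouraged) by the text; the sweep above uses the maximum allowed granularity.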
1256 Reporting format: The reporting format SHALL be the same as listed 1257 in Section 11.1 with the additional requirement that the Security 1258 Context Parameters, as defined in Section 7.6, utilized for this 1259 test MUST be included in any statement of performance. 1261 12. IPsec Tunnel Setup Behavior 1263 12.1. IPsec Tunnel Setup Rate 1265 Objective: Determine the rate at which IPsec Tunnels can be 1266 established. 1268 Topology: The test MUST be conducted using a Device Under Test 1269 Topology as depicted in Figure 1. 1271 Procedure: Configure the Responder (where the Responder is the DUT) 1272 with n IKE Phase 1 and corresponding IKE Phase 2 policies. Ensure 1273 that no SA's are established and that the Responder has 1274 Configured Tunnels for all n policies. Send a stream of 1275 cleartext frames at a particular frame size to the Responder at 1276 the determined throughput rate using frames with selectors 1277 matching the first IKE Phase 1 policy. As soon as the testing 1278 device receives its first frame from the Responder, it knows that 1279 the IPsec Tunnel is established and starts sending the next stream 1280 of cleartext frames using the same frame size and throughput rate 1281 but this time using selectors matching the second IKE Phase 1 1282 policy. This process is repeated until all configured IPsec 1283 Tunnels have been established. 1285 Some devices may support policy configurations where you do not 1286 need a one-to-one correspondence between an IKE Phase 1 policy and 1287 a specific IKE SA. In this case, the number of IKE Phase 1 1288 policies configured should be sufficient so that the transmitted 1289 (i.e. offered) test traffic will create 'n' IKE SAs.
The IPsec Tunnel Setup Rate is measured in Tunnels Per Second (TPS) and is determined by the following formula:

   Tunnel Setup Rate = n / [Duration of Test - (n * frame_transmit_time)] TPS

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that a minimum of n=100 IPsec Tunnels be tested in order to obtain a sample size large enough to reflect real-world behavior.

Reporting Format: The Tunnel Setup Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for:

   The throughput rate at which the test was run for the specified frame size

   The media type used for the test

   The resultant Tunnel Setup Rate values, in TPS, for the particular data stream tested for that frame size

The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

12.2. IKE Phase 1 Setup Rate

Objective: Determine the rate at which IKE SAs can be established.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Configure the Responder with n IKE Phase 1 and corresponding IKE Phase 2 policies. Ensure that no SAs are established and that the Responder has Configured Tunnels for all n policies. Send a stream of cleartext frames at a particular frame size through the Responder at the determined throughput rate, using frames with selectors matching the first IKE Phase 1 policy. As soon as the Phase 1 SA is established, the testing device starts sending the next stream of cleartext frames using the same frame size and throughput rate, this time using selectors matching the second IKE Phase 1 policy. This process is repeated until all configured IKE SAs have been established.
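A minimal sketch of the Tunnel Setup Rate formula above (the IKE SA Setup Rate uses the same form), with all inputs supplied by the tester:

```python
def setup_rate(n: int, test_duration_s: float, frame_transmit_time_s: float) -> float:
    """Setup rate per the formula above:
    n / [Duration of Test - (n * frame_transmit_time)], in tunnels
    (or IKE SAs) per second."""
    effective = test_duration_s - n * frame_transmit_time_s
    if effective <= 0:
        raise ValueError("test duration too short for n frame transmit times")
    return n / effective

# For example, the recommended n=100 tunnels established during a 20 s
# test with a 1 ms frame transmit time:
#   setup_rate(100, 20.0, 0.001)
```

Subtracting n frame transmit times removes the time spent serializing the trigger frames themselves, so only the negotiation time is counted.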
Some devices may support policy configurations that do not require a one-to-one correspondence between an IKE Phase 1 policy and a specific IKE SA. In this case, the number of IKE Phase 1 policies configured should be sufficient so that the transmitted (i.e., offered) test traffic will create n IKE SAs.

The IKE SA Setup Rate is determined by the following formula:

   IKE SA Setup Rate = n / [Duration of Test - (n * frame_transmit_time)] IKE SAs per second

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that a minimum of n=100 IKE SAs be tested in order to obtain a sample size large enough to reflect real-world behavior.

Reporting Format: The IKE Phase 1 Setup Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 1 Setup Rate values for each type of data stream tested. The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

12.3. IKE Phase 2 Setup Rate

Objective: Determine the rate at which IPsec SAs can be established.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Configure the Responder (where the Responder is the DUT) with a single IKE Phase 1 policy and n corresponding IKE Phase 2 policies. Ensure that no SAs are established and that the Responder has Configured Tunnels for all policies. Send a stream of cleartext frames at a particular frame size through the Responder at the determined throughput rate, using frames with selectors matching the first IPsec SA policy.
The time at which the IKE SA is established is recorded as timestamp_A. As soon as the Phase 1 SA is established, the IPsec SA negotiation will be initiated. Once the first IPsec SA has been established, start sending the next stream of cleartext frames using the same frame size and throughput rate, this time using selectors matching the second IKE Phase 2 policy. This process is repeated until all configured IPsec SAs have been established.

The IPsec SA Setup Rate is determined by the following formula, where test_duration and frame_transmit_time are expressed in units of seconds:

   IPsec SA Setup Rate = n / [test_duration - {timestamp_A + ((n-1) * frame_transmit_time)}] IPsec SAs per second

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that a minimum of n=100 IPsec SAs be tested in order to obtain a sample size large enough to reflect real-world behavior.

Reporting Format: The IKE Phase 2 Setup Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for:

   The throughput rate at which the test was run for the specified frame size

   The media type used for the test

   The resultant IKE Phase 2 Setup Rate values, in IPsec SAs per second, for the particular data stream tested for that frame size

The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

13. IPsec Rekey Behavior

The IPsec Rekey Behavior tests all need to be executed by an IPsec aware test device, since these tests need to be closely linked with the IKE FSM (Finite State Machine) and cannot be performed by offering a specific traffic pattern at either the Initiator or the Responder.

13.1. IKE Phase 1 Rekey Rate

Objective: Determine the maximum rate at which an IPsec Device can rekey IKE SAs.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: The IPsec Device under test should initially be set up with the determined IPsec Tunnel Capacity number of Active IPsec Tunnels.

The IPsec aware tester should then perform a binary search in which it initiates an IKE Phase 1 SA rekey for all Active IPsec Tunnels. For each IKE SA, the tester MUST record a timestamp when it initiates the rekey (timestamp_A) and another once the FSM declares the rekey completed (timestamp_B). The rekey time for a specific SA equals timestamp_B - timestamp_A. Once the iteration is complete, the tester has a table of rekey times for each IKE SA. The reciprocal of the average of this table is the IKE Phase 1 Rekey Rate.

It is expected that all IKE SAs rekey successfully. If this is not the case, the IPsec Tunnels are all re-established and the binary search proceeds to the next value of IKE SAs to rekey. The process repeats until a rate is determined at which all SAs rekey correctly within that timeframe.

Reporting Format: The IKE Phase 1 Rekey Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 1 Rekey Rate values for each type of data stream tested. The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

13.2. IKE Phase 2 Rekey Rate

Objective: Determine the maximum rate at which an IPsec Device can rekey IPsec SAs.
Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: The IPsec Device under test should initially be set up with the determined IPsec Tunnel Capacity number of Active IPsec Tunnels.

The IPsec aware tester should then perform a binary search in which it initiates an IKE Phase 2 SA rekey for all IPsec SAs. For each IPsec SA, the tester MUST record a timestamp when it initiates the rekey (timestamp_A) and another once the FSM declares the rekey completed (timestamp_B). The rekey time for a specific IPsec SA is timestamp_B - timestamp_A. Once the iteration is complete, the tester has a table of rekey times for each IPsec SA. The reciprocal of the average of this table is the IKE Phase 2 Rekey Rate.

It is expected that all IPsec SAs rekey successfully. If this is not the case, the IPsec Tunnels are all re-established and the binary search proceeds to the next value of IPsec SAs to rekey. The process repeats until a rate is determined at which all SAs rekey correctly within that timeframe.

Reporting Format: The IKE Phase 2 Rekey Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 2 Rekey Rate values for each type of data stream tested. The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

14. IPsec Tunnel Failover Time

This section presents methodologies relating to the characterization of the failover behavior of a DUT/SUT in an IPsec environment.
In order to lessen the effect of packet buffering in the DUT/SUT, the Tunnel Failover Time tests MUST be run at the measured IPsec Throughput level of the DUT. Tunnel Failover Time tests at other offered constant loads are OPTIONAL.

Tunnel Failovers can be achieved in various ways, for example:

o  Failover between two Software Instances of an IPsec stack.

o  Failover between two IPsec devices.

o  Failover between two Hardware IPsec Engines within a single IPsec Device.

o  Fallback to Software IPsec from Hardware IPsec within a single IPsec Device.

In all of the above cases there shall be at least one active IPsec device and a standby device. In some cases a dedicated standby device is not present, and two or more IPsec devices back each other up in case of a catastrophic device or stack failure. The standby (or other potentially active) IPsec Devices can back up the active IPsec Device in either a stateless or stateful manner. In the former case, Phase 1 SAs as well as Phase 2 SAs will need to be re-established in order to guarantee packet forwarding. In the latter case, the SPD and SADB of the active IPsec Device are synchronized to the standby IPsec Device to ensure immediate packet path recovery.

Objective: Determine the time required to fail over all Active Tunnels from an active IPsec Device to its standby device.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a Redundant System Under Test Topology as depicted in Figure 4. When an IPsec aware tester is available, the test MUST be executed using a Redundant Unit Under Test Topology as depicted in Figure 3. If the failover is being tested within a single DUT, e.g., crypto engine based failovers, a Device Under Test Topology as depicted in Figure 1 MAY be used as well.
Procedure: Before a failover can be triggered, the IPsec Device has to be in a state where the active stack/engine/node has the maximum supported number of Active Tunnels. The Tunnels will be transporting bidirectional traffic at the determined IPsec Throughput rate for the smallest frame size that the stack/engine/node is capable of forwarding (in most cases, this will be 64 bytes). The traffic should traverse all Active Tunnels in a round robin fashion.

When traffic is flowing through all Active Tunnels in steady state, a failover shall be triggered.

Both receiver sides of the testers will now look at sequence counters in the instrumented packets that are being forwarded through the Tunnels. Each Tunnel MUST have its own counter to keep track of packet loss on a per-SA basis.

If the tester observes no sequence number drops on any of the Tunnels in either direction, then the Failover Time MUST be listed as 'null', indicating that the failover was immediate and without any packet loss.

In all other cases, where the tester observes a gap in the sequence numbers of the instrumented payload of the packets, the tester will monitor all SAs and look for any Tunnels that are still not receiving packets after the failover. These will be marked as 'pending' Tunnels. Active Tunnels that are forwarding packets again without any packet loss shall be marked as 'recovered' Tunnels. In the background, the tester will keep monitoring all SAs to make sure that no packets are dropped; if packets are dropped, the Tunnel in question will be placed back in the 'pending' state.

Note that reordered packets can naturally occur after encryption or decryption. This is not a valid reason to place a Tunnel back in the 'pending' state.

The tester will wait until all Tunnels are marked as 'recovered'.
It will then find the SA with the largest gap in sequence numbers. Given that the frame size is fixed and the transmit time for that frame size can easily be calculated for the initiator links, a simple multiplication of the frame transmit time by the largest packet loss gap will yield the Tunnel Failover Time.

This test MUST be repeated for the single tunnel, maximum throughput failover case. It is RECOMMENDED that the test be repeated for various numbers of Active Tunnels as well as for different frame sizes and frame rates.

Reporting Format: The results shall be represented in a tabular format, where the first column lists the number of Active Tunnels, the second column the frame size, the third column the frame rate, and the fourth column the Tunnel Failover Time in milliseconds.

15. DoS Attack Resiliency

15.1. Phase 1 DoS Resiliency Rate

Objective: Determine how many invalid IKE Phase 1 sessions can be directed at a DUT before the Responder ignores or rejects valid IKE SA attempts.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Configure the Responder with n IKE Phase 1 and corresponding IKE Phase 2 policies, where n is equal to the IPsec Tunnel Capacity. Ensure that no SAs are established and that the Responder has Configured Tunnels for all n policies. Start with 95% of the offered test traffic containing an IKE Phase 1 policy mismatch (either a mismatched pre-shared key or an invalid certificate).

Send a burst of cleartext frames at a particular frame size through the Responder at the determined throughput rate, using frames with selectors matching all n policies. Once the test completes, check whether the 5% of correct IKE Phase 1 SAs have all been established.
If not, repeat the test, decrementing the percentage of mismatched IKE Phase 1 policies by 5% each time, until all correct IKE Phase 1 SAs have been established. Between each retest, ensure that the DUT is reset and cleared of all previous state information.

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that the test duration be 2 x (n / IKE SA Setup Rate), i.e., twice the time needed to establish n IKE SAs at the measured setup rate, to ensure that there is enough time to establish the valid IKE Phase 1 SAs.

Some devices may support policy configurations that do not require a one-to-one correspondence between an IKE Phase 1 policy and a specific IKE SA. In this case, the number of IKE Phase 1 policies configured should be sufficient so that the transmitted (i.e., offered) test traffic will create n IKE SAs.

Reporting Format: The result shall be represented as the highest percentage of invalid IKE Phase 1 messages that still allowed all the valid attempts to complete. The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

15.2. Phase 2 Hash Mismatch DoS Resiliency Rate

Objective: Determine the rate of Hash Mismatched packets at which a valid IPsec stream starts dropping frames.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: A stream of IPsec traffic is offered to a DUT for decryption. This stream consists of two microflows: one valid microflow and one that contains altered IPsec packets with a Hash Mismatch. The aggregate rate of both microflows MUST be equal to the IPsec Throughput and should therefore be able to pass the DUT. A binary search is applied to determine the ratio between the two microflows that causes packet loss on the valid microflow of traffic.
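The binary search over the invalid-traffic share described above can be sketched as follows. Here `run_trial` is a hypothetical tester hook assumed for illustration: it offers the two microflows at the given split, with the aggregate load fixed at the IPsec Throughput, and reports whether the valid microflow suffered packet loss.

```python
def find_resiliency_ratio(run_trial, resolution=0.001):
    """Binary search for the largest fraction of invalid (hash mismatched)
    traffic that does NOT cause packet loss on the valid microflow.
    run_trial(invalid_fraction) -> True if the valid microflow lost frames."""
    lo, hi = 0.0, 1.0          # lo: known-good fraction, hi: known-bad bound
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if run_trial(mid):     # valid microflow dropped frames at this split
            hi = mid
        else:
            lo = mid
    return lo                  # highest invalid share with no observed loss
```

The same search structure applies to the anti-replay test in Section 15.3, with the invalid microflow replaced by replayed packets.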
The test MUST be conducted with a single Active Tunnel. It MAY be repeated at various Tunnel scalability data points (e.g., 90%).

Reporting Format: The results shall be listed as PPS (of invalid traffic). The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance. The aggregate rate of both microflows, which acts as the offered testing load, MUST also be reported.

15.3. Phase 2 Anti Replay Attack DoS Resiliency Rate

Objective: Determine the rate of replayed packets at which a valid IPsec stream starts dropping frames.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: A stream of IPsec traffic is offered to a DUT for decryption. This stream consists of two microflows: one valid microflow and one that contains replayed packets of the valid microflow. The aggregate rate of both microflows MUST be equal to the IPsec Throughput and should therefore be able to pass the DUT. A binary search is applied to determine the ratio between the two microflows that causes packet loss on the valid microflow of traffic.

The replayed packets should always be offered within the window in which the original packet arrived, i.e., each packet MUST be replayed directly after the original packet has been sent to the DUT. The binary search SHOULD start with a low anti-replay count, where a single packet is replayed only every few anti-replay windows.
To increase the replay load, the following sequence should be used:

   *  Increase the replayed packets so that every window contains a single replayed packet

   *  Increase the replayed packets so that every packet within a window is replayed once

   *  Increase the replayed packets so that packets within a single window are replayed multiple times, following the same fill sequence

If the flow of replayed traffic equals the IPsec Throughput, the flow SHOULD be increased until the point where packet loss is observed on the replayed traffic flow.

The test MUST be conducted with a single Active Tunnel. It MAY be repeated at various Tunnel scalability data points. The test SHOULD also be repeated for all configurable Anti Replay Window Sizes.

Reporting Format: The results shall be listed as PPS (of replayed traffic). The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

16. Acknowledgements

The authors would like to acknowledge the following individuals for their help and participation in the compilation and editing of this document: Michele Bustos, Paul Hoffman, Benno Overeinder, Scott Poretsky and Yaron Sheffer.

17. References

17.1. Normative References

[RFC1242] Bradner, S., "Benchmarking terminology for network interconnection devices", RFC 1242, July 1991.

[RFC1981] McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery for IP version 6", RFC 1981, August 1996.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2285] Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, February 1998.

[RFC2393] Shacham, A., Monsour, R., Pereira, R., and M. Thomas, "IP Payload Compression Protocol (IPComp)", RFC 2393, December 1998.

[RFC2401] Kent, S. and R. Atkinson, "Security Architecture for the Internet Protocol", RFC 2401, November 1998.

[RFC2402] Kent, S. and R. Atkinson, "IP Authentication Header", RFC 2402, November 1998.

[RFC2403] Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within ESP and AH", RFC 2403, November 1998.

[RFC2404] Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96 within ESP and AH", RFC 2404, November 1998.

[RFC2405] Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher Algorithm With Explicit IV", RFC 2405, November 1998.

[RFC2406] Kent, S. and R. Atkinson, "IP Encapsulating Security Payload (ESP)", RFC 2406, November 1998.

[RFC2407] Piper, D., "The Internet IP Security Domain of Interpretation for ISAKMP", RFC 2407, November 1998.

[RFC2408] Maughan, D., Schneider, M., and M. Schertler, "Internet Security Association and Key Management Protocol (ISAKMP)", RFC 2408, November 1998.

[RFC2409] Harkins, D. and D. Carrel, "The Internet Key Exchange (IKE)", RFC 2409, November 1998.

[RFC2410] Glenn, R. and S. Kent, "The NULL Encryption Algorithm and Its Use With IPsec", RFC 2410, November 1998.

[RFC2411] Thayer, R., Doraswamy, N., and R. Glenn, "IP Security Document Roadmap", RFC 2411, November 1998.

[RFC2412] Orman, H., "The OAKLEY Key Determination Protocol", RFC 2412, November 1998.

[RFC2432] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC 2432, October 1998.

[RFC2451] Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher Algorithms", RFC 2451, November 1998.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

[RFC2547] Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547, March 1999.

[RFC2661] Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn, G., and B. Palter, "Layer Two Tunneling Protocol "L2TP"", RFC 2661, August 1999.
[RFC2784] Farinacci, D., Li, T., Hanks, S., Meyer, D., and P. Traina, "Generic Routing Encapsulation (GRE)", RFC 2784, March 2000.

[RFC4109] Hoffman, P., "Algorithms for Internet Key Exchange version 1 (IKEv1)", RFC 4109, May 2005.

[RFC4305] Eastlake, D., "Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH)", RFC 4305, December 2005.

[RFC4306] Kaufman, C., "Internet Key Exchange (IKEv2) Protocol", RFC 4306, December 2005.

[RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D. Dugatkin, "IPv6 Benchmarking Methodology for Network Interconnect Devices", RFC 5180, May 2008.

[I-D.ietf-ipsec-properties] Krywaniuk, A., "Security Properties of the IPsec Protocol Suite", draft-ietf-ipsec-properties-02 (work in progress), July 2002.

17.2. Informative References

[FIPS.186-1.1998] National Institute of Standards and Technology, "Digital Signature Standard", FIPS PUB 186-1, December 1998.

Authors' Addresses

Merike Kaeo
Double Shot Security
3518 Fremont Ave N #363
Seattle, WA 98103
USA

Phone: +1(310)866-0165
Email: kaeo@merike.com

Tim Van Herck
Cisco Systems
170 West Tasman Drive
San Jose, CA 95134-1706
USA

Phone: +1(408)853-2284
Email: herckt@cisco.com