Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Expires: September 2, 2008                                  T. Van Herck
                                                           Cisco Systems
                                                              March 2008

             Methodology for Benchmarking IPsec Devices
                    draft-ietf-bmwg-ipsec-meth-03

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 2, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2008).

Abstract

   The purpose of this draft is to describe methodology specific to the
   benchmarking of IPsec IP forwarding devices.  It builds upon the
   tenets set forth in RFC 2544, RFC 2432 and other IETF Benchmarking
   Methodology Working Group (BMWG) efforts.  This document seeks to
   extend these efforts to the IPsec paradigm.

   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related terms.
   The Methodology documents define the procedures required to collect
   the benchmarks cited in the corresponding Terminology documents.

Table of Contents

   1.   Introduction . . . . . . . . . . . . . . . . . . . . . . . .   4
   2.   Document Scope . . . . . . . . . . . . . . . . . . . . . . .   4
   3.   Methodology Format . . . . . . . . . . . . . . . . . . . . .   4
   4.   Key Words to Reflect Requirements  . . . . . . . . . . . . .   5
   5.   Test Considerations  . . . . . . . . . . . . . . . . . . . .   5
   6.   Test Topologies  . . . . . . . . . . . . . . . . . . . . . .   5
   7.   Test Parameters  . . . . . . . . . . . . . . . . . . . . . .   8
     7.1.   Frame Type . . . . . . . . . . . . . . . . . . . . . . .   8
       7.1.1.  IP . . . . . . . . . . . . . . . . . . . . . . . . .   8
       7.1.2.  UDP  . . . . . . . . . . . . . . . . . . . . . . . .   8
       7.1.3.  TCP  . . . . . . . . . . . . . . . . . . . . . . . .   8
     7.2.   Frame Sizes  . . . . . . . . . . . . . . . . . . . . . .   8
     7.3.   Fragmentation and Reassembly . . . . . . . . . . . . . .   9
     7.4.   Time To Live . . . . . . . . . . . . . . . . . . . . . .  10
     7.5.   Trial Duration . . . . . . . . . . . . . . . . . . . . .  10
     7.6.   Security Context Parameters  . . . . . . . . . . . . . .  10
       7.6.1.  IPsec Transform Sets . . . . . . . . . . . . . . . .  10
       7.6.2.  IPsec Topologies . . . . . . . . . . . . . . . . . .  12
       7.6.3.  IKE Keepalives . . . . . . . . . . . . . . . . . . .  13
       7.6.4.  IKE DH-group . . . . . . . . . . . . . . . . . . . .  13
       7.6.5.  IKE SA / IPsec SA Lifetime . . . . . . . . . . . . .  13
       7.6.6.  IPsec Selectors  . . . . . . . . . . . . . . . . . .  14
       7.6.7.  NAT-Traversal  . . . . . . . . . . . . . . . . . . .  14
   8.   Capacity . . . . . . . . . . . . . . . . . . . . . . . . . .  14
     8.1.   IPsec Tunnel Capacity  . . . . . . . . . . . . . . . . .  14
     8.2.   IPsec SA Capacity  . . . . . . . . . . . . . . . . . . .  15
   9.   Throughput . . . . . . . . . . . . . . . . . . . . . . . . .  16
     9.1.   Throughput Baseline  . . . . . . . . . . . . . . . . . .  16
     9.2.   IPsec Throughput . . . . . . . . . . . . . . . . . . . .  17
     9.3.   IPsec Encryption Throughput  . . . . . . . . . . . . . .  18
     9.4.   IPsec Decryption Throughput  . . . . . . . . . . . . . .  19
   10.  Latency  . . . . . . . . . . . . . . . . . . . . . . . . . .  19
     10.1.  Latency Baseline . . . . . . . . . . . . . . . . . . . .  20
     10.2.  IPsec Latency  . . . . . . . . . . . . . . . . . . . . .  21
     10.3.  IPsec Encryption Latency . . . . . . . . . . . . . . . .  22
     10.4.  IPsec Decryption Latency . . . . . . . . . . . . . . . .  23
     10.5.  Time To First Packet . . . . . . . . . . . . . . . . . .  23
   11.  Frame Loss Rate  . . . . . . . . . . . . . . . . . . . . . .  24
     11.1.  Frame Loss Baseline  . . . . . . . . . . . . . . . . . .  24
     11.2.  IPsec Frame Loss . . . . . . . . . . . . . . . . . . . .  25
     11.3.  IPsec Encryption Frame Loss  . . . . . . . . . . . . . .  26
     11.4.  IPsec Decryption Frame Loss  . . . . . . . . . . . . . .  26
     11.5.  IKE Phase 2 Rekey Frame Loss . . . . . . . . . . . . . .  27
   12.  IPsec Tunnel Setup Behavior  . . . . . . . . . . . . . . . .  28
     12.1.  IPsec Tunnel Setup Rate  . . . . . . . . . . . . . . . .  28
     12.2.  IKE Phase 1 Setup Rate . . . . . . . . . . . . . . . . .  29
     12.3.  IKE Phase 2 Setup Rate . . . . . . . . . . . . . . . . .  29
   13.  IPsec Rekey Behavior . . . . . . . . . . . . . . . . . . . .  31
     13.1.  IKE Phase 1 Rekey Rate . . . . . . . . . . . . . . . . .  31
     13.2.  IKE Phase 2 Rekey Rate . . . . . . . . . . . . . . . . .  32
   14.  IPsec Tunnel Failover Time . . . . . . . . . . . . . . . . .  32
   15.  DoS Attack Resiliency  . . . . . . . . . . . . . . . . . . .  34
     15.1.  Phase 1 DoS Resiliency Rate  . . . . . . . . . . . . . .  34
     15.2.  Phase 2 Hash Mismatch DoS Resiliency Rate  . . . . . . .  35
     15.3.  Phase 2 Anti Replay Attack DoS Resiliency Rate . . . . .  36
   16.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . .  37
   17.  References . . . . . . . . . . . . . . . . . . . . . . . . .  37
     17.1.  Normative References . . . . . . . . . . . . . . . . . .  37
     17.2.  Informative References . . . . . . . . . . . . . . . . .  39
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . .  39
   Intellectual Property and Copyright Statements . . . . . . . . . .  40

1.  Introduction

   This document defines a specific set of tests that can be used to
   measure and report the performance characteristics of IPsec devices.
   It extends the methodology already defined for benchmarking network
   interconnecting devices in [RFC2544] to IPsec gateways and
   additionally introduces tests which can be used to measure end-host
   IPsec performance.

2.  Document Scope

   The primary focus of this document is to establish a performance
   testing methodology for IPsec devices that support manual keying and
   IKEv1.  A separate document will be written specifically to address
   testing using the updated IKEv2 specification.  Both IPv4 and IPv6
   addressing will be taken into consideration for all relevant test
   methodologies.

   The testing will be constrained to:

   o  Devices acting as IPsec gateways, whose tests will pertain to both
      IPsec tunnel and transport mode.

   o  Devices acting as IPsec end-hosts, whose tests will pertain to
      both IPsec tunnel and transport mode.

   Specifically out of scope is any testing that pertains to
   considerations involving L2TP [RFC2661], GRE [RFC2784], BGP/MPLS
   VPNs [RFC2547], and anything that does not specifically relate to
   the establishment and tearing down of IPsec tunnels.

3.  Methodology Format

   The Methodology is described in the following format:

   Objective:  The reason for performing the test.
   Topology:  Physical test layout to be used, as further clarified in
      Section 6.

   Procedure:  Describes the method used for carrying out the test.

   Reporting Format:  Description of reporting of the test results.

4.  Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.  RFC 2119
   defines the use of these key words to help make the intent of
   standards track documents as clear as possible.  While this document
   uses these keywords, this document is not a standards track document.

5.  Test Considerations

   Before any of the IPsec data plane benchmarking tests are carried
   out, a baseline MUST be established; that is, the particular test in
   question must first be executed to measure its performance without
   IPsec enabled.  Once both the baseline cleartext performance and the
   performance using an IPsec-enabled datapath have been measured, the
   difference between the two can be discerned.

   This document explicitly assumes that the tester MUST follow a
   logical performance test methodology that includes the
   pre-configuration or pre-population of routing protocols, ARP
   caches, IPv6 neighbor discovery, and all other extraneous IPv4 and
   IPv6 parameters required to pass packets before the tester is ready
   to send IPsec protected packets.  IPv6 nodes that implement Path MTU
   Discovery [RFC1981] MUST ensure that the PMTUD process has been
   completed before any of the tests are run.

   For every IPsec data plane benchmarking test, the SA database (SADB)
   MUST be created and populated with the appropriate SA's before any
   actual test traffic is sent, i.e. the DUT/SUT MUST have Active
   Tunnels.
   This may require manual commands to be executed on the DUT/SUT or
   the sending of appropriate learning frames to the DUT/SUT to trigger
   IKE negotiation.  This ensures that none of the control plane
   parameters (such as IPsec Tunnel Setup Rates and IPsec Tunnel Rekey
   Rates) are factored into these tests.

   For control plane benchmarking tests (i.e. IPsec Tunnel Setup Rate
   and IPsec Tunnel Rekey Rate), the authentication mechanism(s) used
   for the authenticated Diffie-Hellman exchange MUST be reported.

6.  Test Topologies

   The tests can be performed on a DUT or an SUT.  When the tests are
   performed on a DUT, the Tester itself must be an IPsec peer.  This
   scenario is shown in Figure 1.  When testing an IPsec device as a
   DUT, one consideration that needs to be taken into account is that
   the Tester can introduce interoperability issues, potentially
   limiting the scope of the tests that can be executed.  On the other
   hand, this method has the advantage that IPsec client side testing
   can be performed, and it is able to identify abnormalities and
   asymmetry between the encryption and decryption behavior.

               +------------+
               |            |
         +----[D]  Tester  [A]----+
         |     |            |     |
         |     +------------+     |
         |                        |
         |     +------------+     |
         |     |            |     |
         +----[C]   DUT    [B]----+
               |            |
               +------------+

           Figure 1: Device Under Test Topology

   The SUT scenario is depicted in Figure 2.  Two identical DUTs are
   used in this test set-up, which more accurately simulates the use of
   IPsec gateways.  IPsec SA (i.e. AH/ESP transport or tunnel mode)
   configurations can be tested using this set-up, where the tester is
   only required to send and receive cleartext traffic.
                     +------------+
                     |            |
   +-----------------[F]  Tester [A]-----------------+
   |                 |            |                  |
   |                 +------------+                  |
   |                                                 |
   |     +------------+          +------------+      |
   |     |            |          |            |      |
   +-----[E]   DUTa  [D]--------[C]   DUTb  [B]------+
         |            |          |            |
         +------------+          +------------+

            Figure 2: System Under Test Topology

   When an IPsec DUT needs to be tested in a chassis failover topology,
   a second DUT needs to be used, as shown in Figure 3.  This is the
   high-availability equivalent of the topology depicted in Figure 1.
   Note that in this topology the Tester MUST be an IPsec peer.

                +------------+
                |            |
      +---------[F]  Tester [A]---------+
      |         |            |          |
      |         +------------+          |
      |                                 |
      |         +------------+          |
      |         |            |          |
      |    +----[C]   DUTa  [B]----+    |
      |    |    |            |     |    |
      |    |    +------------+     |    |
      +----+                       +----+
           |    +------------+     |
           |    |            |     |
           +----[E]   DUTb  [D]----+
                |            |
                +------------+

       Figure 3: Redundant Device Under Test Topology

   When no IPsec enabled Tester is available and an IPsec failover
   scenario needs to be tested, the topology shown in Figure 4 can be
   used.  In this case, the high availability pair of IPsec devices can
   be used either as an Initiator or as a Responder.  The remaining
   chassis will take the opposite role.

                   +------------+
                   |            |
   +---------------[H]  Tester [A]---------------+
   |               |            |                |
   |               +------------+                |
   |                                             |
   |    +------------+                           |
   |    |            |                           |
   | +--[E]   DUTa  [D]--+                       |
   | |  |            |   |    +------------+     |
   | |  +------------+   |    |            |     |
   +-+                   +----[C]   DUTc  [B]----+
     |  +------------+   |    |            |
     |  |            |   |    +------------+
     +--[G]   DUTb  [F]--+
        |            |
        +------------+

       Figure 4: Redundant System Under Test Topology

7.  Test Parameters

   For each individual test performed, all of the following parameters
   MUST be explicitly reported in any test results.

7.1.  Frame Type

7.1.1.  IP

   Both IPv4 and IPv6 frames MUST be used.
   The basic IPv4 header is 20 bytes long (which may be increased by
   the use of an options field).  The basic IPv6 header is a fixed 40
   bytes and uses extension headers for additional headers.  Only the
   basic headers plus the IPsec AH and/or ESP headers MUST be present.

   It is RECOMMENDED that IPv4 and IPv6 frames be tested separately to
   ascertain performance parameters for either IPv4 or IPv6 traffic.
   If both IPv4 and IPv6 traffic are to be tested, the device SHOULD be
   pre-configured for a dual-stack environment to handle both traffic
   types.

   It is RECOMMENDED that a test payload field be added to the payload
   of each packet to allow flow identification and timestamping of
   received packets.

7.1.2.  UDP

   It is also RECOMMENDED that the tests be executed using UDP as the
   L4 protocol.  When using UDP, instrumentation data SHOULD be present
   in the payload of the packet.  It is OPTIONAL to have application
   payload.

7.1.3.  TCP

   It is OPTIONAL to perform the tests with TCP as the L4 protocol; if
   this is considered, the TCP traffic is RECOMMENDED to be stateful.
   With TCP as the L4 header, there may not be enough room to add all
   of the instrumentation data needed to identify the packets within
   the DUT/SUT.

7.2.  Frame Sizes

   Each test MUST be run with different frame sizes.  It is RECOMMENDED
   to use the following cleartext layer 2 frame sizes for IPv4 tests
   over Ethernet media: 64, 128, 256, 512, 1024, 1280, and 1518 bytes,
   per section 9 of [RFC2544].  The four CRC bytes are included in the
   frame size specified.
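   As an illustration (not part of this methodology), the frame sizes
   above determine the theoretical maximum offered load on Ethernet:
   each frame on the wire is preceded by 8 bytes of preamble and
   followed by a 12-byte inter-frame gap.  The helper below is only a
   sketch; its name and defaults are ours, not the document's.

```python
# Theoretical maximum Ethernet frame rate for the RECOMMENDED frame
# sizes.  frame_size includes the 4 CRC bytes, as specified above.
PREAMBLE = 8   # bytes of preamble + start-of-frame delimiter
IFG = 12       # bytes of inter-frame gap

def max_frame_rate(frame_size: int, link_bps: int = 10**9) -> int:
    """Frames per second at full line rate on a link of link_bps."""
    bits_per_frame = (frame_size + PREAMBLE + IFG) * 8
    return link_bps // bits_per_frame

for size in (64, 128, 256, 512, 1024, 1280, 1518):
    print(size, max_frame_rate(size))
```

   For example, 64-byte frames at Gigabit Ethernet line rate yield the
   familiar figure of roughly 1.49 million frames per second.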
   For Gigabit Ethernet interfaces supporting jumbo frames, the
   cleartext layer 2 frame sizes used are 64, 128, 256, 512, 1024,
   1280, 1518, 2048, 3072, 4096, 5120, 6144, 7168, 8192, and 9234
   bytes.

   For SONET these are: 47, 67, 128, 256, 512, 1024, 1280, 1518, 2048,
   and 4096 bytes.

   To accommodate IEEE 802.1Q and IEEE 802.3as, it is RECOMMENDED to
   include 1522 and 2000 byte frame sizes, respectively, in all tests.

   Since IPv6 requires that every link has an MTU of 1280 octets or
   greater, it is REQUIRED to execute tests with cleartext layer 2
   frame sizes that include 1280 and 1518 bytes.  It is RECOMMENDED
   that additional frame sizes are included in the IPv6 test execution,
   including the maximum supported datagram size for the link type
   used.

7.3.  Fragmentation and Reassembly

   IPsec devices can and must fragment packets in specific scenarios.
   Depending on whether the fragmentation is performed in software or
   using specialized custom hardware, there may be a significant impact
   on performance.

   In IPv4, unless the DF (don't fragment) bit is set by the packet
   source, the sender cannot guarantee that some intermediary device
   along the path will not fragment an IPsec packet.  For transport
   mode IPsec, the peers must be able to fragment and reassemble IPsec
   packets.  Reassembly of fragmented packets is especially important
   if an IPv4 port selector (or IPv6 transport protocol selector) is
   configured.  For tunnel mode IPsec, it is not a requirement.  Note
   that fragmentation is handled differently in IPv6 than in IPv4.  In
   IPv6 networks, fragmentation is no longer done by intermediate
   routers, but by the source node that originates the packet.  The
   path MTU discovery (PMTUD) mechanism is recommended for every IPv6
   node to avoid fragmentation.
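   To see why encryption overhead forces fragmentation at the larger
   frame sizes, the ESP tunnel-mode packet growth can be estimated.
   The sketch below assumes IPv4 tunnel mode with 3DES-CBC and
   HMAC-SHA1-96 (8-byte IV, 8-byte cipher block, 12-byte ICV); the
   helper is illustrative only and the exact overhead depends on the
   transform chosen.

```python
def esp_tunnel_size(payload_len: int, iv_len: int = 8,
                    icv_len: int = 12, block: int = 8) -> int:
    """Estimated on-the-wire size of an IPv4 tunnel-mode ESP packet
    carrying a cleartext IP packet of payload_len bytes."""
    NEW_IP_HDR = 20          # outer IPv4 header (no options)
    SPI_SEQ = 8              # SPI (4 bytes) + sequence number (4 bytes)
    TRAILER = 2              # pad-length (1) + next-header (1)
    # Pad so that payload + trailer fills whole cipher blocks.
    pad = (-(payload_len + TRAILER)) % block
    return NEW_IP_HDR + SPI_SEQ + iv_len + payload_len + pad + TRAILER + icv_len

# A 1518-byte Ethernet frame carries a 1500-byte IP packet; after ESP
# tunnel encapsulation it exceeds the 1500-byte Ethernet MTU, so it
# must be fragmented (or dropped if the DF bit is set).
print(esp_tunnel_size(1500))
```

   Under these assumptions a 1500-byte cleartext packet grows to 1552
   bytes, which is why tests at the maximum cleartext frame size will
   exercise the fragmentation path.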
   Packets generated by hosts that do not support PMTUD, and have not
   set the DF bit in the IP header, will undergo fragmentation before
   IPsec encapsulation.  Packets generated by hosts that do support
   PMTUD will use it locally to match the statically configured MTU on
   the tunnel.  If the tunnel MTU is set manually, it must be set low
   enough to allow packets to pass through the smallest link on the
   path; otherwise, packets that are too large to fit will be dropped.

   Fragmentation can occur due to encryption overhead and is closely
   linked to the choice of transform used.  Since each test SHOULD be
   run with the maximum cleartext frame size (as per the previous
   section), fragmentation will occur when the encryption overhead
   causes the maximum frame size to be exceeded.  All tests MUST be run
   with the DF bit not set.  It is also RECOMMENDED that all tests be
   run with the DF bit set.

7.4.  Time To Live

   The source frames should have a TTL value large enough to
   accommodate the DUT/SUT.  A minimum TTL of 64 is RECOMMENDED.

7.5.  Trial Duration

   The duration of the test portion of each trial SHOULD be at least 60
   seconds.  In the case of IPsec tunnel rekeying tests, the test
   duration must be at least two times the IPsec tunnel rekey time to
   ensure a reasonable worst case scenario test.

7.6.  Security Context Parameters

   All of the security context parameters listed in section 7.13 of the
   IPsec Benchmarking Terminology document MUST be reported.  When
   merely discussing the behavior of traffic flows through IPsec
   devices, an IPsec context MUST be provided.  In the cases where IKE
   is configured (as opposed to using manually keyed tunnels), both an
   IPsec and an IKE context MUST be provided.  Additional
   considerations for reporting security context parameters are
   detailed below.  These all MUST be reported.

7.6.1.  IPsec Transform Sets

   All tests should be done on different IPsec transform set
   combinations.  An IPsec transform specifies a single IPsec security
   protocol (either AH or ESP) with its corresponding security
   algorithms and mode.  A transform set is a combination of individual
   IPsec transforms designed to enact a specific security policy for
   protecting a particular traffic flow.  At a minimum, the transform
   set must include one AH algorithm and a mode or one ESP algorithm
   and a mode.

   +-------------+------------------+----------------------+-----------+
   | ESP         | Encryption       | Authentication       | Mode      |
   | Transform   | Algorithm        | Algorithm            |           |
   +-------------+------------------+----------------------+-----------+
   | 1           | NULL             | HMAC-SHA1-96         | Transport |
   | 2           | NULL             | HMAC-SHA1-96         | Tunnel    |
   | 3           | 3DES-CBC         | HMAC-SHA1-96         | Transport |
   | 4           | 3DES-CBC         | HMAC-SHA1-96         | Tunnel    |
   | 5           | AES-CBC-128      | HMAC-SHA1-96         | Transport |
   | 6           | AES-CBC-128      | HMAC-SHA1-96         | Tunnel    |
   | 7           | NULL             | AES-XCBC-MAC-96      | Transport |
   | 8           | NULL             | AES-XCBC-MAC-96      | Tunnel    |
   | 9           | 3DES-CBC         | AES-XCBC-MAC-96      | Transport |
   | 10          | 3DES-CBC         | AES-XCBC-MAC-96      | Tunnel    |
   | 11          | AES-CBC-128      | AES-XCBC-MAC-96      | Transport |
   | 12          | AES-CBC-128      | AES-XCBC-MAC-96      | Tunnel    |
   +-------------+------------------+----------------------+-----------+

                                Table 1

   Testing of ESP Transforms 1-4 MUST be supported.  Testing of ESP
   Transforms 5-12 SHOULD be supported.

   +--------------+--------------------------+-----------+
   | AH Transform | Authentication Algorithm | Mode      |
   +--------------+--------------------------+-----------+
   | 1            | HMAC-SHA1-96             | Transport |
   | 2            | HMAC-SHA1-96             | Tunnel    |
   | 3            | AES-XCBC-MAC-96          | Transport |
   | 4            | AES-XCBC-MAC-96          | Tunnel    |
   +--------------+--------------------------+-----------+

                                Table 2

   Testing of AH Transforms 1 and 2 MUST be supported.  Testing of AH
   Transforms 3 and 4 SHOULD be supported.

   Note that these tables are derived from the Cryptographic Algorithms
   for AH and ESP requirements as described in [RFC4305].  Optionally,
   other AH and/or ESP transforms MAY be supported.

   +-----------------------+----+-----+
   | Transform Combination | AH | ESP |
   +-----------------------+----+-----+
   | 1                     | 1  | 1   |
   | 2                     | 2  | 2   |
   | 3                     | 1  | 3   |
   | 4                     | 2  | 4   |
   +-----------------------+----+-----+

                                Table 3

   It is RECOMMENDED that the transforms shown in Table 3 be supported
   for IPv6 traffic selectors, since AH may be used with ESP in these
   environments.  Since AH will provide the overall authentication and
   integrity, the ESP authentication algorithm MUST be NULL for these
   tests.  Optionally, other combined AH/ESP transform sets MAY be
   supported.

7.6.2.  IPsec Topologies

   All tests should be done with various IPsec topology configurations,
   and the IPsec topology used MUST be reported.  Since IPv6 requires
   the implementation of manual keys for IPsec, both manual keying and
   IKE configurations MUST be tested.

   For manual keying tests, the number of IPsec SA's used should vary
   from 1 to 101, increasing in increments of 50.  Although it is not
   expected that manual keying (i.e. manually configuring the IPsec SA)
   will be deployed in any operational setting with the exception of
   very small controlled environments (i.e. fewer than 10 nodes), it is
   prudent to test for potentially larger scale deployments.

   For IKE specific tests, the following IPsec topologies MUST be
   tested:

   o  1 IKE SA & 2 IPsec SA's (i.e. 1 IPsec Tunnel)

   o  1 IKE SA & {max} IPsec SA's

   o  {max} IKE SA's & {max} IPsec SA's

   It is RECOMMENDED to also test with the following IPsec topologies
   in order to gain more datapoints:

   o  {max/2} IKE SA's & {(max/2) IKE SA's} IPsec SA's

   o  {max} IKE SA's & {(max) IKE SA's} IPsec SA's

7.6.3.  IKE Keepalives

   IKE keepalives track reachability of peers by sending hello packets
   between peers.  During the typical life of an IKE Phase 1 SA,
   packets are only exchanged over this IKE Phase 1 SA when an IKE
   Quick Mode (QM) negotiation is required at the expiration of the
   IPsec Tunnel SA's.  There is no standards-based mechanism for either
   type of SA to detect the loss of a peer, except when the QM
   negotiation fails.  Most IPsec implementations use the Dead Peer
   Detection (i.e. keepalive) mechanism to determine whether
   connectivity has been lost with a peer before the expiration of the
   IPsec Tunnel SA's.

   All tests using IKEv1 MUST use the same IKE keepalive parameters.

7.6.4.  IKE DH-group

   There are three Diffie-Hellman groups that can be supported by
   standards-compliant IPsec devices:

   o  DH-group 1: 768 bits

   o  DH-group 2: 1024 bits

   o  DH-group 14: 2048 bits

   DH-group 2 MUST be tested, to support the IKEv1 algorithm
   requirements listed in [RFC4109].  It is RECOMMENDED that the same
   DH-group be used for both IKE Phase 1 and IKE Phase 2.  All test
   methodologies using IKE MUST report which DH-group was configured
   for IKE Phase 1 and IKE Phase 2 negotiations.

7.6.5.  IKE SA / IPsec SA Lifetime

   An IKE SA or IPsec SA is retained by each peer until the Tunnel
   lifetime expires.  IKE SA's and IPsec SA's have individual lifetime
   parameters.  In many real-world environments, the IPsec SA's will be
   configured with shorter lifetimes than those of the IKE SA's.  This
   will force a rekey to happen more often for IPsec SA's.

   When the initiator begins an IKE negotiation between itself and a
   remote peer (the responder), an IKE policy can be selected only if
   the lifetime of the responder's policy is shorter than or equal to
   the lifetime of the initiator's policy.  If the lifetimes are not
   the same, the shorter lifetime will be used.
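   The lifetime selection rule above reduces to taking the shorter of
   the two configured values.  A minimal sketch (function name is ours,
   for illustration only):

```python
def negotiated_lifetime(initiator_s: int, responder_s: int) -> int:
    """Lifetime (seconds) actually used for the negotiated SA.

    The responder's policy can only be selected when its lifetime is
    shorter than or equal to the initiator's; when the two values
    differ, the shorter one is used.
    """
    return min(initiator_s, responder_s)

# Example: initiator configured for 86400 s, responder for 28800 s;
# the SA is rekeyed on the responder's shorter 28800 s lifetime.
print(negotiated_lifetime(86400, 28800))
```

   This is why mismatched lifetimes between test devices silently
   change the effective rekey interval, motivating the identical
   lifetime configuration required below.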
   To avoid any incompatibilities in data plane benchmark testing, all
   devices MUST have the same IKE SA lifetime as well as an identical
   IPsec SA lifetime configured.  Both SHALL be configured to a time
   that exceeds the test duration timeframe and to a volume that
   exceeds the total number of bytes to be transmitted during the test.

   Note that the IPsec SA lifetime MUST be equal to or less than the
   IKE SA lifetime.  Both the IKE SA lifetime and the IPsec SA lifetime
   used MUST be reported.  This parameter SHOULD be variable when
   testing IKE rekeying performance.

7.6.6.  IPsec Selectors

   All tests MUST be performed using standard IPsec selectors as
   described in [RFC2401] section 4.4.2.

7.6.7.  NAT-Traversal

   For any tests that include network address translation
   considerations, the use of NAT-T in the test environment MUST be
   recorded.

8.  Capacity

8.1.  IPsec Tunnel Capacity

   Objective:  Measure the maximum number of IPsec Tunnels or Active
      Tunnels that can be sustained on an IPsec device.

   Topology:  If no IPsec aware tester is available, the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available, the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  The IPsec device under test initially MUST NOT have any
      Active IPsec Tunnels.  The Initiator (either a tester or an IPsec
      peer) will start the negotiation of an IPsec Tunnel (a single
      Phase 1 SA and a pair of Phase 2 SA's).

      After it is detected that the tunnel is established, a limited
      number of packets (50 packets RECOMMENDED) SHALL be sent through
      the tunnel.  If all packets are received by the Responder (i.e.
      the DUT), a new IPsec Tunnel may be attempted.

      This process will be repeated until no more IPsec Tunnels can be
      established.
615 At the end of the test, a traffic pattern is sent to the initiator 616 that will be distributed over all Established Tunnels, where each 617 tunnel will need to propagate a fixed number of packets at a 618 minimum rate (e.g. 5 pps). The aggregate rate of all Active 619 Tunnels SHALL NOT exceed the IPsec Throughput. When all packets 620 sent by the Initiator are received by the Responder, the test 621 has successfully determined the IPsec Tunnel Capacity. If, however, 622 this final check fails, the test needs to be re-executed with a lower 623 number of Active IPsec Tunnels. There MAY be a need to enforce a 624 lower number of Active IPsec Tunnels, i.e. an upper limit of Active 625 IPsec Tunnels SHOULD be defined in the test. 627 During the entire duration of the test, rekeying of Tunnels SHALL 628 NOT be permitted. If a rekey event occurs, the test is invalid 629 and MUST be restarted. 631 Reporting Format: The reporting format SHOULD reflect the maximum 632 number of IPsec Tunnels that can be established when all packets sent 633 by the initiator are received by the responder. In addition, the 634 Security Context parameters defined in Section 7.6 and utilized 635 for this test MUST be included in any statement of capacity. 637 8.2. IPsec SA Capacity 639 Objective: Measure the maximum number of IPsec SA's that can be 640 sustained on an IPsec Device. 642 Topology If no IPsec aware tester is available the test MUST be 643 conducted using a System Under Test Topology as depicted in 644 Figure 2. When an IPsec aware tester is available the test MUST 645 be executed using a Device Under Test Topology as depicted in 646 Figure 1. 648 Procedure: The IPsec Device under test initially MUST NOT have any 649 Active IPsec Tunnels. The Initiator (either a tester or an IPsec 650 peer) will start the negotiation of an IPsec Tunnel (a single 651 Phase 1 SA and a pair of Phase 2 SA's).
653 After it is detected that the tunnel is established, a limited 654 number of packets (50 RECOMMENDED) SHALL be sent through the tunnel. 655 If all packets are received by the Responder (i.e. the DUT), a new 656 pair of IPsec SA's may be attempted. This will be achieved by 657 offering a specific traffic pattern to the Initiator that matches 658 a given selector and therefore triggers the negotiation of a new 659 pair of IPsec SA's. 661 This process will be repeated until no more IPsec SA's can be 662 established. 664 At the end of the test, a traffic pattern is sent to the initiator 665 that will be distributed over all IPsec SA's, where each SA will 666 need to propagate a fixed number of packets at a minimum rate of 5 667 pps. When all packets sent by the Initiator are received by 668 the Responder, the test has successfully determined the IPsec SA 669 Capacity. If, however, this final check fails, the test needs to be 670 re-executed with a lower number of IPsec SA's. There MAY be a 671 need to enforce a lower number of IPsec SA's, i.e. an upper limit of 672 IPsec SA's SHOULD be defined in the test. 674 During the entire duration of the test, rekeying of Tunnels SHALL 675 NOT be permitted. If a rekey event occurs, the test is invalid 676 and MUST be restarted. 678 Reporting Format: The reporting format SHOULD be the same as listed 679 in Section 8.1 for the maximum number of IPsec SA's. 681 9. Throughput 683 This section contains the description of the tests that are related 684 to the characterization of the packet forwarding of a DUT/SUT in an 685 IPsec environment. Some metrics extend the concept of throughput 686 presented in [RFC1242]. The notion of Forwarding Rate is cited in 687 [RFC2285]. 689 Separate Throughput tests SHOULD be performed using IPv4/UDP, 690 IPv6/UDP, IPv4/TCP and IPv6/TCP traffic. 692 9.1. Throughput baseline 694 Objective: Measure the intrinsic cleartext throughput of a device 695 without the use of IPsec.
The throughput baseline methodology and 696 reporting format are derived from [RFC2544]. 698 Topology If no IPsec aware tester is available the test MUST be 699 conducted using a System Under Test Topology as depicted in 700 Figure 2. When an IPsec aware tester is available the test MUST 701 be executed using a Device Under Test Topology as depicted in 702 Figure 1. 704 Procedure: Send a specific number of frames that match the IPsec 705 SA selector(s) to be tested at a specific rate through the DUT and 706 then count the frames that are transmitted by the DUT. If the 707 count of offered frames is equal to the count of received frames, 708 the rate of the offered stream is increased and the test is rerun. 709 If fewer frames are received than were transmitted, the rate of 710 the offered stream is reduced and the test is rerun. 712 The throughput is the fastest rate at which the count of test 713 frames transmitted by the DUT is equal to the number of test 714 frames sent to it by the test equipment. 716 Reporting Format: The results of the throughput test SHOULD be 717 reported in the form of a graph. If it is, the x coordinate 718 SHOULD be the frame size, the y coordinate SHOULD be the frame 719 rate. There SHOULD be at least two lines on the graph. There 720 SHOULD be one line showing the theoretical frame rate for the 721 media at the various frame sizes. The second line SHOULD be the 722 plot of the test results. Additional lines MAY be used on the 723 graph to report the results for each type of data stream tested. 724 Text accompanying the graph SHOULD indicate the protocol, data 725 stream format, and type of media used in the tests. 727 We assume that if a single value is desired for advertising 728 purposes the vendor will select the rate for the minimum frame 729 size for the media. If this is done then the figure MUST be 730 expressed in packets per second. The rate MAY also be expressed 731 in bits (or bytes) per second if the vendor so desires.
The 732 statement of performance MUST include: 734 * Measured maximum frame rate 736 * Size of the frame used 738 * Theoretical limit of the media for that frame size 740 * Type of protocol used in the test 742 Even if a single value is used as part of the advertising copy, 743 the full table of results SHOULD be included in the product data 744 sheet. 746 9.2. IPsec Throughput 748 Objective: Measure the intrinsic throughput of a device utilizing 749 IPsec. 751 Topology If no IPsec aware tester is available the test MUST be 752 conducted using a System Under Test Topology as depicted in 753 Figure 2. When an IPsec aware tester is available the test MUST 754 be executed using a Device Under Test Topology as depicted in 755 Figure 1. 757 Procedure: Send a specific number of cleartext frames that match the 758 IPsec SA selector(s) at a specific rate through the DUT/SUT. DUTa 759 will encrypt the traffic and forward to DUTb which will in turn 760 decrypt the traffic and forward to the testing device. The 761 testing device counts the frames that are transmitted by the DUTb. 762 If the count of offered frames is equal to the count of received 763 frames, the rate of the offered stream is increased and the test 764 is rerun. If fewer frames are received than were transmitted, the 765 rate of the offered stream is reduced and the test is rerun. 767 The IPsec Throughput is the fastest rate at which the count of 768 test frames transmitted by the DUT/SUT is equal to the number of 769 test frames sent to it by the test equipment. 771 For tests using multiple IPsec SA's, the test traffic associated 772 with the individual traffic selectors defined for each IPsec SA 773 MUST be sent in a round robin type fashion to keep the test 774 balanced so as not to overload any single IPsec SA. 
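The iterative rate search described in the procedures of Sections 9.1 and 9.2 is commonly realized as a binary search over the offered rate. A minimal sketch, assuming a hypothetical `trial(rate)` hook that offers frames at the given rate and returns True when the DUT forwarded every offered frame:

```python
def find_throughput(trial, rate_min, rate_max, resolution):
    # Binary search for the fastest offered rate (frames/s) at which
    # no frames are lost; `resolution` bounds the search granularity.
    best = 0
    lo, hi = rate_min, rate_max
    while hi - lo >= resolution:
        rate = (lo + hi) / 2
        if trial(rate):
            best = rate   # no loss: remember this rate, try higher
            lo = rate
        else:
            hi = rate     # loss observed: back off
    return best
```

With a DUT that (hypothetically) forwards loss-free up to 1000 frames/s, the search converges on that rate to within the chosen resolution.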
776 Reporting format: The reporting format SHALL be the same as listed 777 in Section 9.1 with the additional requirement that the Security 778 Context Parameters, as defined in Section 7.6, utilized for this 779 test MUST be included in any statement of performance. 781 9.3. IPsec Encryption Throughput 783 Objective: Measure the intrinsic DUT vendor specific IPsec 784 Encryption Throughput. 786 Topology The test MUST be conducted using a Device Under Test 787 Topology as depicted in Figure 1. 789 Procedure: Send a specific number of cleartext frames that match the 790 IPsec SA selector(s) at a specific rate to the DUT. The DUT will 791 receive the cleartext frames, perform IPsec operations and then 792 send the IPsec protected frame to the tester. Upon receipt of the 793 encrypted packet, the testing device will timestamp the packet(s) 794 and record the result. If the count of offered frames is equal to 795 the count of received frames, the rate of the offered stream is 796 increased and the test is rerun. If fewer frames are received 797 than were transmitted, the rate of the offered stream is reduced 798 and the test is rerun. 800 The IPsec Encryption Throughput is the fastest rate at which the 801 count of test frames transmitted by the DUT is equal to the number 802 of test frames sent to it by the test equipment. 804 For tests using multiple IPsec SA's, the test traffic associated 805 with the individual traffic selectors defined for each IPsec SA 806 MUST be sent in a round robin type fashion to keep the test 807 balanced so as not to overload any single IPsec SA. 809 Reporting format: The reporting format SHALL be the same as listed 810 in Section 9.1 with the additional requirement that the Security 811 Context Parameters, as defined in Section 7.6, utilized for this 812 test MUST be included in any statement of performance. 814 9.4. IPsec Decryption Throughput 816 Objective: Measure the intrinsic DUT vendor specific IPsec 817 Decryption Throughput. 
819 Topology The test MUST be conducted using a Device Under Test 820 Topology as depicted in Figure 1. 822 Procedure: Send a specific number of IPsec protected frames that 823 match the IPsec SA selector(s) at a specific rate to the DUT. The 824 DUT will receive the IPsec protected frames, perform IPsec 825 operations and then send the cleartext frame to the tester. Upon 826 receipt of the cleartext packet, the testing device will timestamp 827 the packet(s) and record the result. If the count of offered 828 frames is equal to the count of received frames, the rate of the 829 offered stream is increased and the test is rerun. If fewer 830 frames are received than were transmitted, the rate of the offered 831 stream is reduced and the test is rerun. 833 The IPsec Decryption Throughput is the fastest rate at which the 834 count of test frames transmitted by the DUT is equal to the number 835 of test frames sent to it by the test equipment. 837 For tests using multiple IPsec SA's, the test traffic associated 838 with the individual traffic selectors defined for each IPsec SA 839 MUST be sent in a round robin type fashion to keep the test 840 balanced so as not to overload any single IPsec SA. 842 Reporting format: The reporting format SHALL be the same as listed 843 in Section 9.1 with the additional requirement that the Security 844 Context Parameters, as defined in Section 7.6, utilized for this 845 test MUST be included in any statement of performance. 847 10. Latency 849 This section presents methodologies relating to the characterization 850 of the forwarding latency of a DUT/SUT. It extends the concept of 851 latency characterization presented in [RFC2544] to an IPsec 852 environment. 854 Separate latency tests SHOULD be performed using IPv4/UDP, 855 IPv6/UDP, IPv4/TCP and IPv6/TCP traffic.
857 In order to lessen the effect of packet buffering in the DUT/SUT, the 858 latency tests MUST be run at the measured IPsec throughput level of 859 the DUT/SUT; IPsec latency at other offered loads is optional. 861 Lastly, [RFC1242] and [RFC2544] draw a distinction between two classes 862 of devices: "store and forward" and "bit-forwarding". Each class 863 impacts how latency is collected and subsequently presented. See the 864 related RFC's for more information. In practice, much of the test 865 equipment will collect the latency measurement for one class or the 866 other and, if needed, mathematically derive the reported value by 867 the addition or subtraction of values accounting for medium 868 propagation delay of the packet, bit times to the timestamp trigger 869 within the packet, etc. Test equipment vendors SHOULD provide 870 documentation regarding the composition and calculation of the latency 871 values being reported. The user of this data SHOULD understand the 872 nature of the latency values being reported, especially when 873 comparing results collected from multiple test vendors. (For example, if 874 test vendor A presents a "store and forward" latency result and test 875 vendor B presents a "bit-forwarding" latency result, the user may 876 erroneously conclude that the DUT has two differing sets of latency 877 values.) 879 10.1. Latency Baseline 881 Objective: Measure the intrinsic latency (min/avg/max) introduced by 882 a device without the use of IPsec. 884 Topology If no IPsec aware tester is available the test MUST be 885 conducted using a System Under Test Topology as depicted in 886 Figure 2. When an IPsec aware tester is available the test MUST 887 be executed using a Device Under Test Topology as depicted in 888 Figure 1. 890 Procedure: First determine the throughput for the DUT/SUT at each of 891 the listed frame sizes.
Send a stream of frames at a particular 892 frame size through the DUT at the determined throughput rate using 893 frames that match the IPsec SA selector(s) to be tested. The 894 stream SHOULD be at least 120 seconds in duration. An identifying 895 tag SHOULD be included in one frame after 60 seconds with the type 896 of tag being implementation dependent. The time at which this 897 frame is fully transmitted is recorded (timestamp A). The 898 receiver logic in the test equipment MUST recognize the tag 899 information in the frame stream and record the time at which the 900 tagged frame was received (timestamp B). 902 The latency is timestamp B minus timestamp A as per the relevant 903 definition from [RFC1242], namely latency as defined for store and 904 forward devices or latency as defined for bit forwarding devices. 906 The test MUST be repeated at least 20 times with the reported 907 value being the average of the recorded values. 909 Reporting Format The report MUST state which definition of latency 910 (from [RFC1242]) was used for this test. The latency results 911 SHOULD be reported in the format of a table with a row for each of 912 the tested frame sizes. There SHOULD be columns for the frame 913 size, the rate at which the latency test was run for that frame 914 size, for the media types tested, and for the resultant latency 915 values for each type of data stream tested. 917 10.2. IPsec Latency 919 Objective: Measure the intrinsic IPsec Latency (min/avg/max) 920 introduced by a device when using IPsec. 922 Topology If no IPsec aware tester is available the test MUST be 923 conducted using a System Under Test Topology as depicted in 924 Figure 2. When an IPsec aware tester is available the test MUST 925 be executed using a Device Under Test Topology as depicted in 926 Figure 1. 928 Procedure: First determine the throughput for the DUT/SUT at each of 929 the listed frame sizes.
Send a stream of cleartext frames at a 930 particular frame size through the DUT/SUT at the determined 931 throughput rate using frames that match the IPsec SA selector(s) 932 to be tested. DUTa will encrypt the traffic and forward to DUTb 933 which will in turn decrypt the traffic and forward to the testing 934 device. 936 The stream SHOULD be at least 120 seconds in duration. An 937 identifying tag SHOULD be included in one frame after 60 seconds 938 with the type of tag being implementation dependent. The time at 939 which this frame is fully transmitted is recorded (timestamp A). 940 The receiver logic in the test equipment MUST recognize the tag 941 information in the frame stream and record the time at which the 942 tagged frame was received (timestamp B). 944 The IPsec Latency is timestamp B minus timestamp A as per the 945 relevant definition from [RFC1242], namely latency as defined for 946 store and forward devices or latency as defined for bit forwarding 947 devices. 949 The test MUST be repeated at least 20 times with the reported 950 value being the average of the recorded values. 952 Reporting format: The reporting format SHALL be the same as listed 953 in Section 10.1 with the additional requirement that the Security 954 Context Parameters, as defined in Section 7.6, utilized for this 955 test MUST be included in any statement of performance. 957 10.3. IPsec Encryption Latency 959 Objective: Measure the DUT vendor specific IPsec Encryption Latency 960 for IPsec protected traffic. 962 Topology The test MUST be conducted using a Device Under Test 963 Topology as depicted in Figure 1. 965 Procedure: Send a stream of cleartext frames at a particular frame 966 size through the DUT/SUT at the determined throughput rate using 967 frames that match the IPsec SA selector(s) to be tested. 969 The stream SHOULD be at least 120 seconds in duration. 
An 970 identifying tag SHOULD be included in one frame after 60 seconds 971 with the type of tag being implementation dependent. The time at 972 which this frame is fully transmitted is recorded (timestamp A). 973 The DUT will receive the cleartext frames, perform IPsec 974 operations and then send the IPsec protected frames to the tester. 975 Upon receipt of the encrypted frames, the receiver logic in the 976 test equipment MUST recognize the tag information in the frame 977 stream and record the time at which the tagged frame was received 978 (timestamp B). 980 The IPsec Encryption Latency is timestamp B minus timestamp A as 981 per the relevant definition from [RFC1242], namely latency as 982 defined for store and forward devices or latency as defined for 983 bit forwarding devices. 985 The test MUST be repeated at least 20 times with the reported 986 value being the average of the recorded values. 988 Reporting format: The reporting format SHALL be the same as listed 989 in Section 10.1 with the additional requirement that the Security 990 Context Parameters, as defined in Section 7.6, utilized for this 991 test MUST be included in any statement of performance. 993 10.4. IPsec Decryption Latency 995 Objective: Measure the DUT Vendor Specific IPsec Decryption Latency 996 for IPsec protected traffic. 998 Topology The test MUST be conducted using a Device Under Test 999 Topology as depicted in Figure 1. 1001 Procedure: Send a stream of IPsec protected frames at a particular 1002 frame size through the DUT/SUT at the determined throughput rate 1003 using frames that match the IPsec SA selector(s) to be tested. 1005 The stream SHOULD be at least 120 seconds in duration. An 1006 identifying tag SHOULD be included in one frame after 60 seconds 1007 with the type of tag being implementation dependent. The time at 1008 which this frame is fully transmitted is recorded (timestamp A). 
1009 The DUT will receive the IPsec protected frames, perform IPsec 1010 operations and then send the cleartext frames to the tester. Upon 1011 receipt of the decrypted frames, the receiver logic in the test 1012 equipment MUST recognize the tag information in the frame stream 1013 and record the time at which the tagged frame was received 1014 (timestamp B). 1016 The IPsec Decryption Latency is timestamp B minus timestamp A as 1017 per the relevant definition from [RFC1242], namely latency as 1018 defined for store and forward devices or latency as defined for 1019 bit forwarding devices. 1021 The test MUST be repeated at least 20 times with the reported 1022 value being the average of the recorded values. 1024 Reporting format: The reporting format SHALL be the same as listed 1025 in Section 10.1 with the additional requirement that the Security 1026 Context Parameters, as defined in Section 7.6, utilized for this 1027 test MUST be included in any statement of performance. 1029 10.5. Time To First Packet 1031 Objective: Measure the time it takes to transmit a packet when no 1032 SA's have been established. 1034 Topology If no IPsec aware tester is available the test MUST be 1035 conducted using a System Under Test Topology as depicted in 1036 Figure 2. When an IPsec aware tester is available the test MUST 1037 be executed using a Device Under Test Topology as depicted in 1038 Figure 1. 1040 Procedure: Determine the IPsec throughput for the DUT/SUT at each of 1041 the listed frame sizes. Start with a DUT/SUT with Configured 1042 Tunnels. Send a stream of cleartext frames at a particular frame 1043 size through the DUT/SUT at the determined throughput rate using 1044 frames that match the IPsec SA selector(s) to be tested. 1046 The time at which the first frame is fully transmitted from the 1047 testing device is recorded as timestamp A. The time at which the 1048 testing device receives its first frame from the DUT/SUT is 1049 recorded as timestamp B. 
The Time To First Packet is the 1050 difference between Timestamp B and Timestamp A. 1052 Note that it is possible that packets can be lost during IPsec 1053 Tunnel establishment and that timestamps A and B are not required to 1054 be associated with a unique packet. 1056 Reporting format: The Time To First Packet results SHOULD be 1057 reported in the format of a table with a row for each of the 1058 tested frame sizes. There SHOULD be columns for the frame size, 1059 the rate at which the TTFP test was run for that frame size, for 1060 the media types tested, and for the resultant TTFP values for each 1061 type of data stream tested. The Security Context Parameters 1062 defined in Section 7.6 and utilized for this test MUST be included 1063 in any statement of performance. 1065 11. Frame Loss Rate 1067 This section presents methodologies relating to the characterization 1068 of frame loss rate, as defined in [RFC1242], in an IPsec environment. 1070 11.1. Frame Loss Baseline 1072 Objective: To determine the frame loss rate, as defined in 1073 [RFC1242], of a DUT/SUT throughout the entire range of input data 1074 rates and frame sizes without the use of IPsec. 1076 Topology If no IPsec aware tester is available the test MUST be 1077 conducted using a System Under Test Topology as depicted in 1078 Figure 2. When an IPsec aware tester is available the test MUST 1079 be executed using a Device Under Test Topology as depicted in 1080 Figure 1. 1082 Procedure: Send a specific number of frames at a specific rate 1083 through the DUT/SUT to be tested using frames that match the IPsec 1084 SA selector(s) to be tested and count the frames that are 1085 transmitted by the DUT/SUT. The frame loss rate at each point is 1086 calculated using the following equation: 1088 ( ( input_count - output_count ) * 100 ) / input_count 1090 The first trial SHOULD be run for the frame rate that corresponds 1091 to 100% of the maximum rate for the frame size on the input media.
1092 Repeat the procedure for the rate that corresponds to 90% of the 1093 maximum rate used and then for 80% of this rate. This sequence 1094 SHOULD be continued (at reducing 10% intervals) until there are two 1095 successive trials in which no frames are lost. The maximum 1096 granularity of the trials MUST be 10% of the maximum rate; a finer 1097 granularity is encouraged. 1099 Reporting Format: The results of the frame loss rate test SHOULD be 1100 plotted as a graph. If this is done then the X axis MUST be the 1101 input frame rate as a percent of the theoretical rate for the 1102 media at the specific frame size. The Y axis MUST be the percent 1103 loss at the particular input rate. The left end of the X axis and 1104 the bottom of the Y axis MUST be 0 percent; the right end of the X 1105 axis and the top of the Y axis MUST be 100 percent. Multiple 1106 lines on the graph MAY be used to report the frame loss rate for 1107 different frame sizes, protocols, and types of data streams. 1109 11.2. IPsec Frame Loss 1111 Objective: To measure the frame loss rate of a device when using 1112 IPsec to protect the data flow. 1114 Topology If no IPsec aware tester is available the test MUST be 1115 conducted using a System Under Test Topology as depicted in 1116 Figure 2. When an IPsec aware tester is available the test MUST 1117 be executed using a Device Under Test Topology as depicted in 1118 Figure 1. 1120 Procedure: Ensure that the DUT/SUT is in active tunnel mode. Send a 1121 specific number of cleartext frames that match the IPsec SA 1122 selector(s) to be tested at a specific rate through the DUT/SUT. 1123 DUTa will encrypt the traffic and forward to DUTb which will in 1124 turn decrypt the traffic and forward to the testing device. The 1125 testing device counts the frames that are transmitted by the DUTb.
1126 The frame loss rate at each point is calculated using the 1127 following equation: 1129 ( ( input_count - output_count ) * 100 ) / input_count 1131 The first trial SHOULD be run for the frame rate that corresponds 1132 to 100% of the maximum rate for the frame size on the input media. 1133 Repeat the procedure for the rate that corresponds to 90% of the 1134 maximum rate used and then for 80% of this rate. This sequence 1135 SHOULD be continued (at reducing 10% intervals) until there are 1136 two successive trials in which no frames are lost. The maximum 1137 granularity of the trials MUST be 10% of the maximum rate; a finer 1138 granularity is encouraged. 1140 Reporting Format: The reporting format SHALL be the same as listed 1141 in Section 11.1 with the additional requirement that the Security 1142 Context Parameters, as defined in Section 7.6, utilized for this 1143 test MUST be included in any statement of performance. 1145 11.3. IPsec Encryption Frame Loss 1147 Objective: To measure the effect of IPsec encryption on the frame 1148 loss rate of a device. 1150 Topology The test MUST be conducted using a Device Under Test 1151 Topology as depicted in Figure 1. 1153 Procedure: Send a specific number of cleartext frames that match the 1154 IPsec SA selector(s) at a specific rate to the DUT. The DUT will 1155 receive the cleartext frames, perform IPsec operations and then 1156 send the IPsec protected frame to the tester. The testing device 1157 counts the encrypted frames that are transmitted by the DUT. The 1158 frame loss rate at each point is calculated using the following 1159 equation: 1161 ( ( input_count - output_count ) * 100 ) / input_count 1163 The first trial SHOULD be run for the frame rate that corresponds 1164 to 100% of the maximum rate for the frame size on the input media. 1165 Repeat the procedure for the rate that corresponds to 90% of the 1166 maximum rate used and then for 80% of this rate.
This sequence 1167 SHOULD be continued (at reducing 10% intervals) until there are 1168 two successive trials in which no frames are lost. The maximum 1169 granularity of the trials MUST be 10% of the maximum rate; a finer 1170 granularity is encouraged. 1172 Reporting Format: The reporting format SHALL be the same as listed 1173 in Section 11.1 with the additional requirement that the Security 1174 Context Parameters, as defined in Section 7.6, utilized for this 1175 test MUST be included in any statement of performance. 1177 11.4. IPsec Decryption Frame Loss 1179 Objective: To measure the effect of IPsec decryption on the frame 1180 loss rate of a device. 1182 Topology: The test MUST be conducted using a Device Under Test 1183 Topology as depicted in Figure 1. 1185 Procedure: Send a specific number of IPsec protected frames that 1186 match the IPsec SA selector(s) at a specific rate to the DUT. The 1187 DUT will receive the IPsec protected frames, perform IPsec 1188 operations and then send the cleartext frames to the tester. The 1189 testing device counts the cleartext frames that are transmitted by 1190 the DUT. The frame loss rate at each point is calculated using 1191 the following equation: 1193 ( ( input_count - output_count ) * 100 ) / input_count 1195 The first trial SHOULD be run for the frame rate that corresponds 1196 to 100% of the maximum rate for the frame size on the input media. 1197 Repeat the procedure for the rate that corresponds to 90% of the 1198 maximum rate used and then for 80% of this rate. This sequence 1199 SHOULD be continued (at reducing 10% intervals) until there are 1200 two successive trials in which no frames are lost. The maximum 1201 granularity of the trials MUST be 10% of the maximum rate; a finer 1202 granularity is encouraged.
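The stepped-rate procedure shared by the tests in this section can be sketched as follows; `run_trial` is a hypothetical tester hook returning the offered and received frame counts at a given rate:

```python
def frame_loss_sweep(run_trial, max_rate, step_pct=10):
    # Illustrative Section 11 sweep: start at 100% of the maximum
    # media rate and step down by 10% until two successive trials
    # show zero loss, computing the loss at each point with the
    # ((input_count - output_count) * 100) / input_count formula.
    results = []
    clean_trials = 0
    pct = 100
    while pct > 0 and clean_trials < 2:
        input_count, output_count = run_trial(max_rate * pct / 100)
        loss = (input_count - output_count) * 100 / input_count
        results.append((pct, loss))
        clean_trials = clean_trials + 1 if loss == 0 else 0
        pct -= step_pct
    return results
```

For a DUT that (hypothetically) forwards loss-free only below 75% of the maximum rate, the sweep records loss at 100%, 90% and 80%, then stops after two clean trials at 70% and 60%.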
1204 Reporting format: The reporting format SHALL be the same as listed 1205 in Section 11.1 with the additional requirement that the Security 1206 Context Parameters, as defined in Section 7.6, utilized for this 1207 test MUST be included in any statement of performance. 1209 11.5. IKE Phase 2 Rekey Frame Loss 1211 Objective: To measure the frame loss due to an IKE Phase 2 (i.e. 1212 IPsec SA) Rekey event. 1214 Topology: The test MUST be conducted using a Device Under Test 1215 Topology as depicted in Figure 1. 1217 Procedure: The procedure is the same as in Section 11.2 with the 1218 exception that the IPsec SA lifetime MUST be configured to be 1219 one-third of the trial test duration or one-third of the total number 1220 of bytes to be transmitted during the trial duration. 1222 Reporting format: The reporting format SHALL be the same as listed 1223 in Section 11.1 with the additional requirement that the Security 1224 Context Parameters, as defined in Section 7.6, utilized for this 1225 test MUST be included in any statement of performance. 1227 12. IPsec Tunnel Setup Behavior 1229 12.1. IPsec Tunnel Setup Rate 1231 Objective: Determine the rate at which IPsec Tunnels can be 1232 established. 1234 Topology: The test MUST be conducted using a Device Under Test 1235 Topology as depicted in Figure 1. 1237 Procedure: Configure the Responder (where the Responder is the DUT) 1238 with n IKE Phase 1 and corresponding IKE Phase 2 policies. Ensure 1239 that no SA's are established and that the Responder has 1240 Configured Tunnels for all n policies. Send a stream of 1241 cleartext frames at a particular frame size to the Responder at 1242 the determined throughput rate using frames with selectors 1243 matching the first IKE Phase 1 policy.
As soon as the testing 1244 device receives its first frame from the Responder, it knows that 1245 the IPsec Tunnel is established and starts sending the next stream 1246 of cleartext frames using the same frame size and throughput rate 1247 but this time using selectors matching the second IKE Phase 1 1248 policy. This process is repeated until all configured IPsec 1249 Tunnels have been established. 1251 The IPsec Tunnel Setup Rate is measured in Tunnels Per Second 1252 (TPS) and is determined by the following formula: 1254 Tunnel Setup Rate = n / [Duration of Test - (n * 1255 frame_transmit_time)] TPS 1257 The IKE SA lifetime and the IPsec SA lifetime MUST be configured 1258 to exceed the duration of the test time. It is RECOMMENDED that 1259 n=100 IPsec Tunnels are tested at a minimum to get a large enough 1260 sample size to depict some real-world behavior. 1262 Reporting Format: The Tunnel Setup Rate results SHOULD be reported 1263 in the format of a table with a row for each of the tested frame 1264 sizes. There SHOULD be columns for: 1266 The throughput rate at which the test was run for the specified 1267 frame size 1269 The media type used for the test 1271 The resultant Tunnel Setup Rate values, in TPS, for the 1272 particular data stream tested for that frame size 1274 The Security Context Parameters defined in Section 7.6 and 1275 utilized for this test MUST be included in any statement of 1276 performance. 1278 12.2. IKE Phase 1 Setup Rate 1280 Objective: Determine the rate at which IKE SA's can be established. 1282 Topology: The test MUST be conducted using a Device Under Test 1283 Topology as depicted in Figure 1. 1285 Procedure: Configure the Responder with n IKE Phase 1 and 1286 corresponding IKE Phase 2 policies. Ensure that no SA's are 1287 established and that the Responder has Configured Tunnels for all n 1288 policies.
      Send a stream of cleartext frames at a particular frame size
      through the Responder at the determined throughput rate, using
      frames with selectors matching the first IKE Phase 1 policy.  As
      soon as the Phase 1 SA is established, the testing device starts
      sending the next stream of cleartext frames using the same frame
      size and throughput rate, but this time using selectors matching
      the second IKE Phase 1 policy.  This process is repeated until
      all configured IKE SAs have been established.

      The IKE SA Setup Rate is determined by the following formula:

      IKE SA Setup Rate = n / [Duration of Test -
                               (n * frame_transmit_time)]

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test.  It is RECOMMENDED that a
      minimum of n=100 IKE SAs be tested in order to obtain a large
      enough sample size to depict real-world behavior.

   Reporting Format: The IKE Phase 1 Setup Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the test was run for that frame size, the
      media types tested, and the resultant IKE Phase 1 Setup Rate
      values for each type of data stream tested.  The Security
      Context Parameters defined in Section 7.6 and utilized for this
      test MUST be included in any statement of performance.

12.3.  IKE Phase 2 Setup Rate

   Objective: Determine the rate at which IPsec SAs can be
      established.

   Topology: The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure: Configure the Responder (where the Responder is the DUT)
      with a single IKE Phase 1 policy and n corresponding IKE Phase 2
      policies.  Ensure that no SAs are established and that the
      Responder has Configured Tunnels for all policies.
      Send a stream of cleartext frames at a particular frame size
      through the Responder at the determined throughput rate, using
      frames with selectors matching the first IPsec SA policy.

      The time at which the IKE SA is established is recorded as
      timestamp_A.  As soon as the Phase 1 SA is established, the
      IPsec SA negotiation will be initiated.  Once the first IPsec SA
      has been established, start sending the next stream of cleartext
      frames using the same frame size and throughput rate, but this
      time using selectors matching the second IKE Phase 2 policy.
      This process is repeated until all configured IPsec SAs have
      been established.

      The IPsec SA Setup Rate is determined by the following formula,
      where test_duration and frame_transmit_time are expressed in
      units of seconds:

      IPsec SA Setup Rate = n / [test_duration - {timestamp_A +
                                ((n-1) * frame_transmit_time)}]
                            IPsec SAs per second

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test.  It is RECOMMENDED that a
      minimum of n=100 IPsec SAs be tested in order to obtain a large
      enough sample size to depict real-world behavior.

   Reporting Format: The IKE Phase 2 Setup Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for:

      The throughput rate at which the test was run for the specified
      frame size

      The media type used for the test

      The resultant IKE Phase 2 Setup Rate values, in IPsec SAs per
      second, for the particular data stream tested for that frame
      size

      The Security Context Parameters defined in Section 7.6 and
      utilized for this test MUST be included in any statement of
      performance.
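   As a sanity check on the arithmetic, the setup-rate formulas of
   Sections 12.1 through 12.3 can be sketched as below.  This is an
   illustrative sketch only: the function and variable names are not
   taken from this document, and the example figures (100 Tunnels, a
   60-second test, 64-byte frames on Gigabit Ethernet) are
   hypothetical.

```python
# Illustrative arithmetic for the setup-rate formulas in Sections
# 12.1 through 12.3.  All names and figures are hypothetical examples,
# not definitions taken from this document.

def tunnel_setup_rate(n, test_duration, frame_transmit_time):
    """Sections 12.1/12.2: n Tunnels (or IKE SAs) divided by the test
    duration minus the time spent transmitting the n trigger frames."""
    return n / (test_duration - n * frame_transmit_time)

def ipsec_sa_setup_rate(n, test_duration, timestamp_a, frame_transmit_time):
    """Section 12.3: the IKE SA setup time (timestamp_A) and the (n-1)
    subsequent trigger frames are excluded from the divisor."""
    return n / (test_duration - (timestamp_a + (n - 1) * frame_transmit_time))

# Example: 100 Tunnels over a 60-second test with 64-byte frames on
# Gigabit Ethernet (84 bytes on the wire, i.e. roughly 0.67 us/frame).
ft = (64 + 20) * 8 / 1e9
rate_12_1 = tunnel_setup_rate(100, 60.0, ft)         # just under 1.67 TPS
rate_12_3 = ipsec_sa_setup_rate(100, 60.0, 0.5, ft)  # roughly 1.68 SAs/s
```

   Because the per-frame transmit time is tiny compared to the test
   duration, the correction terms matter mainly at very high setup
   rates or very long frames.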
13.  IPsec Rekey Behavior

   The IPsec Rekey Behavior tests all need to be executed by an
   IPsec-aware test device, since the tests need to be closely linked
   with the IKE Finite State Machine (FSM) and cannot be performed by
   offering a specific traffic pattern at either the Initiator or the
   Responder.

13.1.  IKE Phase 1 Rekey Rate

   Objective: Determine the maximum rate at which an IPsec Device can
      rekey IKE SAs.

   Topology: The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure: The IPsec Device under test should initially be set up
      with the determined IPsec Tunnel Capacity number of Active IPsec
      Tunnels.

      The IPsec-aware tester should then perform a binary search in
      which it initiates an IKE Phase 1 SA rekey for all Active IPsec
      Tunnels.  For each IKE SA, the tester MUST take a timestamp when
      it initiates the rekey (timestamp_A) and MUST take another
      timestamp once the FSM declares the rekey complete
      (timestamp_B).  The rekey time for a specific SA equals
      timestamp_B - timestamp_A.  Once the iteration is complete, the
      tester has a table of rekey times for each IKE SA.  The
      reciprocal of the average of this table is the IKE Phase 1 Rekey
      Rate.

      It is expected that all IKE SAs were able to rekey successfully.
      If this is not the case, the IPsec Tunnels are all
      re-established, and the binary search proceeds to the next value
      of IKE SAs to rekey.  The process repeats itself until a rate is
      determined at which all SAs in that timeframe rekey correctly.

   Reporting Format: The IKE Phase 1 Rekey Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.
      There SHOULD be columns for the frame size, the rate at which
      the test was run for that frame size, the media types tested,
      and the resultant IKE Phase 1 Rekey Rate values for each type of
      data stream tested.  The Security Context Parameters defined in
      Section 7.6 and utilized for this test MUST be included in any
      statement of performance.

13.2.  IKE Phase 2 Rekey Rate

   Objective: Determine the maximum rate at which an IPsec Device can
      rekey IPsec SAs.

   Topology: The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure: The IPsec Device under test should initially be set up
      with the determined IPsec Tunnel Capacity number of Active IPsec
      Tunnels.

      The IPsec-aware tester should then perform a binary search in
      which it initiates an IKE Phase 2 SA rekey for all IPsec SAs.
      For each IPsec SA, the tester MUST take a timestamp when it
      initiates the rekey (timestamp_A) and MUST take another
      timestamp once the FSM declares the rekey complete
      (timestamp_B).  The rekey time for a specific IPsec SA is
      timestamp_B - timestamp_A.  Once the iteration is complete, the
      tester has a table of rekey times for each IPsec SA.  The
      reciprocal of the average of this table is the IKE Phase 2 Rekey
      Rate.

      It is expected that all IPsec SAs were able to rekey
      successfully.  If this is not the case, the IPsec Tunnels are
      all re-established, and the binary search proceeds to the next
      value of IPsec SAs to rekey.  The process repeats itself until a
      rate is determined at which all SAs in that timeframe rekey
      correctly.

   Reporting Format: The IKE Phase 2 Rekey Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.
      There SHOULD be columns for the frame size, the rate at which
      the test was run for that frame size, the media types tested,
      and the resultant IKE Phase 2 Rekey Rate values for each type of
      data stream tested.  The Security Context Parameters defined in
      Section 7.6 and utilized for this test MUST be included in any
      statement of performance.

14.  IPsec Tunnel Failover Time

   This section presents methodologies relating to the
   characterization of the failover behavior of a DUT/SUT in an IPsec
   environment.

   In order to lessen the effect of packet buffering in the DUT/SUT,
   the Tunnel Failover Time tests MUST be run at the measured IPsec
   Throughput level of the DUT.  Tunnel Failover Time tests at other
   offered constant loads are OPTIONAL.

   Tunnel Failovers can be achieved in various ways, for example:

   o  Failover between two Software Instances of an IPsec stack.

   o  Failover between two IPsec devices.

   o  Failover between two Hardware IPsec Engines within a single
      IPsec Device.

   o  Fallback to Software IPsec from Hardware IPsec within a single
      IPsec Device.

   In all of the above cases there shall be at least one active IPsec
   device and a standby device.  In some cases the standby device is
   not present, and two or more IPsec devices back each other up in
   case of a catastrophic device or stack failure.  The standby (or
   potentially other active) IPsec Devices can back up the active
   IPsec Device in either a stateless or stateful manner.  In the
   former case, Phase 1 SAs as well as Phase 2 SAs will need to be
   re-established in order to guarantee packet forwarding.  In the
   latter case, the SPD and SADB of the active IPsec Device are
   synchronized to the standby IPsec Device to ensure immediate packet
   path recovery.
   Objective: Determine the time required to fail over all Active
      Tunnels from an active IPsec Device to its standby device.

   Topology: If no IPsec-aware tester is available, the test MUST be
      conducted using a Redundant System Under Test Topology as
      depicted in Figure 4.  When an IPsec-aware tester is available,
      the test MUST be executed using a Redundant Unit Under Test
      Topology as depicted in Figure 3.  If the failover is being
      tested within a single DUT, e.g., crypto-engine-based failovers,
      a Device Under Test Topology as depicted in Figure 1 MAY be used
      as well.

   Procedure: Before a failover can be triggered, the IPsec Device has
      to be in a state where the active stack/engine/node has the
      maximum supported number of Active Tunnels.  The Tunnels will be
      transporting bidirectional traffic at the determined IPsec
      Throughput rate for the smallest frame size that the
      stack/engine/node is capable of forwarding (in most cases, this
      will be 64 bytes).  The traffic should traverse all Active
      Tunnels in a round-robin fashion.

      When traffic is flowing through all Active Tunnels in steady
      state, a failover shall be triggered.

      Both receiver sides of the testers will now look at sequence
      counters in the instrumented packets that are being forwarded
      through the Tunnels.  Each Tunnel MUST have its own counter to
      keep track of packet loss on a per-SA basis.

      If the tester observes no sequence number drops on any of the
      Tunnels in either direction, then the Failover Time MUST be
      listed as 'null', indicating that the failover was immediate and
      without any packet loss.

      In all other cases, where the tester observes a gap in the
      sequence numbers of the instrumented payload of the packets, the
      tester will monitor all SAs and look for any Tunnels that are
      still not receiving packets after the Failover.
      These will be marked as 'pending' Tunnels.  Active Tunnels that
      are forwarding packets again without any packet loss shall be
      marked as 'recovered' Tunnels.  In the background, the tester
      will keep monitoring all SAs to make sure that no packets are
      dropped.  If packets are dropped, the Tunnel in question will be
      placed back in the 'pending' state.

      Note that reordered packets can naturally occur after
      en/decryption.  This is not a valid reason to place a Tunnel
      back in the 'pending' state.

      The tester will wait until all Tunnels are marked as
      'recovered'.  Then it will find the SA with the largest gap in
      sequence numbers.  Since the frame size is fixed, the transmit
      time of a frame on the initiator links can easily be calculated;
      multiplying this frame transmit time by the largest packet-loss
      gap yields the Tunnel Failover Time.

      It is RECOMMENDED that the test be repeated for various numbers
      of Active Tunnels as well as for different frame sizes and frame
      rates.

   Reporting Format: The results shall be represented in a tabular
      format, where the first column lists the number of Active
      Tunnels, the second column the frame size, the third column the
      frame rate, and the fourth column the Tunnel Failover Time in
      milliseconds.

15.  DoS Attack Resiliency

15.1.  Phase 1 DoS Resiliency Rate

   Objective: Determine how many invalid IKE Phase 1 sessions can be
      dropped before a valid IKE session can no longer be established.

   Topology: The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure: Send a burst of IKE Phase 1 messages, at the determined
      IPsec Throughput, to the DUT.  This burst contains a series of
      invalid IKE messages (containing either a mismatched pre-shared
      key or an invalid certificate), followed by a single valid IKE
      message.
      The objective is to increase the string of invalid messages that
      are prepended before the valid IKE message up to the point where
      the Tunnel associated with the valid IKE request can no longer
      be processed and no longer yields an Established Tunnel.  The
      test SHALL start with one invalid IKE message and a single valid
      IKE message.  If the Tunnel associated with the valid IKE
      message can be Established, then the Tunnel is torn down and the
      test is restarted with an increased count of invalid IKE
      messages.

   Reporting Format: Failed Attempts.  The Security Context Parameters
      defined in Section 7.6 and utilized for this test MUST be
      included in any statement of performance.

15.2.  Phase 2 Hash Mismatch DoS Resiliency Rate

   Objective: Determine the rate of Hash Mismatched packets at which a
      valid IPsec stream starts dropping frames.

   Topology: The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure: A stream of IPsec traffic is offered to a DUT for
      decryption.  This stream consists of two microflows: one valid
      microflow and one that contains altered IPsec packets with a
      Hash Mismatch.  The aggregate rate of both microflows MUST be
      equal to the IPsec Throughput and should therefore be able to
      pass through the DUT.  A binary search will be applied to
      determine the ratio between the two microflows that causes
      packet loss on the valid microflow of traffic.

      The test MUST be conducted with a single Active Tunnel.  It MAY
      be repeated at various Tunnel scalability data points.

   Reporting Format: PPS (of invalid traffic).  The Security Context
      Parameters defined in Section 7.6 and utilized for this test
      MUST be included in any statement of performance.

15.3.  Phase 2 Anti-Replay Attack DoS Resiliency Rate

   Objective: Determine the rate of replayed packets at which a valid
      IPsec stream starts dropping frames.
   Topology: The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure: A stream of IPsec traffic is offered to a DUT for
      decryption.  This stream consists of two microflows: one valid
      microflow and one that contains replayed packets of the valid
      microflow.  The aggregate rate of both microflows MUST be equal
      to the IPsec Throughput and should therefore be able to pass
      through the DUT.  A binary search will be applied to determine
      the ratio between the two microflows that causes packet loss on
      the valid microflow of traffic.

      The replayed packets should always be offered within the window
      in which the original packet arrived, i.e., a packet MUST be
      replayed directly after the original packet has been sent to the
      DUT.  The binary search SHOULD start with a low anti-replay
      count, where every few anti-replay windows, a single packet in
      the window is replayed.  To increase this, one should follow the
      sequence below:

      *  Increase the replayed packets so that every window contains a
         single replayed packet

      *  Increase the replayed packets so that every packet within a
         window is replayed once

      *  Increase the replayed packets so that packets within a single
         window are replayed multiple times, following the same fill
         sequence

      If the flow of replayed traffic equals the IPsec Throughput, the
      flow SHOULD be increased until the point where packet loss is
      observed on the replayed traffic flow.

      The test MUST be conducted with a single Active Tunnel.  It MAY
      be repeated at various Tunnel scalability data points.  The test
      SHOULD also be repeated for all configurable Anti-Replay Window
      Sizes.

   Reporting Format: PPS (of replayed traffic).  The Security Context
      Parameters defined in Section 7.6 and utilized for this test
      MUST be included in any statement of performance.
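   The binary search over the invalid-to-valid traffic ratio used in
   Sections 15.2 and 15.3 can be sketched as follows.  This is a
   sketch under assumptions: passes_without_loss stands in for
   actually running one trial and checking the valid microflow for
   drops, and all names are illustrative rather than taken from this
   document.

```python
def dos_resiliency_ratio(throughput_pps, passes_without_loss,
                         resolution=0.001):
    """Binary-search the largest fraction of invalid (hash-mismatched
    or replayed) packets, out of an aggregate load equal to the IPsec
    Throughput, that the valid microflow survives without loss.

    passes_without_loss(valid_pps, invalid_pps) -> bool is a
    hypothetical stand-in for running one trial and checking the valid
    microflow for drops."""
    lo, hi = 0.0, 1.0  # fraction of the aggregate load that is invalid
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        invalid = throughput_pps * mid
        valid = throughput_pps - invalid
        if passes_without_loss(valid, invalid):
            lo = mid   # valid flow unharmed; try more invalid traffic
        else:
            hi = mid   # drops on the valid flow; back off
    return throughput_pps * lo  # reportable PPS of invalid traffic

# Example with a mock DUT that tolerates invalid traffic up to 30% of
# a 100,000 PPS aggregate load: the search converges near 30,000 PPS.
rate = dos_resiliency_ratio(100000, lambda v, i: i <= 30000)
```

   Each probe of passes_without_loss corresponds to one full trial at
   the IPsec Throughput, so the resolution parameter trades test time
   against the precision of the reported PPS figure.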
16.  Acknowledgements

   The authors would like to acknowledge the following individuals for
   their help with and participation in the compilation and editing of
   this document: Michele Bustos, Paul Hoffman, Benno Overeinder,
   Scott Poretsky, and Cisco NSITE Labs.

17.  References

17.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC1981]  McCann, J., Deering, S., and J. Mogul, "Path MTU
              Discovery for IP version 6", RFC 1981, August 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2393]  Shacham, A., Monsour, R., Pereira, R., and M. Thomas,
              "IP Payload Compression Protocol (IPComp)", RFC 2393,
              December 1998.

   [RFC2401]  Kent, S. and R. Atkinson, "Security Architecture for the
              Internet Protocol", RFC 2401, November 1998.

   [RFC2402]  Kent, S. and R. Atkinson, "IP Authentication Header",
              RFC 2402, November 1998.

   [RFC2403]  Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within
              ESP and AH", RFC 2403, November 1998.

   [RFC2404]  Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96
              within ESP and AH", RFC 2404, November 1998.

   [RFC2405]  Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher
              Algorithm With Explicit IV", RFC 2405, November 1998.

   [RFC2406]  Kent, S. and R. Atkinson, "IP Encapsulating Security
              Payload (ESP)", RFC 2406, November 1998.

   [RFC2407]  Piper, D., "The Internet IP Security Domain of
              Interpretation for ISAKMP", RFC 2407, November 1998.

   [RFC2408]  Maughan, D., Schneider, M., and M. Schertler, "Internet
              Security Association and Key Management Protocol
              (ISAKMP)", RFC 2408, November 1998.

   [RFC2409]  Harkins, D. and D.
              Carrel, "The Internet Key Exchange
              (IKE)", RFC 2409, November 1998.

   [RFC2410]  Glenn, R. and S. Kent, "The NULL Encryption Algorithm
              and Its Use With IPsec", RFC 2410, November 1998.

   [RFC2411]  Thayer, R., Doraswamy, N., and R. Glenn, "IP Security
              Document Roadmap", RFC 2411, November 1998.

   [RFC2412]  Orman, H., "The OAKLEY Key Determination Protocol",
              RFC 2412, November 1998.

   [RFC2432]  Dubray, K., "Terminology for IP Multicast Benchmarking",
              RFC 2432, October 1998.

   [RFC2451]  Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher
              Algorithms", RFC 2451, November 1998.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2547]  Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547,
              March 1999.

   [RFC2661]  Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn,
              G., and B. Palter, "Layer Two Tunneling Protocol
              "L2TP"", RFC 2661, August 1999.

   [RFC2784]  Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
              Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
              March 2000.

   [RFC4109]  Hoffman, P., "Algorithms for Internet Key Exchange
              version 1 (IKEv1)", RFC 4109, May 2005.

   [RFC4305]  Eastlake, D., "Cryptographic Algorithm Implementation
              Requirements for Encapsulating Security Payload (ESP)
              and Authentication Header (AH)", RFC 4305,
              December 2005.

   [I-D.ietf-ipsec-ikev2]
              Kaufman, C., "Internet Key Exchange (IKEv2) Protocol",
              draft-ietf-ipsec-ikev2-17 (work in progress),
              October 2004.

   [I-D.ietf-ipsec-properties]
              Krywaniuk, A., "Security Properties of the IPsec
              Protocol Suite", draft-ietf-ipsec-properties-02 (work in
              progress), July 2002.

   [I-D.ietf-bmwg-ipv6-meth]
              Popoviciu, C., "IPv6 Benchmarking Methodology for
              Network Interconnect Devices", draft-ietf-bmwg-ipv6-
              meth-03 (work in progress), August 2007.
17.2.  Informative References

   [FIPS.186-1.1998]
              National Institute of Standards and Technology, "Digital
              Signature Standard", FIPS PUB 186-1, December 1998.

Authors' Addresses

   Merike Kaeo
   Double Shot Security
   3518 Fremont Ave N #363
   Seattle, WA 98103
   USA

   Phone: +1(310)866-0165
   Email: kaeo@merike.com

   Tim Van Herck
   Cisco Systems
   170 West Tasman Drive
   San Jose, CA 95134-1706
   USA

   Phone: +1(408)853-2284
   Email: herckt@cisco.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.
   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).