Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Intended status: Informational                              T. Van Herck
Expires: January 29, 2010                                  Cisco Systems
                                                           July 28, 2009

             Methodology for Benchmarking IPsec Devices
                    draft-ietf-bmwg-ipsec-meth-05

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may contain material
   from IETF Documents or IETF Contributions published or made publicly
   available before November 10, 2008.
   The person(s) controlling the copyright in some of this material may
   not have granted the IETF Trust the right to allow modifications of
   such material outside the IETF Standards Process.  Without obtaining
   an adequate license from the person(s) controlling the copyright in
   such materials, this document may not be modified outside the IETF
   Standards Process, and derivative works of it may not be created
   outside the IETF Standards Process, except to format it for
   publication as an RFC or to translate it into languages other than
   English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 29, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

Abstract

   The purpose of this document is to describe methodology specific to
   the benchmarking of IPsec IP forwarding devices.
   It builds upon the tenets set forth in RFC 2544, RFC 2432, and other
   IETF Benchmarking Methodology Working Group (BMWG) efforts.  This
   document seeks to extend these efforts to the IPsec paradigm.

   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related
   terms.  The Methodology documents define the procedures required to
   collect the benchmarks cited in the corresponding Terminology
   documents.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Methodology Format
   4.  Key Words to Reflect Requirements
   5.  Test Considerations
   6.  Test Topologies
   7.  Test Parameters
      7.1.  Frame Type
         7.1.1.  IP
         7.1.2.  UDP
         7.1.3.  TCP
         7.1.4.  NAT-Traversal
      7.2.  Frame Sizes
      7.3.  Fragmentation and Reassembly
      7.4.  Time To Live
      7.5.  Trial Duration
      7.6.  Security Context Parameters
         7.6.1.  IPsec Transform Sets
         7.6.2.  IPsec Topologies
         7.6.3.  IKE Keepalives
         7.6.4.  IKE DH-group
         7.6.5.  IKE SA / IPsec SA Lifetime
         7.6.6.  IPsec Selectors
         7.6.7.  NAT-Traversal
   8.  Capacity
      8.1.  IPsec Tunnel Capacity
      8.2.  IPsec SA Capacity
   9.  Throughput
      9.1.  Throughput Baseline
      9.2.  IPsec Throughput
      9.3.  IPsec Encryption Throughput
      9.4.  IPsec Decryption Throughput
   10.  Latency
      10.1.  Latency Baseline
      10.2.  IPsec Latency
      10.3.  IPsec Encryption Latency
      10.4.  IPsec Decryption Latency
      10.5.  Time To First Packet
   11.  Frame Loss Rate
      11.1.  Frame Loss Baseline
      11.2.  IPsec Frame Loss
      11.3.  IPsec Encryption Frame Loss
      11.4.  IPsec Decryption Frame Loss
      11.5.  IKE Phase 2 Rekey Frame Loss
   12.  IPsec Tunnel Setup Behavior
      12.1.  IPsec Tunnel Setup Rate
      12.2.  IKE Phase 1 Setup Rate
      12.3.  IKE Phase 2 Setup Rate
   13.  IPsec Rekey Behavior
      13.1.  IKE Phase 1 Rekey Rate
      13.2.  IKE Phase 2 Rekey Rate
   14.  IPsec Tunnel Failover Time
   15.  DoS Attack Resiliency
      15.1.  Phase 1 DoS Resiliency Rate
      15.2.  Phase 2 Hash Mismatch DoS Resiliency Rate
      15.3.  Phase 2 Anti Replay Attack DoS Resiliency Rate
   16.  Security Considerations
   17.  Acknowledgements
   18.  References
      18.1.  Normative References
      18.2.  Informative References
   Authors' Addresses

1.  Introduction

   This document defines a specific set of tests that can be used to
   measure and report the performance characteristics of IPsec devices.
   It extends the methodology already defined for benchmarking network
   interconnecting devices in [RFC2544] to IPsec gateways and
   additionally introduces tests which can be used to measure end-host
   IPsec performance.

2.  Document Scope

   The primary focus of this document is to establish a performance
   testing methodology for IPsec devices that support manual keying and
   IKEv1.  A separate document will be written specifically to address
   testing using the updated IKEv2 specification.  Both IPv4 and IPv6
   addressing will be taken into consideration for all relevant test
   methodologies.

   The testing will be constrained to:

   o  Devices acting as IPsec gateways whose tests will pertain to both
      IPsec tunnel and transport mode.

   o  Devices acting as IPsec end-hosts whose tests will pertain to
      both IPsec tunnel and transport mode.
   What is specifically out of scope is any testing that pertains to
   considerations involving L2TP [RFC2661], GRE [RFC2784], BGP/MPLS
   VPN's [RFC2547], and anything that does not specifically relate to
   the establishment and tearing down of IPsec tunnels.

3.  Methodology Format

   The methodology is described in the following format:

   Objective:  The reason for performing the test.

   Topology:  Physical test layout to be used, as further clarified in
      Section 6.

   Procedure:  Describes the method used for carrying out the test.

   Reporting Format:  Description of the reporting of the test results.

4.  Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].
   RFC 2119 defines the use of these key words to help make the intent
   of standards track documents as clear as possible.  While this
   document uses these keywords, it is not a standards track document.

5.  Test Considerations

   Before any of the IPsec data plane benchmarking tests are carried
   out, a baseline MUST be established; i.e., the particular test in
   question must first be executed to measure its performance without
   IPsec enabled.  Once both the baseline cleartext performance and the
   performance using an IPsec-enabled datapath have been measured, the
   difference between the two can be discerned.

   This document explicitly assumes that testers MUST follow a logical
   performance test methodology that includes the pre-configuration or
   pre-population of routing protocols, ARP caches, IPv6 Neighbor
   Discovery caches, and all other IPv4 and IPv6 parameters required to
   pass packets before the tester is ready to send IPsec protected
   packets.
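   The baseline comparison required above can be sketched as a small
   helper; the function name and the sample rates below are
   illustrative assumptions, not part of the methodology.

```python
def ipsec_performance_delta(baseline: float, ipsec: float) -> float:
    """Relative cost of enabling IPsec for one benchmark, computed from
    a cleartext baseline trial and an IPsec trial that use otherwise
    identical parameters (frame size, duration, topology)."""
    if baseline <= 0:
        raise ValueError("the cleartext baseline MUST be measured first")
    return (baseline - ipsec) / baseline

# Hypothetical sample: 148809 pps cleartext vs. 52000 pps with ESP enabled.
delta = ipsec_performance_delta(148809.0, 52000.0)
```

   A harness would obtain both rates from the same test procedure, with
   IPsec disabled and then enabled, before reporting the difference.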
   IPv6 nodes that implement Path MTU Discovery [RFC1981] MUST ensure
   that the PMTUD process has been completed before any of the tests
   are run.

   For every IPsec data plane benchmarking test, the SA database (SADB)
   MUST be created and populated with the appropriate SA's before any
   actual test traffic is sent; i.e., the DUT/SUT MUST have Active
   Tunnels.  This may require manual commands to be executed on the
   DUT/SUT or the sending of appropriate learning frames to the DUT/SUT
   to trigger IKE negotiation.  This ensures that none of the control
   plane parameters (such as IPsec Tunnel Setup Rates and IPsec Tunnel
   Rekey Rates) are factored into these tests.

   For control plane benchmarking tests (i.e., IPsec Tunnel Setup Rate
   and IPsec Tunnel Rekey Rate), the authentication mechanism(s) used
   for the authenticated Diffie-Hellman exchange MUST be reported.

6.  Test Topologies

   The tests can be performed against a DUT or an SUT.  When the tests
   are performed against a DUT, the Tester itself must be an IPsec
   peer.  This scenario is shown in Figure 1.  When testing an IPsec
   device as a DUT, one consideration that needs to be taken into
   account is that the Tester can introduce interoperability issues,
   potentially limiting the scope of the tests that can be executed.
   On the other hand, this method has the advantage that IPsec client
   side testing can be performed, and that it is able to identify
   abnormalities and asymmetry between the encryption and decryption
   behavior.

          +------------+
          |            |
   +----[D]   Tester   [A]----+
   |      |            |      |
   |      +------------+      |
   |                          |
   |      +------------+      |
   |      |            |      |
   +----[C]    DUT     [B]----+
          |            |
          +------------+

        Figure 1: Device Under Test Topology

   The SUT scenario is depicted in Figure 2.  Two identical DUTs are
   used in this test set-up, which more accurately simulates the use of
   IPsec gateways.  IPsec SA (i.e.
   AH/ESP transport or tunnel mode) configurations can be tested using
   this set-up, where the tester is only required to send and receive
   cleartext traffic.

                 +------------+
                 |            |
   +-----------[F]   Tester   [A]-----------+
   |             |            |             |
   |             +------------+             |
   |                                        |
   |   +------------+      +------------+   |
   |   |            |      |            |   |
   +-[E]    DUTa    [D]--[C]    DUTb    [B]-+
       |            |      |            |
       +------------+      +------------+

        Figure 2: System Under Test Topology

   When an IPsec DUT needs to be tested in a chassis failover topology,
   a second DUT needs to be used, as shown in Figure 3.  This is the
   high-availability equivalent of the topology depicted in Figure 1.
   Note that in this topology the Tester MUST be an IPsec peer.

            +------------+
            |            |
   +-----[F]   Tester   [A]-----+
   |        |            |      |
   |        +------------+      |
   |                            |
   |      +------------+        |
   |      |            |        |
   | +--[C]    DUTa    [B]--+   |
   | |    |            |    |   |
   | |    +------------+    |   |
   +-+                      +---+
     |    +------------+    |
     |    |            |    |
     +--[E]    DUTb    [D]--+
          |            |
          +------------+

      Figure 3: Redundant Device Under Test Topology

   When no IPsec enabled Tester is available and an IPsec failover
   scenario needs to be tested, the topology shown in Figure 4 can be
   used.  In this case, the high-availability pair of IPsec devices can
   be used either as the Initiator or as the Responder.  The remaining
   chassis will take the opposite role.

              +------------+
              |            |
   +--------[H]   Tester   [A]--------------------+
   |          |            |                      |
   |          +------------+                      |
   |                                              |
   |      +------------+                          |
   |      |            |                          |
   | +--[E]    DUTa    [D]--+                     |
   | |    |            |    |    +------------+   |
   | |    +------------+    |    |            |   |
   +-+                      +--[C]    DUTc    [B]-+
     |    +------------+    |    |            |
     |    |            |    |    +------------+
     +--[G]    DUTb    [F]--+
          |            |
          +------------+

      Figure 4: Redundant System Under Test Topology

7.  Test Parameters

   For each individual test performed, all of the following parameters
   MUST be explicitly reported in any test results.

7.1.  Frame Type

7.1.1.  IP

   Both IPv4 and IPv6 frames MUST be used.  The basic IPv4 header is 20
   bytes long (which may be increased by the use of an options field).
   The basic IPv6 header is a fixed 40 bytes; additional headers are
   carried as extension headers.  Only the basic headers plus the IPsec
   AH and/or ESP headers MUST be present.

   It is RECOMMENDED that IPv4 and IPv6 frames be tested separately to
   ascertain performance parameters for either IPv4 or IPv6 traffic.
   If both IPv4 and IPv6 traffic are to be tested, the device SHOULD be
   pre-configured for a dual-stack environment to handle both traffic
   types.

   It is RECOMMENDED that a test payload field be added to the payload
   of each packet to allow flow identification and timestamping of a
   received packet.

7.1.2.  UDP

   It is also RECOMMENDED that the tests be executed using UDP as the
   L4 protocol.  When using UDP, instrumentation data SHOULD be present
   in the payload of the packet.  It is OPTIONAL to have application
   payload.

7.1.3.  TCP

   It is OPTIONAL to perform the tests with TCP as the L4 protocol,
   but in case this is considered, the TCP traffic is RECOMMENDED to be
   stateful.  With TCP as the L4 header, it is possible that there will
   not be enough room to add all the instrumentation data needed to
   identify the packets within the DUT/SUT.

7.1.4.  NAT-Traversal

   It is RECOMMENDED to test the scenario where IPsec protected traffic
   must traverse network address translation (NAT) gateways.  This is
   commonly referred to as NAT-Traversal and requires UDP
   encapsulation.

7.2.  Frame Sizes

   Each test MUST be run with different frame sizes.
It is RECOMMENDED 364 to use teh following cleartext layer 2 frame sizes for IPv4 tests 365 over Ethernet media: 64, 128, 256, 512, 1024, 1280, and 1518 bytes, 366 per RFC2544 section 9 [RFC2544]. The four CRC bytes are included in 367 the frame size specified. 369 For GigabitEthernet, supporting jumboframes, the cleartext layer 2 370 framesizes used are 64, 128, 256, 512, 1024, 1280, 1518, 2048, 3072, 371 4096, 5120, 6144, 7168, 8192, 9234 bytes 373 For SONET these are: 47, 67, 128, 256, 512, 1024, 1280, 1518, 2048, 374 4096 bytes 376 To accomodate IEEE 802.1q and IEEE 802.3as it is RECOMMENDED to 377 respectively include 1522 and 2000 byte framesizes in all tests. 379 Since IPv6 requires that every link has an MTU of 1280 octets or 380 greater, it is MANDATORY to execute tests with cleartext layer 2 381 frame sizes that include 1280 and 1518 bytes. It is RECOMMENDED that 382 additional frame sizes are included in the IPv6 test execution, 383 including the maximum supported datagram size for the linktype used. 385 7.3. Fragmentation and Reassembly 387 IPsec devices can and must fragment packets in specific scenarios. 388 Depending on whether the fragmentation is performed in software or 389 using specialized custom hardware, there may be a significant impact 390 on performance. 392 In IPv4, unless the DF (don't fragment) bit is set by the packet 393 source, the sender cannot guarantee that some intermediary device on 394 the way will not fragment an IPsec packet. For transport mode IPsec, 395 the peers must be able to fragment and reassemble IPsec packets. 396 Reassembly of fragmented packets is especially important if an IPv4 397 port selector (or IPv6 transport protocol selector) is configured. 398 For tunnel mode IPsec, it is not a requirement. Note that 399 fragmentation is handled differently in IPv6 than in IPv4. 
In IPv6 400 networks, fragmentation is no longer done by intermediate routers in 401 the networks, but by the source node that originates the packet. The 402 path MTU discovery (PMTUD) mechanism is recommended for every IPv6 403 node to avoid fragmentation. 405 Packets generated by hosts that do not support PMTUD, and have not 406 set the DF bit in the IP header, will undergo fragmentation before 407 IPsec encapsulation. Packets generated by hosts that do support 408 PMTUD will use it locally to match the statically configured MTU on 409 the tunnel. If you manually set the MTU on the tunnel, you must set 410 it low enough to allow packets to pass through the smallest link on 411 the path. Otherwise, the packets that are too large to fit will be 412 dropped. 414 Fragmentation can occur due to encryption overhead and is closely 415 linked to the choice of transform used. Since each test SHOULD be 416 run with a maximum cleartext frame size (as per the previous section) 417 it will cause fragmentation to occur since the maximum frame size 418 will be exceeded. All tests MUST be run with the DF bit not set. It 419 is also recommended that all tests be run with the DF bit set. 421 7.4. Time To Live 423 The source frames should have a TTL value large enough to accommodate 424 the DUT/SUT. A Minimum TTL of 64 is RECOMMENDED. 426 7.5. Trial Duration 428 The duration of the test portion of each trial SHOULD be at least 60 429 seconds. In the case of IPsec tunnel rekeying tests, the test 430 duration must be at least two times the IPsec tunnel rekey time to 431 ensure a reasonable worst case scenario test. 433 7.6. Security Context Parameters 435 All of the security context parameters listed in section 7.13 of the 436 IPsec Benchmarking Terminology document MUST be reported. When 437 merely discussing the behavior of traffic flows through IPsec 438 devices, an IPsec context MUST be provided. 
   In the cases where IKE is configured (as opposed to using manually
   keyed tunnels), both an IPsec and an IKE context MUST be provided.
   Additional considerations for reporting security context parameters
   are detailed below.  These all MUST be reported.

7.6.1.  IPsec Transform Sets

   All tests should be done on different IPsec transform set
   combinations.  An IPsec transform specifies a single IPsec security
   protocol (either AH or ESP) with its corresponding security
   algorithms and mode.  A transform set is a combination of individual
   IPsec transforms designed to enact a specific security policy for
   protecting a particular traffic flow.  At minimum, the transform set
   must include one AH algorithm and a mode, or one ESP algorithm and a
   mode.

   +-----------+------------------+----------------------+-----------+
   | ESP       | Encryption       | Authentication       | Mode      |
   | Transform | Algorithm        | Algorithm            |           |
   +-----------+------------------+----------------------+-----------+
   | 1         | NULL             | HMAC-SHA1-96         | Transport |
   | 2         | NULL             | HMAC-SHA1-96         | Tunnel    |
   | 3         | 3DES-CBC         | HMAC-SHA1-96         | Transport |
   | 4         | 3DES-CBC         | HMAC-SHA1-96         | Tunnel    |
   | 5         | AES-CBC-128      | HMAC-SHA1-96         | Transport |
   | 6         | AES-CBC-128      | HMAC-SHA1-96         | Tunnel    |
   | 7         | NULL             | AES-XCBC-MAC-96      | Transport |
   | 8         | NULL             | AES-XCBC-MAC-96      | Tunnel    |
   | 9         | 3DES-CBC         | AES-XCBC-MAC-96      | Transport |
   | 10        | 3DES-CBC         | AES-XCBC-MAC-96      | Tunnel    |
   | 11        | AES-CBC-128      | AES-XCBC-MAC-96      | Transport |
   | 12        | AES-CBC-128      | AES-XCBC-MAC-96      | Tunnel    |
   +-----------+------------------+----------------------+-----------+

                                Table 1

   Testing of ESP Transforms 1-4 MUST be supported.  Testing of ESP
   Transforms 5-12 SHOULD be supported.
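   The twelve ESP transform sets of Table 1 are simply the cross
   product of the listed algorithms and modes, which a test harness
   could enumerate as follows (variable names are ours):

```python
from itertools import product

ENCRYPTION = ["NULL", "3DES-CBC", "AES-CBC-128"]
AUTHENTICATION = ["HMAC-SHA1-96", "AES-XCBC-MAC-96"]
MODES = ["Transport", "Tunnel"]

# Order the product so the entries match Table 1: authentication
# varies slowest, then encryption, then mode.
ESP_TRANSFORMS = [
    {"encryption": enc, "authentication": auth, "mode": mode}
    for auth, enc, mode in product(AUTHENTICATION, ENCRYPTION, MODES)
]

# Per the text above, transforms 1-4 MUST be tested; 5-12 SHOULD be.
MANDATORY = ESP_TRANSFORMS[:4]
```

   Generating the combinations instead of listing them by hand keeps a
   harness consistent with the table if algorithms are added later.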
   +--------------+--------------------------+-----------+
   | AH Transform | Authentication Algorithm | Mode      |
   +--------------+--------------------------+-----------+
   | 1            | HMAC-SHA1-96             | Transport |
   | 2            | HMAC-SHA1-96             | Tunnel    |
   | 3            | AES-XCBC-MAC-96          | Transport |
   | 4            | AES-XCBC-MAC-96          | Tunnel    |
   +--------------+--------------------------+-----------+

                          Table 2

   If AH is supported by the DUT/SUT, testing of AH Transforms 1 and 2
   MUST be supported.  Testing of AH Transforms 3 and 4 SHOULD be
   supported.

   Note that these tables are derived from the cryptographic algorithm
   requirements for AH and ESP as described in [RFC4305].  Optionally,
   other AH and/or ESP transforms MAY be supported.

   +-----------------------+----+-----+
   | Transform Combination | AH | ESP |
   +-----------------------+----+-----+
   | 1                     | 1  | 1   |
   | 2                     | 2  | 2   |
   | 3                     | 1  | 3   |
   | 4                     | 2  | 4   |
   +-----------------------+----+-----+

                 Table 3

   It is RECOMMENDED that the transforms shown in Table 3 be supported
   for IPv6 traffic selectors, since AH may be used with ESP in these
   environments.  Since AH will provide the overall authentication and
   integrity, the ESP authentication algorithm MUST be NULL for these
   tests.  Optionally, other combined AH/ESP transform sets MAY be
   supported.

7.6.2.  IPsec Topologies

   All tests should be done at various IPsec topology configurations,
   and the IPsec topology used MUST be reported.  Since IPv6 requires
   the implementation of manual keys for IPsec, both manual keying and
   IKE configurations MUST be tested.

   For manual keying tests, the number of IPsec SA's used should vary
   from 1 to 101, increasing in increments of 50.  Although it is not
   expected that manual keying (i.e. manually configuring the IPsec
   SA) will be deployed in any operational setting with the exception
   of very small controlled environments (i.e.
   less than 10 nodes), it is prudent to test for potentially larger
   scale deployments.

   For IKE specific tests, the following IPsec topologies MUST be
   tested:

   o  1 IKE SA & 2 IPsec SA's (i.e. 1 IPsec Tunnel)

   o  1 IKE SA & {max} IPsec SA's

   o  {max} IKE SA's & {max} IPsec SA's

   It is RECOMMENDED to also test with the following IPsec topologies
   in order to gain more datapoints:

   o  {max/2} IKE SA's & {(max/2) IKE SA's} IPsec SA's

   o  {max} IKE SA's & {(max) IKE SA's} IPsec SA's

7.6.3.  IKE Keepalives

   IKE keepalives track reachability of peers by sending hello packets
   between peers.  During the typical life of an IKE Phase 1 SA,
   packets are only exchanged over this IKE Phase 1 SA when an IPsec
   IKE Quick Mode (QM) negotiation is required at the expiration of the
   IPsec Tunnel SA's.  There is no standards-based mechanism for either
   type of SA to detect the loss of a peer, except when the QM
   negotiation fails.  Most IPsec implementations use the Dead Peer
   Detection (i.e. keepalive) mechanism to determine whether
   connectivity has been lost with a peer before the expiration of the
   IPsec Tunnel SA's.

   All tests using IKEv1 MUST use the same IKE keepalive parameters.

7.6.4.  IKE DH-group

   There are 3 Diffie-Hellman groups that can be supported by IPsec
   standards compliant devices:

   o  DH-group 1: 768 bits

   o  DH-group 2: 1024 bits

   o  DH-group 14: 2048 bits

   DH-group 2 MUST be tested to support the new IKEv1 algorithm
   requirements listed in [RFC4109].  It is RECOMMENDED that the same
   DH-group be used for both IKE Phase 1 and IKE Phase 2.  All test
   methodologies using IKE MUST report which DH-group was configured
   for IKE Phase 1 and IKE Phase 2 negotiations.

7.6.5.  IKE SA / IPsec SA Lifetime

   An IKE SA or IPsec SA is retained by each peer until the Tunnel
   lifetime expires.  IKE SA's and IPsec SA's have individual lifetime
   parameters.
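   The lifetime rules used throughout this section can be sketched as
   follows (lifetimes in seconds; function names are illustrative):

```python
def negotiated_lifetime(initiator_s: int, responder_s: int) -> int:
    """When the peers' policies propose different lifetimes, the
    shorter of the two is the one that takes effect."""
    return min(initiator_s, responder_s)

def valid_for_data_plane_test(ike_lifetime_s: int, ipsec_lifetime_s: int,
                              trial_duration_s: int) -> bool:
    """For data plane benchmarks, the IPsec SA lifetime must not exceed
    the IKE SA lifetime, and both must outlast the trial so that no
    rekey occurs mid-test."""
    return (ipsec_lifetime_s <= ike_lifetime_s
            and ike_lifetime_s > trial_duration_s
            and ipsec_lifetime_s > trial_duration_s)
```

   A harness could run `valid_for_data_plane_test` as a pre-flight
   check before starting any trial that forbids mid-test rekeys.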
   In many real-world environments, the IPsec SA's will be configured
   with shorter lifetimes than those of the IKE SA's.  This will force
   a rekey to happen more often for IPsec SA's.

   When the initiator begins an IKE negotiation between itself and a
   remote peer (the responder), an IKE policy can be selected only if
   the lifetime of the responder's policy is shorter than or equal to
   the lifetime of the initiator's policy.  If the lifetimes are not
   the same, the shorter lifetime will be used.

   To avoid any incompatibilities in data plane benchmark testing, all
   devices MUST have the same IKE SA lifetime as well as an identical
   IPsec SA lifetime configured.  Both SHALL be configured so that they
   exceed the test duration timeframe (for time-based lifetimes) and
   the total number of bytes to be transmitted during the test (for
   volume-based lifetimes).

   Note that the IPsec SA lifetime MUST be equal to or less than the
   IKE SA lifetime.  Both the IKE SA lifetime and the IPsec SA lifetime
   used MUST be reported.  This parameter SHOULD be variable when
   testing IKE rekeying performance.

7.6.6.  IPsec Selectors

   All tests MUST be performed using standard IPsec selectors as
   described in [RFC2401] section 4.4.2.

7.6.7.  NAT-Traversal

   For any tests that include network address translation
   considerations, the use of NAT-T in the test environment MUST be
   recorded.

8.  Capacity

8.1.  IPsec Tunnel Capacity

   Objective:  Measure the maximum number of IPsec Tunnels or Active
      Tunnels that can be sustained on an IPsec device.

   Topology:  If no IPsec-aware tester is available, the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec-aware tester is available, the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  The IPsec device under test initially MUST NOT have any
      Active IPsec Tunnels.
The Initiator (either a tester or an IPsec peer) will start the negotiation of an IPsec Tunnel (a single Phase 1 SA and a pair of Phase 2 SAs).

After it is detected that the tunnel is established, a limited number of packets (50 packets RECOMMENDED) SHALL be sent through the tunnel.  If all packets are received by the Responder (i.e., the DUT), a new IPsec Tunnel may be attempted.

This process will be repeated until no more IPsec Tunnels can be established.

At the end of the test, a traffic pattern is sent to the initiator that will be distributed over all Established Tunnels, where each tunnel will need to propagate a fixed number of packets at a minimum rate of, e.g., 5 pps.  The aggregate rate of all Active Tunnels SHALL NOT exceed the IPsec Throughput.  When all packets sent by the Initiator are received by the Responder, the test has successfully determined the IKE SA Capacity.  If, however, this final check fails, the test needs to be re-executed with a lower number of Active IPsec Tunnels.  There MAY be a need to enforce a lower number of Active IPsec Tunnels; i.e., an upper limit on the number of Active IPsec Tunnels SHOULD be defined in the test.

During the entire duration of the test, rekeying of Tunnels SHALL NOT be permitted.  If a rekey event occurs, the test is invalid and MUST be restarted.

Reporting Format: The reporting format should reflect the maximum number of IPsec Tunnels that can be established when all packets sent by the initiator are received by the responder.  In addition, the Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of capacity.

8.2.  IPsec SA Capacity

Objective: Measure the maximum number of IPsec SAs that can be sustained on an IPsec Device.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.
When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: The IPsec Device under test initially MUST NOT have any Active IPsec Tunnels.  The Initiator (either a tester or an IPsec peer) will start the negotiation of an IPsec Tunnel (a single Phase 1 SA and a pair of Phase 2 SAs).

After it is detected that the tunnel is established, a limited number of packets (50 packets RECOMMENDED) SHALL be sent through the tunnel.  If all packets are received by the Responder (i.e., the DUT), a new pair of IPsec SAs may be attempted.  This will be achieved by offering a specific traffic pattern to the Initiator that matches a given selector, thereby triggering the negotiation of a new pair of IPsec SAs.

This process will be repeated until no more IPsec SAs can be established.

At the end of the test, a traffic pattern is sent to the initiator that will be distributed over all IPsec SAs, where each SA will need to propagate a fixed number of packets at a minimum rate of 5 pps.  When all packets sent by the Initiator are received by the Responder, the test has successfully determined the IPsec SA Capacity.  If, however, this final check fails, the test needs to be re-executed with a lower number of IPsec SAs.  There MAY be a need to enforce a lower number of IPsec SAs; i.e., an upper limit on the number of IPsec SAs SHOULD be defined in the test.

During the entire duration of the test, rekeying of Tunnels SHALL NOT be permitted.  If a rekey event occurs, the test is invalid and MUST be restarted.

Reporting Format: The reporting format SHOULD be the same as listed in Section 8.1 for the maximum number of IPsec SAs.

9.  Throughput

This section contains the description of the tests that are related to the characterization of the packet forwarding of a DUT/SUT in an IPsec environment.
Some metrics extend the concept of throughput presented in [RFC1242].  The notion of Forwarding Rate is cited in [RFC2285].

A separate test SHOULD be performed for Throughput tests using IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.

9.1.  Throughput Baseline

Objective: Measure the intrinsic cleartext throughput of a device without the use of IPsec.  The throughput baseline methodology and reporting format are derived from [RFC2544].

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.  When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a specific number of frames that match the IPsec SA selector(s) to be tested at a specific rate through the DUT and then count the frames that are transmitted by the DUT.  If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun.  If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The throughput is the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment.

Note that the IPsec SA selectors refer to the IP addresses and port numbers.  So even though this is a test of only cleartext traffic, the same type of traffic should be sent for the baseline test as for tests utilizing IPsec.

Reporting Format: The results of the throughput test SHOULD be reported in the form of a graph.  If so, the x coordinate SHOULD be the frame size and the y coordinate SHOULD be the frame rate.  There SHOULD be at least two lines on the graph.
There SHOULD be one line showing the theoretical frame rate for the media at the various frame sizes.  The second line SHOULD be the plot of the test results.  Additional lines MAY be used on the graph to report the results for each type of data stream tested.  Text accompanying the graph SHOULD indicate the protocol, data stream format, and type of media used in the tests.

Any values for throughput rate MUST be expressed in packets per second.  The rate MAY also be expressed in bits (or bytes) per second if the vendor so desires.  The statement of performance MUST include:

o  Measured maximum frame rate

o  Size of the frame used

o  Theoretical limit of the media for that frame size

o  Type of protocol used in the test

9.2.  IPsec Throughput

Objective: Measure the intrinsic throughput of a device utilizing IPsec.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.  When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a specific number of cleartext frames that match the IPsec SA selector(s) at a specific rate through the DUT/SUT.  DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device.  The testing device counts the frames that are transmitted by DUTb.  If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun.  If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The IPsec Throughput is the fastest rate at which the count of test frames transmitted by the DUT/SUT is equal to the number of test frames sent to it by the test equipment.
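The iterative rate search described above (increase on zero loss, decrease on loss, rerun) is commonly implemented as a binary search over the offered rate.  The following is a minimal sketch, not a definitive implementation; `send_and_count` is a hypothetical hook into the test equipment that offers a stream at a given rate and reports how many frames came back.

```python
# Sketch of the iterative throughput search described in the procedure
# above, implemented as a binary search over the offered rate.
# send_and_count(frames, rate) is a HYPOTHETICAL test-equipment hook:
# it offers `frames` frames at `rate` frames/sec and returns the count
# of frames received back from the DUT/SUT.

def find_throughput(send_and_count, frames=10000,
                    lo=0.0, hi=1_000_000.0, resolution=1.0):
    """Return the fastest lossless rate found in [lo, hi] fps."""
    best = lo
    while hi - lo > resolution:
        rate = (lo + hi) / 2.0
        received = send_and_count(frames, rate)
        if received == frames:
            # No loss at this rate: record it and search higher.
            best, lo = rate, rate
        else:
            # Loss observed: back the offered rate off.
            hi = rate
    return best
```

Each probe reruns the full stream, as the procedure requires; `resolution` bounds how finely the lossless rate is resolved.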
For tests using multiple IPsec SAs, the test traffic associated with the individual traffic selectors defined for each IPsec SA MUST be sent in a round-robin fashion to keep the test balanced so as not to overload any single IPsec SA.

Reporting Format: The reporting format SHALL be the same as listed in Section 9.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

9.3.  IPsec Encryption Throughput

Objective: Measure the intrinsic DUT vendor specific IPsec Encryption Throughput.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a specific number of cleartext frames that match the IPsec SA selector(s) at a specific rate to the DUT.  The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester.  Upon receipt of the encrypted packets, the testing device will timestamp the packets and record the result.  If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun.  If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The IPsec Encryption Throughput is the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment.

For tests using multiple IPsec SAs, the test traffic associated with the individual traffic selectors defined for each IPsec SA MUST be sent in a round-robin fashion to keep the test balanced so as not to overload any single IPsec SA.
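The balanced round-robin distribution required above can be sketched as follows; the selector labels are hypothetical placeholders for whatever traffic selectors (addresses, ports) identify each IPsec SA in a given test.

```python
# Sketch of round-robin traffic generation across multiple IPsec SAs,
# so that no single SA receives a disproportionate share of the
# offered load. Selectors are opaque labels here; in a real test they
# would be the per-SA traffic selectors (addresses, ports).

from itertools import cycle, islice

def round_robin_frames(selectors, total_frames):
    """Return (frame_index, selector) pairs, cycling through the
    selectors one frame at a time."""
    return list(zip(range(total_frames),
                    islice(cycle(selectors), total_frames)))
```

With three SAs and a frame count divisible by three, each SA carries exactly one third of the stream.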
Reporting Format: The reporting format SHALL be the same as listed in Section 9.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

9.4.  IPsec Decryption Throughput

Objective: Measure the intrinsic DUT vendor specific IPsec Decryption Throughput.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a specific number of IPsec protected frames that match the IPsec SA selector(s) at a specific rate to the DUT.  The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester.  Upon receipt of the cleartext packets, the testing device will timestamp the packets and record the result.  If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun.  If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The IPsec Decryption Throughput is the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment.

For tests using multiple IPsec SAs, the test traffic associated with the individual traffic selectors defined for each IPsec SA MUST be sent in a round-robin fashion to keep the test balanced so as not to overload any single IPsec SA.

Reporting Format: The reporting format SHALL be the same as listed in Section 9.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

10.  Latency

This section presents methodologies relating to the characterization of the forwarding latency of a DUT/SUT.  It extends the concept of latency characterization presented in [RFC2544] to an IPsec environment.

Separate tests SHOULD be performed for latency tests using IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.

In order to lessen the effect of packet buffering in the DUT/SUT, the latency tests MUST be run at the measured IPsec throughput level of the DUT/SUT; IPsec latency at other offered loads is optional.

Lastly, [RFC1242] and [RFC2544] draw a distinction between two classes of devices: "store and forward" and "bit-forwarding".  Each class impacts how latency is collected and subsequently presented; see the related RFCs for more information.  In practice, much of the test equipment will collect the latency measurement for one class or the other and, if needed, mathematically derive the reported value by the addition or subtraction of values accounting for medium propagation delay of the packet, bit times to the timestamp trigger within the packet, etc.  Test equipment vendors SHOULD provide documentation regarding the composition and calculation of the latency values being reported.  The user of this data SHOULD understand the nature of the latency values being reported, especially when comparing results collected from multiple test vendors (e.g., if test vendor A presents a "store and forward" latency result and test vendor B presents a "bit-forwarding" latency result, the user may erroneously conclude that the DUT has two differing sets of latency values).

10.1.  Latency Baseline

Objective: Measure the intrinsic latency (min/avg/max) introduced by a device without the use of IPsec.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.
When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: First determine the throughput for the DUT/SUT at each of the listed frame sizes.  Send a stream of frames at a particular frame size through the DUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.  The stream SHOULD be at least 120 seconds in duration.  An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent.  The time at which this frame is fully transmitted is recorded (timestamp A).  The receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The report MUST state which definition of latency (from [RFC1242]) was used for this test.  The latency results SHOULD be reported in the format of a table with a row for each of the tested frame sizes.  There SHOULD be columns for the frame size, the rate at which the latency test was run for that frame size, the media types tested, and the resultant latency values for each type of data stream tested.

10.2.  IPsec Latency

Objective: Measure the intrinsic IPsec Latency (min/avg/max) introduced by a device when using IPsec.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.
When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: First determine the throughput for the DUT/SUT at each of the listed frame sizes.  Send a stream of cleartext frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.  DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device.

The stream SHOULD be at least 120 seconds in duration.  An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent.  The time at which this frame is fully transmitted is recorded (timestamp A).  The receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The IPsec Latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The reporting format SHALL be the same as listed in Section 10.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

10.3.  IPsec Encryption Latency

Objective: Measure the DUT vendor specific IPsec Encryption Latency for IPsec protected traffic.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.
Procedure: Send a stream of cleartext frames at a particular frame size through the DUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.

The stream SHOULD be at least 120 seconds in duration.  An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent.  The time at which this frame is fully transmitted is recorded (timestamp A).  The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester.  Upon receipt of the encrypted frames, the receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The IPsec Encryption Latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The reporting format SHALL be the same as listed in Section 10.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

10.4.  IPsec Decryption Latency

Objective: Measure the DUT vendor specific IPsec Decryption Latency for IPsec protected traffic.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a stream of IPsec protected frames at a particular frame size through the DUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.

The stream SHOULD be at least 120 seconds in duration.
An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent.  The time at which this frame is fully transmitted is recorded (timestamp A).  The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester.  Upon receipt of the decrypted frames, the receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The IPsec Decryption Latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The reporting format SHALL be the same as listed in Section 10.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

10.5.  Time To First Packet

Objective: Measure the time it takes to transmit a packet when no SAs have been established.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.  When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: Determine the IPsec throughput for the DUT/SUT at each of the listed frame sizes.  Start with a DUT/SUT with Configured Tunnels.  Send a stream of cleartext frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.
The time at which the first frame is fully transmitted from the testing device is recorded as timestamp A.  The time at which the testing device receives its first frame from the DUT/SUT is recorded as timestamp B.  The Time To First Packet is the difference between timestamp B and timestamp A.

Note that it is possible that packets can be lost during IPsec Tunnel establishment and that timestamps A and B are not required to be associated with a unique packet.

Reporting Format: The Time To First Packet results SHOULD be reported in the format of a table with a row for each of the tested frame sizes.  There SHOULD be columns for the frame size, the rate at which the TTFP test was run for that frame size, the media types tested, and the resultant TTFP values for each type of data stream tested.  The Security Context Parameters defined in Section 7.6 and utilized for this test MUST be included in any statement of performance.

11.  Frame Loss Rate

This section presents methodologies relating to the characterization of the frame loss rate, as defined in [RFC1242], in an IPsec environment.

11.1.  Frame Loss Baseline

Objective: To determine the frame loss rate, as defined in [RFC1242], of a DUT/SUT throughout the entire range of input data rates and frame sizes, without the use of IPsec.

Topology: If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.  When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a specific number of frames at a specific rate through the DUT/SUT using frames that match the IPsec SA selector(s) to be tested, and count the frames that are transmitted by the DUT/SUT.
The frame loss rate at each point is calculated using the following equation:

   ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run at the frame rate that corresponds to 100% of the maximum rate for the nominal device throughput, which is the throughput that is actually supported on an interface for a specific packet size and may not be the theoretical maximum.  Repeat the procedure for the rate that corresponds to 90% of the maximum rate used, and then for 80% of this rate.  This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost.  The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The results of the frame loss rate test SHOULD be plotted as a graph.  If this is done, the X axis MUST be the input frame rate as a percent of the theoretical rate for the media at the specific frame size, and the Y axis MUST be the percent loss at the particular input rate.  The left end of the X axis and the bottom of the Y axis MUST be 0 percent; the right end of the X axis and the top of the Y axis MUST be 100 percent.  Multiple lines on the graph MAY be used to report the frame loss rate for different frame sizes, protocols, and types of data streams.

11.2.  IPsec Frame Loss

Objective: To measure the frame loss rate of a device when using IPsec to protect the data flow.

Topology: When an IPsec aware tester is available, the test MUST be executed using a Device Under Test Topology as depicted in Figure 1.  If no IPsec aware tester is available, the test MUST be conducted using a System Under Test Topology as depicted in Figure 2.
In this scenario, it is common practice to use an asymmetric topology, where a less powerful (lower throughput) DUT is used in conjunction with a much more powerful IPsec device.  This topology variant can in many cases produce more accurate results than the symmetric variant depicted in the figure, since all bottlenecks are expected to be on the less performant device.

Procedure: Ensure that the DUT/SUT is in active tunnel mode.  Send a specific number of cleartext frames that match the IPsec SA selector(s) to be tested at a specific rate through the DUT/SUT.  DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device.  The testing device counts the frames that are transmitted by DUTb.  The frame loss rate at each point is calculated using the following equation:

   ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run at the frame rate that corresponds to 100% of the maximum rate for the nominal device throughput, which is the throughput that is actually supported on an interface for a specific packet size and may not be the theoretical maximum.  Repeat the procedure for the rate that corresponds to 90% of the maximum rate used, and then for 80% of this rate.  This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost.  The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The reporting format SHALL be the same as listed in Section 11.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

11.3.  IPsec Encryption Frame Loss

Objective: To measure the effect of IPsec encryption on the frame loss rate of a device.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Send a specific number of cleartext frames that match the IPsec SA selector(s) at a specific rate to the DUT.  The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester.  The testing device counts the encrypted frames that are transmitted by the DUT.  The frame loss rate at each point is calculated using the following equation:

   ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run at the frame rate that corresponds to 100% of the maximum rate for the nominal device throughput, which is the throughput that is actually supported on an interface for a specific packet size and may not be the theoretical maximum.  Repeat the procedure for the rate that corresponds to 90% of the maximum rate used, and then for 80% of this rate.  This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost.  The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The reporting format SHALL be the same as listed in Section 11.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

11.4.  IPsec Decryption Frame Loss

Objective: To measure the effect of IPsec decryption on the frame loss rate of a device.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.
Procedure: Send a specific number of IPsec protected frames that match the IPsec SA selector(s) at a specific rate to the DUT.  The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester.  The testing device counts the cleartext frames that are transmitted by the DUT.  The frame loss rate at each point is calculated using the following equation:

   ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run at the frame rate that corresponds to 100% of the maximum rate for the nominal device throughput, which is the throughput that is actually supported on an interface for a specific packet size and may not be the theoretical maximum.  Repeat the procedure for the rate that corresponds to 90% of the maximum rate used, and then for 80% of this rate.  This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost.  The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The reporting format SHALL be the same as listed in Section 11.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

11.5.  IKE Phase 2 Rekey Frame Loss

Objective: To measure the frame loss due to an IKE Phase 2 (i.e., IPsec SA) Rekey event.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: The procedure is the same as in Section 11.2, with the exception that the IPsec SA lifetime MUST be configured to be one-third of the trial test duration or one-third of the total number of bytes to be transmitted during the trial.
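The loss-rate equation and the descending-rate trial sequence used throughout this section can be sketched as follows; `run_trial` is a hypothetical test-equipment hook that offers a fixed number of frames at a given rate and returns the count received back.

```python
# Sketch of the frame loss computation and the 100%, 90%, 80%, ...
# trial sequence used throughout this section. run_trial(n, rate) is a
# HYPOTHETICAL hook: offer n frames at `rate` fps, return frames received.

def frame_loss_rate(input_count, output_count):
    # ( ( input_count - output_count ) * 100 ) / input_count
    return (input_count - output_count) * 100.0 / input_count

def loss_curve(run_trial, max_rate, input_count=10000, step_pct=10):
    """Run trials at decreasing fractions of max_rate, stopping after
    two successive zero-loss trials, per the procedure above."""
    results, zero_streak, pct = [], 0, 100
    while pct > 0 and zero_streak < 2:
        received = run_trial(input_count, max_rate * pct / 100.0)
        loss = frame_loss_rate(input_count, received)
        results.append((pct, loss))
        zero_streak = zero_streak + 1 if loss == 0.0 else 0
        pct -= step_pct
    return results
```

A smaller `step_pct` gives the finer granularity the procedure encourages.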
Reporting Format: The reporting format SHALL be the same as listed in Section 11.1, with the additional requirement that the Security Context Parameters, as defined in Section 7.6, utilized for this test MUST be included in any statement of performance.

12.  IPsec Tunnel Setup Behavior

12.1.  IPsec Tunnel Setup Rate

Objective: Determine the rate at which IPsec Tunnels can be established.

Topology: The test MUST be conducted using a Device Under Test Topology as depicted in Figure 1.

Procedure: Configure the Responder (where the Responder is the DUT) with n IKE Phase 1 and corresponding IKE Phase 2 policies.  Ensure that no SAs are established and that the Responder has Configured Tunnels for all n policies.  Send a stream of cleartext frames at a particular frame size to the Responder at the determined throughput rate using frames with selectors matching the first IKE Phase 1 policy.  As soon as the testing device receives its first frame from the Responder, it knows that the IPsec Tunnel is established and starts sending the next stream of cleartext frames using the same frame size and throughput rate, but this time using selectors matching the second IKE Phase 1 policy.  This process is repeated until all configured IPsec Tunnels have been established.

Some devices may support policy configurations that do not need a one-to-one correspondence between an IKE Phase 1 policy and a specific IKE SA.  In this case, the number of IKE Phase 1 policies configured should be sufficient so that the transmitted (i.e., offered) test traffic will create n IKE SAs.
The IPsec Tunnel Setup Rate is measured in Tunnels Per Second (TPS)
and is determined by the following formula:

   Tunnel Setup Rate = n / [Duration of Test -
                            (n * frame_transmit_time)] TPS

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to
exceed the duration of the test.  It is RECOMMENDED that a minimum of
n=100 IPsec Tunnels be tested to get a large enough sample size to
depict some real-world behavior.

Reporting Format: The Tunnel Setup Rate results SHOULD be reported in
the format of a table with a row for each of the tested frame sizes.
There SHOULD be columns for:

   The throughput rate at which the test was run for the specified
   frame size

   The media type used for the test

   The resultant Tunnel Setup Rate values, in TPS, for the particular
   data stream tested for that frame size

The Security Context Parameters defined in Section 7.6 and utilized
for this test MUST be included in any statement of performance.

12.2.  IKE Phase 1 Setup Rate

Objective: Determine the rate at which IKE SAs can be established.

Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: Configure the Responder with n IKE Phase 1 and
corresponding IKE Phase 2 policies.  Ensure that no SAs are
established and that the Responder has Configured Tunnels for all n
policies.  Send a stream of cleartext frames at a particular frame
size through the Responder at the determined throughput rate, using
frames with selectors matching the first IKE Phase 1 policy.  As soon
as the Phase 1 SA is established, the testing device starts sending
the next stream of cleartext frames using the same frame size and
throughput rate, but this time using selectors matching the second
IKE Phase 1 policy.  This process is repeated until all configured
IKE SAs have been established.
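The setup-rate formula of Section 12.1, which Section 12.2 reuses with
IKE SAs in place of Tunnels, can be sketched numerically (a minimal
illustration; the names are ours):

```python
def tunnel_setup_rate(n: int, test_duration: float,
                      frame_transmit_time: float) -> float:
    """Setup rate = n / [Duration of Test - (n * frame_transmit_time)].
    The subtracted term removes the time spent transmitting the one
    trigger frame per tunnel, leaving only the negotiation time."""
    return n / (test_duration - n * frame_transmit_time)

# e.g., n=100 tunnels, an 11 s test, 10 ms to transmit each trigger frame
rate_tps = tunnel_setup_rate(100, 11.0, 0.01)
```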
Some devices may support policy configurations that do not need a
one-to-one correspondence between an IKE Phase 1 policy and a
specific IKE SA.  In this case, the number of IKE Phase 1 policies
configured should be sufficient so that the transmitted (i.e.,
offered) test traffic will create 'n' IKE SAs.

The IKE SA Setup Rate is determined by the following formula:

   IKE SA Setup Rate = n / [Duration of Test -
                            (n * frame_transmit_time)] IKE SAs per second

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to
exceed the duration of the test.  It is RECOMMENDED that a minimum of
n=100 IKE SAs be tested to get a large enough sample size to depict
some real-world behavior.

Reporting Format: The IKE Phase 1 Setup Rate results SHOULD be
reported in the format of a table with a row for each of the tested
frame sizes.  There SHOULD be columns for the frame size, the rate at
which the test was run for that frame size, the media types tested,
and the resultant IKE Phase 1 Setup Rate values for each type of data
stream tested.  The Security Context Parameters defined in
Section 7.6 and utilized for this test MUST be included in any
statement of performance.

12.3.  IKE Phase 2 Setup Rate

Objective: Determine the rate at which IPsec SAs can be established.

Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: Configure the Responder (where the Responder is the DUT)
with a single IKE Phase 1 policy and n corresponding IKE Phase 2
policies.  Ensure that no SAs are established and that the Responder
has Configured Tunnels for all policies.  Send a stream of cleartext
frames at a particular frame size through the Responder at the
determined throughput rate, using frames with selectors matching the
first IPsec SA policy.
The time at which the IKE SA is established is recorded as
timestamp_A.  As soon as the Phase 1 SA is established, the IPsec SA
negotiation will be initiated.  Once the first IPsec SA has been
established, start sending the next stream of cleartext frames using
the same frame size and throughput rate, but this time using
selectors matching the second IKE Phase 2 policy.  This process is
repeated until all configured IPsec SAs have been established.

The IPsec SA Setup Rate is determined by the following formula, where
test_duration and frame_transmit_time are expressed in units of
seconds:

   IPsec SA Setup Rate = n / [test_duration - {timestamp_A +
                              ((n-1) * frame_transmit_time)}]
                         IPsec SAs per second

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to
exceed the duration of the test.  It is RECOMMENDED that a minimum of
n=100 IPsec SAs be tested to get a large enough sample size to depict
some real-world behavior.

Reporting Format: The IKE Phase 2 Setup Rate results SHOULD be
reported in the format of a table with a row for each of the tested
frame sizes.  There SHOULD be columns for:

   The throughput rate at which the test was run for the specified
   frame size

   The media type used for the test

   The resultant IKE Phase 2 Setup Rate values, in IPsec SAs per
   second, for the particular data stream tested for that frame size

The Security Context Parameters defined in Section 7.6 and utilized
for this test MUST be included in any statement of performance.

13.  IPsec Rekey Behavior

The IPsec Rekey Behavior tests all need to be executed by an
IPsec-aware test device, since the tests need to be closely linked
with the IKE FSM (Finite State Machine) and cannot be done by
offering a specific traffic pattern at either the Initiator or the
Responder.

13.1.
IKE Phase 1 Rekey Rate

Objective: Determine the maximum rate at which an IPsec Device can
rekey IKE SAs.

Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: The IPsec Device under test should initially be set up
with the determined IPsec Tunnel Capacity number of Active IPsec
Tunnels.

The IPsec-aware tester should then perform a binary search in which
it initiates an IKE Phase 1 SA rekey for all Active IPsec Tunnels.
For each IKE SA, the tester MUST record a timestamp when it initiates
the rekey (timestamp_A) and another once the FSM declares the rekey
complete (timestamp_B).  The rekey time for a specific SA equals
timestamp_B - timestamp_A.  Once the iteration is complete, the
tester has a table of rekey times for each IKE SA.  The reciprocal of
the average of this table is the IKE Phase 1 Rekey Rate.

It is expected that all IKE SAs were able to rekey successfully.  If
this is not the case, the IPsec Tunnels are all re-established and
the binary search moves to the next value of IKE SAs to rekey.  The
process repeats until a rate is determined at which all SAs in that
timeframe rekey correctly.

Reporting Format: The IKE Phase 1 Rekey Rate results SHOULD be
reported in the format of a table with a row for each of the tested
frame sizes.  There SHOULD be columns for the frame size, the rate at
which the test was run for that frame size, the media types tested,
and the resultant IKE Phase 1 Rekey Rate values for each type of data
stream tested.  The Security Context Parameters defined in
Section 7.6 and utilized for this test MUST be included in any
statement of performance.

13.2.  IKE Phase 2 Rekey Rate

Objective: Determine the maximum rate at which an IPsec Device can
rekey IPsec SAs.
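The rekey-rate computation of Section 13.1, which this section reuses
for IPsec SAs, reduces to the reciprocal of a mean; a minimal sketch
(the function name is ours):

```python
def rekey_rate(rekey_times: list) -> float:
    """Rekey Rate in rekeys per second: the reciprocal of the average
    per-SA rekey time, where each entry in the table is
    timestamp_B - timestamp_A for one SA."""
    return len(rekey_times) / sum(rekey_times)  # == 1 / mean(rekey_times)
```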
Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: The IPsec Device under test should initially be set up
with the determined IPsec Tunnel Capacity number of Active IPsec
Tunnels.

The IPsec-aware tester should then perform a binary search in which
it initiates an IKE Phase 2 SA rekey for all IPsec SAs.  For each
IPsec SA, the tester MUST record a timestamp when it initiates the
rekey (timestamp_A) and another once the FSM declares the rekey
complete (timestamp_B).  The rekey time for a specific IPsec SA is
timestamp_B - timestamp_A.  Once the iteration is complete, the
tester has a table of rekey times for each IPsec SA.  The reciprocal
of the average of this table is the IKE Phase 2 Rekey Rate.

It is expected that all IPsec SAs were able to rekey successfully.
If this is not the case, the IPsec Tunnels are all re-established and
the binary search moves to the next value of IPsec SAs to rekey.  The
process repeats until a rate is determined at which all SAs in that
timeframe rekey correctly.

Reporting Format: The IKE Phase 2 Rekey Rate results SHOULD be
reported in the format of a table with a row for each of the tested
frame sizes.  There SHOULD be columns for the frame size, the rate at
which the test was run for that frame size, the media types tested,
and the resultant IKE Phase 2 Rekey Rate values for each type of data
stream tested.  The Security Context Parameters defined in
Section 7.6 and utilized for this test MUST be included in any
statement of performance.

14.  IPsec Tunnel Failover Time

This section presents methodologies relating to the characterization
of the failover behavior of a DUT/SUT in an IPsec environment.
In order to lessen the effect of packet buffering in the DUT/SUT, the
Tunnel Failover Time tests MUST be run at the measured IPsec
Throughput level of the DUT.  Tunnel Failover Time tests at other
offered constant loads are OPTIONAL.

Tunnel Failovers can be achieved in various ways, for example:

o  Failover between two Software Instances of an IPsec stack.

o  Failover between two IPsec devices.

o  Failover between two Hardware IPsec Engines within a single IPsec
   Device.

o  Fallback to Software IPsec from Hardware IPsec within a single
   IPsec Device.

In all of the above cases there shall be at least one active IPsec
device and a standby device.  In some cases the standby device is not
present, and two or more IPsec devices back each other up in case of
a catastrophic device or stack failure.  The standby (or potentially
other active) IPsec Devices can back up the active IPsec Device in
either a stateless or stateful manner.  In the former case, Phase 1
SAs as well as Phase 2 SAs will need to be re-established in order to
guarantee packet forwarding.  In the latter case, the SPD and SADB of
the active IPsec Device are synchronized to the standby IPsec Device
to ensure immediate packet path recovery.

Objective: Determine the time required to fail over all Active
Tunnels from an active IPsec Device to its standby device.

Topology: If no IPsec-aware tester is available, the test MUST be
conducted using a Redundant System Under Test Topology as depicted in
Figure 4.  When an IPsec-aware tester is available, the test MUST be
executed using a Redundant Unit Under Test Topology as depicted in
Figure 3.  If the failover is being tested within a single DUT (e.g.,
crypto-engine-based failovers), a Device Under Test Topology as
depicted in Figure 1 MAY be used as well.
Procedure: Before a failover can be triggered, the IPsec Device has
to be in a state where the active stack/engine/node has the maximum
supported number of Active Tunnels.  The Tunnels will be transporting
bidirectional traffic at the determined IPsec Throughput rate for the
smallest frame size that the stack/engine/node is capable of
forwarding (in most cases, this will be 64 bytes).  The traffic
should traverse all Active Tunnels in a round-robin fashion.

When traffic is flowing through all Active Tunnels in steady state, a
failover shall be triggered.

Both receiver sides of the testers will now look at sequence counters
in the instrumented packets that are being forwarded through the
Tunnels.  Each Tunnel MUST have its own counter to keep track of
packet loss on a per-SA basis.

If the tester observes no sequence number drops on any of the Tunnels
in either direction, then the Failover Time MUST be listed as 'null',
indicating that the failover was immediate and without any packet
loss.

In all other cases, where the tester observes a gap in the sequence
numbers of the instrumented payload of the packets, the tester will
monitor all SAs and look for any Tunnels that are still not receiving
packets after the Failover.  These will be marked as 'pending'
Tunnels.  Active Tunnels that are forwarding packets again without
any additional packet loss shall be marked as 'recovered' Tunnels.
In the background, the tester will keep monitoring all SAs to make
sure that no packets are dropped.  If packets are dropped, the Tunnel
in question will be placed back in the 'pending' state.

Note that reordered packets can naturally occur after encryption and
decryption.  This is not a valid reason to place a Tunnel back in the
'pending' state.

The tester will wait until all Tunnels are marked as 'recovered'.
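The per-SA sequence tracking, and the failover time derived from the
largest observed gap, can be sketched as follows (an illustrative,
simplified tracker; it does not retroactively shrink a gap when a
reordered packet later fills it, and all names are ours):

```python
class SaSequenceMonitor:
    """Tracks one SA's sequence numbers across a failover: records the
    largest gap seen and the Tunnel's 'pending'/'recovered' state.
    Late (reordered) packets are tolerated without a state change."""
    def __init__(self):
        self.next_expected = 0
        self.largest_gap = 0
        self.state = 'pending'

    def receive(self, seq: int) -> None:
        if seq < self.next_expected:
            return  # reordered packet: not a reason to go back to 'pending'
        gap = seq - self.next_expected
        if gap > 0:
            self.largest_gap = max(self.largest_gap, gap)
            self.state = 'pending'    # packets missing on this Tunnel
        else:
            self.state = 'recovered'  # in-order arrival, no new loss
        self.next_expected = seq + 1

def tunnel_failover_time(frame_transmit_time: float, largest_gap: int):
    """Failover time = frame transmit time * largest per-SA sequence gap;
    None ('null') when no loss was observed on any Tunnel."""
    return None if largest_gap == 0 else frame_transmit_time * largest_gap
```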
The tester will then find the SA with the largest gap in sequence
numbers.  Given that the frame size is fixed and the transmit time of
that frame size can easily be calculated for the initiator links, a
simple multiplication of the frame transmit time by the largest
packet loss gap yields the Tunnel Failover Time.

This test MUST be repeated for the single-tunnel, maximum-throughput
failover case.  It is RECOMMENDED that the test be repeated for
various numbers of Active Tunnels as well as for different frame
sizes and frame rates.

Reporting Format: The results shall be represented in a tabular
format, where the first column lists the number of Active Tunnels,
the second column the frame size, the third column the frame rate,
and the fourth column the Tunnel Failover Time in milliseconds.

15.  DoS Attack Resiliency

15.1.  Phase 1 DoS Resiliency Rate

Objective: Determine how many invalid IKE Phase 1 sessions can be
directed at a DUT before the Responder ignores or rejects valid IKE
SA attempts.

Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: Configure the Responder with n IKE Phase 1 and
corresponding IKE Phase 2 policies, where n is equal to the IPsec
Tunnel Capacity.  Ensure that no SAs are established and that the
Responder has Configured Tunnels for all n policies.  Start with 95%
of the offered test traffic containing an IKE Phase 1 policy mismatch
(either a mismatched pre-shared key or an invalid certificate).

Send a burst of cleartext frames at a particular frame size through
the Responder at the determined throughput rate, using frames with
selectors matching all n policies.  Once the test completes, check
whether all of the correct (5%) IKE Phase 1 SAs have been
established.
If not, keep repeating the test, decrementing the number of
mismatched IKE Phase 1 policies configured by 5%, until all correct
IKE Phase 1 SAs have been established.  Between each retest, ensure
that the DUT is reset and cleared of all previous state information.

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to
exceed the duration of the test.  It is RECOMMENDED that the test
duration be 2 x (n / IKE SA Setup Rate) to ensure that there is
enough time to establish the valid IKE Phase 1 SAs.

Some devices may support policy configurations that do not need a
one-to-one correspondence between an IKE Phase 1 policy and a
specific IKE SA.  In this case, the number of IKE Phase 1 policies
configured should be sufficient so that the transmitted (i.e.,
offered) test traffic will create 'n' IKE SAs.

Reporting Format: The result shall be represented as the highest
percentage of invalid IKE Phase 1 messages that still allowed all the
valid attempts to be completed.  The Security Context Parameters
defined in Section 7.6 and utilized for this test MUST be included in
any statement of performance.

15.2.  Phase 2 Hash Mismatch DoS Resiliency Rate

Objective: Determine the rate of Hash Mismatch packets at which a
valid IPsec stream starts dropping frames.

Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: A stream of IPsec traffic is offered to a DUT for
decryption.  This stream consists of two microflows: one valid
microflow and one that contains altered IPsec packets with a Hash
Mismatch.  The aggregate rate of both microflows MUST be equal to the
IPsec Throughput and should therefore be able to pass the DUT.  A
binary search will be applied to determine the ratio between the two
microflows that causes packet loss on the valid microflow of traffic.
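The binary search over the microflow split can be sketched as follows;
drops_valid is a hypothetical stand-in for running one trial and
reporting whether the valid microflow lost packets at a given
invalid-traffic rate (all names are ours):

```python
def max_invalid_pps(total_pps: int, drops_valid, tolerance: int = 1) -> int:
    """Binary-search the largest invalid (e.g., Hash Mismatch) share of
    the offered load, in PPS, before the valid microflow starts dropping
    frames.  The aggregate of both microflows stays at total_pps."""
    lo, hi = 0, total_pps
    while hi - lo > tolerance:
        mid = (lo + hi) // 2
        if drops_valid(mid):
            hi = mid  # valid flow dropped: too much invalid traffic
        else:
            lo = mid  # valid flow intact: try a larger invalid share
    return lo
```

The same search shape applies to the replayed-packet test in
Section 15.3.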
The test MUST be conducted with a single Active Tunnel.  It MAY be
repeated at various Tunnel scalability data points (e.g., 90%).

Reporting Format: The results shall be listed as PPS (of invalid
traffic).  The Security Context Parameters defined in Section 7.6 and
utilized for this test MUST be included in any statement of
performance.  The aggregate rate of both microflows, which acts as
the offered testing load, MUST also be reported.

15.3.  Phase 2 Anti-Replay Attack DoS Resiliency Rate

Objective: Determine the rate of replayed packets at which a valid
IPsec stream starts dropping frames.

Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.

Procedure: A stream of IPsec traffic is offered to a DUT for
decryption.  This stream consists of two microflows: one valid
microflow and one that contains replayed packets of the valid
microflow.  The aggregate rate of both microflows MUST be equal to
the IPsec Throughput and should therefore be able to pass the DUT.  A
binary search will be applied to determine the ratio between the two
microflows that causes packet loss on the valid microflow of traffic.

The replayed packets should always be offered within the window in
which the original packet arrived, i.e., each packet MUST be replayed
directly after the original packet has been sent to the DUT.  The
binary search SHOULD start with a low anti-replay count, where every
few anti-replay windows, a single packet in the window is replayed.
To increase this load, one should follow this sequence:

*  Increase the replayed packets so that every window contains a
   single replayed packet

*  Increase the replayed packets so that every packet within a window
   is replayed once

*  Increase the replayed packets so that packets within a single
   window are replayed multiple times, following the same fill
   sequence

If the flow of replayed traffic equals the IPsec Throughput, the flow
SHOULD be increased until the point where packet loss is observed on
the replayed traffic flow.

The test MUST be conducted with a single Active Tunnel.  It MAY be
repeated at various Tunnel scalability data points.  The test SHOULD
also be repeated for all configurable anti-replay window sizes.

Reporting Format: PPS (of replayed traffic).  The Security Context
Parameters defined in Section 7.6 and utilized for this test MUST be
included in any statement of performance.

16.  Security Considerations

As this document is solely for the purpose of providing test
benchmarking methodology and describes neither a protocol nor a
protocol's implementation, there are no security considerations
associated with this document.

17.  Acknowledgements

The authors would like to acknowledge the following individuals for
their help with and participation in the compilation and editing of
this document: Michele Bustos, Paul Hoffman, Benno Overeinder, Scott
Poretsky, Yaron Sheffer, and Al Morton.

18.  References

18.1.  Normative References

[RFC1242]  Bradner, S., "Benchmarking terminology for network
           interconnection devices", RFC 1242, July 1991.

[RFC1981]  McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery
           for IP version 6", RFC 1981, August 1996.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
           Switching Devices", RFC 2285, February 1998.

[RFC2393]  Shacham, A., Monsour, R., Pereira, R., and M. Thomas, "IP
           Payload Compression Protocol (IPComp)", RFC 2393,
           December 1998.

[RFC2401]  Kent, S. and R. Atkinson, "Security Architecture for the
           Internet Protocol", RFC 2401, November 1998.

[RFC2402]  Kent, S. and R. Atkinson, "IP Authentication Header",
           RFC 2402, November 1998.

[RFC2403]  Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within
           ESP and AH", RFC 2403, November 1998.

[RFC2404]  Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96 within
           ESP and AH", RFC 2404, November 1998.

[RFC2405]  Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher
           Algorithm With Explicit IV", RFC 2405, November 1998.

[RFC2406]  Kent, S. and R. Atkinson, "IP Encapsulating Security
           Payload (ESP)", RFC 2406, November 1998.

[RFC2407]  Piper, D., "The Internet IP Security Domain of
           Interpretation for ISAKMP", RFC 2407, November 1998.

[RFC2408]  Maughan, D., Schneider, M., and M. Schertler, "Internet
           Security Association and Key Management Protocol (ISAKMP)",
           RFC 2408, November 1998.

[RFC2409]  Harkins, D. and D. Carrel, "The Internet Key Exchange
           (IKE)", RFC 2409, November 1998.

[RFC2410]  Glenn, R. and S. Kent, "The NULL Encryption Algorithm and
           Its Use With IPsec", RFC 2410, November 1998.

[RFC2411]  Thayer, R., Doraswamy, N., and R. Glenn, "IP Security
           Document Roadmap", RFC 2411, November 1998.

[RFC2412]  Orman, H., "The OAKLEY Key Determination Protocol",
           RFC 2412, November 1998.

[RFC2432]  Dubray, K., "Terminology for IP Multicast Benchmarking",
           RFC 2432, October 1998.

[RFC2451]  Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher
           Algorithms", RFC 2451, November 1998.

[RFC2544]  Bradner, S. and J.
           McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544, March 1999.

[RFC2547]  Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547,
           March 1999.

[RFC2661]  Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn,
           G., and B. Palter, "Layer Two Tunneling Protocol "L2TP"",
           RFC 2661, August 1999.

[RFC2784]  Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
           Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
           March 2000.

[RFC4109]  Hoffman, P., "Algorithms for Internet Key Exchange version
           1 (IKEv1)", RFC 4109, May 2005.

[RFC4305]  Eastlake, D., "Cryptographic Algorithm Implementation
           Requirements for Encapsulating Security Payload (ESP) and
           Authentication Header (AH)", RFC 4305, December 2005.

[RFC4306]  Kaufman, C., "Internet Key Exchange (IKEv2) Protocol",
           RFC 4306, December 2005.

[RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
           Dugatkin, "IPv6 Benchmarking Methodology for Network
           Interconnect Devices", RFC 5180, May 2008.

[I-D.ietf-ipsec-properties]
           Krywaniuk, A., "Security Properties of the IPsec Protocol
           Suite", draft-ietf-ipsec-properties-02 (work in progress),
           July 2002.

18.2.  Informative References

[FIPS.186-1.1998]
           National Institute of Standards and Technology, "Digital
           Signature Standard", FIPS PUB 186-1, December 1998.

Authors' Addresses

Merike Kaeo
Double Shot Security
3518 Fremont Ave N #363
Seattle, WA 98103
USA

Phone: +1(310)866-0165
Email: kaeo@merike.com

Tim Van Herck
Cisco Systems
170 West Tasman Drive
San Jose, CA 95134-1706
USA

Phone: +1(408)853-2284
Email: herckt@cisco.com