Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Expires: May 5, 2006                                        T. Van Herck
                                                           Cisco Systems
                                                           November 2005

               Methodology for Benchmarking IPsec Devices
                     draft-ietf-bmwg-ipsec-meth-00

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 5, 2006.

Copyright Notice

   Copyright (C) The Internet Society (2005).

Abstract

   The purpose of this document is to describe methodology specific to
   the benchmarking of IPsec IP forwarding devices.  It builds upon the
   tenets set forth in RFC 2544, RFC 2432, and other IETF Benchmarking
   Methodology Working Group (BMWG) efforts.  This document seeks to
   extend these efforts to the IPsec paradigm.
   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related terms.
   The Methodology documents define the procedures required to collect
   the benchmarks cited in the corresponding Terminology documents.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Key Words to Reflect Requirements
   4.  Test Considerations
   5.  Test Topologies
   6.  Test Parameters
     6.1.  Frame Type
       6.1.1.  IP
       6.1.2.  UDP
       6.1.3.  TCP
     6.2.  Frame Sizes
     6.3.  Fragmentation and Reassembly
     6.4.  Time To Live
     6.5.  Trial Duration
     6.6.  Security Context Parameters
       6.6.1.  IPsec Transform Sets
       6.6.2.  IPsec Topologies
       6.6.3.  IKE Keepalives
       6.6.4.  IKE DH-group
       6.6.5.  IKE SA / IPsec SA Lifetime
       6.6.6.  IPsec Selectors
   7.  Capacity
     7.1.  IKE SA Capacity
     7.2.  IPsec SA Capacity
   8.  Throughput
     8.1.  Throughput Baseline
     8.2.  IPsec Throughput
     8.3.  IPsec Encryption Throughput
     8.4.  IPsec Decryption Throughput
     8.5.  IPsec Fragmentation Throughput
     8.6.  IPsec Reassembly Throughput
   9.  Latency
     9.1.  Latency Baseline
     9.2.  IPsec Latency
     9.3.  IPsec Encryption Latency
     9.4.  IPsec Decryption Latency
   10. Time To First Packet
   11. Frame Loss Rate
     11.1. Frame Loss Baseline
     11.2. IPsec Frame Loss
     11.3. IPsec Encryption Frame Loss
     11.4. IPsec Decryption Frame Loss
     11.5. IKE Phase 2 Rekey Frame Loss
   12. Back-to-back Frames
     12.1. Back-to-back Frames Baseline
     12.2. IPsec Back-to-back Frames
     12.3. IPsec Encryption Back-to-back Frames
     12.4. IPsec Decryption Back-to-back Frames
   13. IPsec Tunnel Setup Behavior
     13.1. IPsec Tunnel Setup Rate
     13.2. IKE Phase 1 Setup Rate
     13.3. IKE Phase 2 Setup Rate
   14. IPsec Rekey Behavior
     14.1. IKE Phase 1 Rekey Rate
     14.2. IKE Phase 2 Rekey Rate
   15. IPsec Tunnel Failover Time
   16. Acknowledgements
   17. References
     17.1. Normative References
     17.2. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   This document defines a specific set of tests that can be used to
   measure and report the performance characteristics of IPsec devices.
   It extends the methodology already defined for benchmarking network
   interconnecting devices in [RFC2544] to IPsec gateways and
   additionally introduces tests which can be used to measure end-host
   IPsec performance.

2.  Document Scope

   The primary focus of this document is to establish a performance
   testing methodology for IPsec devices that support manual keying and
   IKEv1.  Both IPv4 and IPv6 addressing will be taken into
   consideration for all relevant test methodologies.

   The testing will be constrained to:

   o  Devices acting as IPsec gateways, whose tests will pertain to both
      IPsec tunnel and transport mode.

   o  Devices acting as IPsec end-hosts, whose tests will pertain to
      both IPsec tunnel and transport mode.

   Note that special considerations will be presented for IPsec end-host
   testing, since the tests cannot be conducted without introducing
   additional variables that may cause variations in test results.
   What is specifically out of scope is any testing that pertains to
   considerations involving NAT, L2TP [RFC2661], GRE [RFC2784], BGP/MPLS
   VPNs [RFC2547], and anything that does not specifically relate to the
   establishment and tearing down of IPsec tunnels.

3.  Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].
   RFC 2119 defines the use of these key words to help make the intent
   of standards track documents as clear as possible.  While this
   document uses these key words, it is not a standards track document.

4.  Test Considerations

   Before any of the IPsec data plane benchmarking tests are carried
   out, a baseline MUST be established; that is, the particular test in
   question must first be measured for performance characteristics
   without IPsec enabled.  Once both the baseline cleartext performance
   and the performance using an IPsec-enabled datapath have been
   measured, the difference between the two can be discerned.

   This document explicitly assumes that a logical performance test
   methodology MUST be followed, including the pre-configuration of
   routing protocols, ARP caches, IPv6 neighbor discovery, and all other
   extraneous IPv4 and IPv6 parameters required to pass packets, before
   the tester is ready to send IPsec protected packets.  IPv6 nodes that
   implement Path MTU Discovery [RFC1981] MUST ensure that the PMTUD
   process has completed before any of the tests are run.

   For every IPsec data plane benchmarking test, the SA database (SADB)
   MUST be created and populated with the appropriate SAs before any
   actual test traffic is sent, i.e. the DUT/SUT MUST have active
   tunnels.  This may require a manual command to be executed on the
   DUT/SUT or the sending of appropriate learning frames to the DUT/SUT.
   This ensures that none of the control plane parameters (such as IPsec
   tunnel setup rates and IPsec tunnel rekey rates) are factored into
   these tests.

   For control plane benchmarking tests (i.e. IPsec tunnel setup rate
   and IPsec tunnel rekey rate), the authentication mechanism(s) used
   for the authenticated Diffie-Hellman exchange MUST be reported.

5.  Test Topologies

   The tests can be performed on a DUT or a SUT.  When the tests are
   performed on a DUT, the Tester itself must be an IPsec peer.  This
   scenario is shown in Figure 1.  When testing a DUT where the Tester
   has to be an IPsec peer, the measurements have several
   disadvantages:

   o  The Tester can introduce interoperability issues and skew results.

   o  The measurements may not be accurate due to Tester inaccuracies.

   On the other hand, the measurement of a DUT where the Tester is an
   IPsec peer has two distinct advantages:

   o  IPsec client scenarios can be benchmarked.

   o  IPsec device encryption/decryption abnormalities may be
      identified.

                 +------------+
                 |            |
      +---------[D]  Tester  [A]---------+
      |          |            |          |
      |          +------------+          |
      |                                  |
      |          +------------+          |
      |          |            |          |
      +---------[C]   DUT    [B]---------+
                 |            |
                 +------------+

               Figure 1: Topology 1

   The SUT scenario is depicted in Figure 2.  Two identical DUTs are
   used in this test setup, which more accurately simulates the use of
   IPsec gateways.  IPsec SA (i.e. AH/ESP transport or tunnel mode)
   configurations can be tested using this setup, where the Tester is
   only required to send and receive cleartext traffic.
                     +------------+
                     |            |
   +----------------[F]  Tester  [A]--------------+
   |                 |            |               |
   |                 +------------+               |
   |                                              |
   |   +------------+            +------------+   |
   |   |            |            |            |   |
   +--[E]   DUTa   [D]----------[C]   DUTb   [B]--+
       |            |            |            |
       +------------+            +------------+

                Figure 2: Topology 2

   When an IPsec DUT needs to be tested in a chassis failover topology,
   a second DUT needs to be used, as shown in Figure 3.  This is the
   high-availability equivalent of the topology depicted in Figure 1.
   Note that in this topology the Tester MUST be an IPsec peer.

                +------------+
                |            |
   +-----------[F]  Tester  [A]-----------+
   |            |            |            |
   |            +------------+            |
   |                                      |
   |         +------------+               |
   |         |            |               |
   |    +---[C]   DUTa   [B]---+          |
   |    |    |            |    |          |
   |    |    +------------+    |          |
   +----+                      +----------+
        |    +------------+    |
        |    |            |    |
        +---[E]   DUTb   [D]---+
             |            |
             +------------+

            Figure 3: Topology 3

   When no IPsec enabled Tester is available and an IPsec failover
   scenario needs to be tested, the topology shown in Figure 4 can be
   used.  In this case, the high availability pair of IPsec devices can
   be used either as an Initiator or as a Responder.  The remaining
   chassis will take the opposite role.

                    +------------+
                    |            |
   +---------------[H]  Tester  [A]--------------------+
   |                |            |                     |
   |                +------------+                     |
   |                                                   |
   |       +------------+                              |
   |       |            |                              |
   |   +--[E]   DUTa   [D]--+                          |
   |   |   |            |   |         +------------+   |
   |   |   +------------+   |         |            |   |
   +---+                    +--------[C]   DUTc   [B]--+
       |   +------------+   |         |            |
       |   |            |   |         +------------+
       +--[G]   DUTb   [F]--+
           |            |
           +------------+

                  Figure 4: Topology 4

6.  Test Parameters

   For each individual test performed, all of the following parameters
   MUST be explicitly reported in any test results.

6.1.  Frame Type

6.1.1.  IP

   Both IPv4 and IPv6 frames MUST be used.  The basic IPv4 header is 20
   bytes long (which may be increased by the use of an options field).
   The basic IPv6 header is a fixed 40 bytes and uses extension headers
   for any additional headers.  Only the basic headers plus the IPsec
   AH and/or ESP headers MUST be present.

   It is recommended that IPv4 and IPv6 frames be tested separately to
   ascertain performance parameters for either IPv4 or IPv6 traffic.
   If both IPv4 and IPv6 traffic are to be tested, the device SHOULD be
   pre-configured for a dual-stack environment to handle both traffic
   types.

   IP traffic with the L4 protocol set to 'reserved' (255) SHOULD be
   used.  This ensures maximum space for instrumentation data in the
   payload section, even with frame sizes of the minimum allowed length
   on the transport media.

6.1.2.  UDP

   TBD

6.1.3.  TCP

   TBD

6.2.  Frame Sizes

   Each test SHOULD be run with different frame sizes.  The recommended
   plaintext layer 3 frame sizes for IPv4 tests are 64, 128, 256, 512,
   1024, 1280, and 1518 bytes, per Section 9 of [RFC2544].  The four
   CRC bytes are included in the frame size specified.

   Since IPv6 requires that every link have an MTU of 1280 octets or
   greater, the plaintext frame sizes to test for IPv6 are 1280 and
   1518 bytes.

6.3.  Fragmentation and Reassembly

   IPsec devices can and must fragment packets in specific scenarios.
   Depending on whether the fragmentation is performed in software or
   using specialized custom hardware, there may be a significant impact
   on performance.

   In IPv4, unless the DF (don't fragment) bit is set by the packet
   source, the sender cannot guarantee that some intermediary device
   along the way will not fragment an IPsec packet.  For transport mode
   IPsec, the peers must be able to fragment and reassemble IPsec
   packets.  Reassembly of fragmented packets is especially important
   if an IPv4 port selector (or IPv6 transport protocol selector) is
   configured.  For tunnel mode IPsec, it is not a requirement.  Note
   that fragmentation is handled differently in IPv6 than in IPv4.  In
   IPv6 networks, fragmentation is no longer done by intermediate
   routers, but by the source node that originates the packet.  The
   path MTU discovery (PMTUD) mechanism is recommended for every IPv6
   node to avoid fragmentation.

   Packets generated by hosts that do not support PMTUD, and have not
   set the DF bit in the IP header, will undergo fragmentation before
   IPsec encapsulation.  Packets generated by hosts that do support
   PMTUD will use it locally to match the statically configured MTU on
   the tunnel.  If the MTU on the tunnel is set manually, it must be
   set low enough to allow packets to pass through the smallest link on
   the path.  Otherwise, packets that are too large to fit will be
   dropped.

   Fragmentation can occur due to encryption overhead and is closely
   linked to the choice of transform used.  Since each test SHOULD be
   run with the maximum cleartext frame size (as per the previous
   section), the encryption overhead will cause the maximum frame size
   to be exceeded and fragmentation to occur.  All tests MUST be run
   with the DF bit not set.  It is also recommended that all tests be
   run with the DF bit set.

   Note that some implementations predetermine the encapsulated packet
   size from information available in transform sets, which are
   configured as part of the IPsec security association (SA).  If it is
   predetermined that the packet will exceed the MTU of the output
   interface, the packet is fragmented before encryption.  This
   optimization may favorably impact performance, and vendors SHOULD
   report whether any such optimization is configured.

6.4.  Time To Live

   The source frames should have a TTL value large enough to
   accommodate the DUT/SUT.  A minimum TTL of 64 is RECOMMENDED.

6.5.  Trial Duration

   The duration of the test portion of each trial SHOULD be at least 30
   seconds.  In the case of IPsec tunnel rekeying tests, the test
   duration must be at least two times the IPsec tunnel rekey time to
   ensure a reasonable worst case scenario test.

6.6.  Security Context Parameters

   All of the security context parameters listed in this section and
   used in any test MUST be reported.

6.6.1.  IPsec Transform Sets

   All tests should be done on different IPsec transform set
   combinations.  An IPsec transform specifies a single IPsec security
   protocol (either AH or ESP) with its corresponding security
   algorithms and mode.  A transform set is a combination of individual
   IPsec transforms designed to enact a specific security policy for
   protecting a particular traffic flow.  At minimum, the transform set
   must include one AH algorithm and a mode or one ESP algorithm and a
   mode, as shown in Table 1:

   +---------------+--------------+----------------------+-----------+
   | Transform Set | AH Algorithm | ESP Algorithm        | Mode      |
   +---------------+--------------+----------------------+-----------+
   | 1             | AH-SHA1      | None                 | Tunnel    |
   | 2             | AH-SHA1      | None                 | Transport |
   | 3             | AH-SHA1      | ESP-3DES             | Tunnel    |
   | 4             | AH-SHA1      | ESP-3DES             | Transport |
   | 5             | AH-SHA1      | ESP-AES128           | Tunnel    |
   | 6             | AH-SHA1      | ESP-AES128           | Transport |
   | 7             | None         | ESP-3DES             | Tunnel    |
   | 8             | None         | ESP-3DES-HMAC-SHA1   | Tunnel    |
   | 9             | None         | ESP-3DES             | Transport |
   | 10            | None         | ESP-3DES-HMAC-SHA1   | Transport |
   | 11            | None         | ESP-AES128           | Tunnel    |
   | 12            | None         | ESP-AES128-HMAC-SHA1 | Tunnel    |
   | 13            | None         | ESP-AES128           | Transport |
   | 14            | None         | ESP-AES128-HMAC-SHA1 | Transport |
   +---------------+--------------+----------------------+-----------+

                                Table 1

   Testing of all the transforms shown in Table 1 MUST be supported.
   Note that this table is derived from the updated IKEv1 requirements
   as described in [RFC4109].  Optionally, other AH and/or ESP
   transforms MAY be supported.
6.6.2.  IPsec Topologies

   All tests should be done with various IPsec topology configurations,
   and the IPsec topology used MUST be reported.  Since IPv6 requires
   the implementation of manual keys for IPsec, both manual keying and
   IKE configurations MUST be tested.

   For manual keying tests, the number of IPsec SAs used should vary
   from 1 to 101, increasing in increments of 50.  Although it is not
   expected that manual keying (i.e. manually configuring the IPsec SA)
   will be deployed in any operational setting with the exception of
   very small controlled environments (i.e. less than 10 nodes), it is
   prudent to test for potentially larger scale deployments.

   For IKE specific tests, the following IPsec topologies MUST be
   tested:

   o  1 IKE SA & 1 IPsec SA (i.e. 1 IPsec Tunnel)

   o  1 IKE SA & {max} IPsec SAs

   o  {max} IKE SAs & {max} IPsec SAs

   It is RECOMMENDED to also test with the following IPsec topologies
   in order to gain more data points:

   o  {max/2} IKE SAs & {(max/2) IKE SAs} IPsec SAs

   o  {max} IKE SAs & {(max) IKE SAs} IPsec SAs

6.6.3.  IKE Keepalives

   IKE keepalives track reachability of peers by sending hello packets
   between peers.  During the typical life of an IKE Phase 1 SA,
   packets are only exchanged over this IKE Phase 1 SA when an IPsec
   IKE Quick Mode (QM) negotiation is required at the expiration of the
   IPsec Tunnel SAs.  There is no standards-based mechanism for either
   type of SA to detect the loss of a peer, except when the QM
   negotiation fails.  Most IPsec implementations use the Dead Peer
   Detection (i.e. Keepalive) mechanism to determine whether
   connectivity has been lost with a peer before the expiration of the
   IPsec Tunnel SAs.

   All tests using IKEv1 MUST use the same IKE keepalive parameters.

6.6.4.  IKE DH-group

   There are three Diffie-Hellman groups which can be supported by
   IPsec standards compliant devices:

   o  DH-group 1: 768 bits

   o  DH-group 2: 1024 bits

   o  DH-group 14: 2048 bits

   DH-group 2 MUST be tested, to support the new IKEv1 algorithm
   requirements listed in [RFC4109].  It is recommended that the same
   DH-group be used for both IKE Phase 1 and IKE Phase 2.  All test
   methodologies using IKE MUST report which DH-group was configured to
   be used for IKE Phase 1 and IKE Phase 2 negotiations.

6.6.5.  IKE SA / IPsec SA Lifetime

   An IKE SA or IPsec SA is retained by each peer until the tunnel
   lifetime expires.  IKE SAs and IPsec SAs have individual lifetime
   parameters.  In many real-world environments, the IPsec SAs will be
   configured with shorter lifetimes than those of the IKE SAs.  This
   will force a rekey to happen more often for IPsec SAs.

   When the initiator begins an IKE negotiation between itself and a
   remote peer (the responder), an IKE policy can be selected only if
   the lifetime of the responder's policy is shorter than or equal to
   the lifetime of the initiator's policy.  If the lifetimes are not
   the same, the shorter lifetime will be used.

   To avoid any incompatibilities in data plane benchmark testing, all
   devices MUST have the same IKE SA and IPsec SA lifetime configured,
   and they must be configured to a lifetime which exceeds the test
   duration timeframe or, for byte-based lifetimes, the total number of
   bytes to be transmitted during the test.

   Note that the IPsec SA lifetime MUST be equal to or less than the
   IKE SA lifetime.  Both the IKE SA lifetime and the IPsec SA lifetime
   used MUST be reported.  This parameter SHOULD be variable when
   testing IKE rekeying performance.

6.6.6.  IPsec Selectors

   All tests MUST be performed using standard IPsec selectors.

7.  Capacity

7.1.  IKE SA Capacity

   Objective:

      TBD

   Procedure:

      TBD

   Reporting Format:

      TBD

7.2.  IPsec SA Capacity

   Objective:

      TBD

   Procedure:

      TBD

   Reporting Format:

      TBD

8.  Throughput

   This section contains the description of the tests that are related
   to the characterization of the packet forwarding of a DUT/SUT in an
   IPsec environment.  Some metrics extend the concept of throughput
   presented in RFC 1242 [RFC1242].  The notion of Forwarding Rate is
   cited in RFC 2285 [RFC2285].

   A separate test SHOULD be performed for Throughput tests using
   IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.

8.1.  Throughput Baseline

   Objective:

      Measure the intrinsic cleartext throughput of a device without
      the use of IPsec.  The throughput baseline methodology and
      reporting format are derived from [RFC2544].

   Procedure:

      Send a specific number of frames that match the IPsec SA
      selector(s) to be tested at a specific rate through the DUT and
      then count the frames that are transmitted by the DUT.  If the
      count of offered frames is equal to the count of received frames,
      the rate of the offered stream is increased and the test is
      rerun.  If fewer frames are received than were transmitted, the
      rate of the offered stream is reduced and the test is rerun.

      The throughput is the fastest rate at which the count of test
      frames transmitted by the DUT is equal to the number of test
      frames sent to it by the test equipment.

   Reporting Format:

      The results of the throughput test SHOULD be reported in the form
      of a graph.  If it is, the x coordinate SHOULD be the frame size
      and the y coordinate SHOULD be the frame rate.  There SHOULD be
      at least two lines on the graph.  There SHOULD be one line
      showing the theoretical frame rate for the media at the various
      frame sizes.  The second line SHOULD be the plot of the test
      results.
      Additional lines MAY be used on the graph to report the results
      for each type of data stream tested.  Text accompanying the graph
      SHOULD indicate the protocol, data stream format, and type of
      media used in the tests.

      We assume that if a single value is desired for advertising
      purposes, the vendor will select the rate for the minimum frame
      size for the media.  If this is done, then the figure MUST be
      expressed in packets per second.  The rate MAY also be expressed
      in bits (or bytes) per second if the vendor so desires.  The
      statement of performance MUST include:

      *  Measured maximum frame rate

      *  Size of the frame used

      *  Theoretical limit of the media for that frame size

      *  Type of protocol used in the test

      Even if a single value is used as part of the advertising copy,
      the full table of results SHOULD be included in the product data
      sheet.

8.2.  IPsec Throughput

   Objective:

      Measure the intrinsic throughput of a device utilizing IPsec.

   Procedure:

      Send a specific number of cleartext frames that match the IPsec
      SA selector(s) at a specific rate through the DUT/SUT.  DUTa will
      encrypt the traffic and forward it to DUTb, which will in turn
      decrypt the traffic and forward it to the testing device.  The
      testing device counts the frames that are transmitted by DUTb.
      If the count of offered frames is equal to the count of received
      frames, the rate of the offered stream is increased and the test
      is rerun.  If fewer frames are received than were transmitted,
      the rate of the offered stream is reduced and the test is rerun.

      The IPsec Throughput is the fastest rate at which the count of
      test frames transmitted by the DUT/SUT is equal to the number of
      test frames sent to it by the test equipment.
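   The iterative "increase on success, reduce on loss" rate adjustment
   used throughout these throughput procedures is, in practice,
   commonly implemented as a binary search over the offered rate.  A
   minimal Python sketch is shown below for illustration only;
   `run_trial` is an assumed stand-in for driving the actual tester,
   and the mock DUT at the end is hypothetical:

```python
def throughput_search(run_trial, line_rate_fps, resolution_fps=1.0):
    """Binary search for the highest frame rate with zero frame loss.

    run_trial(rate) is assumed to offer frames at `rate` frames/sec for
    the trial duration and return (offered_count, received_count).
    """
    lo, hi = 0.0, line_rate_fps
    best = 0.0
    while hi - lo > resolution_fps:
        rate = (lo + hi) / 2.0
        offered, received = run_trial(rate)
        if received == offered:   # no loss: search higher rates
            best, lo = rate, rate
        else:                     # loss observed: back off
            hi = rate
    return best                   # throughput, in frames per second

# Mock DUT that starts dropping frames above 8000 fps (hypothetical):
mock = lambda rate: (1000, 1000 if rate <= 8000 else 999)
print(throughput_search(mock, 14880.0))  # converges close to 8000
```

   The search resolution bounds how close the reported value is to the
   true zero-loss rate; a real harness would also repeat each trial for
   the full trial duration given in Section 6.5.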
      For tests using multiple IPsec SAs, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round-robin fashion to keep the test balanced
      so as not to overload any single IPsec SA.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section 8.1,
      with the additional requirement that the Security Context
      parameters defined in Section 5.6 and utilized for this test MUST
      be included in any statement of performance.

8.3.  IPsec Encryption Throughput

   Objective:

      Measure the intrinsic, vendor-specific IPsec Encryption
      Throughput of the DUT.

   Procedure:

      Send a specific number of cleartext frames that match the IPsec
      SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the cleartext frames, perform IPsec operations, and then
      send the IPsec protected frames to the tester.  Upon receipt of
      the encrypted packets, the testing device will timestamp the
      packet(s) and record the result.  If the count of offered frames
      is equal to the count of received frames, the rate of the offered
      stream is increased and the test is rerun.  If fewer frames are
      received than were transmitted, the rate of the offered stream is
      reduced and the test is rerun.

      The IPsec Encryption Throughput is the fastest rate at which the
      count of test frames transmitted by the DUT is equal to the
      number of test frames sent to it by the test equipment.

      For tests using multiple IPsec SAs, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round-robin fashion to keep the test balanced
      so as not to overload any single IPsec SA.
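The round-robin requirement above is straightforward to realize on the tester side; a sketch using `itertools.cycle`, where the selector values are purely illustrative:

```python
from itertools import cycle

def balanced_stream(selectors, total_frames):
    """Pair each outgoing frame with the next IPsec SA traffic
    selector in round-robin order, so no single SA is overloaded."""
    next_selector = cycle(selectors)
    return [(i, next(next_selector)) for i in range(total_frames)]
```

For example, spreading 9 frames across 3 selectors assigns exactly 3 frames to each SA.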
   Reporting Format:

      The reporting format SHOULD be the same as listed in Section 8.1,
      with the additional requirement that the Security Context
      parameters defined in Section 5.6 and utilized for this test MUST
      be included in any statement of performance.

8.4.  IPsec Decryption Throughput

   Objective:

      Measure the intrinsic, vendor-specific IPsec Decryption
      Throughput of the DUT.

   Procedure:

      Send a specific number of IPsec protected frames that match the
      IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the IPsec protected frames, perform IPsec operations, and
      then send the cleartext frames to the tester.  Upon receipt of
      the cleartext packets, the testing device will timestamp the
      packet(s) and record the result.  If the count of offered frames
      is equal to the count of received frames, the rate of the offered
      stream is increased and the test is rerun.  If fewer frames are
      received than were transmitted, the rate of the offered stream is
      reduced and the test is rerun.

      The IPsec Decryption Throughput is the fastest rate at which the
      count of test frames transmitted by the DUT is equal to the
      number of test frames sent to it by the test equipment.

      For tests using multiple IPsec SAs, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round-robin fashion to keep the test balanced
      so as not to overload any single IPsec SA.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section 8.1,
      with the additional requirement that the Security Context
      parameters defined in Section 5.6 and utilized for this test MUST
      be included in any statement of performance.

8.5.  IPsec Fragmentation Throughput

   Objective:

      TBD

   Procedure:

      TBD

   Reporting Format:

      TBD

8.6.  IPsec Reassembly Throughput

   Objective:

      TBD

   Procedure:

      TBD

   Reporting Format:

      TBD

9.  Latency

   This section presents methodologies relating to the characterization
   of the forwarding latency of a DUT/SUT.  It extends the concept of
   latency characterization presented in [RFC2544] to an IPsec
   environment.

   Separate latency tests SHOULD be performed using IPv4/UDP, IPv6/UDP,
   IPv4/TCP, and IPv6/TCP traffic.

   In order to lessen the effect of packet buffering in the DUT/SUT,
   the latency tests MUST be run at the measured IPsec throughput level
   of the DUT/SUT; IPsec latency at other offered loads is OPTIONAL.

   Lastly, [RFC1242] and [RFC2544] draw a distinction between two
   classes of devices: "store and forward" and "bit-forwarding".  Each
   class impacts how latency is collected and subsequently presented.
   See the related RFCs for more information.  In practice, much of the
   test equipment will collect the latency measurement for one class or
   the other and, if needed, mathematically derive the reported value
   by the addition or subtraction of values accounting for medium
   propagation delay of the packet, bit times to the timestamp trigger
   within the packet, etc.  Test equipment vendors SHOULD provide
   documentation regarding the composition and calculation of the
   latency values being reported.  The user of this data SHOULD
   understand the nature of the latency values being reported,
   especially when comparing results collected from multiple test
   vendors.  (E.g., if test vendor A presents a "store and forward"
   latency result and test vendor B presents a "bit-forwarding" latency
   result, the user may erroneously conclude that the DUT has two
   differing sets of latency values.)

9.1.  Latency Baseline

   Objective:

      Measure the intrinsic latency (min/avg/max) introduced by a
      device without the use of IPsec.

   Procedure:

      First determine the throughput for the DUT/SUT at each of the
      listed frame sizes.  Send a stream of frames at a particular
      frame size through the DUT at the determined throughput rate
      using frames that match the IPsec SA selector(s) to be tested.
      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The receiver logic in the test equipment MUST recognize the tag
      information in the frame stream and record the time at which the
      tagged frame was received (timestamp B).

      The latency is timestamp B minus timestamp A, as per the relevant
      definition from [RFC1242]: namely, latency as defined for store
      and forward devices or latency as defined for bit forwarding
      devices.

      The test MUST be repeated at least 20 times, with the reported
      value being the average of the recorded values.

   Reporting Format:

      The report MUST state which definition of latency (from
      [RFC1242]) was used for this test.  The latency results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the latency test was run for that frame size,
      the media types tested, and the resultant latency values for each
      type of data stream tested.

9.2.  IPsec Latency

   Objective:

      Measure the intrinsic IPsec Latency (min/avg/max) introduced by a
      device when using IPsec.

   Procedure:

      First determine the throughput for the DUT/SUT at each of the
      listed frame sizes.  Send a stream of cleartext frames at a
      particular frame size through the DUT/SUT at the determined
      throughput rate using frames that match the IPsec SA selector(s)
      to be tested.  DUTa will encrypt the traffic and forward it to
      DUTb, which will in turn decrypt the traffic and forward it to
      the testing device.
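The tagged-frame procedure reduces to timestamp arithmetic repeated over at least 20 trials; a sketch, where the hypothetical `run_trial()` drives one 120-second stream and returns timestamps A (tagged frame fully transmitted) and B (tagged frame received):

```python
def measure_latency(run_trial, trials=20):
    """RFC 1242-style latency: timestamp B minus timestamp A,
    reported as (min, avg, max) over the repeated trials."""
    samples = [b - a for a, b in (run_trial() for _ in range(trials))]
    return min(samples), sum(samples) / len(samples), max(samples)
```

Whether B - A represents store-and-forward or bit-forwarding latency depends on where the tester triggers its timestamps, as discussed above.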
      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The receiver logic in the test equipment MUST recognize the tag
      information in the frame stream and record the time at which the
      tagged frame was received (timestamp B).

      The IPsec Latency is timestamp B minus timestamp A, as per the
      relevant definition from [RFC1242]: namely, latency as defined
      for store and forward devices or latency as defined for bit
      forwarding devices.

      The test MUST be repeated at least 20 times, with the reported
      value being the average of the recorded values.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section 9.1,
      with the additional requirement that the Security Context
      parameters defined in Section 5.6 and utilized for this test MUST
      be included in any statement of performance.

9.3.  IPsec Encryption Latency

   Objective:

      Measure the vendor-specific IPsec Encryption Latency of the DUT
      for IPsec protected traffic.

   Procedure:

      Send a stream of cleartext frames at a particular frame size
      through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The DUT will receive the cleartext frames, perform IPsec
      operations, and then send the IPsec protected frames to the
      tester.  Upon receipt of the encrypted frames, the receiver logic
      in the test equipment MUST recognize the tag information in the
      frame stream and record the time at which the tagged frame was
      received (timestamp B).

      The IPsec Encryption Latency is timestamp B minus timestamp A, as
      per the relevant definition from [RFC1242]: namely, latency as
      defined for store and forward devices or latency as defined for
      bit forwarding devices.

      The test MUST be repeated at least 20 times, with the reported
      value being the average of the recorded values.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section 9.1,
      with the additional requirement that the Security Context
      parameters defined in Section 5.6 and utilized for this test MUST
      be included in any statement of performance.

9.4.  IPsec Decryption Latency

   Objective:

      Measure the vendor-specific IPsec Decryption Latency of the DUT
      for IPsec protected traffic.

   Procedure:

      Send a stream of IPsec protected frames at a particular frame
      size through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The DUT will receive the IPsec protected frames, perform IPsec
      operations, and then send the cleartext frames to the tester.
      Upon receipt of the decrypted frames, the receiver logic in the
      test equipment MUST recognize the tag information in the frame
      stream and record the time at which the tagged frame was received
      (timestamp B).

      The IPsec Decryption Latency is timestamp B minus timestamp A, as
      per the relevant definition from [RFC1242]: namely, latency as
      defined for store and forward devices or latency as defined for
      bit forwarding devices.

      The test MUST be repeated at least 20 times, with the reported
      value being the average of the recorded values.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section 9.1,
      with the additional requirement that the Security Context
      parameters defined in Section 5.6 and utilized for this test MUST
      be included in any statement of performance.

10.  Time To First Packet

   Objective:

      Measure the time it takes to transmit a packet when no SAs have
      been established.

   Procedure:

      Determine the IPsec throughput for the DUT/SUT at each of the
      listed frame sizes.  Start with a DUT/SUT with Configured
      Tunnels.  Send a stream of cleartext frames at a particular frame
      size through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The time at which the first frame is fully transmitted from the
      testing device is recorded as timestamp A.  The time at which the
      testing device receives its first frame from the DUT/SUT is
      recorded as timestamp B.  The Time To First Packet is the
      difference between timestamp B and timestamp A.

   Reporting Format:

      The Time To First Packet results SHOULD be reported in the format
      of a table with a row for each of the tested frame sizes.  There
      SHOULD be columns for the frame size, the rate at which the TTFP
      test was run for that frame size, the media types tested, and the
      resultant TTFP values for each type of data stream tested.  The
      Security Context parameters defined in Section 5.6 and utilized
      for this test MUST be included in any statement of performance.

11.  Frame Loss Rate

   This section presents methodologies relating to the characterization
   of frame loss rate, as defined in [RFC1242], in an IPsec
   environment.

11.1.  Frame Loss Baseline

   Objective:

      To determine the frame loss rate, as defined in [RFC1242], of a
      DUT/SUT throughout the entire range of input data rates and frame
      sizes, without the use of IPsec.

   Procedure:

      Send a specific number of frames at a specific rate through the
      DUT/SUT to be tested using frames that match the IPsec SA
      selector(s) to be tested, and count the frames that are
      transmitted by the DUT/SUT.  The frame loss rate at each point is
      calculated using the following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used, and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The results of the frame loss rate test SHOULD be plotted as a
      graph.  If this is done, the X axis MUST be the input frame rate
      as a percent of the theoretical rate for the media at the
      specific frame size.  The Y axis MUST be the percent loss at the
      particular input rate.  The left end of the X axis and the bottom
      of the Y axis MUST be 0 percent; the right end of the X axis and
      the top of the Y axis MUST be 100 percent.  Multiple lines on the
      graph MAY be used to report the frame loss rate for different
      frame sizes, protocols, and types of data streams.

11.2.  IPsec Frame Loss

   Objective:

      To measure the frame loss rate of a device when using IPsec to
      protect the data flow.

   Procedure:

      Ensure that the DUT/SUT is in active tunnel mode.  Send a
      specific number of cleartext frames that match the IPsec SA
      selector(s) to be tested at a specific rate through the DUT/SUT.
      DUTa will encrypt the traffic and forward it to DUTb, which will
      in turn decrypt the traffic and forward it to the testing device.
      The testing device counts the frames that are transmitted by
      DUTb.  The frame loss rate at each point is calculated using the
      following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used, and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      11.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

11.3.  IPsec Encryption Frame Loss

   Objective:

      To measure the effect of IPsec encryption on the frame loss rate
      of a device.

   Procedure:

      Send a specific number of cleartext frames that match the IPsec
      SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the cleartext frames, perform IPsec operations, and then
      send the IPsec protected frames to the tester.  The testing
      device counts the encrypted frames that are transmitted by the
      DUT.
      The frame loss rate at each point is calculated using the
      following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used, and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      11.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

11.4.  IPsec Decryption Frame Loss

   Objective:

      To measure the effect of IPsec decryption on the frame loss rate
      of a device.

   Procedure:

      Send a specific number of IPsec protected frames that match the
      IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the IPsec protected frames, perform IPsec operations, and
      then send the cleartext frames to the tester.  The testing device
      counts the cleartext frames that are transmitted by the DUT.  The
      frame loss rate at each point is calculated using the following
      equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used, and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      11.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

11.5.  IKE Phase 2 Rekey Frame Loss

   Objective:

      To measure the frame loss due to an IKE Phase 2 (i.e., IPsec SA)
      Rekey event.

   Procedure:

      The procedure is the same as in Section 11.2, with the exception
      that the IPsec SA lifetime MUST be configured to be one-third of
      the trial duration, or one-third of the total number of bytes to
      be transmitted during the trial duration.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      11.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

12.  Back-to-back Frames

   This section presents methodologies relating to the characterization
   of back-to-back frame processing, as defined in [RFC1242], in an
   IPsec environment.

12.1.  Back-to-back Frames Baseline

   Objective:

      To characterize the ability of a DUT to process back-to-back
      frames, as defined in [RFC1242], without the use of IPsec.

   Procedure:

      Send a burst of frames that match the IPsec SA selector(s) to be
      tested with minimum inter-frame gaps to the DUT and count the
      number of frames forwarded by the DUT.  If the count of
      transmitted frames is equal to the number of frames forwarded,
      the length of the burst is increased and the test is rerun.  If
      the number of forwarded frames is less than the number
      transmitted, the length of the burst is reduced and the test is
      rerun.
      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The back-to-back results SHOULD be reported in the format of a
      table with a row for each of the tested frame sizes.  There
      SHOULD be columns for the frame size and for the resultant
      average frame count for each type of data stream tested.  The
      standard deviation for each measurement MAY also be reported.

12.2.  IPsec Back-to-back Frames

   Objective:

      To measure the back-to-back frame processing rate of a device
      when using IPsec to protect the data flow.

   Procedure:

      Send a burst of cleartext frames that match the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to the
      DUT/SUT.  DUTa will encrypt the traffic and forward it to DUTb,
      which will in turn decrypt the traffic and forward it to the
      testing device.  The testing device counts the frames that are
      transmitted by DUTb.  If the count of transmitted frames is equal
      to the number of frames forwarded, the length of the burst is
      increased and the test is rerun.  If the number of forwarded
      frames is less than the number transmitted, the length of the
      burst is reduced and the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT/SUT will handle without the loss of any
      frames.  The trial length MUST be at least 2 seconds and SHOULD
      be repeated at least 50 times, with the average of the recorded
      values being reported.
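The burst-length search above mirrors the throughput search, but over burst length rather than rate; a sketch, with `run_burst(n)` as a hypothetical hook that sends a burst of n minimum-gap frames and returns the number forwarded:

```python
def back_to_back(run_burst, max_burst):
    """Number of frames in the longest burst the DUT forwards
    without loss (one trial; average at least 50 such results)."""
    lo, hi = 0, max_burst + 1       # lo: longest lossless burst so far
    while hi - lo > 1:
        n = (lo + hi) // 2
        if run_burst(n) == n:       # all forwarded: increase the burst
            lo = n
        else:                       # loss: reduce the burst
            hi = n
    return lo
```

Each trial must still satisfy the 2-second minimum length; the averaging over 50 repetitions happens outside this search.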
   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      12.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

12.3.  IPsec Encryption Back-to-back Frames

   Objective:

      To measure the effect of IPsec encryption on the back-to-back
      frame processing rate of a device.

   Procedure:

      Send a burst of cleartext frames that match the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to the
      DUT.  The DUT will receive the cleartext frames, perform IPsec
      operations, and then send the IPsec protected frames to the
      tester.  The testing device counts the encrypted frames that are
      transmitted by the DUT.  If the count of transmitted encrypted
      frames is equal to the number of frames forwarded, the length of
      the burst is increased and the test is rerun.  If the number of
      forwarded frames is less than the number transmitted, the length
      of the burst is reduced and the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      12.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

12.4.  IPsec Decryption Back-to-back Frames

   Objective:

      To measure the effect of IPsec decryption on the back-to-back
      frame processing rate of a device.

   Procedure:

      Send a burst of IPsec protected frames that match the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to the
      DUT.  The DUT will receive the IPsec protected frames, perform
      IPsec operations, and then send the cleartext frames to the
      tester.  The testing device counts the frames that are
      transmitted by the DUT.  If the count of transmitted frames is
      equal to the number of frames forwarded, the length of the burst
      is increased and the test is rerun.  If the number of forwarded
      frames is less than the number transmitted, the length of the
      burst is reduced and the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The reporting format SHOULD be the same as listed in Section
      12.1, with the additional requirement that the Security Context
      parameters defined in Section 6.7 and utilized for this test MUST
      be included in any statement of performance.

13.  IPsec Tunnel Setup Behavior

13.1.  IPsec Tunnel Setup Rate

   Objective:

      Determine the rate at which IPsec Tunnels can be established.

   Procedure:

      Configure the DUT/SUT with n IKE Phase 1 and corresponding IKE
      Phase 2 policies.  Ensure that no SAs are established and that
      the DUT/SUT is in configured tunnel mode for all n policies.
      Send a stream of cleartext frames at a particular frame size
      through the DUT/SUT at the determined throughput rate using
      frames with selectors matching the first IKE Phase 1 policy.  As
      soon as the testing device receives its first frame from the
      DUT/SUT, it knows that the IPsec Tunnel is established and starts
      sending the next stream of cleartext frames using the same frame
      size and throughput rate, but this time using selectors matching
      the second IKE Phase 1 policy.
      This process is repeated until all configured IPsec Tunnels have
      been established.

      The IPsec Tunnel Setup Rate is determined by the following
      formula:

         Tunnel Setup Rate = n / [Duration of Test -
                                  (n * frame_transmit_time)]

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test.  It is RECOMMENDED that a
      minimum of n=100 IPsec Tunnels be tested to get a sample size
      large enough to depict real-world behavior.

   Reporting Format:

      The Tunnel Setup Rate results SHOULD be reported in the format of
      a table with a row for each of the tested frame sizes.  There
      SHOULD be columns for the frame size, the rate at which the test
      was run for that frame size, the media types tested, and the
      resultant Tunnel Setup Rate values for each type of data stream
      tested.  The Security Context parameters defined in Section 6.7
      and utilized for this test MUST be included in any statement of
      performance.

13.2.  IKE Phase 1 Setup Rate

   Objective:

      Determine the rate at which IKE SAs can be established.

   Procedure:

      Configure the DUT with n IKE Phase 1 and corresponding IKE Phase
      2 policies.  Ensure that no SAs are established and that the DUT
      is in configured tunnel mode for all n policies.  Send a stream
      of cleartext frames at a particular frame size through the DUT at
      the determined throughput rate using frames with selectors
      matching the first IKE Phase 1 policy.  As soon as the Phase 1 SA
      is established, the testing device starts sending the next stream
      of cleartext frames using the same frame size and throughput
      rate, but this time using selectors matching the second IKE Phase
      1 policy.  This process is repeated until all configured IKE SAs
      have been established.

      The IKE SA Setup Rate is determined by the following formula:

         IKE SA Setup Rate = n / [Duration of Test -
                                  (n * frame_transmit_time)]

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test.  It is RECOMMENDED that a
      minimum of n=100 IKE SAs be tested to get a sample size large
      enough to depict real-world behavior.

   Reporting Format:

      The IKE Phase 1 Setup Rate results SHOULD be reported in the
      format of a table with a row for each of the tested frame sizes.
      There SHOULD be columns for the frame size, the rate at which the
      test was run for that frame size, the media types tested, and the
      resultant IKE Phase 1 Setup Rate values for each type of data
      stream tested.  The Security Context parameters defined in
      Section 6.7 and utilized for this test MUST be included in any
      statement of performance.

13.3.  IKE Phase 2 Setup Rate

   Objective:

      Determine the rate at which IPsec SAs can be established.

   Procedure:

      Configure the DUT with a single IKE Phase 1 policy and n
      corresponding IKE Phase 2 policies.  Ensure that no SAs are
      established and that the DUT is in configured tunnel mode for all
      policies.  Send a stream of cleartext frames at a particular
      frame size through the DUT at the determined throughput rate
      using frames with selectors matching the first IPsec SA policy.

      The time at which the IKE SA is established is recorded as
      timestamp A.  As soon as the Phase 1 SA is established, the IPsec
      SA negotiation will be initiated.  Once the first IPsec SA has
      been established, start sending the next stream of cleartext
      frames using the same frame size and throughput rate, but this
      time using selectors matching the second IKE Phase 2 policy.
      This process is repeated until all configured IPsec SAs have been
      established.
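The setup-rate computations in Sections 13.1 and 13.2, and the Phase 2 variant that follows, are direct arithmetic on the recorded times; a sketch with times in seconds (the function names are illustrative, not the draft's):

```python
def tunnel_setup_rate(n, test_duration, frame_transmit_time):
    """Tunnel (or IKE SA) Setup Rate: n tunnels divided by the test
    time, less the time spent merely transmitting the n trigger
    frames."""
    return n / (test_duration - n * frame_transmit_time)

def ipsec_sa_setup_rate(n, test_duration, timestamp_a, frame_transmit_time):
    """IKE Phase 2 variant: additionally discounts timestamp A (when
    the single IKE SA came up) and counts only n-1 trigger frames."""
    return n / (test_duration - (timestamp_a + (n - 1) * frame_transmit_time))
```

For example, establishing n=100 tunnels in a 60-second test with a 0.1-second frame transmit time yields 100 / (60 - 10) = 2 tunnels per second.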
      The IPsec SA Setup Rate is determined by the following formula:

         IPsec SA Setup Rate = n / [Duration of Test -
                                    {A + ((n-1) * frame_transmit_time)}]

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test.  It is RECOMMENDED that a
      minimum of n=100 IPsec SAs be tested to get a sample size large
      enough to depict real-world behavior.

   Reporting Format:

      The IKE Phase 2 Setup Rate results SHOULD be reported in the
      format of a table with a row for each of the tested frame sizes.
      There SHOULD be columns for the frame size, the rate at which the
      test was run for that frame size, the media types tested, and the
      resultant IKE Phase 2 Setup Rate values for each type of data
      stream tested.  The Security Context parameters defined in
      Section 6.7 and utilized for this test MUST be included in any
      statement of performance.

14.  IPsec Rekey Behavior

14.1.  IKE Phase 1 Rekey Rate

   Objective:

      Determine the maximum rate at which an IPsec Device can rekey IKE
      SAs.

   Procedure:

      Set up a number of Active IPsec Tunnels, each with an IKE SA
      lifetime set to one-half of the test duration.  Send a stream of
      cleartext frames at a particular frame size through the DUT at
      the determined throughput rate using frames with selectors
      matching each of the IPsec Tunnels.  Record the time at which the
      first IKE SA rekey is initiated.

   Reporting Format:

      TBD

14.2.  IKE Phase 2 Rekey Rate

   Objective:

      Determine the maximum rate at which an IPsec Device can rekey
      IPsec SAs.

   Procedure:

      TBD

   Reporting Format:

      TBD

15.  IPsec Tunnel Failover Time

   This section presents methodologies relating to the characterization
   of the failover behavior of a DUT/SUT in an IPsec environment.
   In order to lessen the effect of packet buffering in the DUT/SUT,
   the Tunnel Failover Time tests MUST be run at the measured IPsec
   throughput level of the DUT.  Tunnel Failover Time tests at other
   offered constant loads are OPTIONAL.

   Tunnel failover can be achieved in various ways, such as:

   o  Failover between two or more software instances of an IPsec
      stack.

   o  Failover between two IPsec devices.

   o  Failover between two or more crypto engines.

   o  Failover between hardware and software crypto.

   In all of the above cases there is at least one active IPsec device
   and a standby device.  In some cases no dedicated standby device is
   present, and two or more IPsec devices back each other up in case
   of a catastrophic device or stack failure.  The standby (or other
   potentially active) IPsec Devices can back up the active IPsec
   Device in either a stateless or a stateful manner.  In the former
   case, Phase 1 SAs as well as Phase 2 SAs will need to be re-
   established in order to guarantee packet forwarding.  In the latter
   case, the SPD and SADB of the active IPsec Device are synchronized
   to the standby IPsec Device to ensure immediate packet path
   recovery.

   Objective:

      Determine the time required to fail over all Active Tunnels from
      an active IPsec Device to its standby device.

   Procedure:

      Before a failover can be triggered, the IPsec Device has to be
      in a state where the active stack/engine/node has the maximum
      supported number of Active Tunnels.  The Tunnels will be
      transporting bidirectional traffic at the Tunnel Throughput rate
      for the smallest frame size that the stack/engine/node is
      capable of forwarding (in most cases, this will be 64 bytes).
      The traffic should traverse all Active Tunnels in a round-robin
      fashion.
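      The round-robin distribution of test traffic over the Active
      Tunnels can be sketched as follows; the tunnel and frame
      abstractions here are illustrative only, not a tester
      requirement:

```python
from itertools import cycle

def distribute_frames(frames, tunnels):
    """Assign each cleartext test frame to the next Active Tunnel in
    round-robin order, as the procedure above requires."""
    rr = cycle(tunnels)
    return [(frame, next(rr)) for frame in frames]

# Six frames spread over three tunnels land on tun0, tun1, tun2,
# tun0, tun1, tun2.
assignments = distribute_frames(range(6), ["tun0", "tun1", "tun2"])
```

      Any scheme that offers an equal share of the load to every
      Active Tunnel would satisfy the procedure equally well.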
      It is RECOMMENDED that the test be repeated for various numbers
      of Active Tunnels as well as for different frame sizes and frame
      rates.

      When traffic is flowing through all Active Tunnels in steady
      state, a failover shall be triggered.

      Both receiving sides of the testers will then examine the
      sequence counters in the instrumented packets that are being
      forwarded through the Tunnels.  Each Tunnel MUST have its own
      counter to track packet loss on a per-SA basis.

      If the tester observes no sequence number drops on any of the
      Tunnels in either direction, the Failover Time MUST be reported
      as 'null', indicating that the failover was immediate and
      without any packet loss.

      In all other cases, where the tester observes a gap in the
      sequence numbers of the instrumented payload of the packets, the
      tester will monitor all SAs and look for any Tunnels that are
      still not receiving packets after the failover.  These will be
      marked as 'pending' Tunnels.  Active Tunnels that are again
      forwarding packets without any packet loss shall be marked as
      'recovered' Tunnels.  In the background, the tester will keep
      monitoring all SAs to make sure that no packets are dropped; if
      packets are dropped, the Tunnel in question will be placed back
      in the 'pending' state.

      Note that reordered packets can naturally occur after
      encryption/decryption.  This is not a valid reason to place a
      Tunnel back in the 'pending' state.  A sliding window of 128
      packets per SA SHALL be allowed before packet loss is declared
      on the SA.

      The tester will wait until all Tunnels are marked as
      'recovered'.  It will then find the SA with the largest gap in
      sequence numbers.
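      One way the tester's per-SA bookkeeping could be sketched: each
      tunnel tracks received sequence numbers, tolerates reordering
      within the 128-packet window before declaring loss, and the
      largest observed loss gap multiplied by the frame transmit time
      yields the Tunnel Failover Time.  The class and function names
      below are illustrative, not prescribed by this methodology:

```python
REORDER_WINDOW = 128  # packets per SA tolerated before declaring loss

class TunnelMonitor:
    """Track per-SA sequence numbers and the largest loss gap."""

    def __init__(self):
        self.highest_seen = -1
        self.largest_gap = 0
        self.state = "pending"   # no packet received yet after failover

    def receive(self, seq):
        if seq > self.highest_seen:
            gap = seq - self.highest_seen - 1
            if gap > REORDER_WINDOW:   # beyond reorder tolerance: loss
                self.largest_gap = max(self.largest_gap, gap)
            self.highest_seen = seq
        self.state = "recovered"       # tunnel is forwarding again

def failover_time(monitors, frame_transmit_time):
    """Largest per-SA loss gap times the frame transmit time; None
    models the 'null' result (no loss anywhere), and infinity models
    tunnels that never recover."""
    if any(m.state != "recovered" for m in monitors):
        return float("inf")
    worst = max((m.largest_gap for m in monitors), default=0)
    return None if worst == 0 else worst * frame_transmit_time
```

      A real tester would run one such monitor per SA per direction
      and keep updating it for the whole observation period, since a
      later drop returns a tunnel to 'pending'.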
      Given that the frame size is fixed and the transmit time for that
      frame size can easily be calculated for the initiator links, a
      simple multiplication of the frame transmit time by the largest
      packet loss gap will yield the Tunnel Failover Time.

      If the tester never reaches a state where all Tunnels are marked
      as 'recovered', the Failover Time MUST be reported as 'infinite'.

   Reporting Format:

      The results shall be presented in tabular format, where the
      first column lists the number of Active Tunnels, the second
      column the frame size, the third column the frame rate, and the
      fourth column the Tunnel Failover Time in milliseconds.

16.  Acknowledgements

   The authors would like to acknowledge the following individuals for
   their help with and participation in the compilation and editing of
   this document: Michele Bustos, Ixia; Paul Hoffman, VPNC.

17.  References

17.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC1981]  McCann, J., Deering, S., and J. Mogul, "Path MTU
              Discovery for IP version 6", RFC 1981, August 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2393]  Shacham, A., Monsour, R., Pereira, R., and M. Thomas,
              "IP Payload Compression Protocol (IPComp)", RFC 2393,
              December 1998.

   [RFC2401]  Kent, S. and R. Atkinson, "Security Architecture for the
              Internet Protocol", RFC 2401, November 1998.

   [RFC2402]  Kent, S. and R. Atkinson, "IP Authentication Header",
              RFC 2402, November 1998.

   [RFC2403]  Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within
              ESP and AH", RFC 2403, November 1998.

   [RFC2404]  Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96
              within ESP and AH", RFC 2404, November 1998.

   [RFC2405]  Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher
              Algorithm With Explicit IV", RFC 2405, November 1998.

   [RFC2406]  Kent, S. and R. Atkinson, "IP Encapsulating Security
              Payload (ESP)", RFC 2406, November 1998.

   [RFC2407]  Piper, D., "The Internet IP Security Domain of
              Interpretation for ISAKMP", RFC 2407, November 1998.

   [RFC2408]  Maughan, D., Schneider, M., and M. Schertler, "Internet
              Security Association and Key Management Protocol
              (ISAKMP)", RFC 2408, November 1998.

   [RFC2409]  Harkins, D. and D. Carrel, "The Internet Key Exchange
              (IKE)", RFC 2409, November 1998.

   [RFC2410]  Glenn, R. and S. Kent, "The NULL Encryption Algorithm
              and Its Use With IPsec", RFC 2410, November 1998.

   [RFC2411]  Thayer, R., Doraswamy, N., and R. Glenn, "IP Security
              Document Roadmap", RFC 2411, November 1998.

   [RFC2412]  Orman, H., "The OAKLEY Key Determination Protocol",
              RFC 2412, November 1998.

   [RFC2432]  Dubray, K., "Terminology for IP Multicast Benchmarking",
              RFC 2432, October 1998.

   [RFC2451]  Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher
              Algorithms", RFC 2451, November 1998.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2547]  Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547,
              March 1999.

   [RFC2661]  Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn,
              G., and B. Palter, "Layer Two Tunneling Protocol
              "L2TP"", RFC 2661, August 1999.

   [RFC2784]  Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
              Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
              March 2000.

   [RFC4109]  Hoffman, P., "Algorithms for Internet Key Exchange
              version 1 (IKEv1)", RFC 4109, May 2005.
   [I-D.ietf-ipsec-ikev2]
              Kaufman, C., "Internet Key Exchange (IKEv2) Protocol",
              draft-ietf-ipsec-ikev2-17 (work in progress),
              October 2004.

   [I-D.ietf-ipsec-properties]
              Krywaniuk, A., "Security Properties of the IPsec
              Protocol Suite", draft-ietf-ipsec-properties-02 (work in
              progress), July 2002.

17.2.  Informative References

   [FIPS.186-1.1998]
              National Institute of Standards and Technology, "Digital
              Signature Standard", FIPS PUB 186-1, December 1998.

Authors' Addresses

   Merike Kaeo
   Double Shot Security
   520 Washington Blvd #363
   Marina Del Rey, CA 90292
   US

   Phone: +1 (310)866-0165
   Email: kaeo@merike.com

   Tim Van Herck
   Cisco Systems
   170 West Tasman Drive
   San Jose, CA 95134-1706
   US

   Email: herckt@cisco.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.
   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2005).  This document is
   subject to the rights, licenses and restrictions contained in
   BCP 78, and except as set forth therein, the authors retain all
   their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.