Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Expires: September 6, 2006                                  T. Van Herck
                                                           Cisco Systems
                                                           March 5, 2006

              Methodology for Benchmarking IPsec Devices
                      draft-ietf-bmwg-ipsec-meth-01

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 6, 2006.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract

   The purpose of this draft is to describe methodology specific to the
   benchmarking of IPsec IP forwarding devices.  It builds upon the
   tenets set forth in RFC 2544, RFC 2432 and other IETF Benchmarking
   Methodology Working Group (BMWG) efforts.  This document seeks to
   extend these efforts to the IPsec paradigm.
   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related
   terms.  The Methodology documents define the procedures required to
   collect the benchmarks cited in the corresponding Terminology
   documents.

Table of Contents

   1.  Introduction
   2.  Document Scope
   3.  Key Words to Reflect Requirements
   4.  Test Considerations
   5.  Test Topologies
   6.  Test Parameters
       6.1.  Frame Type
             6.1.1.  IP
             6.1.2.  UDP
             6.1.3.  TCP
       6.2.  Frame Sizes
       6.3.  Fragmentation and Reassembly
       6.4.  Time To Live
       6.5.  Trial Duration
       6.6.  Security Context Parameters
             6.6.1.  IPsec Transform Sets
             6.6.2.  IPsec Topologies
             6.6.3.  IKE Keepalives
             6.6.4.  IKE DH-group
             6.6.5.  IKE SA / IPsec SA Lifetime
             6.6.6.  IPsec Selectors
   7.  Capacity
       7.1.  IKE SA Capacity
       7.2.  IPsec SA Capacity
   8.  Throughput
       8.1.  Throughput baseline
       8.2.  IPsec Throughput
       8.3.  IPsec Encryption Throughput
       8.4.  IPsec Decryption Throughput
   9.  Latency
       9.1.  Latency Baseline
       9.2.  IPsec Latency
       9.3.  IPsec Encryption Latency
       9.4.  IPsec Decryption Latency
   10. Time To First Packet
   11. Frame Loss Rate
       11.1. Frame Loss Baseline
       11.2. IPsec Frame Loss
       11.3. IPsec Encryption Frame Loss
       11.4. IPsec Decryption Frame Loss
       11.5. IKE Phase 2 Rekey Frame Loss
   12. Back-to-back Frames
       12.1. Back-to-back Frames Baseline
       12.2. IPsec Back-to-back Frames
       12.3. IPsec Encryption Back-to-back Frames
       12.4. IPsec Decryption Back-to-back Frames
   13. IPsec Tunnel Setup Behavior
       13.1. IPsec Tunnel Setup Rate
       13.2. IKE Phase 1 Setup Rate
       13.3. IKE Phase 2 Setup Rate
   14. IPsec Rekey Behavior
       14.1. IKE Phase 1 Rekey Rate
       14.2. IKE Phase 2 Rekey Rate
   15. IPsec Tunnel Failover Time
   16. Acknowledgements
   17. References
       17.1. Normative References
       17.2. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   This document defines a specific set of tests that can be used to
   measure and report the performance characteristics of IPsec devices.
   It extends the methodology already defined for benchmarking network
   interconnecting devices in [RFC2544] to IPsec gateways and
   additionally introduces tests which can be used to measure end-host
   IPsec performance.

2.  Document Scope

   The primary focus of this document is to establish a performance
   testing methodology for IPsec devices that support manual keying and
   IKEv1.  Both IPv4 and IPv6 addressing will be taken into
   consideration for all relevant test methodologies.

   The testing will be constrained to:

   o  Devices acting as IPsec gateways whose tests will pertain to both
      IPsec tunnel and transport mode.

   o  Devices acting as IPsec end-hosts whose tests will pertain to
      both IPsec tunnel and transport mode.

   Note that special considerations will be presented for IPsec
   end-host testing since the tests cannot be conducted without
   introducing additional variables that may cause variations in test
   results.
   What is specifically out of scope is any testing that pertains to
   considerations involving NAT, L2TP [RFC2661], GRE [RFC2784],
   BGP/MPLS VPNs [RFC2547], or anything else that does not specifically
   relate to the establishment and tearing down of IPsec tunnels.

3.  Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.  RFC 2119
   defines the use of these key words to help make the intent of
   standards track documents as clear as possible.  While this document
   uses these keywords, this document is not a standards track
   document.

4.  Test Considerations

   Before any of the IPsec data plane benchmarking tests are carried
   out, a Baseline MUST be established.  That is, the particular test
   in question must first be measured for performance characteristics
   without enabling IPsec.  Once both the Baseline cleartext
   performance and the performance using an IPsec-enabled datapath have
   been measured, the difference between the two can be discerned.

   This document explicitly assumes that a logical performance test
   methodology MUST be followed, one that includes the
   pre-configuration of routing protocols, ARP caches, IPv6 neighbor
   discovery and all other extraneous IPv4 and IPv6 parameters required
   to pass packets before the tester is ready to send IPsec-protected
   packets.  IPv6 nodes that implement Path MTU Discovery [RFC1981]
   MUST ensure that the PMTUD process has completed before any of the
   tests are run.

   For every IPsec data plane benchmarking test, the SA database (SADB)
   MUST be created and populated with the appropriate SAs before any
   actual test traffic is sent; i.e., the DUT/SUT MUST have active
   tunnels.
   This may require a manual command to be executed on the DUT/SUT or
   the sending of appropriate learning frames to the DUT/SUT.  This is
   to ensure that none of the control plane parameters (such as IPsec
   tunnel setup rates and IPsec tunnel rekey rates) are factored into
   these tests.

   For control plane benchmarking tests (i.e. IPsec tunnel setup rate
   and IPsec tunnel rekey rates), the authentication mechanism(s) used
   for the authenticated Diffie-Hellman exchange MUST be reported.

5.  Test Topologies

   The tests can be performed on a DUT or a SUT.  When the tests are
   performed on a DUT, the Tester itself must be an IPsec peer.  This
   scenario is shown in Figure 1.  When testing a DUT where the Tester
   has to be an IPsec peer, the measurements have two disadvantages:

   o  The Tester can introduce interoperability issues and skew
      results.

   o  The measurements may not be accurate due to Tester inaccuracies.

   On the other hand, the measurement of a DUT where the Tester is an
   IPsec peer has two distinct advantages:

   o  IPsec client scenarios can be benchmarked.

   o  IPsec device encryption/decryption abnormalities may be
      identified.

               +------------+
               |            |
          +----[D] Tester [A]----+
          |    |            |    |
          |    +------------+    |
          |                      |
          |    +------------+    |
          |    |            |    |
          +----[C]   DUT  [B]----+
               |            |
               +------------+

                Figure 1: Topology 1

   The SUT scenario is depicted in Figure 2.  Two identical DUTs are
   used in this test setup, which more accurately simulates the use of
   IPsec gateways.  IPsec SA (i.e. AH/ESP transport or tunnel mode)
   configurations can be tested using this setup, where the Tester is
   only required to send and receive cleartext traffic.
                        +------------+
                        |            |
      +-----------------[F] Tester [A]-----------------+
      |                 |            |                 |
      |                 +------------+                 |
      |                                                |
      |     +------------+          +------------+     |
      |     |            |          |            |     |
      +----[E]   DUTa   [D]--------[C]   DUTb   [B]----+
            |            |          |            |
            +------------+          +------------+

                      Figure 2: Topology 2

   When an IPsec DUT needs to be tested in a chassis failover topology,
   a second DUT needs to be used, as shown in Figure 3.  This is the
   high-availability equivalent of the topology depicted in Figure 1.
   Note that in this topology the Tester MUST be an IPsec peer.

                   +------------+
                   |            |
         +---------[F] Tester [A]---------+
         |         |            |         |
         |         +------------+         |
         |                                |
         |         +------------+         |
         |         |            |         |
         |    +----[C]  DUTa  [B]----+    |
         |    |    |            |    |    |
         |    |    +------------+    |    |
         +----+                      +----+
              |    +------------+    |
              |    |            |    |
              +----[E]  DUTb  [D]----+
                   |            |
                   +------------+

                 Figure 3: Topology 3

   When no IPsec-enabled Tester is available and an IPsec failover
   scenario needs to be tested, the topology shown in Figure 4 can be
   used.  In this case, the high-availability pair of IPsec devices can
   be used either as an Initiator or as a Responder.  The remaining
   chassis will take the opposite role.

                         +------------+
                         |            |
    +--------------------[H] Tester [A]------------+
    |                    |            |            |
    |                    +------------+            |
    |                                              |
    |       +------------+                         |
    |       |            |                         |
    |   +---[E]  DUTa  [D]---+                     |
    |   |   |            |   |    +------------+   |
    |   |   +------------+   |    |            |   |
    +---+                    +----[C]  DUTc  [B]---+
        |   +------------+   |    |            |
        |   |            |   |    +------------+
        +---[G]  DUTb  [F]---+
            |            |
            +------------+

                 Figure 4: Topology 4

6.  Test Parameters

   For each individual test performed, all of the following parameters
   MUST be explicitly reported in any test results.

6.1.  Frame Type

6.1.1.  IP

   Both IPv4 and IPv6 frames MUST be used.  The basic IPv4 header is 20
   bytes long (which may be increased by the use of an options field).
   The basic IPv6 header is a fixed 40 bytes; any additional headers
   are carried as extension headers.  Only the basic headers plus the
   IPsec AH and/or ESP headers MUST be present.

   It is recommended that IPv4 and IPv6 frames be tested separately to
   ascertain performance parameters for either IPv4 or IPv6 traffic.
   If both IPv4 and IPv6 traffic are to be tested, the device SHOULD be
   pre-configured for a dual-stack environment to handle both traffic
   types.

   IP traffic with the L4 protocol set to 'reserved' (255) MUST be
   used.  This ensures maximum space for instrumentation data in the
   payload section, even with frame sizes of the minimum length allowed
   on the transport media.

6.1.2.  UDP

   It is also RECOMMENDED that the test be executed using UDP as the L4
   protocol.  When using UDP, instrumentation data SHOULD be present in
   the payload of the packet.  It is OPTIONAL to have application
   payload.

6.1.3.  TCP

   It is OPTIONAL to perform the tests with TCP as the L4 protocol,
   but if this is considered, it is RECOMMENDED that the TCP traffic be
   stateful.  With TCP as the L4 header, it is possible that there will
   not be enough room to add all the instrumentation data needed to
   identify the packets within the DUT/SUT.

6.2.  Frame Sizes

   Each test SHOULD be run with different frame sizes.  The recommended
   cleartext layer 2 frame sizes for IPv4 tests over Ethernet media are
   64, 128, 256, 512, 1024, 1280, and 1518 bytes, per Section 9 of
   [RFC2544].  The four CRC bytes are included in the frame size
   specified.

   For Gigabit Ethernet supporting jumbo frames, the cleartext layer 2
   frame sizes used are 64, 128, 256, 512, 1024, 1280, 1518, 2048,
   3072, 4096, 5120, 6144, 7168, 8192 and 9234 bytes.

   Since IPv6 requires that every link have an MTU of 1280 octets or
   greater, it is REQUIRED to execute tests with cleartext layer 2
   frame sizes that include 1280 and 1518 bytes.
   It is RECOMMENDED that additional frame sizes be included in the
   IPv6 test execution, including the maximum supported datagram size
   for the link type used.

6.3.  Fragmentation and Reassembly

   IPsec devices can and must fragment packets in specific scenarios.
   Depending on whether the fragmentation is performed in software or
   using specialized custom hardware, there may be a significant impact
   on performance.

   In IPv4, unless the DF (don't fragment) bit is set by the packet
   source, the sender cannot guarantee that some intermediary device
   along the path will not fragment an IPsec packet.  For transport
   mode IPsec, the peers must be able to fragment and reassemble IPsec
   packets.  Reassembly of fragmented packets is especially important
   if an IPv4 port selector (or IPv6 transport protocol selector) is
   configured.  For tunnel mode IPsec, it is not a requirement.  Note
   that fragmentation is handled differently in IPv6 than in IPv4.  In
   IPv6 networks, fragmentation is no longer done by intermediate
   routers, but by the source node that originates the packet.  The
   path MTU discovery (PMTUD) mechanism is recommended for every IPv6
   node to avoid fragmentation.

   Packets generated by hosts that do not support PMTUD, and that have
   not set the DF bit in the IP header, will undergo fragmentation
   before IPsec encapsulation.  Packets generated by hosts that do
   support PMTUD will use it locally to match the statically configured
   MTU on the tunnel.  If the tunnel MTU is set manually, it must be
   set low enough to allow packets to pass through the link with the
   smallest MTU on the path; otherwise, packets that are too large will
   be dropped.

   Fragmentation can occur due to encryption overhead and is closely
   linked to the choice of transform used.
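   To illustrate how encryption overhead can push an encapsulated
   packet past the link MTU, the following sketch estimates ESP
   tunnel-mode expansion for an IPv4 packet.  The overhead constants
   (8-byte ESP header, cipher IV and block padding, 2-byte pad-length/
   next-header trailer, truncated 12-byte HMAC ICV, and a new 20-byte
   outer IPv4 header) reflect a common transform choice such as
   ESP-3DES-HMAC-SHA1; they are illustrative assumptions, not values
   mandated by this document.

```python
def esp_tunnel_size(cleartext_ip_len, block=8, iv=8, icv=12, outer_hdr=20):
    """Estimate the on-wire IP packet size after ESP tunnel-mode
    encapsulation (e.g. ESP-3DES-HMAC-SHA1: 8-byte cipher blocks,
    8-byte IV, 12-byte truncated ICV, new 20-byte outer IPv4 header)."""
    # Padding plus the 2-byte ESP trailer (pad length + next header)
    # must align the ciphertext to the cipher block size.
    pad = (-(cleartext_ip_len + 2)) % block
    esp = 8 + iv + cleartext_ip_len + pad + 2 + icv   # SPI + seq = 8 bytes
    return outer_hdr + esp

def needs_fragmentation(cleartext_ip_len, link_mtu=1500):
    """True when the encapsulated packet exceeds the link MTU."""
    return esp_tunnel_size(cleartext_ip_len) > link_mtu

# A full-MTU 1500-byte cleartext IP packet grows past a 1500-byte
# link MTU, so it must be fragmented (before or after encryption).
print(esp_tunnel_size(1500), needs_fragmentation(1500))
```

   Running the same check for each transform in Table 1 (with its own
   IV, block, and ICV sizes) shows why the fragmentation point varies
   with the transform chosen.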
   Since each test SHOULD be run with the maximum cleartext frame size
   (as per the previous section), fragmentation will occur when the
   encryption overhead causes the maximum frame size to be exceeded.
   All tests MUST be run with the DF bit not set.  It is also
   RECOMMENDED that all tests be run with the DF bit set.

   Note that some implementations predetermine the encapsulated packet
   size from information available in transform sets, which are
   configured as part of the IPsec security association (SA).  If it is
   predetermined that the packet will exceed the MTU of the output
   interface, the packet is fragmented before encryption.  This
   optimization may favorably impact performance, and vendors SHOULD
   report whether any such optimization is configured.

6.4.  Time To Live

   The source frames should have a TTL value large enough to
   accommodate the DUT/SUT.  A minimum TTL of 64 is RECOMMENDED.

6.5.  Trial Duration

   The duration of the test portion of each trial SHOULD be at least 30
   seconds.  In the case of IPsec tunnel rekeying tests, the test
   duration must be at least two times the IPsec tunnel rekey time to
   ensure a reasonable worst-case scenario test.

6.6.  Security Context Parameters

   All of the security context parameters listed in this section and
   used in any test MUST be reported.

6.6.1.  IPsec Transform Sets

   All tests should be done with different IPsec transform set
   combinations.  An IPsec transform specifies a single IPsec security
   protocol (either AH or ESP) with its corresponding security
   algorithms and mode.  A transform set is a combination of individual
   IPsec transforms designed to enact a specific security policy for
   protecting a particular traffic flow.
   At a minimum, the transform set must include one AH algorithm and a
   mode or one ESP algorithm and a mode, as shown in Table 1:

   +---------------+--------------+----------------------+-----------+
   | Transform Set | AH Algorithm | ESP Algorithm        | Mode      |
   +---------------+--------------+----------------------+-----------+
   | 1             | AH-SHA1      | None                 | Tunnel    |
   | 2             | AH-SHA1      | None                 | Transport |
   | 3             | AH-SHA1      | ESP-3DES             | Tunnel    |
   | 4             | AH-SHA1      | ESP-3DES             | Transport |
   | 5             | AH-SHA1      | ESP-AES128           | Tunnel    |
   | 6             | AH-SHA1      | ESP-AES128           | Transport |
   | 7             | None         | ESP-3DES             | Tunnel    |
   | 8             | None         | ESP-3DES-HMAC-SHA1   | Tunnel    |
   | 9             | None         | ESP-3DES             | Transport |
   | 10            | None         | ESP-3DES-HMAC-SHA1   | Transport |
   | 11            | None         | ESP-AES128           | Tunnel    |
   | 12            | None         | ESP-AES128-HMAC-SHA1 | Tunnel    |
   | 13            | None         | ESP-AES128           | Transport |
   | 14            | None         | ESP-AES128-HMAC-SHA1 | Transport |
   +---------------+--------------+----------------------+-----------+

                                Table 1

   Testing of all the transforms shown in Table 1 MUST be supported.
   Note that this table is derived from the updated IKEv1 requirements
   as described in [RFC4109].  Optionally, other AH and/or ESP
   transforms MAY be supported.

6.6.2.  IPsec Topologies

   All tests should be done with various IPsec topology configurations,
   and the IPsec topology used MUST be reported.  Since IPv6 requires
   the implementation of manual keys for IPsec, both manual keying and
   IKE configurations MUST be tested.

   For manual keying tests, the number of IPsec SAs used should vary
   from 1 to 101, increasing in increments of 50.  Although it is not
   expected that manual keying (i.e. manually configuring the IPsec SA)
   will be deployed in any operational setting with the exception of
   very small controlled environments (i.e. less than 10 nodes), it is
   prudent to test for potentially larger scale deployments.
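   A tester automating the manual-keying runs described above might
   enumerate the configuration matrix as follows.  This is a minimal
   sketch: the SA counts come from this section (1 to 101 in increments
   of 50), while pairing every count with every transform set from
   Table 1 is an assumption about how a test harness could organize the
   runs, not a requirement of this methodology.

```python
from itertools import product

# SA counts for manual keying tests: 1 to 101 in increments of 50.
SA_COUNTS = list(range(1, 102, 50))          # [1, 51, 101]

# Transform sets 1-14 from Table 1 (abbreviated to their row numbers).
TRANSFORM_SETS = list(range(1, 15))

# Each test run is one (transform set, SA count) combination.
TEST_MATRIX = list(product(TRANSFORM_SETS, SA_COUNTS))

print(SA_COUNTS)          # [1, 51, 101]
print(len(TEST_MATRIX))   # 14 transform sets x 3 SA counts = 42 runs
```

   Reporting then records, per run, the transform set and SA count
   alongside the security context parameters this section requires.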
   For IKE-specific tests, the following IPsec topologies MUST be
   tested:

   o  1 IKE SA & 1 IPsec SA (i.e. 1 IPsec Tunnel)

   o  1 IKE SA & {max} IPsec SA's

   o  {max} IKE SA's & {max} IPsec SA's

   It is RECOMMENDED to also test with the following IPsec topologies
   in order to gain more datapoints:

   o  {max/2} IKE SA's & {(max/2) IKE SA's} IPsec SA's

   o  {max} IKE SA's & {(max) IKE SA's} IPsec SA's

6.6.3.  IKE Keepalives

   IKE keepalives track reachability of peers by sending hello packets
   between peers.  During the typical life of an IKE Phase 1 SA,
   packets are only exchanged over this IKE Phase 1 SA when an IPsec
   IKE Quick Mode (QM) negotiation is required at the expiration of the
   IPsec Tunnel SAs.  There is no standards-based mechanism for either
   type of SA to detect the loss of a peer, except when the QM
   negotiation fails.  Most IPsec implementations use the Dead Peer
   Detection (i.e. Keepalive) mechanism to determine whether
   connectivity has been lost with a peer before the expiration of the
   IPsec Tunnel SA's.

   All tests using IKEv1 MUST use the same IKE keepalive parameters.

6.6.4.  IKE DH-group

   There are three Diffie-Hellman groups which can be supported by
   IPsec standards-compliant devices:

   o  DH-group 1: 768 bits

   o  DH-group 2: 1024 bits

   o  DH-group 14: 2048 bits

   DH-group 2 MUST be tested, to support the new IKEv1 algorithm
   requirements listed in [RFC4109].  It is recommended that the same
   DH-group be used for both IKE Phase 1 and IKE Phase 2.  All test
   methodologies using IKE MUST report which DH-group was configured to
   be used for IKE Phase 1 and IKE Phase 2 negotiations.

6.6.5.  IKE SA / IPsec SA Lifetime

   An IKE SA or IPsec SA is retained by each peer until the Tunnel
   lifetime expires.  IKE SA's and IPsec SA's have individual lifetime
   parameters.
   In many real-world environments, the IPsec SA's will be configured
   with shorter lifetimes than those of the IKE SA's.  This will force
   a rekey to happen more often for IPsec SA's.

   When the initiator begins an IKE negotiation between itself and a
   remote peer (the responder), an IKE policy can be selected only if
   the lifetime of the responder's policy is shorter than or equal to
   the lifetime of the initiator's policy.  If the lifetimes are not
   the same, the shorter lifetime will be used.

   To avoid any incompatibilities in data plane benchmark testing, all
   devices MUST have the same IKE SA and IPsec SA lifetime configured,
   and they must be configured to a time which exceeds the test
   duration timeframe or the total number of bytes to be transmitted
   during the test.

   Note that the IPsec SA lifetime MUST be equal to or less than the
   IKE SA lifetime.  Both the IKE SA lifetime and the IPsec SA lifetime
   used MUST be reported.  This parameter SHOULD be variable when
   testing IKE rekeying performance.

6.6.6.  IPsec Selectors

   All tests MUST be performed using standard IPsec selectors.

7.  Capacity

7.1.  IKE SA Capacity

   Objective:

      Measure the maximum number of IKE SA's that can be sustained on
      an IPsec Device.

   Procedure:

      The IPsec Device under test initially MUST NOT have any Active
      IPsec Tunnels.  The Initiator (either a tester or an IPsec peer)
      will start the negotiation of an IPsec Tunnel (a single Phase 1
      SA and a pair of Phase 2 SA's).

      After it is detected that the tunnel is established, a limited
      number of packets (50 RECOMMENDED) SHALL be sent through the
      tunnel.  If all packets are received by the Responder (i.e. the
      DUT), a new IPsec Tunnel may be attempted.

      This process will be repeated until no more IPsec Tunnels can be
      established.
      At the end of the test, a traffic pattern is sent to the
      initiator that will be distributed over all Active IPsec Tunnels,
      where each tunnel will need to propagate a fixed number of
      packets at a minimum rate of 5 pps.  When all packets sent by the
      Initiator are received by the Responder, the test has
      successfully determined the IKE SA Capacity.  If however this
      final check fails, the test needs to be re-executed with a lower
      number of Active IPsec Tunnels.  There MAY be a need to enforce a
      lower number of Active IPsec Tunnels; i.e., an upper limit of
      Active IPsec Tunnels SHOULD be defined in the test.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 7.1, with
      the additional requirement that the Security Context parameters
      defined in Section 6.6 and utilized for this test MUST be
      included in any statement of performance.

7.2.  IPsec SA Capacity

   Objective:

      Measure the maximum number of IPsec SA's that can be sustained on
      an IPsec Device.

   Procedure:

      The IPsec Device under test initially MUST NOT have any Active
      IPsec Tunnels.  The Initiator (either a tester or an IPsec peer)
      will start the negotiation of an IPsec Tunnel (a single Phase 1
      SA and a pair of Phase 2 SA's).

      After it is detected that the tunnel is established, a limited
      number of packets (50 RECOMMENDED) SHALL be sent through the
      tunnel.  If all packets are received by the Responder (i.e. the
      DUT), a new pair of IPsec SA's may be attempted.  This will be
      achieved by offering a specific traffic pattern to the Initiator
      that matches a given selector, therefore triggering the
      negotiation of a new pair of IPsec SA's.

      This process will be repeated until no more IPsec SA's can be
      established.
      At the end of the test, a traffic pattern is sent to the
      Initiator that will be distributed over all IPsec SAs, where each
      SA will need to propagate a fixed number of packets at a minimum
      rate of 5 pps.  When all packets sent by the Initiator are
      received by the Responder, the test has successfully determined
      the IPsec SA Capacity.  If this final check fails, the test needs
      to be re-executed with a lower number of IPsec SAs.  There MAY be
      a need to enforce a lower number of IPsec SAs; i.e., an upper
      limit of IPsec SAs SHOULD be defined in the test.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 8.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

8.  Throughput

   This section contains the description of the tests that are related
   to the characterization of the packet forwarding of a DUT/SUT in an
   IPsec environment.  Some metrics extend the concept of throughput
   presented in [RFC1242].  The notion of Forwarding Rate is cited in
   [RFC2285].

   A separate test SHOULD be performed for Throughput tests using
   IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.

8.1.  Throughput Baseline

   Objective:

      Measure the intrinsic cleartext throughput of a device without
      the use of IPsec.  The throughput baseline methodology and
      reporting format are derived from [RFC2544].

   Procedure:

      Send a specific number of frames that match the IPsec SA
      selector(s) to be tested at a specific rate through the DUT and
      then count the frames that are transmitted by the DUT.  If the
      count of offered frames is equal to the count of received frames,
      the rate of the offered stream is increased and the test is
      rerun.  If fewer frames are received than were transmitted, the
      rate of the offered stream is reduced and the test is rerun.
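      In practice, the increase/decrease iteration just described is
      usually realized as a binary search over the offered rate.  A
      non-normative sketch, where `trial_is_lossless` is a hypothetical
      stand-in for one complete trial (offer frames at the given rate
      and compare the offered and received counts):

      ```python
      def throughput_search(trial_is_lossless, min_rate=1, max_rate=1_000_000):
          # Binary search for the highest offered rate (frames/s) at
          # which the count of received frames equals the offered count.
          best = 0
          lo, hi = min_rate, max_rate
          while lo <= hi:
              rate = (lo + hi) // 2
              if trial_is_lossless(rate):
                  best = rate       # no loss at this rate: try higher
                  lo = rate + 1
              else:
                  hi = rate - 1     # loss observed: back off
          return best
      ```

      The search assumes loss behaves monotonically with offered rate,
      which is the usual working assumption for this methodology.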
      The throughput is the fastest rate at which the count of test
      frames transmitted by the DUT is equal to the number of test
      frames sent to it by the test equipment.

   Reporting Format:

      The results of the throughput test SHOULD be reported in the form
      of a graph.  If it is, the x coordinate SHOULD be the frame size
      and the y coordinate SHOULD be the frame rate.  There SHOULD be
      at least two lines on the graph.  There SHOULD be one line
      showing the theoretical frame rate for the media at the various
      frame sizes.  The second line SHOULD be the plot of the test
      results.  Additional lines MAY be used on the graph to report the
      results for each type of data stream tested.  Text accompanying
      the graph SHOULD indicate the protocol, data stream format, and
      type of media used in the tests.

      We assume that if a single value is desired for advertising
      purposes the vendor will select the rate for the minimum frame
      size for the media.  If this is done, then the figure MUST be
      expressed in packets per second.  The rate MAY also be expressed
      in bits (or bytes) per second if the vendor so desires.  The
      statement of performance MUST include:

      *  Measured maximum frame rate

      *  Size of the frame used

      *  Theoretical limit of the media for that frame size

      *  Type of protocol used in the test

      Even if a single value is used as part of the advertising copy,
      the full table of results SHOULD be included in the product data
      sheet.

8.2.  IPsec Throughput

   Objective:

      Measure the intrinsic throughput of a device utilizing IPsec.

   Procedure:

      Send a specific number of cleartext frames that match the IPsec
      SA selector(s) at a specific rate through the DUT/SUT.  DUTa will
      encrypt the traffic and forward it to DUTb, which will in turn
      decrypt the traffic and forward it to the testing device.  The
      testing device counts the frames that are transmitted by DUTb.
      If the count of offered frames is equal to the count of received
      frames, the rate of the offered stream is increased and the test
      is rerun.  If fewer frames are received than were transmitted,
      the rate of the offered stream is reduced and the test is rerun.

      The IPsec Throughput is the fastest rate at which the count of
      test frames transmitted by the DUT/SUT is equal to the number of
      test frames sent to it by the test equipment.

      For tests using multiple IPsec SAs, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round-robin fashion to keep the test balanced
      so as not to overload any single IPsec SA.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 8.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

8.3.  IPsec Encryption Throughput

   Objective:

      Measure the intrinsic DUT vendor-specific IPsec Encryption
      Throughput.

   Procedure:

      Send a specific number of cleartext frames that match the IPsec
      SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the cleartext frames, perform IPsec operations, and then
      send the IPsec protected frames to the tester.  Upon receipt of
      the encrypted packets, the testing device will timestamp the
      packet(s) and record the result.  If the count of offered frames
      is equal to the count of received frames, the rate of the offered
      stream is increased and the test is rerun.  If fewer frames are
      received than were transmitted, the rate of the offered stream is
      reduced and the test is rerun.

      The IPsec Encryption Throughput is the fastest rate at which the
      count of test frames transmitted by the DUT is equal to the
      number of test frames sent to it by the test equipment.
      For tests using multiple IPsec SAs, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round-robin fashion to keep the test balanced
      so as not to overload any single IPsec SA.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 8.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

8.4.  IPsec Decryption Throughput

   Objective:

      Measure the intrinsic DUT vendor-specific IPsec Decryption
      Throughput.

   Procedure:

      Send a specific number of IPsec protected frames that match the
      IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the IPsec protected frames, perform IPsec operations, and
      then send the cleartext frames to the tester.  Upon receipt of
      the cleartext packets, the testing device will timestamp the
      packet(s) and record the result.  If the count of offered frames
      is equal to the count of received frames, the rate of the offered
      stream is increased and the test is rerun.  If fewer frames are
      received than were transmitted, the rate of the offered stream is
      reduced and the test is rerun.

      The IPsec Decryption Throughput is the fastest rate at which the
      count of test frames transmitted by the DUT is equal to the
      number of test frames sent to it by the test equipment.

      For tests using multiple IPsec SAs, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round-robin fashion to keep the test balanced
      so as not to overload any single IPsec SA.
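      The round-robin distribution required above can be sketched as
      follows (a non-normative illustration; the selector names are
      hypothetical):

      ```python
      from itertools import cycle, islice

      def round_robin_frames(selectors, frame_count):
          # Assign each successive test frame to the next IPsec SA
          # selector in turn, so that no single SA carries a
          # disproportionate share of the offered load.
          return list(islice(cycle(selectors), frame_count))
      ```

      With three selectors and seven frames, for example, the
      per-selector frame counts differ by at most one.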
   Reporting Format:

      The reporting format SHOULD be the same as listed in 8.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

9.  Latency

   This section presents methodologies relating to the characterization
   of the forwarding latency of a DUT/SUT.  It extends the concept of
   latency characterization presented in [RFC2544] to an IPsec
   environment.

   A separate test SHOULD be performed for latency tests using
   IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.

   In order to lessen the effect of packet buffering in the DUT/SUT,
   the latency tests MUST be run at the measured IPsec throughput level
   of the DUT/SUT; IPsec latency at other offered loads is optional.

   Lastly, [RFC1242] and [RFC2544] draw a distinction between two
   classes of devices: "store and forward" and "bit-forwarding".  Each
   class impacts how latency is collected and subsequently presented.
   See the related RFCs for more information.  In practice, much of the
   test equipment will collect the latency measurement for one class or
   the other and, if needed, mathematically derive the reported value
   by the addition or subtraction of values accounting for medium
   propagation delay of the packet, bit times to the timestamp trigger
   within the packet, etc.  Test equipment vendors SHOULD provide
   documentation regarding the composition and calculation of the
   latency values being reported.  The user of this data SHOULD
   understand the nature of the latency values being reported,
   especially when comparing results collected from multiple test
   vendors.  (E.g., if test vendor A presents a "store and forward"
   latency result and test vendor B presents a "bit-forwarding" latency
   result, the user may erroneously conclude that the DUT has two
   differing sets of latency values.)

9.1.  Latency Baseline

   Objective:

      Measure the intrinsic latency (min/avg/max) introduced by a
      device without the use of IPsec.

   Procedure:

      First determine the throughput for the DUT/SUT at each of the
      listed frame sizes.  Send a stream of frames at a particular
      frame size through the DUT at the determined throughput rate
      using frames that match the IPsec SA selector(s) to be tested.
      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The receiver logic in the test equipment MUST recognize the tag
      information in the frame stream and record the time at which the
      tagged frame was received (timestamp B).

      The latency is timestamp B minus timestamp A, as per the relevant
      definition from [RFC1242], namely latency as defined for store
      and forward devices or latency as defined for bit forwarding
      devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting Format:

      The report MUST state which definition of latency (from
      [RFC1242]) was used for this test.  The latency results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the latency test was run for that frame size,
      for the media types tested, and for the resultant latency values
      for each type of data stream tested.

9.2.  IPsec Latency

   Objective:

      Measure the intrinsic IPsec Latency (min/avg/max) introduced by a
      device when using IPsec.

   Procedure:

      First determine the throughput for the DUT/SUT at each of the
      listed frame sizes.
      Send a stream of cleartext frames at a particular frame size
      through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.  DUTa
      will encrypt the traffic and forward it to DUTb, which will in
      turn decrypt the traffic and forward it to the testing device.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The receiver logic in the test equipment MUST recognize the tag
      information in the frame stream and record the time at which the
      tagged frame was received (timestamp B).

      The IPsec Latency is timestamp B minus timestamp A, as per the
      relevant definition from [RFC1242], namely latency as defined for
      store and forward devices or latency as defined for bit
      forwarding devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 9.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

9.3.  IPsec Encryption Latency

   Objective:

      Measure the DUT vendor-specific IPsec Encryption Latency for
      IPsec protected traffic.

   Procedure:

      Send a stream of cleartext frames at a particular frame size
      through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The DUT will receive the cleartext frames, perform IPsec
      operations, and then send the IPsec protected frames to the
      tester.  Upon receipt of the encrypted frames, the receiver logic
      in the test equipment MUST recognize the tag information in the
      frame stream and record the time at which the tagged frame was
      received (timestamp B).

      The IPsec Encryption Latency is timestamp B minus timestamp A, as
      per the relevant definition from [RFC1242], namely latency as
      defined for store and forward devices or latency as defined for
      bit forwarding devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 9.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

9.4.  IPsec Decryption Latency

   Objective:

      Measure the DUT vendor-specific IPsec Decryption Latency for
      IPsec protected traffic.

   Procedure:

      Send a stream of IPsec protected frames at a particular frame
      size through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds,
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The DUT will receive the IPsec protected frames, perform IPsec
      operations, and then send the cleartext frames to the tester.
      Upon receipt of the decrypted frames, the receiver logic in the
      test equipment MUST recognize the tag information in the frame
      stream and record the time at which the tagged frame was received
      (timestamp B).
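      The per-trial computation (timestamp B minus timestamp A) and the
      min/avg/max aggregation over the 20 or more required trials can
      be sketched non-normatively as follows, assuming both timestamps
      come from a common clock:

      ```python
      def latency_stats(timestamp_pairs):
          # Each trial yields (timestamp A, timestamp B); the per-trial
          # latency is B - A.  The test MUST be repeated at least 20
          # times, with the average reported; min and max complete the
          # min/avg/max triple used throughout Section 9.
          if len(timestamp_pairs) < 20:
              raise ValueError("the test MUST be repeated at least 20 times")
          latencies = [b - a for a, b in timestamp_pairs]
          return min(latencies), sum(latencies) / len(latencies), max(latencies)
      ```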
      The IPsec Decryption Latency is timestamp B minus timestamp A, as
      per the relevant definition from [RFC1242], namely latency as
      defined for store and forward devices or latency as defined for
      bit forwarding devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 9.1 with the
      additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

10.  Time To First Packet

   Objective:

      Measure the time it takes to transmit a packet when no SAs have
      been established.

   Procedure:

      Determine the IPsec throughput for the DUT/SUT at each of the
      listed frame sizes.  Start with a DUT/SUT with Configured
      Tunnels.  Send a stream of cleartext frames at a particular frame
      size through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The time at which the first frame is fully transmitted from the
      testing device is recorded as timestamp A.  The time at which the
      testing device receives its first frame from the DUT/SUT is
      recorded as timestamp B.  The Time To First Packet is the
      difference between timestamp B and timestamp A.

      Note that it is possible that packets can be lost during IPsec
      Tunnel establishment and that timestamps A and B are not required
      to be associated with a unique packet.

   Reporting Format:

      The Time To First Packet results SHOULD be reported in the format
      of a table with a row for each of the tested frame sizes.  There
      SHOULD be columns for the frame size, the rate at which the TTFP
      test was run for that frame size, for the media types tested, and
      for the resultant TTFP values for each type of data stream
      tested.
      The Security Context parameters defined in 6.7 and utilized for
      this test MUST be included in any statement of performance.

11.  Frame Loss Rate

   This section presents methodologies relating to the characterization
   of frame loss rate, as defined in [RFC1242], in an IPsec
   environment.

11.1.  Frame Loss Baseline

   Objective:

      To determine the frame loss rate, as defined in [RFC1242], of a
      DUT/SUT throughout the entire range of input data rates and frame
      sizes without the use of IPsec.

   Procedure:

      Send a specific number of frames at a specific rate through the
      DUT/SUT to be tested using frames that match the IPsec SA
      selector(s) to be tested and count the frames that are
      transmitted by the DUT/SUT.  The frame loss rate at each point is
      calculated using the following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The results of the frame loss rate test SHOULD be plotted as a
      graph.  If this is done, then the X axis MUST be the input frame
      rate as a percent of the theoretical rate for the media at the
      specific frame size.  The Y axis MUST be the percent loss at the
      particular input rate.  The left end of the X axis and the bottom
      of the Y axis MUST be 0 percent; the right end of the X axis and
      the top of the Y axis MUST be 100 percent.
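      A non-normative sketch of the loss equation and the descending
      trial schedule from the procedure above; `run_trial` is a
      hypothetical stand-in for one trial, returning (input_count,
      output_count):

      ```python
      def frame_loss_sweep(max_rate, run_trial):
          # Run trials at 100%, 90%, 80%, ... of the maximum rate,
          # compute ((input_count - output_count) * 100) / input_count
          # at each point, and stop once two successive trials show no
          # frame loss.
          results = []
          lossless_in_a_row = 0
          for pct in range(100, 0, -10):
              rate = max_rate * pct // 100
              input_count, output_count = run_trial(rate)
              loss = ((input_count - output_count) * 100) / input_count
              results.append((pct, loss))
              lossless_in_a_row = lossless_in_a_row + 1 if loss == 0 else 0
              if lossless_in_a_row == 2:
                  break
          return results
      ```

      The 10% step matches the maximum granularity allowed; a finer
      step would simply shrink the decrement.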
      Multiple lines on the graph MAY be used to report the frame loss
      rate for different frame sizes, protocols, and types of data
      streams.

11.2.  IPsec Frame Loss

   Objective:

      To measure the frame loss rate of a device when using IPsec to
      protect the data flow.

   Procedure:

      Ensure that the DUT/SUT is in active tunnel mode.  Send a
      specific number of cleartext frames that match the IPsec SA
      selector(s) to be tested at a specific rate through the DUT/SUT.
      DUTa will encrypt the traffic and forward it to DUTb, which will
      in turn decrypt the traffic and forward it to the testing device.
      The testing device counts the frames that are transmitted by
      DUTb.  The frame loss rate at each point is calculated using the
      following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 11.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

11.3.  IPsec Encryption Frame Loss

   Objective:

      To measure the effect of IPsec encryption on the frame loss rate
      of a device.

   Procedure:

      Send a specific number of cleartext frames that match the IPsec
      SA selector(s) at a specific rate to the DUT.
      The DUT will receive the cleartext frames, perform IPsec
      operations, and then send the IPsec protected frames to the
      tester.  The testing device counts the encrypted frames that are
      transmitted by the DUT.  The frame loss rate at each point is
      calculated using the following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.  Repeat the procedure for the rate that corresponds to 90%
      of the maximum rate used and then for 80% of this rate.  This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost.  The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 11.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

11.4.  IPsec Decryption Frame Loss

   Objective:

      To measure the effect of IPsec decryption on the frame loss rate
      of a device.

   Procedure:

      Send a specific number of IPsec protected frames that match the
      IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the IPsec protected frames, perform IPsec operations, and
      then send the cleartext frames to the tester.  The testing device
      counts the cleartext frames that are transmitted by the DUT.  The
      frame loss rate at each point is calculated using the following
      equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media.
      Repeat the procedure for the rate that corresponds to 90% of the
      maximum rate used and then for 80% of this rate.  This sequence
      SHOULD be continued (at reducing 10% intervals) until there are
      two successive trials in which no frames are lost.  The maximum
      granularity of the trials MUST be 10% of the maximum rate; a
      finer granularity is encouraged.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 11.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

11.5.  IKE Phase 2 Rekey Frame Loss

   Objective:

      To measure the frame loss due to an IKE Phase 2 (i.e., IPsec SA)
      Rekey event.

   Procedure:

      The procedure is the same as in 11.2 with the exception that the
      IPsec SA lifetime MUST be configured to be one-third of the trial
      test duration or one-third of the total number of bytes to be
      transmitted during the trial duration.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 11.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

12.  Back-to-back Frames

   This section presents methodologies relating to the characterization
   of back-to-back frame processing, as defined in [RFC1242], in an
   IPsec environment.

12.1.  Back-to-back Frames Baseline

   Objective:

      To characterize the ability of a DUT to process back-to-back
      frames as defined in [RFC1242], without the use of IPsec.

   Procedure:

      Send a burst of frames that match the IPsec SA selector(s) to be
      tested with minimum inter-frame gaps to the DUT and count the
      number of frames forwarded by the DUT.
      If the count of transmitted frames is equal to the number of
      frames forwarded, the length of the burst is increased and the
      test is rerun.  If the number of forwarded frames is less than
      the number transmitted, the length of the burst is reduced and
      the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The back-to-back results SHOULD be reported in the format of a
      table with a row for each of the tested frame sizes.  There
      SHOULD be columns for the frame size and for the resultant
      average frame count for each type of data stream tested.  The
      standard deviation for each measurement MAY also be reported.

12.2.  IPsec Back-to-back Frames

   Objective:

      To measure the back-to-back frame processing rate of a device
      when using IPsec to protect the data flow.

   Procedure:

      Send a burst of cleartext frames that match the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to the
      DUT/SUT.  DUTa will encrypt the traffic and forward it to DUTb,
      which will in turn decrypt the traffic and forward it to the
      testing device.  The testing device counts the frames that are
      transmitted by DUTb.  If the count of transmitted frames is equal
      to the number of frames forwarded, the length of the burst is
      increased and the test is rerun.  If the number of forwarded
      frames is less than the number transmitted, the length of the
      burst is reduced and the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT/SUT will handle without the loss of any
      frames.
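      The burst-length adjustment described above mirrors the rate
      search of Section 8, with burst length in place of offered rate.
      A non-normative sketch, where `burst_is_lossless` stands in for
      one burst trial:

      ```python
      def longest_lossless_burst(burst_is_lossless, max_burst=1_000_000):
          # Binary search for the longest back-to-back burst (in frames)
          # that the device forwards without the loss of any frames.
          best, lo, hi = 0, 1, max_burst
          while lo <= hi:
              burst = (lo + hi) // 2
              if burst_is_lossless(burst):
                  best, lo = burst, burst + 1
              else:
                  hi = burst - 1
          return best
      ```

      The value recorded from each repetition would then be averaged
      over the 50 or more trials for reporting.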
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 12.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

12.3.  IPsec Encryption Back-to-back Frames

   Objective:

      To measure the effect of IPsec encryption on the back-to-back
      frame processing rate of a device.

   Procedure:

      Send a burst of cleartext frames that match the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to the
      DUT.  The DUT will receive the cleartext frames, perform IPsec
      operations, and then send the IPsec protected frames to the
      tester.  The testing device counts the encrypted frames that are
      transmitted by the DUT.  If the count of transmitted encrypted
      frames is equal to the number of frames forwarded, the length of
      the burst is increased and the test is rerun.  If the number of
      forwarded frames is less than the number transmitted, the length
      of the burst is reduced and the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 12.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

12.4.  IPsec Decryption Back-to-back Frames

   Objective:

      To measure the effect of IPsec decryption on the back-to-back
      frame processing rate of a device.
   Procedure:

      Send a burst of IPsec protected frames that match the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to the
      DUT.  The DUT will receive the IPsec protected frames, perform
      IPsec operations, and then send the cleartext frames to the
      tester.  The testing device counts the frames that are
      transmitted by the DUT.  If the count of transmitted frames is
      equal to the number of frames forwarded, the length of the burst
      is increased and the test is rerun.  If the number of forwarded
      frames is less than the number transmitted, the length of the
      burst is reduced and the test is rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and SHOULD be
      repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting Format:

      The reporting format SHOULD be the same as listed in 12.1 with
      the additional requirement that the Security Context parameters
      defined in 6.7 and utilized for this test MUST be included in any
      statement of performance.

13.  IPsec Tunnel Setup Behavior

13.1.  IPsec Tunnel Setup Rate

   Objective:

      Determine the rate at which IPsec Tunnels can be established.

   Procedure:

      Configure the DUT/SUT with n IKE Phase 1 and corresponding IKE
      Phase 2 policies.  Ensure that no SAs are established and that
      the DUT/SUT is in configured tunnel mode for all n policies.
      Send a stream of cleartext frames at a particular frame size
      through the DUT/SUT at the determined throughput rate using
      frames with selectors matching the first IKE Phase 1 policy.
      As soon as the testing device receives its first frame from the
      DUT/SUT, it knows that the IPsec Tunnel is established and starts
      sending the next stream of cleartext frames using the same frame
      size and throughput rate, but this time using selectors matching
      the second IKE Phase 1 policy.  This process is repeated until
      all configured IPsec Tunnels have been established.

      The IPsec Tunnel Setup Rate is determined by the following
      formula:

         Tunnel Setup Rate = n / [Duration of Test -
                                  (n * frame_transmit_time)]

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test time.  It is RECOMMENDED that
      n=100 IPsec Tunnels are tested at a minimum to get a large enough
      sample size to depict some real-world behavior.

   Reporting Format:

      The Tunnel Setup Rate results SHOULD be reported in the format of
      a table with a row for each of the tested frame sizes.  There
      SHOULD be columns for the frame size, the rate at which the test
      was run for that frame size, for the media types tested, and for
      the resultant Tunnel Setup Rate values for each type of data
      stream tested.  The Security Context parameters defined in 6.7
      and utilized for this test MUST be included in any statement of
      performance.

13.2.  IKE Phase 1 Setup Rate

   Objective:

      Determine the rate at which IKE SAs can be established.

   Procedure:

      Configure the DUT with n IKE Phase 1 and corresponding IKE Phase
      2 policies.  Ensure that no SAs are established and that the DUT
      is in configured tunnel mode for all n policies.  Send a stream
      of cleartext frames at a particular frame size through the DUT at
      the determined throughput rate using frames with selectors
      matching the first IKE Phase 1 policy.
   As soon as the Phase 1 SA is established, the testing device
   starts sending the next stream of cleartext frames using the same
   frame size and throughput rate, but this time using selectors
   matching the second IKE Phase 1 policy.  This process is repeated
   until all configured IKE SAs have been established.

   The IKE SA Setup Rate is determined by the following formula:

      IKE SA Setup Rate = n / [Duration of Test - (n *
      frame_transmit_time)]

   The IKE SA lifetime and the IPsec SA lifetime MUST be configured
   to exceed the duration of the test.  It is RECOMMENDED that at
   least n=100 IKE SAs be tested to obtain a sample size large enough
   to reflect real-world behavior.

Reporting Format:

   The IKE Phase 1 Setup Rate results SHOULD be reported in the
   format of a table with a row for each of the tested frame sizes.
   There SHOULD be columns for the frame size, the rate at which the
   test was run for that frame size, the media types tested, and the
   resultant IKE Phase 1 Setup Rate values for each type of data
   stream tested.  The Security Context parameters defined in
   Section 6.7 and utilized for this test MUST be included in any
   statement of performance.

13.3.  IKE Phase 2 Setup Rate

Objective:

   Determine the rate at which IPsec SAs can be established.

Procedure:

   Configure the DUT with a single IKE Phase 1 policy and n
   corresponding IKE Phase 2 policies.  Ensure that no SAs are
   established and that the DUT is in configured tunnel mode for all
   policies.  Send a stream of cleartext frames of a particular frame
   size through the DUT at the determined throughput rate, using
   frames with selectors matching the first IPsec SA policy.

   The time at which the IKE SA is established is recorded as
   timestamp A.  As soon as the Phase 1 SA is established, the IPsec
   SA negotiation will be initiated.
   Once the first IPsec SA has been established, start sending the
   next stream of cleartext frames using the same frame size and
   throughput rate, but this time using selectors matching the second
   IKE Phase 2 policy.  This process is repeated until all configured
   IPsec SAs have been established.

   The IPsec SA Setup Rate is determined by the following formula:

      IPsec SA Setup Rate = n / [Duration of Test - {A + ((n-1) *
      frame_transmit_time)}]

   The IKE SA lifetime and the IPsec SA lifetime MUST be configured
   to exceed the duration of the test.  It is RECOMMENDED that at
   least n=100 IPsec SAs be tested to obtain a sample size large
   enough to reflect real-world behavior.

Reporting Format:

   The IKE Phase 2 Setup Rate results SHOULD be reported in the
   format of a table with a row for each of the tested frame sizes.
   There SHOULD be columns for the frame size, the rate at which the
   test was run for that frame size, the media types tested, and the
   resultant IKE Phase 2 Setup Rate values for each type of data
   stream tested.  The Security Context parameters defined in
   Section 6.7 and utilized for this test MUST be included in any
   statement of performance.

14.  IPsec Rekey Behavior

   The IPsec Rekey Behavior tests all need to be executed by an
   IPsec-aware test device, since they need to be closely linked to
   the IKE FSM and cannot be performed by offering specific traffic
   patterns at either the Initiator or the Responder.

14.1.  IKE Phase 1 Rekey Rate

Objective:

   Determine the maximum rate at which an IPsec Device can rekey IKE
   SAs.

Procedure:

   The IPsec Device under test should initially be set up with the
   determined IKE SA Capacity number of Active IPsec Tunnels.

   The IPsec-aware tester should then perform a binary search in
   which it initiates an IKE Phase 1 SA rekey for all Active IPsec
   Tunnels.
   The tester MUST record a timestamp for each IKE SA when it
   initiates the rekey, and MUST record a second timestamp once the
   FSM declares the rekey complete.  Once the iteration is complete,
   the tester has a table of rekey times for each IKE SA.  The
   reciprocal of the average of this table is the IKE Phase 1 Rekey
   Rate.

   This assumes, of course, that all IKE SAs were able to rekey
   successfully.  If this is not the case, the IPsec Tunnels are all
   re-established and the binary search moves to the next value of
   IKE SAs to rekey.  The process repeats until a rate is determined
   at which all SAs rekey correctly within that timeframe.

Reporting Format:

   The IKE Phase 1 Rekey Rate results SHOULD be reported in the
   format of a table with a row for each of the tested frame sizes.
   There SHOULD be columns for the frame size, the rate at which the
   test was run for that frame size, the media types tested, and the
   resultant IKE Phase 1 Rekey Rate values for each type of data
   stream tested.  The Security Context parameters defined in
   Section 6.7 and utilized for this test MUST be included in any
   statement of performance.

14.2.  IKE Phase 2 Rekey Rate

Objective:

   Determine the maximum rate at which an IPsec Device can rekey
   IPsec SAs.

Procedure:

   The IPsec Device under test should initially be set up with the
   determined IKE SA Capacity number of Active IPsec Tunnels.

   The IPsec-aware tester should then perform a binary search in
   which it initiates an IKE Phase 2 SA rekey for all IPsec SAs.  The
   tester MUST record a timestamp for each IPsec SA when it initiates
   the rekey, and MUST record a second timestamp once the FSM
   declares the rekey complete.  Once the iteration is complete, the
   tester has a table of rekey times for each IPsec SA.  The
   reciprocal of the average of this table is the IKE Phase 2 Rekey
   Rate.
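   The rate computation used in both rekey tests above (the
   reciprocal of the average of the tester's table of per-SA rekey
   times) can be sketched as follows.  This is an illustrative helper
   only; the function name and the timestamp representation are
   assumptions of this sketch, not part of the methodology.

```python
def rekey_rate(rekey_times):
    """Return the rekey rate (SAs per second) from a table of
    per-SA (start, end) timestamps in seconds: 'start' when the
    tester initiated the rekey, 'end' when the IKE FSM declared
    the rekey complete.  The rate is the reciprocal of the
    average rekey duration."""
    durations = [end - start for (start, end) in rekey_times]
    average = sum(durations) / len(durations)
    return 1.0 / average

# Example: four rekeys taking roughly 40, 50, 50 and 60 ms;
# the average duration is 50 ms, i.e. 20 rekeys per second.
rate = rekey_rate([(0.0, 0.04), (0.0, 0.05), (0.1, 0.15), (0.1, 0.16)])
```

   A real tester would only apply this calculation once all SAs in
   the iteration have rekeyed successfully, as the procedures above
   require.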
   This assumes, of course, that all IPsec SAs were able to rekey
   successfully.  If this is not the case, the IPsec Tunnels are all
   re-established and the binary search moves to the next value of
   IPsec SAs to rekey.  The process repeats until a rate is
   determined at which all SAs rekey correctly within that timeframe.

Reporting Format:

   The IKE Phase 2 Rekey Rate results SHOULD be reported in the
   format of a table with a row for each of the tested frame sizes.
   There SHOULD be columns for the frame size, the rate at which the
   test was run for that frame size, the media types tested, and the
   resultant IKE Phase 2 Rekey Rate values for each type of data
   stream tested.  The Security Context parameters defined in
   Section 6.7 and utilized for this test MUST be included in any
   statement of performance.

15.  IPsec Tunnel Failover Time

   This section presents methodologies relating to the
   characterization of the failover behavior of a DUT/SUT in an IPsec
   environment.

   In order to lessen the effect of packet buffering in the DUT/SUT,
   the Tunnel Failover Time tests MUST be run at the measured IPsec
   throughput level of the DUT.  Tunnel Failover Time tests at other
   offered constant loads are OPTIONAL.

   Tunnel Failovers can be achieved in various ways, such as:

   o  Failover between two or more software instances of an IPsec
      stack.

   o  Failover between two IPsec devices.

   o  Failover between two or more crypto engines.

   o  Failover between hardware and software crypto.

   In all of the above cases there shall be at least one active IPsec
   device and a standby device.  In some cases the standby device is
   not present, and two or more IPsec devices back each other up in
   case of a catastrophic device or stack failure.
   The standby (or potentially other active) IPsec Devices can back
   up the active IPsec Device in either a stateless or stateful
   manner.  In the former case, Phase 1 SAs as well as Phase 2 SAs
   will need to be re-established in order to guarantee packet
   forwarding.  In the latter case, the SPD and SADB of the active
   IPsec Device are synchronized to the standby IPsec Device to
   ensure immediate packet path recovery.

Objective:

   Determine the time required to fail over all Active Tunnels from
   an active IPsec Device to its standby device.

Procedure:

   Before a failover can be triggered, the IPsec Device has to be in
   a state where the active stack/engine/node has the maximum
   supported number of Active Tunnels.  The Tunnels will be
   transporting bidirectional traffic at the Tunnel Throughput rate
   for the smallest frame size that the stack/engine/node is capable
   of forwarding (in most cases, this will be 64 bytes).  The traffic
   should traverse all Active Tunnels in a round-robin fashion.

   It is RECOMMENDED that the test be repeated for various numbers of
   Active Tunnels as well as for different frame sizes and frame
   rates.

   When traffic is flowing through all Active Tunnels in steady
   state, a failover shall be triggered.

   Both receiver sides of the testers will now look at sequence
   counters in the instrumented packets that are being forwarded
   through the Tunnels.  Each Tunnel MUST have its own counter to
   keep track of packet loss on a per-SA basis.

   If the tester observes no sequence number drops on any of the
   Tunnels in either direction, then the Failover Time MUST be listed
   as 'null', indicating that the failover was immediate and without
   any packet loss.
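   The per-SA sequence accounting described above can be sketched as
   follows, assuming each instrumented packet carries a per-Tunnel
   monotonically increasing sequence number.  The class and method
   names are illustrative only, and the reordering tolerance (the
   128-packet sliding window the methodology allows per SA) is
   omitted for brevity.

```python
class TunnelMonitor:
    """Keeps one sequence counter per Tunnel (SA), as required
    above, and records the largest gap observed per SA.
    Reordering tolerance is intentionally omitted from this
    sketch."""

    def __init__(self):
        self.last_seq = {}     # sa_id -> last sequence number seen
        self.largest_gap = {}  # sa_id -> largest run of missing packets

    def receive(self, sa_id, seq):
        last = self.last_seq.get(sa_id)
        if last is not None and seq > last + 1:
            gap = seq - last - 1  # packets missing between last and seq
            if gap > self.largest_gap.get(sa_id, 0):
                self.largest_gap[sa_id] = gap
        self.last_seq[sa_id] = seq

    def failover_time(self, frame_transmit_time):
        """None ('null') if no SA lost packets; otherwise the
        largest gap across all SAs multiplied by the per-frame
        transmit time, per the calculation in this section."""
        worst = max(self.largest_gap.values(), default=0)
        return None if worst == 0 else worst * frame_transmit_time
```

   For example, an SA whose received sequence numbers jump from 2 to
   7 has lost 4 packets; at a fixed per-frame transmit time of t
   seconds, the reported Tunnel Failover Time would be 4 * t.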
   In all other cases, where the tester observes a gap in the
   sequence numbers of the instrumented payload of the packets, the
   tester will monitor all SAs and look for any Tunnels that are
   still not receiving packets after the failover.  These will be
   marked as 'pending' Tunnels.  Active Tunnels that are forwarding
   packets again without any packet loss shall be marked as
   'recovered' Tunnels.  In the background, the tester will keep
   monitoring all SAs to make sure that no packets are dropped; if
   packets are dropped, the Tunnel in question is placed back in the
   'pending' state.

   Note that reordered packets can naturally occur after
   en/decryption.  This is not a valid reason to place a Tunnel back
   in the 'pending' state.  A sliding window of 128 packets per SA
   SHALL be allowed before packet loss is declared on the SA.

   The tester will wait until all Tunnels are marked as 'recovered'.
   It will then find the SA with the largest gap in sequence numbers.
   Given that the frame size is fixed and the transmit time of a
   frame of that size on the initiator links can easily be
   calculated, multiplying that frame transmit time by the largest
   packet-loss gap yields the Tunnel Failover Time.

   If the tester never reaches a state where all Tunnels are marked
   as 'recovered', then the Failover Time MUST be listed as
   'infinite'.

Reporting Format:

   The results shall be presented in a tabular format, where the
   first column lists the number of Active Tunnels, the second column
   the frame size, the third column the frame rate, and the fourth
   column the Tunnel Failover Time in milliseconds.

16.  Acknowledgements

   The authors would like to acknowledge the following individuals
   for their help with and participation in the compilation and
   editing of this document: Michele Bustos, Ixia; Paul Hoffman,
   VPNC.

17.  References

17.1.
Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC1981]  McCann, J., Deering, S., and J. Mogul, "Path MTU
              Discovery for IP version 6", RFC 1981, August 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2393]  Shacham, A., Monsour, R., Pereira, R., and M. Thomas,
              "IP Payload Compression Protocol (IPComp)", RFC 2393,
              December 1998.

   [RFC2401]  Kent, S. and R. Atkinson, "Security Architecture for
              the Internet Protocol", RFC 2401, November 1998.

   [RFC2402]  Kent, S. and R. Atkinson, "IP Authentication Header",
              RFC 2402, November 1998.

   [RFC2403]  Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within
              ESP and AH", RFC 2403, November 1998.

   [RFC2404]  Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96
              within ESP and AH", RFC 2404, November 1998.

   [RFC2405]  Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher
              Algorithm With Explicit IV", RFC 2405, November 1998.

   [RFC2406]  Kent, S. and R. Atkinson, "IP Encapsulating Security
              Payload (ESP)", RFC 2406, November 1998.

   [RFC2407]  Piper, D., "The Internet IP Security Domain of
              Interpretation for ISAKMP", RFC 2407, November 1998.

   [RFC2408]  Maughan, D., Schneider, M., and M. Schertler, "Internet
              Security Association and Key Management Protocol
              (ISAKMP)", RFC 2408, November 1998.

   [RFC2409]  Harkins, D. and D. Carrel, "The Internet Key Exchange
              (IKE)", RFC 2409, November 1998.

   [RFC2410]  Glenn, R. and S. Kent, "The NULL Encryption Algorithm
              and Its Use With IPsec", RFC 2410, November 1998.

   [RFC2411]  Thayer, R., Doraswamy, N., and R. Glenn, "IP Security
              Document Roadmap", RFC 2411, November 1998.
   [RFC2412]  Orman, H., "The OAKLEY Key Determination Protocol",
              RFC 2412, November 1998.

   [RFC2432]  Dubray, K., "Terminology for IP Multicast
              Benchmarking", RFC 2432, October 1998.

   [RFC2451]  Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher
              Algorithms", RFC 2451, November 1998.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544,
              March 1999.

   [RFC2547]  Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547,
              March 1999.

   [RFC2661]  Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn,
              G., and B. Palter, "Layer Two Tunneling Protocol
              "L2TP"", RFC 2661, August 1999.

   [RFC2784]  Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
              Traina, "Generic Routing Encapsulation (GRE)",
              RFC 2784, March 2000.

   [RFC4109]  Hoffman, P., "Algorithms for Internet Key Exchange
              version 1 (IKEv1)", RFC 4109, May 2005.

   [I-D.ietf-ipsec-ikev2]
              Kaufman, C., "Internet Key Exchange (IKEv2) Protocol",
              draft-ietf-ipsec-ikev2-17 (work in progress),
              October 2004.

   [I-D.ietf-ipsec-properties]
              Krywaniuk, A., "Security Properties of the IPsec
              Protocol Suite", draft-ietf-ipsec-properties-02 (work
              in progress), July 2002.

17.2.  Informative References

   [FIPS.186-1.1998]
              National Institute of Standards and Technology,
              "Digital Signature Standard", FIPS PUB 186-1,
              December 1998.
Authors' Addresses

   Merike Kaeo
   Double Shot Security
   520 Washington Blvd #363
   Marina Del Rey, CA  90292
   US

   Phone: +1 (310)866-0165
   Email: kaeo@merike.com

   Tim Van Herck
   Cisco Systems
   170 West Tasman Drive
   San Jose, CA  95134-1706
   US

   Email: herckt@cisco.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology
   described in this document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.  Information on the procedures with respect to rights
   in RFC documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention
   any copyrights, patents or patent applications, or other
   proprietary rights that may cover technology that may be required
   to implement this standard.  Please address the information to the
   IETF at ietf-ipr@ietf.org.
Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2006).  This document is
   subject to the rights, licenses and restrictions contained in
   BCP 78, and except as set forth therein, the authors retain all
   their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.