Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Expires: January 9, 2008                                    T. Van Herck
                                                           Cisco Systems
                                                            July 8, 2007

              Methodology for Benchmarking IPsec Devices
                     draft-ietf-bmwg-ipsec-meth-02

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 9, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   The purpose of this document is to describe a methodology specific
   to the benchmarking of IPsec IP forwarding devices.  It builds upon
   the tenets set forth in RFC 2544, RFC 2432, and other IETF
   Benchmarking Methodology Working Group (BMWG) efforts.  This
   document seeks to extend these efforts to the IPsec paradigm.

   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related
   terms.  The Methodology documents define the procedures required to
   collect the benchmarks cited in the corresponding Terminology
   documents.

Table of Contents

   1.   Introduction
   2.   Document Scope
   3.   Key Words to Reflect Requirements
   4.   Test Considerations
   5.   Test Topologies
   6.   Test Parameters
     6.1.  Frame Type
       6.1.1.  IP
       6.1.2.  UDP
       6.1.3.  TCP
     6.2.  Frame Sizes
     6.3.  Fragmentation and Reassembly
     6.4.  Time To Live
     6.5.  Trial Duration
     6.6.  Security Context Parameters
       6.6.1.  IPsec Transform Sets
       6.6.2.  IPsec Topologies
       6.6.3.  IKE Keepalives
       6.6.4.  IKE DH-group
       6.6.5.  IKE SA / IPsec SA Lifetime
       6.6.6.  IPsec Selectors
       6.6.7.  NAT-Traversal
   7.   Capacity
     7.1.  IKE SA Capacity
     7.2.  IPsec SA Capacity
   8.   Throughput
     8.1.  Throughput Baseline
     8.2.  IPsec Throughput
     8.3.  IPsec Encryption Throughput
     8.4.  IPsec Decryption Throughput
   9.   Latency
     9.1.  Latency Baseline
     9.2.  IPsec Latency
     9.3.  IPsec Encryption Latency
     9.4.  IPsec Decryption Latency
   10.  Time To First Packet
   11.  Frame Loss Rate
     11.1.  Frame Loss Baseline
     11.2.  IPsec Frame Loss
     11.3.  IPsec Encryption Frame Loss
     11.4.  IPsec Decryption Frame Loss
     11.5.  IKE Phase 2 Rekey Frame Loss
   12.  Back-to-back Frames
     12.1.  Back-to-back Frames Baseline
     12.2.  IPsec Back-to-back Frames
     12.3.  IPsec Encryption Back-to-back Frames
     12.4.  IPsec Decryption Back-to-back Frames
   13.  IPsec Tunnel Setup Behavior
     13.1.  IPsec Tunnel Setup Rate
     13.2.  IKE Phase 1 Setup Rate
     13.3.  IKE Phase 2 Setup Rate
   14.  IPsec Rekey Behavior
     14.1.  IKE Phase 1 Rekey Rate
     14.2.  IKE Phase 2 Rekey Rate
   15.  IPsec Tunnel Failover Time
   16.  DoS Resiliency
     16.1.  Phase 1 DoS Resiliency Rate
     16.2.  Phase 2 DoS Resiliency Rate
   17.  Acknowledgements
   18.  References
     18.1.  Normative References
     18.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   This document defines a specific set of tests that can be used to
   measure and report the performance characteristics of IPsec devices.
   It extends the methodology already defined for benchmarking network
   interconnecting devices in [RFC2544] to IPsec gateways and
   additionally introduces tests that can be used to measure end-host
   IPsec performance.

2.  Document Scope

   The primary focus of this document is to establish a performance
   testing methodology for IPsec devices that support manual keying and
   IKEv1.  Both IPv4 and IPv6 addressing will be taken into
   consideration for all relevant test methodologies.

   The testing will be constrained to:

   o  Devices acting as IPsec gateways, whose tests will pertain to
      both IPsec tunnel and transport mode.

   o  Devices acting as IPsec end-hosts, whose tests will pertain to
      both IPsec tunnel and transport mode.

   Note that special considerations will be presented for IPsec end-
   host testing, since those tests cannot be conducted without
   introducing additional variables that may cause variations in test
   results.

   Specifically out of scope is any testing that pertains to L2TP
   [RFC2661], GRE [RFC2784], BGP/MPLS VPNs [RFC2547], or anything that
   does not specifically relate to the establishment and tearing down
   of IPsec tunnels.

3.  Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].  RFC 2119 defines the use of these key words to help
   make the intent of standards track documents as clear as possible.
   While this document uses these keywords, it is not a standards
   track document.

4.  Test Considerations

   Before any of the IPsec data plane benchmarking tests are carried
   out, a baseline MUST be established; that is, the particular test
   in question must first be measured for performance characteristics
   without enabling IPsec.  Once both the baseline cleartext
   performance and the performance using an IPsec-enabled datapath
   have been measured, the difference between the two can be
   discerned.

   This document explicitly assumes a logical performance test
   methodology that includes the pre-configuration of routing
   protocols, ARP caches, IPv6 neighbor discovery, and all other
   extraneous IPv4 and IPv6 parameters required to pass packets before
   the tester is ready to send IPsec-protected packets.  IPv6 nodes
   that implement Path MTU Discovery [RFC1981] MUST ensure that the
   PMTUD process has completed before any of the tests are run.

   For every IPsec data plane benchmarking test, the SA database
   (SADB) MUST be created and populated with the appropriate SAs
   before any actual test traffic is sent; i.e., the DUT/SUT MUST have
   active tunnels.  This may require a manual command to be executed
   on the DUT/SUT or the sending of appropriate learning frames to the
   DUT/SUT.  This ensures that none of the control plane parameters
   (such as IPsec tunnel setup rates and IPsec tunnel rekey rates) are
   factored into these tests.

   For control plane benchmarking tests (i.e., IPsec tunnel setup rate
   and IPsec tunnel rekey rate), the authentication mechanism(s) used
   for the authenticated Diffie-Hellman exchange MUST be reported.

5.  Test Topologies

   The tests can be performed against either a DUT or a SUT.  When the
   tests are performed against a DUT, the Tester itself must be an
   IPsec peer.  This scenario is shown in Figure 1.  When testing a
   DUT where the Tester has to be an IPsec peer, the measurements have
   several disadvantages:

   o  The Tester can introduce interoperability issues and skew
      results.

   o  The measurements may not be accurate due to Tester inaccuracies.

   On the other hand, the measurement of a DUT where the Tester is an
   IPsec peer has two distinct advantages:

   o  IPsec client scenarios can be benchmarked.

   o  IPsec device encryption/decryption abnormalities may be
      identified.

             +------------+
             |            |
        +----[D]  Tester [A]----+
        |    |            |     |
        |    +------------+     |
        |                       |
        |    +------------+     |
        |    |            |     |
        +----[C]   DUT   [B]----+
             |            |
             +------------+

                        Figure 1: Topology 1

   The SUT scenario is depicted in Figure 2.  Two identical DUTs are
   used in this test setup, which more accurately simulates the use of
   IPsec gateways.  IPsec SA (i.e., AH/ESP transport or tunnel mode)
   configurations can be tested using this setup, where the Tester is
   only required to send and receive cleartext traffic.

                           +------------+
                           |            |
       +-------------------[F]  Tester [A]-------------------+
       |                   |            |                    |
       |                   +------------+                    |
       |                                                     |
       |    +------------+                +------------+     |
       |    |            |                |            |     |
       +----[E]  DUTa  [D]----------------[C]  DUTb  [B]----+
            |            |                |            |
            +------------+                +------------+

                        Figure 2: Topology 2

   When an IPsec DUT needs to be tested in a chassis failover
   topology, a second DUT needs to be used, as shown in Figure 3.
   This is the high-availability equivalent of the topology depicted
   in Figure 1.  Note that in this topology the Tester MUST be an
   IPsec peer.

              +------------+
              |            |
    +---------[F]  Tester [A]---------+
    |         |            |          |
    |         +------------+          |
    |                                 |
    |         +------------+          |
    |         |            |          |
    |    +----[C]  DUTa  [B]----+     |
    |    |    |            |    |     |
    |    |    +------------+    |     |
    +----+                      +-----+
         |    +------------+    |
         |    |            |    |
         +----[E]  DUTb  [D]----+
              |            |
              +------------+

                        Figure 3: Topology 3

   When no IPsec-enabled Tester is available and an IPsec failover
   scenario needs to be tested, the topology shown in Figure 4 can be
   used.  In this case, the high-availability pair of IPsec devices
   can be used either as an Initiator or as a Responder.  The
   remaining chassis will take the opposite role.

                   +------------+
                   |            |
    +--------------[H]  Tester [A]------------------+
    |              |            |                   |
    |              +------------+                   |
    |                                               |
    |     +------------+                            |
    |     |            |                            |
    |  +--[E]  DUTa  [D]--+                         |
    |  |  |            |  |      +------------+     |
    |  |  +------------+  |      |            |     |
    +--+                  +------[C]  DUTc  [B]----+
       |  +------------+  |      |            |
       |  |            |  |      +------------+
       +--[G]  DUTb  [F]--+
          |            |
          +------------+

                        Figure 4: Topology 4

6.  Test Parameters

   For each individual test performed, all of the following parameters
   MUST be explicitly reported in any test results.

6.1.  Frame Type

6.1.1.  IP

   Both IPv4 and IPv6 frames MUST be used.  The basic IPv4 header is
   20 bytes long (which may be increased by the use of an options
   field).  The basic IPv6 header is a fixed 40 bytes and uses
   extension headers for additional headers.  Only the basic headers
   plus the IPsec AH and/or ESP headers MUST be present.

   It is recommended that IPv4 and IPv6 frames be tested separately to
   ascertain performance parameters for either IPv4 or IPv6 traffic.
   If both IPv4 and IPv6 traffic are to be tested, the device SHOULD
   be pre-configured for a dual-stack environment to handle both
   traffic types.

   IP traffic with the L4 protocol set to 'reserved' (255) MUST be
   used.
   This ensures maximum space for instrumentation data in the payload
   section, even with frame sizes of the minimum allowed length on the
   transport media.

6.1.2.  UDP

   It is also RECOMMENDED that the tests be executed using UDP as the
   L4 protocol.  When using UDP, instrumentation data SHOULD be
   present in the payload of the packet.  It is OPTIONAL to have
   application payload.

6.1.3.  TCP

   It is OPTIONAL to perform the tests with TCP as the L4 protocol,
   but where this is considered, the TCP traffic is RECOMMENDED to be
   stateful.  With TCP as the L4 header, there may not be enough room
   to add all the instrumentation data needed to identify the packets
   within the DUT/SUT.

6.2.  Frame Sizes

   Each test SHOULD be run with different frame sizes.  The
   recommended cleartext layer 2 frame sizes for IPv4 tests over
   Ethernet media are 64, 128, 256, 512, 1024, 1280, and 1518 bytes,
   per section 9 of [RFC2544].  The four CRC bytes are included in the
   frame size specified.

   For Gigabit Ethernet supporting jumbo frames, the cleartext layer 2
   frame sizes used are 64, 128, 256, 512, 1024, 1280, 1518, 2048,
   3072, 4096, 5120, 6144, 7168, 8192, and 9234 bytes.

   Since IPv6 requires that every link have an MTU of 1280 octets or
   greater, it is REQUIRED to execute tests with cleartext layer 2
   frame sizes that include 1280 and 1518 bytes.  It is RECOMMENDED
   that additional frame sizes be included in the IPv6 test execution,
   including the maximum supported datagram size for the link type
   used.

6.3.  Fragmentation and Reassembly

   IPsec devices can and must fragment packets in specific scenarios.
   Depending on whether the fragmentation is performed in software or
   using specialized custom hardware, there may be a significant
   impact on performance.
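
   The encapsulation overhead that drives this fragmentation can be
   estimated before a test run.  The following sketch is illustrative
   only: the overhead constants assume IPv4 ESP tunnel mode with
   AES-CBC-128 and HMAC-SHA1-96 and a 1500-byte Ethernet MTU; other
   transforms change the IV, block, and ICV sizes.

```python
# Illustrative only: predict which recommended cleartext frame sizes
# will exceed a 1500-byte MTU after ESP tunnel-mode encapsulation
# (IPv4, AES-CBC-128, HMAC-SHA1-96 assumed; adjust per transform).

ETH_OVERHEAD = 18   # 14-byte Ethernet header + 4-byte CRC
OUTER_IP = 20       # outer IPv4 header added in tunnel mode
ESP_HDR = 8         # SPI + sequence number
IV = 16             # AES-CBC initialization vector
ICV = 12            # HMAC-SHA1-96 integrity check value
BLOCK = 16          # AES cipher block size
TRAILER = 2         # pad-length + next-header bytes

def esp_tunnel_size(cleartext_l2_frame: int) -> int:
    """Encapsulated IPv4 packet size for a cleartext L2 frame size."""
    inner_ip = cleartext_l2_frame - ETH_OVERHEAD
    # Padding covers the payload plus the two trailer bytes.
    pad = (BLOCK - (inner_ip + TRAILER) % BLOCK) % BLOCK
    return OUTER_IP + ESP_HDR + IV + inner_ip + pad + TRAILER + ICV

for frame in (64, 128, 256, 512, 1024, 1280, 1518):
    pkt = esp_tunnel_size(frame)
    print(frame, pkt, "fragments" if pkt > 1500 else "fits")
```

   Under these assumptions, only the 1518-byte frame is pushed past
   the MTU once encapsulated.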

   In IPv4, unless the DF (don't fragment) bit is set by the packet
   source, the sender cannot guarantee that some intermediary device
   along the path will not fragment an IPsec packet.  For transport
   mode IPsec, the peers must be able to fragment and reassemble IPsec
   packets.  Reassembly of fragmented packets is especially important
   if an IPv4 port selector (or IPv6 transport protocol selector) is
   configured.  For tunnel mode IPsec, this is not a requirement.
   Note that fragmentation is handled differently in IPv6 than in
   IPv4: in IPv6 networks, fragmentation is no longer done by
   intermediate routers, but by the source node that originates the
   packet.  The path MTU discovery (PMTUD) mechanism is recommended
   for every IPv6 node to avoid fragmentation.

   Packets generated by hosts that do not support PMTUD, and that have
   not set the DF bit in the IP header, will undergo fragmentation
   before IPsec encapsulation.  Packets generated by hosts that do
   support PMTUD will use it locally to match the statically
   configured MTU on the tunnel.  If the MTU on the tunnel is set
   manually, it must be set low enough to allow packets to pass
   through the smallest link on the path; otherwise, packets that are
   too large to fit will be dropped.

   Fragmentation can occur due to encryption overhead and is closely
   linked to the choice of transform used.  Since each test SHOULD be
   run with a maximum cleartext frame size (as per the previous
   section), fragmentation will occur when the encapsulation overhead
   pushes the packet past the maximum frame size.  All tests MUST be
   run with the DF bit not set.  It is also recommended that all
   tests be run with the DF bit set.

   Note that some implementations predetermine the encapsulated packet
   size from information available in transform sets, which are
   configured as part of the IPsec security association (SA).
   If it is predetermined that the packet will exceed the MTU of the
   output interface, the packet is fragmented before encryption.  This
   optimization may favorably impact performance, and vendors SHOULD
   report whether any such optimization is configured.

6.4.  Time To Live

   The source frames should have a TTL value large enough to
   accommodate the DUT/SUT.  A minimum TTL of 64 is RECOMMENDED.

6.5.  Trial Duration

   The duration of the test portion of each trial SHOULD be at least
   60 seconds.  In the case of IPsec tunnel rekeying tests, the test
   duration must be at least two times the IPsec tunnel rekey time to
   ensure a reasonable worst-case scenario test.

6.6.  Security Context Parameters

   All of the security context parameters listed in section 7.13 of
   the IPsec Benchmarking Terminology document MUST be reported.  When
   merely discussing the behavior of traffic flows through IPsec
   devices, an IPsec context MUST be provided.  In the cases where IKE
   is configured (as opposed to using manually keyed tunnels), both an
   IPsec and an IKE context MUST be provided.  Additional
   considerations for reporting security context parameters are
   detailed below; these all MUST be reported.

6.6.1.  IPsec Transform Sets

   All tests should be run with different IPsec transform set
   combinations.  An IPsec transform specifies a single IPsec security
   protocol (either AH or ESP) with its corresponding security
   algorithms and mode.  A transform set is a combination of
   individual IPsec transforms designed to enact a specific security
   policy for protecting a particular traffic flow.  At minimum, the
   transform set must include one AH algorithm and a mode, or one ESP
   algorithm and a mode.

   +-------------+------------------+----------------------+-----------+
   |     ESP     |    Encryption    |    Authentication    |    Mode   |
   |  Transform  |    Algorithm     |      Algorithm       |           |
   +-------------+------------------+----------------------+-----------+
   |      1      | NULL             | HMAC-SHA1-96         | Transport |
   |      2      | NULL             | HMAC-SHA1-96         | Tunnel    |
   |      3      | 3DES-CBC         | HMAC-SHA1-96         | Transport |
   |      4      | 3DES-CBC         | HMAC-SHA1-96         | Tunnel    |
   |      5      | AES-CBC-128      | HMAC-SHA1-96         | Transport |
   |      6      | AES-CBC-128      | HMAC-SHA1-96         | Tunnel    |
   |      7      | NULL             | AES-XCBC-MAC-96      | Transport |
   |      8      | NULL             | AES-XCBC-MAC-96      | Tunnel    |
   |      9      | 3DES-CBC         | AES-XCBC-MAC-96      | Transport |
   |     10      | 3DES-CBC         | AES-XCBC-MAC-96      | Tunnel    |
   |     11      | AES-CBC-128      | AES-XCBC-MAC-96      | Transport |
   |     12      | AES-CBC-128      | AES-XCBC-MAC-96      | Tunnel    |
   +-------------+------------------+----------------------+-----------+

                                Table 1

   Testing of ESP Transforms 1-4 MUST be supported.  Testing of ESP
   Transforms 5-12 SHOULD be supported.

   +--------------+--------------------------+-----------+
   | AH Transform | Authentication Algorithm |    Mode   |
   +--------------+--------------------------+-----------+
   |       1      | HMAC-SHA1-96             | Transport |
   |       2      | HMAC-SHA1-96             | Tunnel    |
   |       3      | AES-XCBC-MAC-96          | Transport |
   |       4      | AES-XCBC-MAC-96          | Tunnel    |
   +--------------+--------------------------+-----------+

                         Table 2

   Testing of AH Transforms 1 and 2 MUST be supported.  Testing of AH
   Transforms 3 and 4 SHOULD be supported.

   Note that these tables are derived from the cryptographic algorithm
   requirements for AH and ESP described in [RFC4305].  Optionally,
   other AH and/or ESP transforms MAY be supported.
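
   For automation, Tables 1 and 2 can be captured directly as data.
   The sketch below is hypothetical (the names are local to this
   example, not part of any test tool); it enumerates the ESP and AH
   transforms in table order and splits out the mandatory and
   recommended subsets:

```python
# Tables 1 and 2 expressed as data; ordering matches the tables above
# (authentication algorithm outermost, then encryption, then mode).
ESP_TRANSFORMS = [
    (enc, auth, mode)
    for auth in ("HMAC-SHA1-96", "AES-XCBC-MAC-96")
    for enc in ("NULL", "3DES-CBC", "AES-CBC-128")
    for mode in ("Transport", "Tunnel")
]
AH_TRANSFORMS = [
    (auth, mode)
    for auth in ("HMAC-SHA1-96", "AES-XCBC-MAC-96")
    for mode in ("Transport", "Tunnel")
]

# ESP Transforms 1-4 and AH Transforms 1-2 are mandatory to test;
# the remainder are recommended.
must_esp, should_esp = ESP_TRANSFORMS[:4], ESP_TRANSFORMS[4:]
must_ah, should_ah = AH_TRANSFORMS[:2], AH_TRANSFORMS[2:]
```

   A test harness can then iterate over the mandatory subset first and
   treat the recommended subset as optional coverage.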

   +-----------------------+----+-----+
   | Transform Combination | AH | ESP |
   +-----------------------+----+-----+
   |           1           |  1 |  1  |
   |           2           |  2 |  2  |
   |           3           |  1 |  3  |
   |           4           |  2 |  4  |
   +-----------------------+----+-----+

               Table 3

   It is RECOMMENDED that the transforms shown in Table 3 be supported
   for IPv6 traffic selectors, since AH may be used together with ESP
   in these environments.  Since AH will provide the overall
   authentication and integrity, the ESP authentication algorithm MUST
   be NULL for these tests.  Optionally, other combined AH/ESP
   transform sets MAY be supported.

6.6.2.  IPsec Topologies

   All tests should be run with various IPsec topology configurations,
   and the IPsec topology used MUST be reported.  Since IPv6 requires
   the implementation of manual keys for IPsec, both manual keying and
   IKE configurations MUST be tested.

   For manual keying tests, the number of IPsec SAs used should vary
   from 1 to 101, increasing in increments of 50.  Although it is not
   expected that manual keying (i.e., manually configuring the IPsec
   SA) will be deployed in any operational setting with the exception
   of very small controlled environments (i.e., fewer than 10 nodes),
   it is prudent to test for potentially larger scale deployments.

   For IKE-specific tests, the following IPsec topologies MUST be
   tested:

   o  1 IKE SA & 1 IPsec SA (i.e., 1 IPsec Tunnel)

   o  1 IKE SA & {max} IPsec SAs

   o  {max} IKE SAs & {max} IPsec SAs

   It is RECOMMENDED to also test with the following IPsec topologies
   in order to gain more datapoints:

   o  {max/2} IKE SAs & {(max/2) IKE SAs} IPsec SAs

   o  {max} IKE SAs & {(max) IKE SAs} IPsec SAs

6.6.3.  IKE Keepalives

   IKE keepalives track reachability of peers by sending hello packets
   between peers.
   During the typical life of an IKE Phase 1 SA, packets are only
   exchanged over this IKE Phase 1 SA when an IPsec IKE Quick Mode
   (QM) negotiation is required at the expiration of the IPsec tunnel
   SAs.  There is no standards-based mechanism for either type of SA
   to detect the loss of a peer, except when the QM negotiation fails.
   Most IPsec implementations use the Dead Peer Detection (i.e.,
   keepalive) mechanism to determine whether connectivity has been
   lost with a peer before the expiration of the IPsec tunnel SAs.

   All tests using IKEv1 MUST use the same IKE keepalive parameters.

6.6.4.  IKE DH-group

   There are three Diffie-Hellman groups that can be supported by
   standards-compliant IPsec devices:

   o  DH-group 1: 768 bits

   o  DH-group 2: 1024 bits

   o  DH-group 14: 2048 bits

   DH-group 2 MUST be tested, to support the IKEv1 algorithm
   requirements listed in [RFC4109].  It is recommended that the same
   DH-group be used for both IKE Phase 1 and IKE Phase 2.  All test
   methodologies using IKE MUST report which DH-group was configured
   for IKE Phase 1 and IKE Phase 2 negotiations.

6.6.5.  IKE SA / IPsec SA Lifetime

   An IKE SA or IPsec SA is retained by each peer until the tunnel
   lifetime expires.  IKE SAs and IPsec SAs have individual lifetime
   parameters.  In many real-world environments, the IPsec SAs will be
   configured with shorter lifetimes than those of the IKE SAs.  This
   will force a rekey to happen more often for IPsec SAs.

   When the initiator begins an IKE negotiation between itself and a
   remote peer (the responder), an IKE policy can be selected only if
   the lifetime of the responder's policy is shorter than or equal to
   the lifetime of the initiator's policy.  If the lifetimes are not
   the same, the shorter lifetime will be used.
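
   The selection rule above reduces to taking the shorter configured
   lifetime.  The following is a minimal sketch of the described
   behavior (an illustration only, not any IKE stack's API):

```python
from typing import Optional

def negotiated_lifetime(initiator_s: int, responder_s: int) -> Optional[int]:
    """Apply the policy rule described above: the responder's policy is
    selectable only if its lifetime does not exceed the initiator's,
    and the shorter of the two lifetimes is the one used."""
    if responder_s > initiator_s:
        return None  # responder policy not selectable
    return min(initiator_s, responder_s)
```
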

   To avoid any incompatibilities in data plane benchmark testing, all
   devices MUST have the same IKE SA and IPsec SA lifetimes
   configured, and they must be configured with a time that exceeds
   the test duration timeframe or a byte count that exceeds the total
   number of bytes to be transmitted during the test.

   Note that the IPsec SA lifetime MUST be equal to or less than the
   IKE SA lifetime.  Both the IKE SA lifetime and the IPsec SA
   lifetime used MUST be reported.  This parameter SHOULD be variable
   when testing IKE rekeying performance.

6.6.6.  IPsec Selectors

   All tests MUST be performed using standard IPsec selectors as
   described in section 4.4.2 of [RFC2401].

6.6.7.  NAT-Traversal

   For any tests that include network address translation
   considerations, the use of NAT-T in the test environment MUST be
   recorded.

7.  Capacity

7.1.  IKE SA Capacity

   Objective: Measure the maximum number of IKE SAs that can be
   sustained on an IPsec device.

   Procedure: The IPsec device under test initially MUST NOT have any
   Active IPsec Tunnels.  The Initiator (either a Tester or an IPsec
   peer) will start the negotiation of an IPsec Tunnel (a single
   Phase 1 SA and a pair of Phase 2 SAs).

   After it is detected that the tunnel is established, a limited
   number of packets (50 RECOMMENDED) SHALL be sent through the
   tunnel.  If all packets are received by the Responder (i.e., the
   DUT), a new IPsec Tunnel may be attempted.

   This process is repeated until no more IPsec Tunnels can be
   established.

   At the end of the test, a traffic pattern is sent to the Initiator
   that will be distributed over all Active IPsec Tunnels, where each
   tunnel will need to propagate a fixed number of packets at a
   minimum rate of 5 pps.  When all packets sent by the Initiator are
   received by the Responder, the test has successfully determined
   the IKE SA Capacity.
If, however, this final check fails, the test needs to be re-executed with a lower number of Active IPsec Tunnels; i.e., an upper limit on the number of Active IPsec Tunnels SHOULD be defined in the test.

Reporting Format: The reporting format SHOULD be the same as listed in 7.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

7.2. IPsec SA Capacity

Objective: Measure the maximum number of IPsec SA's that can be sustained on an IPsec Device.

Procedure: The IPsec Device under test initially MUST NOT have any Active IPsec Tunnels. The Initiator (either a tester or an IPsec peer) will start the negotiation of an IPsec Tunnel (a single Phase 1 SA and a pair of Phase 2 SA's).

After it is detected that the tunnel is established, a limited number of packets (50 packets RECOMMENDED) SHALL be sent through the tunnel. If all packets are received by the Responder (i.e. the DUT), a new pair of IPsec SA's may be attempted. This is achieved by offering a specific traffic pattern to the Initiator that matches a given selector and therefore triggers the negotiation of a new pair of IPsec SA's.

This process is repeated until no more IPsec SA's can be established.

At the end of the test, a traffic pattern is sent to the Initiator that will be distributed over all IPsec SA's, where each SA will need to propagate a fixed number of packets at a minimum rate of 5 pps. When all packets sent by the Initiator are received by the Responder, the test has successfully determined the IPsec SA Capacity. If, however, this final check fails, the test needs to be re-executed with a lower number of IPsec SA's; i.e., an upper limit on the number of IPsec SA's SHOULD be defined in the test.

Reporting Format: The reporting format SHOULD be the same as listed in 7.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

8. Throughput

This section contains the description of the tests that are related to the characterization of the packet forwarding of a DUT/SUT in an IPsec environment. Some metrics extend the concept of throughput presented in [RFC1242]. The notion of Forwarding Rate is cited in [RFC2285].

A separate test SHOULD be performed for Throughput tests using IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.

8.1. Throughput Baseline

Objective: Measure the intrinsic cleartext throughput of a device without the use of IPsec. The throughput baseline methodology and reporting format are derived from [RFC2544].

Procedure: Send a specific number of frames that match the IPsec SA selector(s) to be tested at a specific rate through the DUT and then count the frames that are transmitted by the DUT. If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun. If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The throughput is the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment.

Reporting Format: The results of the throughput test SHOULD be reported in the form of a graph. If it is, the x coordinate SHOULD be the frame size and the y coordinate SHOULD be the frame rate. There SHOULD be at least two lines on the graph. There SHOULD be one line showing the theoretical frame rate for the media at the various frame sizes.
The second line SHOULD be the plot of the test results. Additional lines MAY be used on the graph to report the results for each type of data stream tested. Text accompanying the graph SHOULD indicate the protocol, data stream format, and type of media used in the tests.

We assume that if a single value is desired for advertising purposes, the vendor will select the rate for the minimum frame size for the media. If this is done, the figure MUST be expressed in packets per second. The rate MAY also be expressed in bits (or bytes) per second if the vendor so desires. The statement of performance MUST include:

*  Measured maximum frame rate

*  Size of the frame used

*  Theoretical limit of the media for that frame size

*  Type of protocol used in the test

Even if a single value is used as part of the advertising copy, the full table of results SHOULD be included in the product data sheet.

8.2. IPsec Throughput

Objective: Measure the intrinsic throughput of a device utilizing IPsec.

Procedure: Send a specific number of cleartext frames that match the IPsec SA selector(s) at a specific rate through the DUT/SUT. DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device. The testing device counts the frames that are transmitted by DUTb. If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun. If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The IPsec Throughput is the fastest rate at which the count of test frames transmitted by the DUT/SUT is equal to the number of test frames sent to it by the test equipment.
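The iterative rate adjustment described above can be realized as a binary search over the offered rate. The sketch below is illustrative only and assumes a hypothetical `run_trial(rate)` tester hook returning the offered and received frame counts for one trial:

```python
def throughput_search(run_trial, max_rate_pps, step_pps=1):
    # Find the fastest rate at which every offered frame is forwarded.
    # run_trial(rate) -> (offered_count, received_count) for one trial.
    lo, hi, best = step_pps, max_rate_pps, 0
    while lo <= hi:
        rate = (lo + hi) // 2
        offered, received = run_trial(rate)
        if received == offered:        # no loss: try a higher rate
            best, lo = rate, rate + step_pps
        else:                          # loss: back off the offered rate
            hi = rate - step_pps
    return best                        # measured throughput in pps
```

With `step_pps` set to the tester's rate resolution, the returned value matches the throughput definition above: the fastest lossless rate found.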
For tests using multiple IPsec SA's, the test traffic associated with the individual traffic selectors defined for each IPsec SA MUST be sent in a round-robin fashion to keep the test balanced so as not to overload any single IPsec SA.

Reporting Format: The reporting format SHOULD be the same as listed in 7.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

8.3. IPsec Encryption Throughput

Objective: Measure the intrinsic DUT vendor-specific IPsec Encryption Throughput.

Procedure: Send a specific number of cleartext frames that match the IPsec SA selector(s) at a specific rate to the DUT. The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester. Upon receipt of the encrypted packets, the testing device will timestamp the packet(s) and record the result. If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun. If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The IPsec Encryption Throughput is the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment.

For tests using multiple IPsec SA's, the test traffic associated with the individual traffic selectors defined for each IPsec SA MUST be sent in a round-robin fashion to keep the test balanced so as not to overload any single IPsec SA.

Reporting Format: The reporting format SHOULD be the same as listed in 7.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

8.4. IPsec Decryption Throughput

Objective: Measure the intrinsic DUT vendor-specific IPsec Decryption Throughput.

Procedure: Send a specific number of IPsec protected frames that match the IPsec SA selector(s) at a specific rate to the DUT. The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester. Upon receipt of the cleartext packets, the testing device will timestamp the packet(s) and record the result. If the count of offered frames is equal to the count of received frames, the rate of the offered stream is increased and the test is rerun. If fewer frames are received than were transmitted, the rate of the offered stream is reduced and the test is rerun.

The IPsec Decryption Throughput is the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment.

For tests using multiple IPsec SA's, the test traffic associated with the individual traffic selectors defined for each IPsec SA MUST be sent in a round-robin fashion to keep the test balanced so as not to overload any single IPsec SA.

Reporting Format: The reporting format SHOULD be the same as listed in 7.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

9. Latency

This section presents methodologies relating to the characterization of the forwarding latency of a DUT/SUT. It extends the concept of latency characterization presented in [RFC2544] to an IPsec environment.

Separate tests SHOULD be performed for latency using IPv4/UDP, IPv6/UDP, IPv4/TCP, and IPv6/TCP traffic.
In order to lessen the effect of packet buffering in the DUT/SUT, the latency tests MUST be run at the measured IPsec Throughput level of the DUT/SUT; IPsec latency at other offered loads is optional.

Lastly, [RFC1242] and [RFC2544] draw a distinction between two classes of devices: "store and forward" and "bit-forwarding". Each class affects how latency is collected and subsequently presented. See the related RFC's for more information. In practice, much of the test equipment will collect the latency measurement for one class or the other and, if needed, mathematically derive the reported value by the addition or subtraction of values accounting for the medium propagation delay of the packet, bit times to the timestamp trigger within the packet, etc. Test equipment vendors SHOULD provide documentation regarding the composition and calculation of the latency values being reported. The user of this data SHOULD understand the nature of the latency values being reported, especially when comparing results collected from multiple test vendors. (E.g., if test vendor A presents a "store and forward" latency result and test vendor B presents a "bit-forwarding" latency result, the user may erroneously conclude that the DUT has two differing sets of latency values.)

9.1. Latency Baseline

Objective: Measure the intrinsic latency (min/avg/max) introduced by a device without the use of IPsec.

Procedure: First determine the throughput for the DUT/SUT at each of the listed frame sizes. Send a stream of frames at a particular frame size through the DUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested. The stream SHOULD be at least 120 seconds in duration. An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent.
The time at which this frame is fully transmitted is recorded (timestamp A). The receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The report MUST state which definition of latency (from [RFC1242]) was used for this test. The latency results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the latency test was run for that frame size, the media types tested, and the resultant latency values for each type of data stream tested.

9.2. IPsec Latency

Objective: Measure the intrinsic IPsec Latency (min/avg/max) introduced by a device when using IPsec.

Procedure: First determine the throughput for the DUT/SUT at each of the listed frame sizes. Send a stream of cleartext frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested. DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device.

The stream SHOULD be at least 120 seconds in duration. An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent. The time at which this frame is fully transmitted is recorded (timestamp A).
The receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The IPsec Latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The reporting format SHOULD be the same as listed in 8.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

9.3. IPsec Encryption Latency

Objective: Measure the DUT vendor-specific IPsec Encryption Latency for IPsec protected traffic.

Procedure: Send a stream of cleartext frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.

The stream SHOULD be at least 120 seconds in duration. An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent. The time at which this frame is fully transmitted is recorded (timestamp A). The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester. Upon receipt of the encrypted frames, the receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The IPsec Encryption Latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.
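The timestamp arithmetic used throughout this section, together with the min/avg/max summary over repeated trials, can be sketched as follows (illustrative Python; timestamp capture itself is test-equipment specific):

```python
def latency_stats(trials):
    # trials: list of (timestamp_a, timestamp_b) pairs, one per trial,
    # where A is when the tagged frame was fully transmitted and B is
    # when it was received, per the RFC 1242 definition in use.
    deltas = [b - a for a, b in trials]
    return min(deltas), sum(deltas) / len(deltas), max(deltas)
```

For the reported value, this methodology averages at least 20 such trials; min and max are kept for the min/avg/max statement of the objective.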
The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The reporting format SHOULD be the same as listed in 8.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

9.4. IPsec Decryption Latency

Objective: Measure the DUT vendor-specific IPsec Decryption Latency for IPsec protected traffic.

Procedure: Send a stream of IPsec protected frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.

The stream SHOULD be at least 120 seconds in duration. An identifying tag SHOULD be included in one frame after 60 seconds, with the type of tag being implementation dependent. The time at which this frame is fully transmitted is recorded (timestamp A). The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester. Upon receipt of the decrypted frames, the receiver logic in the test equipment MUST recognize the tag information in the frame stream and record the time at which the tagged frame was received (timestamp B).

The IPsec Decryption Latency is timestamp B minus timestamp A, as per the relevant definition from [RFC1242], namely latency as defined for store and forward devices or latency as defined for bit forwarding devices.

The test MUST be repeated at least 20 times, with the reported value being the average of the recorded values.

Reporting Format: The reporting format SHOULD be the same as listed in 8.1 with the additional requirement that the Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

10. Time To First Packet

Objective: Measure the time it takes to transmit a packet when no SA's have been established.

Procedure: Determine the IPsec Throughput for the DUT/SUT at each of the listed frame sizes. Start with a DUT/SUT with Configured Tunnels. Send a stream of cleartext frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames that match the IPsec SA selector(s) to be tested.

The time at which the first frame is fully transmitted from the testing device is recorded as timestamp A. The time at which the testing device receives its first frame from the DUT/SUT is recorded as timestamp B. The Time To First Packet is the difference between timestamp B and timestamp A.

Note that it is possible for packets to be lost during IPsec Tunnel establishment, and timestamps A and B are not required to be associated with a unique packet.

Reporting Format: The Time To First Packet results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the TTFP test was run for that frame size, the media types tested, and the resultant TTFP values for each type of data stream tested. The Security Context parameters defined in 5.6 and utilized for this test MUST be included in any statement of performance.

11. Frame Loss Rate

This section presents methodologies relating to the characterization of frame loss rate, as defined in [RFC1242], in an IPsec environment.

11.1. Frame Loss Baseline

Objective: To determine the frame loss rate, as defined in [RFC1242], of a DUT/SUT throughout the entire range of input data rates and frame sizes without the use of IPsec.
Procedure: Send a specific number of frames at a specific rate through the DUT/SUT to be tested using frames that match the IPsec SA selector(s) to be tested and count the frames that are transmitted by the DUT/SUT. The frame loss rate at each point is calculated using the following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run for the frame rate that corresponds to 100% of the maximum rate for the frame size on the input media. Repeat the procedure for the rate that corresponds to 90% of the maximum rate used and then for 80% of this rate. This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost. The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The results of the frame loss rate test SHOULD be plotted as a graph. If this is done, the X axis MUST be the input frame rate as a percent of the theoretical rate for the media at the specific frame size. The Y axis MUST be the percent loss at the particular input rate. The left end of the X axis and the bottom of the Y axis MUST be 0 percent; the right end of the X axis and the top of the Y axis MUST be 100 percent. Multiple lines on the graph MAY be used to report the frame loss rate for different frame sizes, protocols, and types of data streams.

11.2. IPsec Frame Loss

Objective: To measure the frame loss rate of a device when using IPsec to protect the data flow.

Procedure: Ensure that the DUT/SUT is in active tunnel mode. Send a specific number of cleartext frames that match the IPsec SA selector(s) to be tested at a specific rate through the DUT/SUT. DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device.
The testing device counts the frames that are transmitted by DUTb. The frame loss rate at each point is calculated using the following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run for the frame rate that corresponds to 100% of the maximum rate for the frame size on the input media. Repeat the procedure for the rate that corresponds to 90% of the maximum rate used and then for 80% of this rate. This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost. The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The reporting format SHOULD be the same as listed in 10.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

11.3. IPsec Encryption Frame Loss

Objective: To measure the effect of IPsec encryption on the frame loss rate of a device.

Procedure: Send a specific number of cleartext frames that match the IPsec SA selector(s) at a specific rate to the DUT. The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester. The testing device counts the encrypted frames that are transmitted by the DUT. The frame loss rate at each point is calculated using the following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run for the frame rate that corresponds to 100% of the maximum rate for the frame size on the input media. Repeat the procedure for the rate that corresponds to 90% of the maximum rate used and then for 80% of this rate.
This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost. The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The reporting format SHOULD be the same as listed in 10.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

11.4. IPsec Decryption Frame Loss

Objective: To measure the effect of IPsec decryption on the frame loss rate of a device.

Procedure: Send a specific number of IPsec protected frames that match the IPsec SA selector(s) at a specific rate to the DUT. The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester. The testing device counts the cleartext frames that are transmitted by the DUT. The frame loss rate at each point is calculated using the following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run for the frame rate that corresponds to 100% of the maximum rate for the frame size on the input media. Repeat the procedure for the rate that corresponds to 90% of the maximum rate used and then for 80% of this rate. This sequence SHOULD be continued (at reducing 10% intervals) until there are two successive trials in which no frames are lost. The maximum granularity of the trials MUST be 10% of the maximum rate; a finer granularity is encouraged.

Reporting Format: The reporting format SHOULD be the same as listed in 10.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

11.5. IKE Phase 2 Rekey Frame Loss

Objective: To measure the frame loss due to an IKE Phase 2 (i.e. IPsec SA) Rekey event.

Procedure: The procedure is the same as in 10.2 with the exception that the IPsec SA lifetime MUST be configured to be one-third of the trial test duration or one-third of the total number of bytes to be transmitted during the trial duration.

Reporting Format: The reporting format SHOULD be the same as listed in 10.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

12. Back-to-back Frames

This section presents methodologies relating to the characterization of back-to-back frame processing, as defined in [RFC1242], in an IPsec environment.

12.1. Back-to-back Frames Baseline

Objective: To characterize the ability of a DUT to process back-to-back frames, as defined in [RFC1242], without the use of IPsec.

Procedure: Send a burst of frames that match the IPsec SA selector(s) to be tested with minimum inter-frame gaps to the DUT and count the number of frames forwarded by the DUT. If the count of transmitted frames is equal to the number of frames forwarded, the length of the burst is increased and the test is rerun. If the number of forwarded frames is less than the number transmitted, the length of the burst is reduced and the test is rerun.

The back-to-back value is the number of frames in the longest burst that the DUT will handle without the loss of any frames. The trial length MUST be at least 2 seconds and SHOULD be repeated at least 50 times, with the average of the recorded values being reported.

Reporting Format: The back-to-back results SHOULD be reported in the format of a table with a row for each of the tested frame sizes.
There SHOULD be columns for the frame size and for the resultant average frame count for each type of data stream tested. The standard deviation for each measurement MAY also be reported.

12.2. IPsec Back-to-back Frames

Objective: To measure the back-to-back frame processing rate of a device when using IPsec to protect the data flow.

Procedure: Send a burst of cleartext frames that match the IPsec SA selector(s) to be tested with minimum inter-frame gaps to the DUT/SUT. DUTa will encrypt the traffic and forward it to DUTb, which will in turn decrypt the traffic and forward it to the testing device. The testing device counts the frames that are transmitted by DUTb. If the count of transmitted frames is equal to the number of frames forwarded, the length of the burst is increased and the test is rerun. If the number of forwarded frames is less than the number transmitted, the length of the burst is reduced and the test is rerun.

The back-to-back value is the number of frames in the longest burst that the DUT/SUT will handle without the loss of any frames. The trial length MUST be at least 2 seconds and SHOULD be repeated at least 50 times, with the average of the recorded values being reported.

Reporting Format: The reporting format SHOULD be the same as listed in 11.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

12.3. IPsec Encryption Back-to-back Frames

Objective: To measure the effect of IPsec encryption on the back-to-back frame processing rate of a device.

Procedure: Send a burst of cleartext frames that match the IPsec SA selector(s) to be tested with minimum inter-frame gaps to the DUT.
The DUT will receive the cleartext frames, perform IPsec operations, and then send the IPsec protected frames to the tester. The testing device counts the encrypted frames that are transmitted by the DUT. If the count of transmitted encrypted frames is equal to the number of frames forwarded, the length of the burst is increased and the test is rerun. If the number of forwarded frames is less than the number transmitted, the length of the burst is reduced and the test is rerun.

The back-to-back value is the number of frames in the longest burst that the DUT will handle without the loss of any frames. The trial length MUST be at least 2 seconds and SHOULD be repeated at least 50 times, with the average of the recorded values being reported.

Reporting Format: The reporting format SHOULD be the same as listed in 11.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

12.4. IPsec Decryption Back-to-back Frames

Objective: To measure the effect of IPsec decryption on the back-to-back frame processing rate of a device.

Procedure: Send a burst of IPsec protected frames that match the IPsec SA selector(s) to be tested with minimum inter-frame gaps to the DUT. The DUT will receive the IPsec protected frames, perform IPsec operations, and then send the cleartext frames to the tester. The testing device counts the frames that are transmitted by the DUT. If the count of transmitted frames is equal to the number of frames forwarded, the length of the burst is increased and the test is rerun. If the number of forwarded frames is less than the number transmitted, the length of the burst is reduced and the test is rerun.
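The burst-length adjustment used throughout Section 12 can likewise be realized as a binary search. The sketch below is illustrative only; `run_burst(n)` is a hypothetical tester hook returning how many of the n back-to-back frames the DUT forwarded:

```python
def back_to_back_search(run_burst, max_burst):
    # Find the longest burst the DUT forwards without loss, per the
    # procedure above: lengthen the burst on success, shorten on loss.
    lo, hi, best = 1, max_burst, 0
    while lo <= hi:
        burst = (lo + hi) // 2
        if run_burst(burst) == burst:   # all frames forwarded
            best, lo = burst, burst + 1
        else:                           # loss observed
            hi = burst - 1
    return best
```

Each such search yields one trial's back-to-back value; the methodology then averages at least 50 trials.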
The back-to-back value is the number of frames in the longest burst that the DUT will handle without the loss of any frames. The trial length MUST be at least 2 seconds and SHOULD be repeated at least 50 times, with the average of the recorded values being reported.

Reporting Format: The reporting format SHOULD be the same as listed in 11.1 with the additional requirement that the Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

13. IPsec Tunnel Setup Behavior

13.1. IPsec Tunnel Setup Rate

Objective: Determine the rate at which IPsec Tunnels can be established.

Procedure: Configure the DUT/SUT with n IKE Phase 1 and corresponding IKE Phase 2 policies. Ensure that no SA's are established and that the DUT/SUT is in configured tunnel mode for all n policies. Send a stream of cleartext frames at a particular frame size through the DUT/SUT at the determined throughput rate using frames with selectors matching the first IKE Phase 1 policy. As soon as the testing device receives its first frame from the DUT/SUT, it knows that the IPsec Tunnel is established and starts sending the next stream of cleartext frames using the same frame size and throughput rate, but this time using selectors matching the second IKE Phase 1 policy. This process is repeated until all configured IPsec Tunnels have been established.

The IPsec Tunnel Setup Rate is determined by the following formula:

      Tunnel Setup Rate = n / [Duration of Test - (n * frame_transmit_time)]

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that n=100 IPsec Tunnels be tested at a minimum to get a large enough sample size to depict some real-world behavior.
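The Tunnel Setup Rate formula above evaluates directly; a worked example follows (the numeric values are illustrative, not measured):

```python
def tunnel_setup_rate(n, test_duration_s, frame_transmit_time_s):
    # Tunnel Setup Rate = n / [Duration of Test - (n * frame_transmit_time)]
    return n / (test_duration_s - n * frame_transmit_time_s)

# Example: 100 tunnels established over a 75 s test with a 0.25 s
# frame transmit time gives 100 / (75 - 25) = 2 tunnels per second.
```

The same shape applies to the IKE SA Setup Rate in 13.2; the IPsec SA Setup Rate in 13.3 additionally subtracts the IKE SA establishment time A from the denominator.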
Reporting Format: The Tunnel Setup Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant Tunnel Setup Rate values for each type of data stream tested. The Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

13.2. IKE Phase 1 Setup Rate

Objective: Determine the rate at which IKE SAs can be established.

Procedure: Configure the DUT with n IKE Phase 1 and corresponding IKE Phase 2 policies. Ensure that no SAs are established and that the DUT is in configured tunnel mode for all n policies. Send a stream of cleartext frames at a particular frame size through the DUT at the determined throughput rate, using frames with selectors matching the first IKE Phase 1 policy. As soon as the Phase 1 SA is established, the testing device starts sending the next stream of cleartext frames using the same frame size and throughput rate, but this time using selectors matching the second IKE Phase 1 policy. This process is repeated until all configured IKE SAs have been established.

The IKE SA Setup Rate is determined by the following formula:

IKE SA Setup Rate = n / [Duration of Test - (n * frame_transmit_time)]

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that n=100 IKE SAs be tested at a minimum to get a large enough sample size to depict real-world behavior.

Reporting Format: The IKE Phase 1 Setup Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes.
There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 1 Setup Rate values for each type of data stream tested. The Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

13.3. IKE Phase 2 Setup Rate

Objective: Determine the rate at which IPsec SAs can be established.

Procedure: Configure the DUT with a single IKE Phase 1 policy and n corresponding IKE Phase 2 policies. Ensure that no SAs are established and that the DUT is in configured tunnel mode for all policies. Send a stream of cleartext frames at a particular frame size through the DUT at the determined throughput rate, using frames with selectors matching the first IPsec SA policy.

The time at which the IKE SA is established is recorded as timestamp A. As soon as the Phase 1 SA is established, the IPsec SA negotiation will be initiated. Once the first IPsec SA has been established, start sending the next stream of cleartext frames using the same frame size and throughput rate, but this time using selectors matching the second IKE Phase 2 policy. This process is repeated until all configured IPsec SAs have been established.

The IPsec SA Setup Rate is determined by the following formula:

IPsec SA Setup Rate = n / [Duration of Test - {A + ((n-1) * frame_transmit_time)}]

The IKE SA lifetime and the IPsec SA lifetime MUST be configured to exceed the duration of the test. It is RECOMMENDED that n=100 IPsec SAs be tested at a minimum to get a large enough sample size to depict real-world behavior.

Reporting Format: The IKE Phase 2 Setup Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes.
There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 2 Setup Rate values for each type of data stream tested. The Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

14. IPsec Rekey Behavior

The IPsec Rekey Behavior tests all need to be executed by an IPsec-aware test device, since each test needs to be closely linked with the IKE FSM and cannot be performed by offering a specific traffic pattern at either the Initiator or the Responder.

14.1. IKE Phase 1 Rekey Rate

Objective: Determine the maximum rate at which an IPsec Device can rekey IKE SAs.

Procedure: The IPsec Device under test should initially be set up with the determined IKE SA Capacity number of Active IPsec Tunnels.

The IPsec-aware tester should then perform a binary search in which it initiates an IKE Phase 1 SA rekey for all Active IPsec Tunnels. For each IKE SA, the tester MUST record a timestamp when it initiates the rekey and MUST record another timestamp once the FSM declares the rekey complete. Once the iteration is complete, the tester has a table of rekey times for each IKE SA. The reciprocal of the average of this table is the IKE Phase 1 Rekey Rate.

This holds provided that all IKE SAs were able to rekey successfully. If this is not the case, the IPsec Tunnels are all re-established and the binary search proceeds to the next value of IKE SAs to rekey. The process repeats until a rate is determined at which all SAs in that timeframe rekey correctly.

Reporting Format: The IKE Phase 1 Rekey Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes.
There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 1 Rekey Rate values for each type of data stream tested. The Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.

14.2. IKE Phase 2 Rekey Rate

Objective: Determine the maximum rate at which an IPsec Device can rekey IPsec SAs.

Procedure: The IPsec Device under test should initially be set up with the determined IKE SA Capacity number of Active IPsec Tunnels.

The IPsec-aware tester should then perform a binary search in which it initiates an IKE Phase 2 SA rekey for all IPsec SAs. For each IPsec SA, the tester MUST record a timestamp when it initiates the rekey and MUST record another timestamp once the FSM declares the rekey complete. Once the iteration is complete, the tester has a table of rekey times for each IPsec SA. The reciprocal of the average of this table is the IKE Phase 2 Rekey Rate.

This holds provided that all IPsec SAs were able to rekey successfully. If this is not the case, the IPsec Tunnels are all re-established and the binary search proceeds to the next value of IPsec SAs to rekey. The process repeats until a rate is determined at which all SAs in that timeframe rekey correctly.

Reporting Format: The IKE Phase 2 Rekey Rate results SHOULD be reported in the format of a table with a row for each of the tested frame sizes. There SHOULD be columns for the frame size, the rate at which the test was run for that frame size, the media types tested, and the resultant IKE Phase 2 Rekey Rate values for each type of data stream tested. The Security Context parameters defined in 6.7 and utilized for this test MUST be included in any statement of performance.
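The "reciprocal of the average" computation used in 14.1 and 14.2 can be sketched as follows (the timestamp pairs are hypothetical sample data, not measurements from this document):

```python
def rekey_rate(rekey_timestamps):
    """Rekey rate per Sections 14.1/14.2: the reciprocal of the mean
    per-SA rekey duration (completion timestamp minus initiation
    timestamp), in rekeys per second.
    """
    durations = [done - started for started, done in rekey_timestamps]
    return len(durations) / sum(durations)  # == 1 / mean(durations)

# Three SAs whose rekeys took 0.20 s, 0.25 s and 0.15 s: the mean
# duration is 0.20 s, i.e. a rate of 5 rekeys per second.
rate = rekey_rate([(0.00, 0.20), (1.00, 1.25), (2.00, 2.15)])
```

The same computation applies to IKE SAs (Phase 1) and IPsec SAs (Phase 2); only the kind of SA being rekeyed differs.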
15. IPsec Tunnel Failover Time

This section presents methodologies relating to the characterization of the failover behavior of a DUT/SUT in an IPsec environment.

In order to lessen the effect of packet buffering in the DUT/SUT, the Tunnel Failover Time tests MUST be run at the measured IPsec throughput level of the DUT. Tunnel Failover Time tests at other offered constant loads are OPTIONAL.

Tunnel failovers can be achieved in various ways, such as:

o Failover between two or more software instances of an IPsec stack.

o Failover between two IPsec devices.

o Failover between two or more crypto engines.

o Failover between hardware and software crypto.

In all of the above cases there shall be at least one active IPsec device and a standby device. In some cases the standby device is not present, and two or more IPsec devices back each other up in case of a catastrophic device or stack failure. The standby (or other potentially active) IPsec Devices can back up the active IPsec Device in either a stateless or a stateful manner. In the former case, Phase 1 SAs as well as Phase 2 SAs will need to be re-established in order to guarantee packet forwarding. In the latter case, the SPD and SADB of the active IPsec Device are synchronized to the standby IPsec Device to ensure immediate packet path recovery.

Objective: Determine the time required to fail over all Active Tunnels from an active IPsec Device to its standby device.

Procedure: Before a failover can be triggered, the IPsec Device has to be in a state where the active stack/engine/node has the maximum supported number of Active Tunnels. The Tunnels will be transporting bidirectional traffic at the Tunnel Throughput rate for the smallest frame size that the stack/engine/node is capable of forwarding (in most cases, this will be 64 bytes).
The traffic should traverse all Active Tunnels in a round-robin fashion.

It is RECOMMENDED that the test be repeated for various numbers of Active Tunnels as well as for different frame sizes and frame rates.

When traffic is flowing through all Active Tunnels in steady state, a failover shall be triggered.

Both receiver sides of the testers will now look at sequence counters in the instrumented packets that are being forwarded through the Tunnels. Each Tunnel MUST have its own counter to keep track of packet loss on a per-SA basis.

If the tester observes no sequence number drops on any of the Tunnels in either direction, then the Failover Time MUST be listed as 'null', indicating that the failover was immediate and without any packet loss.

In all other cases, where the tester observes a gap in the sequence numbers of the instrumented payload of the packets, the tester will monitor all SAs and look for any Tunnels that are still not receiving packets after the failover. These will be marked as 'pending' Tunnels. Active Tunnels that are forwarding packets again without any packet loss shall be marked as 'recovered' Tunnels. In the background, the tester will keep monitoring all SAs to make sure that no packets are dropped. If packets are dropped, the Tunnel in question will be placed back in the 'pending' state.

Note that reordered packets can naturally occur after encryption or decryption. This is not a valid reason to place a Tunnel back in the 'pending' state. A sliding window of 128 packets per SA SHALL be allowed before packet loss is declared on the SA.

The tester will wait until all Tunnels are marked as 'recovered'. Then it will find the SA with the largest gap in sequence numbers.
Given that the frame size is fixed and the transmit time for that frame size can easily be calculated for the initiator links, a simple multiplication of the frame transmit time by the largest packet loss gap will yield the Tunnel Failover Time.

If the tester never reaches a state where all Tunnels are marked as 'recovered', the Failover Time MUST be listed as 'infinite'.

Reporting Format: The results shall be represented in a tabular format, where the first column lists the number of Active Tunnels, the second column the frame size, the third column the frame rate and the fourth column the Tunnel Failover Time in milliseconds.

16. DoS Resiliency

16.1. Phase 1 DoS Resiliency Rate

Objective:

Procedure:

Reporting Format:

16.2. Phase 2 DoS Resiliency Rate

Objective:

Procedure:

Reporting Format:

17. Acknowledgements

The authors would like to acknowledge the following individuals for their help with and participation in the compilation and editing of this document: Michele Bustos, Ixia; Paul Hoffman, VPNC.

18. References

18.1. Normative References

[RFC1242] Bradner, S., "Benchmarking terminology for network interconnection devices", RFC 1242, July 1991.

[RFC1981] McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery for IP version 6", RFC 1981, August 1996.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2285] Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, February 1998.

[RFC2393] Shacham, A., Monsour, R., Pereira, R., and M. Thomas, "IP Payload Compression Protocol (IPComp)", RFC 2393, December 1998.

[RFC2401] Kent, S. and R. Atkinson, "Security Architecture for the Internet Protocol", RFC 2401, November 1998.

[RFC2402] Kent, S. and R.
Atkinson, "IP Authentication Header", RFC 2402, November 1998.

[RFC2403] Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within ESP and AH", RFC 2403, November 1998.

[RFC2404] Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96 within ESP and AH", RFC 2404, November 1998.

[RFC2405] Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher Algorithm With Explicit IV", RFC 2405, November 1998.

[RFC2406] Kent, S. and R. Atkinson, "IP Encapsulating Security Payload (ESP)", RFC 2406, November 1998.

[RFC2407] Piper, D., "The Internet IP Security Domain of Interpretation for ISAKMP", RFC 2407, November 1998.

[RFC2408] Maughan, D., Schneider, M., and M. Schertler, "Internet Security Association and Key Management Protocol (ISAKMP)", RFC 2408, November 1998.

[RFC2409] Harkins, D. and D. Carrel, "The Internet Key Exchange (IKE)", RFC 2409, November 1998.

[RFC2410] Glenn, R. and S. Kent, "The NULL Encryption Algorithm and Its Use With IPsec", RFC 2410, November 1998.

[RFC2411] Thayer, R., Doraswamy, N., and R. Glenn, "IP Security Document Roadmap", RFC 2411, November 1998.

[RFC2412] Orman, H., "The OAKLEY Key Determination Protocol", RFC 2412, November 1998.

[RFC2432] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC 2432, October 1998.

[RFC2451] Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher Algorithms", RFC 2451, November 1998.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

[RFC2547] Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547, March 1999.

[RFC2661] Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn, G., and B. Palter, "Layer Two Tunneling Protocol "L2TP"", RFC 2661, August 1999.

[RFC2784] Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
Traina, "Generic Routing Encapsulation (GRE)", RFC 2784, March 2000.

[RFC4109] Hoffman, P., "Algorithms for Internet Key Exchange version 1 (IKEv1)", RFC 4109, May 2005.

[RFC4305] Eastlake, D., "Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH)", RFC 4305, December 2005.

[I-D.ietf-ipsec-ikev2] Kaufman, C., "Internet Key Exchange (IKEv2) Protocol", draft-ietf-ipsec-ikev2-17 (work in progress), October 2004.

[I-D.ietf-ipsec-properties] Krywaniuk, A., "Security Properties of the IPsec Protocol Suite", draft-ietf-ipsec-properties-02 (work in progress), July 2002.

18.2. Informative References

[FIPS.186-1.1998] National Institute of Standards and Technology, "Digital Signature Standard", FIPS PUB 186-1, December 1998.

Authors' Addresses

Merike Kaeo
Double Shot Security
3518 Fremont Ave N #363
Seattle, WA 98103
USA

Phone: +1(310)866-0165
Email: kaeo@merike.com

Tim Van Herck
Cisco Systems
170 West Tasman Drive
San Jose, CA 95134-1706
USA

Phone: +1(408)853-2284
Email: herckt@cisco.com

Full Copyright Statement

Copyright (C) The IETF Trust (2007).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.

Acknowledgment

Funding for the RFC Editor function is provided by the IETF Administrative Support Activity (IASA).