idnits 2.17.1 draft-ietf-bmwg-ipv6-meth-00.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- ** It looks like you're using RFC 3978 boilerplate. You should update this to the boilerplate described in the IETF Trust License Policy document (see https://trustee.ietf.org/license-info), which is required now. -- Found old boilerplate from RFC 3978, Section 5.1 on line 18. -- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on line 793. -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 804. -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 811. -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 817. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- == No 'Intended status' indicated for this document; assuming Proposed Standard Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The abstract seems to contain references ([2]), which it shouldn't. Please replace those with straight textual mentions of the documents in question. == There are 1 instance of lines with non-RFC6890-compliant IPv4 addresses in the document. If these are example addresses, they should be changed. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust Copyright Line does not match the current year == Line 689 has weird spacing: '... Bytes pps...' == Line 727 has weird spacing: '... Bytes fps ...' == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: IANA reserved the IPv6 address block xxxxx/48 for use with IPv6 benchmark testing. These addresses MUST not be assumed to be routable on the Internet and MUST not be used as Internet source or destination addresses. == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'SHOULD not' in this paragraph: Special care needs to be taken about the Neighbor Unreachability Detection (NUD) [6] process. The IPv6 prefix reachable time in the router advertisement SHOULD be set to 30 seconds and allow 50% jitter. The IPv6 source and destination addresses SHOULD not appear to be directly connected to the DUT to avoid Neighbor Solicitation (NS) and Neighbor Advertisement (NA) storms due to multiple test traffic flows. == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: The tests with traffic containing each individual extension headers MUST be complemented with tests that contain a chain of two or more extension headers (the chain MUST not contain the Hop-by-hop extension header). 
The chain should also exclude the ESP extension header since traffic with an encrypted payload can not be used in tests with modifiers such as filters based on upper layer information (see Section 5). Since the DUT is not analyzing the content of the extension headers, any combination of extension headers can be used in testing. The extension headers chain recommended to be used in testing is: o Routing header - 24-32 bytes o Destination options header - 8 bytes o Fragment header - 8 bytes -- The document seems to lack a disclaimer for pre-RFC5378 work, but may have content which was first submitted before 10 November 2008. If you have contacted all the original authors and they are all willing to grant the BCP78 rights to the IETF Trust, then this is fine, and you can ignore this comment. If not, you may need to add the pre-RFC5378 disclaimer. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (January 2, 2007) is 6316 days in the past. Is this intentional? Checking references for intended status: Proposed Standard ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) -- Looks like a reference, but probably isn't: 'SA' on line 419 -- Looks like a reference, but probably isn't: 'DA' on line 419 == Unused Reference: '10' is defined on line 660, but no explicit reference was found in the text ** Downref: Normative reference to an Informational RFC: RFC 2544 (ref. '2') -- Obsolete informational reference (is this intentional?): RFC 2460 (ref. '5') (Obsoleted by RFC 8200) -- Obsolete informational reference (is this intentional?): RFC 2461 (ref. '6') (Obsoleted by RFC 4861) -- Obsolete informational reference (is this intentional?): RFC 3330 (ref. '8') (Obsoleted by RFC 5735) == Outdated reference: A later version (-08) exists of draft-ietf-bmwg-hash-stuffing-07 == Outdated reference: A later version (-08) exists of draft-ietf-bmwg-hash-stuffing-07 -- Duplicate reference: draft-ietf-bmwg-hash-stuffing, mentioned in '11', was also mentioned in '10'. Summary: 3 errors (**), 0 flaws (~~), 11 warnings (==), 13 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group C. Popoviciu 3 Internet-Draft A. Hamza 4 Expires: July 6, 2007 G. Van de Velde 5 Cisco Systems 6 D. Dugatkin 7 IXIA 8 January 2, 2007 10 IPv6 Benchmarking Methodology 11 13 Status of this Memo 15 By submitting this Internet-Draft, each author represents that any 16 applicable patent or other IPR claims of which he or she is aware 17 have been or will be disclosed, and any of which he or she becomes 18 aware will be disclosed, in accordance with Section 6 of BCP 79. 20 Internet-Drafts are working documents of the Internet Engineering 21 Task Force (IETF), its areas, and its working groups. Note that 22 other groups may also distribute working documents as Internet- 23 Drafts. 25 Internet-Drafts are draft documents valid for a maximum of six months 26 and may be updated, replaced, or obsoleted by other documents at any 27 time. It is inappropriate to use Internet-Drafts as reference 28 material or to cite them other than as "work in progress." 30 The list of current Internet-Drafts can be accessed at 31 http://www.ietf.org/ietf/1id-abstracts.txt. 
33 The list of Internet-Draft Shadow Directories can be accessed at 34 http://www.ietf.org/shadow.html. 36 This Internet-Draft will expire on July 6, 2007. 38 Copyright Notice 40 Copyright (C) The IETF Trust (2007). 42 Abstract 44 The Benchmarking Methodologies defined in RFC2544 [2] are IP version 45 independent however, they do not address some of the specificities of 46 IPv6. This document provides additional benchmarking guidelines 47 which in conjunction with RFC2544 will lead to a more complete and 48 realistic evaluation of the IPv6 performance of network elements. 50 Table of Contents 52 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 53 2. Existing Definitions . . . . . . . . . . . . . . . . . . . . . 3 54 3. Tests and Results Evaluation . . . . . . . . . . . . . . . . . 3 55 4. Test Environment Set Up . . . . . . . . . . . . . . . . . . . 4 56 5. Test Traffic . . . . . . . . . . . . . . . . . . . . . . . . . 4 57 5.1. Frame Formats and Sizes . . . . . . . . . . . . . . . . . 4 58 5.1.1. Frame Sizes to be used on Ethernet . . . . . . . . . . 5 59 5.1.2. Frame Sizes to be used on SONET . . . . . . . . . . . 5 60 5.2. Protocol Addresses Selection . . . . . . . . . . . . . . . 5 61 5.2.1. DUT Protocol Addresses . . . . . . . . . . . . . . . . 5 62 5.2.2. Test Traffic Protocol Addresses . . . . . . . . . . . 6 63 5.3. Traffic with Extension Headers . . . . . . . . . . . . . . 7 64 5.4. Traffic set up . . . . . . . . . . . . . . . . . . . . . . 8 65 6. Modifiers . . . . . . . . . . . . . . . . . . . . . . . . . . 9 66 6.1. Management and Routing Traffic . . . . . . . . . . . . . . 9 67 6.2. Filters . . . . . . . . . . . . . . . . . . . . . . . . . 9 68 6.2.1. Filter Format . . . . . . . . . . . . . . . . . . . . 9 69 6.2.2. Filter Types . . . . . . . . . . . . . . . . . . . . . 10 70 7. Benchmarking Tests . . . . . . . . . . . . . . . . . . . . . . 11 71 7.1. Throughput . . . . . . . . . . . . . . . . . . . . . . . . 12 72 7.2. Latency . . . . . . . . . . . . . . . . . . . . . . . . . 12 73 7.3. Frame Loss . . . . . . . . . . . . . . . . . . . . . . . . 12 74 7.4. Back-to-Back Frames . . . . . . . . . . . . . . . . . . . 12 75 7.5. System Recovery . . . . . . . . . . . . . . . . . . . . . 12 76 7.6. Reset . . . . . . . . . . . . . . . . . . . . . . . . . . 13 77 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 78 9. Security Considerations . . . . . . . . . . . . . . . . . . . 13 79 10. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 14 80 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 14 81 12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 14 82 12.1. Normative References . . . . . . . . . . . . . . . . . . . 14 83 12.2. Informative References . . . . . . . . . . . . . . . . . . 14 84 Appendix A. Maximum Frame Rates Reference . . . . . . . . . . . . 15 85 A.1. Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . 15 86 A.2. Packet over SONET . . . . . . . . . . . . . . . . . . . . 16 87 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 16 88 Intellectual Property and Copyright Statements . . . . . . . . . . 18 90 1. Introduction 92 The benchmarking methodologies defined by RFC2544 [2] are proving to 93 be very useful in consistently evaluating IPv4 forwarding performance 94 of network elements. Adherence to these testing and result analysis 95 procedures facilitates objective comparison of product IPv4 96 performance. 
While the principles behind the methodologies 97 introduced in RFC2544 are largely IP version independent, the 98 protocol has continued to evolve, particularly in its version 6 (IPv6). 100 This document provides benchmarking methodology recommendations that 101 address IPv6-specific aspects such as evaluating the forwarding 102 performance of traffic containing extension headers, as defined in 103 RFC2460 [5]. These recommendations are defined within the RFC2544 104 framework and are meant to complement the test and result analysis 105 procedures described in RFC2544, not to replace them. 107 The terms used in this document remain consistent with those defined 108 in "Benchmarking Terminology for Network Interconnection Devices" [3]. 109 This terminology document SHOULD be consulted before using or 110 applying the recommendations of this document. 112 Any methodology aspects not covered in this document SHOULD be 113 assumed to be treated based on the RFC2544 recommendations. 115 2. Existing Definitions 117 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 118 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 119 document are to be interpreted as described in BCP 14, RFC 2119 [1]. 120 RFC 2119 defines the use of these key words to help make the intent 121 of standards track documents as clear as possible. While this 122 document uses these keywords, this document is not a standards track 123 document. 125 3. Tests and Results Evaluation 127 The recommended performance evaluation tests are described in Section 128 7 of this document. Not all of these tests are applicable to all 129 network element types. Nevertheless, for each evaluated device it is 130 recommended to perform as many of the applicable tests described in 131 Section 7 as possible. 133 Test execution and the results analysis MUST be performed while 134 observing generally accepted testing practices regarding 135 repeatability, variance, and statistical significance of small numbers 136 of trials. 138 4. Test Environment Set Up 140 The test environment setup options recommended for the IPv6 141 performance evaluation are the same as the ones described in Section 142 6 of RFC2544, in both single-port and multi-port scenarios. Single- 143 port testing is used to measure per-interface forwarding 144 performance, while multi-port testing is used to measure the 145 scalability of this performance across the entire platform. 147 Throughout the test, the DUT can be monitored for relevant resource 148 (processor, memory, etc.) utilization. This data could be beneficial 149 in understanding traffic processing by each DUT and the resources 150 that must be allocated for IPv6. It could reveal whether the IPv6 traffic 151 is processed in hardware, on applicable devices, under all test 152 conditions, or whether it is punted to the software-switched path. If such 153 data is considered of interest, it MUST be collected out of band and 154 independent of any management data that might be recommended to be 155 collected through the interfaces forwarding the test traffic. 157 Note: During testing, either static or dynamic options for neighbor 158 discovery can be used. The static option can be used as long as it 159 is supported by the test tool. The dynamic option is preferred if 160 the test tool interacts with the DUT for the duration of the test to 161 maintain the respective neighbor caches in an active state.
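As an illustration of the dynamic option, the test tool can keep the DUT's neighbor cache entries for the emulated next hops in the REACHABLE state by answering the DUT's Neighbor Solicitations for the duration of the trial. The following minimal sketch uses the Scapy packet library (an assumption of this example, not a requirement of this document); the interface name, tester MAC address, and emulated addresses are placeholders.

      from scapy.all import (Ether, IPv6, ICMPv6ND_NS, ICMPv6ND_NA,
                             ICMPv6NDOptDstLLAddr, sniff, sendp)

      IFACE = "eth1"                    # tester port facing the DUT (placeholder)
      TESTER_MAC = "00:11:22:33:44:55"  # tester port MAC address (placeholder)
      EMULATED = {"2001:db8:0:1::2"}    # emulated next-hop addresses (placeholder)

      def answer_ns(pkt):
          # Reply with a solicited Neighbor Advertisement so the DUT's
          # neighbor cache entry for the emulated next hop stays active.
          if ICMPv6ND_NS in pkt and pkt[ICMPv6ND_NS].tgt in EMULATED:
              na = (Ether(src=TESTER_MAC, dst=pkt[Ether].src) /
                    IPv6(src=pkt[ICMPv6ND_NS].tgt, dst=pkt[IPv6].src) /
                    ICMPv6ND_NA(R=0, S=1, O=1, tgt=pkt[ICMPv6ND_NS].tgt) /
                    ICMPv6NDOptDstLLAddr(lladdr=TESTER_MAC))
              sendp(na, iface=IFACE, verbose=False)

      sniff(iface=IFACE, filter="icmp6", prn=answer_ns, store=False)

Commercial test tools typically perform the equivalent function internally; the sketch only illustrates the interaction described in the note above.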
The test 162 scenarios assume that the test traffic end points and the IPv6 source and 163 destination addresses are not directly attached to the DUT, but are 164 seen as one hop beyond, to avoid Neighbor Solicitation (NS) and 165 Neighbor Advertisement (NA) storms due to the Neighbor Unreachability 166 Detection (NUD) mechanism [6]. 168 5. Test Traffic 170 The traffic used for all tests described in this document SHOULD meet 171 the requirements described in this section. These requirements are 172 designed to reflect the characteristics of IPv6 unicast traffic in 173 all its aspects. Using this IPv6 traffic leads to a complete 174 evaluation of the network element performance. 176 5.1. Frame Formats and Sizes 178 Two types of media are commonly deployed and SHOULD be tested: 179 Ethernet and SONET. This section identifies the frame sizes that 180 SHOULD be used for each media type. Refer to the recommendations in 181 RFC2544 for all other media types. 183 Similar to IPv4, small frame sizes help characterize the per-frame 184 processing overhead of the DUT. Note that the minimum IPv6 packet 185 size (40 bytes) is larger than that of an IPv4 packet (20 bytes). 186 Tests should compensate for this difference. 188 The frame sizes listed for IPv6 include the extension headers used in 189 testing (see Section 5.3). By definition, extension headers are part 190 of the IPv6 packet payload. Depending on the total length of the 191 extension headers, their use might not be possible at the smallest 192 frame sizes. 194 5.1.1. Frame Sizes to be used on Ethernet 196 Ethernet in all its types has become the most commonly deployed 197 interface in today's networks. The following frame sizes SHOULD be 198 used for benchmarking over this media type: 64, 128, 256, 512, 1024, 199 1280, and 1518 bytes. The 4096-, 8192-, and 9216-byte jumbo frame sizes 200 SHOULD be used when benchmarking Gigabit Ethernet interfaces. The 201 maximum frame rates for each frame size and the various Ethernet 202 interface types are provided in Appendix A. 204 5.1.2. Frame Sizes to be used on SONET 206 Packet over SONET (PoS) interfaces are commonly used for core uplinks 207 and high-bandwidth core links. Evaluating the forwarding performance 208 of the PoS interfaces supported by the DUT is recommended. The following 209 frame sizes SHOULD be used for this media type: 64, 128, 256, 512, 210 1024, 1280, 1518, 2048, and 4096 bytes. The maximum frame rates for each 211 frame size and the various PoS interface types are provided in 212 Appendix A. 214 5.2. Protocol Addresses Selection 216 There are two aspects of IPv6 benchmark testing for which IP address 217 selection considerations MUST be analyzed: the selection of IP 218 addresses for the DUT and the selection of addresses for the test 219 traffic. 221 5.2.1. DUT Protocol Addresses 223 IANA reserved the IPv6 address block xxxxx/48 for use with IPv6 224 benchmark testing. These addresses MUST NOT be assumed to be 225 routable on the Internet and MUST NOT be used as Internet source or 226 destination addresses. 228 Similar to RFC2544, Appendix C, addresses from the first half of this 229 range SHOULD be used for the ports viewed as input and addresses from 230 the other half of the range for the output ports. 232 The prefix length of the IPv6 addresses configured on the DUT 233 interfaces MUST fall into either one of the following: 234 o Prefix length of /126, which simulates a point-to-point link 235 for a core router. 236 o Prefix length smaller than or equal to /64.
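To make the address selection concrete, the following sketch derives per-port DUT addresses from a /48 benchmarking block: input ports from the lower half of the range and output ports from the upper half, one /64 per port, with the Interface ID set to ::1 as recommended later in this section. Since the IANA-assigned block is still shown as xxxxx/48 in this document, the IPv6 documentation prefix is used here purely as a placeholder.

      import ipaddress

      # Placeholder for the IANA-reserved benchmarking /48 (shown as
      # xxxxx/48 in this document); the documentation prefix stands in.
      BENCHMARK_BLOCK = ipaddress.ip_network("2001:db8::/48")

      # Lower /49 for input ports, upper /49 for output ports
      # (mirroring RFC2544, Appendix C).
      input_half, output_half = BENCHMARK_BLOCK.subnets(prefixlen_diff=1)

      def dut_port_address(half, port_index, prefixlen=64):
          # One /64 subnet per port; Interface ID set to ::1.
          subnet = list(half.subnets(new_prefix=prefixlen))[port_index]
          return ipaddress.ip_interface(
              f"{subnet.network_address + 1}/{prefixlen}")

      print(dut_port_address(input_half, 0))    # 2001:db8::1/64
      print(dut_port_address(output_half, 0))   # 2001:db8:0:8000::1/64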
237 Prefix lengths between 64 and 128 SHOULD NOT be selected, with the 238 exception of the 126 value mentioned above. 240 Note that /126 prefixes might not always be the best choice for 241 addressing point-to-point links such as back-to-back Ethernet unless 242 the autoprovisioning mechanism is disabled. Also, not all network 243 elements support this type of addressing. 245 While IPv6 allows the DUT interfaces to be configured with multiple 246 global unicast addresses, the methodology described in this document 247 does not require testing such a scenario. It is not expected that 248 such an evaluation would bring additional data with respect to the 249 performance of the network element. 251 The Interface ID portion of the global unicast IPv6 DUT addresses 252 SHOULD be set to ::1. There are no requirements on the selection of 253 the Interface ID portion of the link-local IPv6 addresses. 255 It is recommended that multiple iterations of the benchmark tests be 256 conducted using the following lengths for the advertised traffic 257 destination prefix: 32, 48, 64, 126, and 128. Other prefix lengths can 258 also be used if desired; however, the indicated range should be 259 sufficient to establish baseline performance metrics. 261 5.2.2. Test Traffic Protocol Addresses 263 The IPv6 source and destination addresses for the test streams SHOULD 264 belong to the IPv6 range to be assigned by IANA, as discussed in 265 Section 5.2.1. The source addresses SHOULD belong to one half of the 266 range and the destination addresses to the other, reflecting the DUT 267 interface IPv6 address selection. 269 Tests SHOULD first be executed with a single stream leveraging a 270 single source-destination address pair. The tests SHOULD then be 271 repeated with traffic using a random destination address in the 272 corresponding range. If the network element's prefix lookup 273 capabilities are evaluated, the tests SHOULD focus on the IPv6- 274 relevant prefix boundaries: 0-64, 126, and 128. 276 Special care needs to be taken with respect to the Neighbor Unreachability 277 Detection (NUD) [6] process. The IPv6 prefix reachable time in the 278 router advertisement SHOULD be set to 30 seconds and allow 50% 279 jitter. The IPv6 source and destination addresses SHOULD NOT appear 280 to be directly connected to the DUT, to avoid Neighbor Solicitation 281 (NS) and Neighbor Advertisement (NA) storms due to multiple test 282 traffic flows. 284 5.3. Traffic with Extension Headers 286 Extension headers are an intrinsic part of the IPv6 architecture [5]. 287 They are used with various types of practical traffic, such as 288 fragmented traffic, Mobile IP based traffic, and authenticated or 289 encrypted traffic. For these reasons, all tests described in this 290 document SHOULD be performed with both traffic that has no extension 291 headers and traffic that has a set of extension headers selected from 292 the following ordered list [5]: 293 o Hop-by-hop header 294 o Destination options header 295 o Routing header 296 o Fragment header 297 o Authentication header 298 o Encapsulating Security Payload header 299 o Destination options header 300 o Mobility header 302 Because extension headers are an intrinsic part of 303 the protocol and fulfill different roles, benchmarking of 304 traffic containing each extension header SHOULD be executed 305 individually. 307 The special processing rules for the Hop-by-hop extension header 308 require a specific benchmarking approach.
Unlike the other extension 309 headers, this header must be processed by each node that forwards the 310 traffic. Tests with traffic containing this extension header type 311 will not measure the forwarding performance of the DUT but rather its 312 extension header processing ability, which depends on the 313 information contained in the extension headers. The concern is that 314 such traffic, at high rates, could have a negative impact on the 315 operational resources of the router and could be exploited as a security 316 threat. When benchmarking with traffic that contains the Hop-by-hop 317 extension header, the goal is not to measure throughput [2] as in 318 the case of the other extension headers, but rather to evaluate the impact 319 of such traffic on the router. In this case, traffic with the Hop- 320 by-hop extension header should be sent at 1%, 10%, and 50% of the 321 total interface bandwidth. Device resources must be monitored at 322 each traffic rate to determine the impact. 324 The tests with traffic containing each individual extension header 325 MUST be complemented with tests that contain a chain of two or more 326 extension headers (the chain MUST NOT contain the Hop-by-hop 327 extension header). The chain should also exclude the ESP extension 328 header, since traffic with an encrypted payload cannot be used in 329 tests with modifiers such as filters based on upper layer information 330 (see Section 6). Since the DUT does not analyze the content of the 331 extension headers, any combination of extension headers can be used 332 in testing. The extension header chain recommended for use in 333 testing is: 334 o Routing header - 24-32 bytes 335 o Destination options header - 8 bytes 336 o Fragment header - 8 bytes 338 This is a real-life extension header chain that would be found in an 339 IPv6 packet exchanged between two mobile nodes over the optimized 340 path when fragmentation is required. The listed extension header 341 lengths represent test tool defaults. The total length of the 342 extension header chain SHOULD be larger than 32 bytes. A frame-construction sketch using this chain is provided in Section 5.4. 344 Extension headers add extra bytes to the payload size of the IP 345 packets, which MUST be factored in when testing at small frame 346 sizes. Their presence will modify the minimum packet size used in 347 testing. For a direct comparison between the data obtained with 348 traffic that has extension headers and traffic that does not, 349 a common minimum frame size SHOULD be selected for 350 both types of traffic. 352 In most cases, network elements ignore the extension headers 353 when forwarding IPv6 traffic. For this reason, it is most likely 354 that the performance impact related to extension headers will be 355 observed only when testing the DUT with traffic filters that contain 356 matching conditions for the upper layer protocol information. In 357 those cases, the DUT MUST traverse the chain of extension headers, a 358 process that could impact performance. 360 5.4. Traffic set up 362 All tests recommended in this document SHOULD be performed with bi- 363 directional traffic. For asymmetric situations, tests MAY be 364 performed with unidirectional traffic for a more granular 365 characterization of the DUT performance. In these cases, 366 bidirectional traffic testing would reveal only the lowest 367 performance of the two directions. 369 All other traffic profile characteristics described in RFC2544 SHOULD 370 be applied in this testing as well.
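As referenced in Section 5.3, the following sketch shows how a test frame carrying the recommended extension header chain (Routing, Destination options, Fragment) can be constructed. It uses the Scapy packet library as assumed tooling for illustration only; the addresses and sizes are placeholders, and the frame size is taken to include the 4-byte Ethernet FCS.

      from scapy.all import (Ether, IPv6, IPv6ExtHdrRouting, IPv6ExtHdrDestOpt,
                             IPv6ExtHdrFragment, UDP, Raw)

      def eh_chain_frame(src, dst, frame_size=128):
          # Recommended chain: Routing (24 bytes with one address),
          # Destination options (8 bytes), Fragment (8 bytes).
          # No Hop-by-hop and no ESP header, as discussed above.
          hdr = (Ether() / IPv6(src=src, dst=dst) /
                 IPv6ExtHdrRouting(addresses=[dst]) /
                 IPv6ExtHdrDestOpt() /
                 IPv6ExtHdrFragment(offset=0, m=0) /
                 UDP(sport=1024, dport=2048))
          pad = frame_size - len(hdr) - 4        # reserve 4 bytes for the FCS
          return hdr / Raw(b"\x00" * max(pad, 0))

      frame = eh_chain_frame("2001:db8::2", "2001:db8:0:8000::2")
      frame.show()   # inspect the resulting header chain

The same constructor can be reused for the single-extension-header iterations by replacing the chain with the individual header under test.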
IPv6 multicast benchmarking is 371 outside the scope of this document. 373 6. Modifiers 375 RFC2544 underlines the importance of evaluating the performance of 376 network elements under certain operational conditions. The 377 conditions defined in RFC2544, Section 11 are common to IPv4 and IPv6, 378 with the exception of broadcast frames, since IPv6 does not use layer 2 or 379 layer 3 broadcasts. This section provides additional conditions that 380 are specific to IPv6. The suite of tests recommended in this 381 document SHOULD first be executed in the absence of these conditions 382 and then repeated under each of the conditions separately. 384 6.1. Management and Routing Traffic 386 The procedures defined in RFC2544, Sections 11.2 and 11.3 are 387 applicable to IPv6 management and routing update frames as well. 389 6.2. Filters 391 The filters defined in Section 11.4 of RFC2544 apply to IPv6 392 benchmarking as well. The filter definitions, however, must be 393 expanded to include matching on upper layer protocol information in 394 order to analyze the handling of traffic with extension headers, which 395 are an important architectural component of IPv6. 397 6.2.1. Filter Format 399 The filter format defined in RFC2544 is applicable to IPv6 as well, 400 except that the Source Addresses (SA) and Destination Addresses (DA) 401 are IPv6 addresses. In addition to these basic filters, the evaluation of IPv6 402 performance SHOULD analyze the correct filtering and handling of 403 traffic with extension headers. 405 While the intent is not to evaluate a platform's capability to 406 process the various extension header types, the goal is to measure the 407 performance impact when the network element must parse through the 408 extension headers in order to reach upper layer information. In 409 IPv6, routers do not have to parse through the extension headers 410 (other than Hop-by-hop) unless, for example, the upper layer 411 information has to be analyzed due to filters. 413 For these reasons, to evaluate the network element's handling of IPv6 414 traffic with extension headers, the definition of the filters must be 415 extended to include conditions applied to upper layer protocol 416 information. The following filter format SHOULD be used for this 417 type of evaluation: 419 [permit|deny] [protocol] [SA] [DA] 421 where permit or deny indicates the action to allow or deny a packet 422 through the interface to which the filter is applied. The protocol field 423 is defined as: 424 o ipv6: any IP Version 6 traffic 425 o tcp: Transmission Control Protocol 426 o udp: User Datagram Protocol 427 and SA stands for the Source Address and DA for the Destination 428 Address. 430 6.2.2. Filter Types 432 Based on the RFC2544 recommendations, two types of tests are executed 433 when evaluating performance in the presence of modifiers: one with a 434 single filter and one with 25 filters. The recommended filters are 435 exemplified with the help of the IPv6 documentation prefix 436 2001:DB8:: [9]. 438 Examples of single filters are: 440 Filter for TCP traffic - permit tcp 2001:DB8::1 2001:DB8::2 441 Filter for UDP traffic - permit udp 2001:DB8::1 2001:DB8::2 442 Filter for IPv6 traffic - permit ipv6 2001:DB8::1 2001:DB8::2 444 The single-filter case SHOULD verify that the network element 445 permits all TCP/UDP/IPv6 traffic, with or without any number of 446 extension headers, from IPv6 SA 2001:DB8::1 to IPv6 DA 2001:DB8::2 and 447 denies all other traffic.
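One way to exercise the single-filter case is to offer matching and non-matching flows both with and without extension headers, so that the DUT must walk past the extension header chain to reach the upper layer information used by the filter. A minimal sketch, again assuming the Scapy library and using documentation-prefix addresses as placeholders:

      from scapy.all import Ether, IPv6, IPv6ExtHdrDestOpt, TCP

      PERMIT_SA, PERMIT_DA = "2001:db8::1", "2001:db8::2"  # matches 'permit tcp'
      OTHER_SA,  OTHER_DA  = "2001:db8::3", "2001:db8::4"  # must be denied

      def filter_probe(sa, da, with_eh):
          # TCP probe; with_eh=True places a Destination options header
          # between the IPv6 header and TCP, forcing the DUT to parse
          # past it before applying the upper layer filter condition.
          l3 = IPv6(src=sa, dst=da)
          if with_eh:
              l3 = l3 / IPv6ExtHdrDestOpt()
          return Ether() / l3 / TCP(sport=1024, dport=80)

      probes = ([filter_probe(PERMIT_SA, PERMIT_DA, eh) for eh in (False, True)] +
                [filter_probe(OTHER_SA, OTHER_DA, eh) for eh in (False, True)])
      # Expected behavior: the first two probes are forwarded and the last
      # two are dropped, regardless of the presence of the extension header.

Comparing the throughput measured with and without the extension header then indicates the cost of parsing to the upper layer information.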
449 Example of 25 filters: 451 deny tcp 2001:DB8:1::1 2001:DB8:1::2 452 deny tcp 2001:DB8:2::1 2001:DB8:2::2 453 deny tcp 2001:DB8:3::1 2001:DB8:3::2 454 ... 455 deny tcp 2001:DB8:C::1 2001:DB8:C::2 456 permit tcp 2001:DB8:99::1 2001:DB8:99::2 457 deny tcp 2001:DB8:D::1 2001:DB8:D::2 458 deny tcp 2001:DB8:E::1 2001:DB8:E::2 459 ... 460 deny tcp 2001:DB8:19::1 2001:DB8:19::2 461 deny ipv6 any any 463 The router SHOULD deny all traffic with or without extension headers 464 except TCP traffic with SA 2001:DB8:99::1 and DA 2001:DB8:99::2. 466 7. Benchmarking Tests 468 This document recommends the same benchmarking tests described in 469 RFC2544 while observing the DUT setup and the traffic setup 470 considerations described above. The following sections state the 471 test types explicitly and highlight only the methodology differences 472 that might exist with respect to those described in Section 26 of 473 RFC2544. 475 The specificities of IPv6, particularly the definition of extension 476 headers processing, require additional benchmarking steps. In this 477 sense, the tests recommended by RFC2544 MUST be repeated for IPv6 478 traffic without extension headers and with one or multiple extension 479 headers. IPv6's deployment in existing IPv4 environments and the 480 expected long co-existence of the two protocols leads network 481 operators to place great emphasis on understanding the performance of 482 platforms forwarding both types of traffic. While device resources 483 are shared between the two protocols, it is important for IPv6 484 enabled platforms to not experience degraded IPv4 performance. In 485 this context the IPv6 benchmarking SHOULD be performed in the context 486 of a stand alone protocol as well as in the context of its co- 487 existence with IPv4. 489 The modifiers defined are independent of extension header type so 490 they can be applied equally to each one of the above tests. 492 The benchmarking tests described in this section SHOULD be performed 493 under each of the following conditions: 495 Extension headers specific conditions: 496 i) IPv6 traffic with no extension headers 497 ii) IPv6 traffic with one extension header from the list in 498 section 4.3 499 iii) IPv6 traffic with the chain of extension headers described in 500 section 4.3 502 Co-existence specific conditions: 503 iv) IPv4 ONLY traffic benchmarking 504 v) IPv6 ONLY traffic benchmarking 505 vi) IPv4-IPv6 traffic mix with the ratio 90% vs 10% 506 vii) IPv4-IPv6 traffic mix with the ratio 50% vs 50% 507 viii) IPv4-IPv6 traffic mix with the ratio 10% vs 90% 509 Combining the test conditions listed for benchmarking IPv6 as a 510 stand-alone protocol and the co-existence tests leads to a large 511 coverage matrix. A minimum requirement is to cover the co-existence 512 conditions in the case of IPv6 with no extension headers and those 513 where either of the traffic is 10% and 90% respectively. 515 The subsequent sections describe each specific tests that MUST be 516 executed under the conditions listed above for a complete 517 benchmarking of IPv6 forwarding performance. 519 7.1. Throughput 521 Objective: To determine the DUT throughput as defined in RFC1242. 523 Procedure: Same as RFC2544. 525 Reporting Format: Same as RFC2544. 527 7.2. Latency 529 Objective: To determine the latency as defined in RFC1242. 531 Procedure: Same as RFC2544. 533 Reporting Format: Same as RFC2544. 535 7.3. 
Frame Loss 537 Objective: To determine the frame loss rate, as defined in RFC1242, 538 of a DUT throughout the entire range of input data rates and frame 539 sizes. 541 Procedure: Same as RFC2544. 543 Reporting Format: Same as RFC2544. 545 7.4. Back-to-Back Frames 547 Objective: To characterize the ability of a DUT to process back-to- 548 back frames as defined in RFC1242. 550 Based on the IPv4 experience, the Back-to-Back Frames test is 551 characterized by significant variance due to short-term variations in 552 the processing flow. For these reasons, this test is no longer 553 recommended for IPv6 benchmarking. 555 7.5. System Recovery 557 Objective: To characterize the speed at which a DUT recovers from an 558 overload condition. 560 Procedure: Same as RFC2544. 562 Reporting Format: Same as RFC2544. 564 7.6. Reset 566 Objective: To characterize the speed at which a DUT recovers from a 567 device or software reset. 569 Procedure: Same as RFC2544. 571 Reporting Format: Same as RFC2544. 573 8. IANA Considerations 575 IANA reserved the IPv6 prefix xxxx/48 for IPv6 benchmarking, similar to 576 the 198.18.0.0/15 range reserved for IPv4 benchmarking in RFC 3330 [8]. This prefix length provides 577 flexibility similar to that of the range allocated for IPv4 benchmarking, and it 578 takes into consideration address conservation and simplicity-of-use 579 concerns. Most network infrastructures are allocated a /48 580 prefix; hence, this range allows most network administrators to 581 mimic their IPv6 address plans when performing IPv6 benchmarking. 583 9. Security Considerations 585 Benchmarking activities as described in this memo are limited to 586 technology characterization using controlled stimuli in a laboratory 587 environment, with dedicated address space and the constraints 588 specified in the sections above. 590 The benchmarking network topology will be an independent test setup 591 and MUST NOT be connected to devices that may forward the test 592 traffic into a production network, or misroute traffic to the test 593 management network. 595 Further, benchmarking is performed on a "black-box" basis, relying 596 solely on measurements observable external to the DUT/SUT. 598 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 599 benchmarking purposes. Any implications for network security arising 600 from the DUT/SUT SHOULD be identical in the lab and in production 601 networks. 603 The isolated nature of the benchmarking environments and the fact 604 that no special features or capabilities, other than those used in 605 operational networks, are enabled on the DUT/SUT require no security 606 considerations specific to the benchmarking process. 608 10. Conclusions 610 The Benchmarking Methodology for Network Interconnect Devices 611 document, RFC2544 [2], is for the most part applicable to evaluating 612 the IPv6 performance of network elements. This document 613 addresses the IPv6-specific requirements that MUST be observed when 614 applying the recommendations of RFC2544. These additional 615 requirements stem from the architectural characteristics of IPv6. 616 This document is not a replacement for RFC2544 but a complement to it. 618 11. Acknowledgements 620 Scott Bradner provided valuable guidance and recommendations for this 621 document. The authors acknowledge the work done by Cynthia Martin 622 and Jeff Dunn with respect to defining the terminology for IPv6 623 benchmarking.
The authors would like to thank Bill Kine for his 624 contribution to the initial document and to Bill Cerveny, Silvija 625 Dry, Sven Lanckmans, Athanassios Liakopoulos, Benoit Lourdelet, Al 626 Morton, Rajiv Papneja, and Pekka Savola for their feedback. 628 12. References 630 12.1. Normative References 632 [1] Bradner, S., "Key words for use in RFCs to Indicate Requirement 633 Levels", BCP 14, RFC 2119, March 1997. 635 [2] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 636 Network Interconnect Devices", RFC 2544, March 1999. 638 12.2. Informative References 640 [3] Bradner, S., "Benchmarking Terminology for Network 641 Interconnection Devices", RFC 1242, July 1991. 643 [4] Simpson, W., "PPP in HDLC-like Framing", STD 51, RFC 1662, 644 July 1994. 646 [5] Deering, S. and R. Hinden, "Internet Protocol, Version 6 (IPv6) 647 Specification", RFC 2460, December 1998. 649 [6] Narten, T., Nordmark, E., and W. Simpson, "Neighbor Discovery 650 for IP Version 6 (IPv6)", RFC 2461, December 1998. 652 [7] Malis, A. and W. Simpson, "PPP over SONET/SDH", RFC 2615, 653 June 1999. 655 [8] IANA, "Special-Use IPv4 Addresses", RFC 3330, September 2002. 657 [9] Huston, G., Lord, A., and P. Smith, "IPv6 Address Prefix 658 Reserved for Documentation", RFC 3849, July 2004. 660 [10] Newman, D. and T. Player, "Hash and Stuffing: Overlooked 661 Factors in Network Device Benchmarking", 662 draft-ietf-bmwg-hash-stuffing-07 (work in progress), 663 November 2006. 665 [11] Newman, D. and T. Player, "Hash and Stuffing: Overlooked 666 Factors in Network Device Benchmarking 667 (draft-ietf-bmwg-hash-stuffing-07.txt)", November 2006. 669 Appendix A. Maximum Frame Rates Reference 671 This appendix provides the formulas used to calculate, and the resulting values of, 672 the maximum frame rates for two media types: Ethernet and SONET. 674 A.1. Ethernet 676 The maximum throughput in frames per second (fps) for the various 677 Ethernet interface types and for a frame size X can be calculated 678 with the following formula: 680 Line Rate (bps) 681 ------------------------------ 682 (8 bits/byte)*(X+20) bytes/frame 684 The 20 bytes in the formula are the sum of the Preamble (8 bytes) and 685 the Inter-Frame Gap (12 bytes). The maximum throughput for various 686 Ethernet interface types and frame sizes: 688 Size 10Mb/s 100Mb/s 1000Mb/s 10000Mb/s 689 Bytes fps fps fps fps 691 64 14881 148810 1488096 14880952 692 128 8446 84449 844595 8445946 693 256 4529 45290 452899 4528986 694 512 2350 23497 234962 2349625 695 1024 1198 11973 119731 1197318 696 1280 961 9616 96153 961538 697 1518 813 8128 81275 812744 698 4096 303 3036 30369 303692 699 8192 152 1522 15221 152216 700 9216 135 1353 13534 135340 702 A.2. Packet over SONET 704 ANSI T1.105 SONET provides the formula for calculating the maximum 705 available bandwidth for the various Packet over SONET (PoS) interface 706 types: 708 STS-Nc (N = 3Y, where Y = 1, 2, 3, etc.) 710 [(N*87) - N/3]*(9 rows)*(8 bits/byte)*(8000 frames/sec) 712 Packet over SONET can use various encapsulations: PPP [7], HDLC [4], 713 and Frame Relay. All these encapsulations use a 4-byte header, a 2- 714 or 4-byte FCS field, and a 1-byte Flag, which are all accounted for in 715 the overall frame size.
The maximum frame rate for various interface 716 types can be calculated with the formula (where X represents the 717 frame size in bytes): 719 Line Rate (bps) 720 ------------------------------ 721 (8bits/byte)*(X+1)bytes/frame 723 The maximum throughput for various PoS interface types and frame 724 sizes: 726 Size OC-3c OC-12c OC-48c OC-192c OC-768c 727 Bytes fps fps fps fps fps 729 64 288,000 1,152,000 4,608,000 18,432,000 73,728,000 730 128 145,116 580,465 2,321,860 9,287,442 37,149,767 731 256 72,840 291,362 1,165,447 4,661,790 18,647,160 732 512 36,491 145,965 583,860 2,335,439 9,341,754 733 1024 18,263 73,054 292,215 1,168,859 4,675,434 734 2048 9,136 36,545 146,179 584,714 2,338,858 735 4096 4,569 18,277 73,107 292,429 1,169,714 737 It is important to note that throughput test results may vary from 738 the values presented in appendices A.1 and A.2 due to bit stuffing 739 performed by various media types [11]. 741 Authors' Addresses 743 Ciprian Popoviciu 744 Cisco Systems 745 Kit Creek Road 746 RTP, North Carolina 27709 747 USA 749 Phone: 919 787 8162 750 Email: cpopovic@cisco.com 752 Ahmed Hamza 753 Cisco Systems 754 3000 Innovation Drive 755 Kanata K2K 3E8 756 Canada 758 Phone: 613 254 3656 759 Email: ahamza@cisco.com 761 Gunter Van de Velde 762 Cisco Systems 763 De Kleetlaan 6a 764 Diegem 1831 765 Belgium 767 Phone: +32 2704 5473 768 Email: gunter@cisco.com 770 Diego Dugatkin 771 IXIA 772 26601 West Agoura Rd 773 Calabasas 91302 774 USA 776 Phone: 818 444 3124 777 Email: diego@ixiacom.com 779 Full Copyright Statement 781 Copyright (C) The IETF Trust (2007). 783 This document is subject to the rights, licenses and restrictions 784 contained in BCP 78, and except as set forth therein, the authors 785 retain all their rights. 787 This document and the information contained herein are provided on an 788 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 789 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 790 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 791 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 792 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 793 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 795 Intellectual Property 797 The IETF takes no position regarding the validity or scope of any 798 Intellectual Property Rights or other rights that might be claimed to 799 pertain to the implementation or use of the technology described in 800 this document or the extent to which any license under such rights 801 might or might not be available; nor does it represent that it has 802 made any independent effort to identify any such rights. Information 803 on the procedures with respect to rights in RFC documents can be 804 found in BCP 78 and BCP 79. 806 Copies of IPR disclosures made to the IETF Secretariat and any 807 assurances of licenses to be made available, or the result of an 808 attempt made to obtain a general license or permission for the use of 809 such proprietary rights by implementers or users of this 810 specification can be obtained from the IETF on-line IPR repository at 811 http://www.ietf.org/ipr. 813 The IETF invites any interested party to bring to its attention any 814 copyrights, patents or patent applications, or other proprietary 815 rights that may cover technology that may be required to implement 816 this standard. Please address the information to the IETF at 817 ietf-ipr@ietf.org. 
819 Acknowledgment 821 Funding for the RFC Editor function is provided by the IETF 822 Administrative Support Activity (IASA).