Network Working Group                                      M. Georgescu
Internet Draft                                                     NAIST
Intended status: Informational                              July 2, 2015
Expires: January 2016

       Benchmarking Methodology for IPv6 Transition Technologies
          draft-georgescu-bmwg-ipv6-tran-tech-benchmarking-01.txt

Abstract

   There are benchmarking methodologies addressing the performance of
   network interconnect devices that are IPv4- or IPv6-capable, but the
   IPv6 transition technologies are outside of their scope. This
   document provides complementary guidelines for evaluating the
   performance of IPv6 transition technologies. The methodology also
   includes a tentative metric for benchmarking scalability.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on January 2, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1. Introduction
      1.1. IPv6 Transition Technologies
   2. Conventions used in this document
   3. Test Setup
      3.1. Single-stack Transition Technologies
      3.2. Encapsulation/Translation Based Transition Technologies
   4. Test Traffic
      4.1. Frame Formats and Sizes
         4.1.1. Frame Sizes to Be Used over Ethernet
         4.1.2. Frame Sizes to Be Used over SONET
      4.2. Protocol Addresses
      4.3. Traffic Setup
   5. Modifiers
   6. Benchmarking Tests
      6.1. Throughput
      6.2. Latency
      6.3. Packet Delay Variation
         6.3.1. PDV
         6.3.2. IPDV
      6.4. Frame Loss Rate
      6.5. Back-to-back Frames
      6.6. System Recovery
      6.7. Reset
   7. Additional Benchmarking Tests for Stateful IPv6 Transition
      Technologies
      7.1. Concurrent TCP Connection Capacity
      7.2. Maximum TCP Connection Establishment Rate
   8. Scalability
      8.1. Test Setup
         8.1.1. Single-stack Transition Technologies
         8.1.2. Encapsulation/Translation Transition Technologies
      8.2. Benchmarking Performance Degradation
   9. Security Considerations
   10. IANA Considerations
   11. Conclusions
   12. References
      12.1. Normative References
      12.2. Informative References
   13. Acknowledgments
   Appendix A. Theoretical Maximum Frame Rates
      A.1. Ethernet
      A.2. SONET
1. Introduction

   The methodologies described in [RFC2544] and [RFC5180] help vendors
   and network operators alike analyze the performance of IPv4- and
   IPv6-capable network devices. The methodology presented in [RFC2544]
   is mostly IP version independent, while [RFC5180] contains
   complementary recommendations specific to the latest IP version,
   IPv6. However, [RFC5180] does not cover IPv6 transition
   technologies.

   IPv6 is not backwards compatible, which means that IPv4-only nodes
   cannot directly communicate with IPv6-only nodes. To solve this
   issue, IPv6 transition technologies have been proposed and
   implemented, many of which are still in development.

   This document presents benchmarking guidelines dedicated to IPv6
   transition technologies. The benchmarking tests can provide insights
   about the performance of these technologies, which can act as useful
   feedback for developers, as well as for network operators going
   through the IPv6 transition process.

1.1. IPv6 Transition Technologies

   Two of the basic transition technologies, dual IP layer (also known
   as dual stack) and encapsulation, are presented in [RFC4213].
   IPv4/IPv6 translation is presented in [RFC6144]. Most of the
   transition technologies employ at least one variation of these
   mechanisms. Some of the more complex ones (e.g. DS-Lite [RFC6333])
   use all three. In this context, a generic classification of the
   transition technologies can prove useful.

   Tentatively, we can consider a basic production IP-based network as
   being constructed using the following components:

   o  a Customer Edge (CE) segment

   o  a Core network segment

   o  a Provider Edge (PE) segment

   According to the technology used for the core network traversal,
   the transition technologies can be categorized as follows:

   1. Single-stack: either IPv4 or IPv6 is used to traverse the core
      network, and translation is used at one of the edges.

   2. Dual-stack: the core network devices implement both IP
      protocols.

   3. Encapsulation-based: an encapsulation mechanism is used to
      traverse the core network; CE nodes encapsulate the IPvX packets
      in IPvY packets, while PE nodes are responsible for the
      decapsulation process.

   4. Translation-based: a translation mechanism is employed for the
      traversal of the core network; CE nodes translate IPvX packets
      to IPvY packets and PE nodes translate the packets back to IPvX.

   The performance of Dual-stack transition technologies can be fully
   evaluated using the benchmarking methodologies presented in
   [RFC2544] and [RFC5180]. Consequently, this document focuses on the
   other three categories: Single-stack, Encapsulation-based, and
   Translation-based transition technologies.
   Another important aspect by which the IPv6 transition technologies
   can be categorized is their use of stateful or stateless mapping
   algorithms. The technologies that use stateful mapping algorithms
   (e.g. Stateful NAT64 [RFC6146]) create dynamic correlations between
   IP addresses or {IP address, transport protocol, transport port
   number} tuples, which are stored in a state table. For ease of
   reference, the IPv6 transition technologies which employ stateful
   mapping algorithms will be called stateful IPv6 transition
   technologies. The efficiency with which the state table is managed
   can be an important performance indicator for these technologies.
   Hence, for the stateful IPv6 transition technologies additional
   benchmarking tests are RECOMMENDED.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS. Lower case uses of these words are not to be
   interpreted as carrying [RFC2119] significance.

3. Test Setup

   The test environment setup options recommended for IPv6 transition
   technologies benchmarking are very similar to the ones presented in
   Section 6 of [RFC2544]. In the case of the tester setup, the
   options presented in [RFC2544] can be applied here as well.
   However, the Device Under Test (DUT) setup options should be
   explained in the context of the three targeted categories of IPv6
   transition technologies: Single-stack, Encapsulation-based and
   Translation-based transition technologies.

   Although both single tester and sender/receiver setups are
   applicable to this methodology, the single tester setup will be
   used to describe the DUT setup options.

   For the test setups presented in this memo, dynamic routing SHOULD
   be employed. However, the presence of routing and management frames
   can represent unwanted background data that can affect the
   benchmarking result. To that end, the procedures defined in
   [RFC2544] (Sections 11.2 and 11.3) related to routing and
   management frames SHOULD be used here as well. Moreover, the "Trial
   description" recommendations presented in [RFC2544] (Section 23)
   are valid for this memo as well.

3.1. Single-stack Transition Technologies

   For the evaluation of Single-stack transition technologies a single
   DUT setup (see Figure 1) SHOULD be used. The DUT is responsible for
   translating the IPvX packets into IPvY packets. In this context,
   the tester device should be configured to support both IPvX and
   IPvY.

                    +--------------------+
                    |                    |
       +------------|IPvX  tester    IPvY|<-------------+
       |            |                    |              |
       |            +--------------------+              |
       |                                                |
       |            +--------------------+              |
       |            |                    |              |
       +----------->|IPvX   DUT      IPvY|--------------+
                    |    (translator)    |
                    +--------------------+
                    Figure 1. Test setup 1

3.2. Encapsulation/Translation Based Transition Technologies

   For evaluating the performance of Encapsulation-based and
   Translation-based transition technologies a dual DUT setup (see
   Figure 2) SHOULD be employed. The tester creates a network flow of
   IPvX packets. The DUT CE is responsible for the encapsulation or
   translation of IPvX packets into IPvY packets. The IPvY packets are
   decapsulated/translated back to IPvX packets by the DUT PE and
   forwarded to the tester.

                       +--------------------+
                       |                    |
   +-------------------|IPvX  tester    IPvX|<-----------------+
   |                   |                    |                  |
   |                   +--------------------+                  |
   |                                                           |
   |   +--------------------+     +--------------------+       |
   |   |                    |     |                    |       |
   +-->|IPvX  DUT CE    IPvY|---->|IPvY  DUT PE    IPvX|-------+
       |    trans/encaps    |     |    trans/decaps    |
       +--------------------+     +--------------------+
                       Figure 2. Test setup 2

   In the case of translation-based transition technologies, the DUT
   CE and DUT PE machines MAY be tested separately as well. These
   tests can represent a fine-grained performance analysis of the
   IPvX to IPvY translation direction versus the IPvY to IPvX
   translation direction. The tests SHOULD follow the test setup
   presented in Figure 1.

4. Test Traffic

   The test traffic represents the experimental workload and SHOULD
   meet the requirements specified in this section. The requirements
   are dedicated to unicast IP traffic. Multicast IP traffic is
   outside of the scope of this document.

4.1. Frame Formats and Sizes

   [RFC5180] describes the frame size requirements for two commonly
   used media types: Ethernet and SONET (Synchronous Optical Network).
   [RFC2544] also covers other media types, such as token ring and
   FDDI. The two documents can be referred to for the dual-stack
   transition technologies. For the rest of the transition
   technologies, the frame overhead introduced by translation or
   encapsulation MUST be considered.

   The encapsulation/translation process generates different frame
   sizes on different segments of the test setup. For example, the
   single-stack transition technologies will create different frame
   sizes on the receiving segment of the test setup, as IPvX packets
   are translated to IPvY. This is not a problem if the bandwidth of
   the employed media is not exceeded. To prevent exceeding the
   limitations imposed by the media, the frame size overhead needs to
   be taken into account when calculating the maximum theoretical
   frame rates. The calculation methods for the two media types,
   Ethernet and SONET, as well as a calculation example, are detailed
   in Appendix A.

   In the context of frame size overhead, MTU recommendations are
   needed in order to avoid frame loss due to MTU mismatch between the
   virtual encapsulation/translation interfaces and the physical
   network interface controllers (NICs). To avoid this situation, the
   larger MTU between the physical NICs and the virtual
   encapsulation/translation interfaces SHOULD be set for all
   interfaces of the DUT and tester.
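   The following non-normative Python sketch illustrates the MTU
   mismatch discussed above. The 20-byte tunnel overhead matches the
   6in4 example of Appendix A; the 1500-byte physical MTU and the
   frame sizes are assumptions chosen for illustration.

      # A minimal sketch, assuming a 6in4 tunnel (20 bytes of overhead
      # per packet [RFC4213]) and a default 1500-byte Ethernet MTU.
      # Frame sizes are full Ethernet frames, header + FCS included.

      L2_HEADER = 18        # Ethernet header (14) + FCS (4), in bytes
      TUNNEL_OVERHEAD = 20  # one extra IPv4 header per packet (6in4)
      PHYSICAL_MTU = 1500   # assumed physical NIC MTU, in bytes

      def fits(frame_size, mtu=PHYSICAL_MTU):
          """True if the encapsulated packet still fits the MTU."""
          ip_packet = frame_size - L2_HEADER   # IP packet in the frame
          return ip_packet + TUNNEL_OVERHEAD <= mtu

      for size in (64, 512, 1280, 1518):
          # 1518-byte test frames need a 1520-byte MTU once tunneled,
          # hence the recommendation to configure the larger MTU.
          print(size, fits(size))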
4.1.1. Frame Sizes to Be Used over Ethernet

   Based on the recommendations of [RFC5180], the following frame
   sizes SHOULD be used for benchmarking Ethernet traffic: 64, 128,
   256, 512, 1024, 1280, 1518, 1522, 2048, 4096, 8192 and 9216 bytes.

   The theoretical maximum frame rates, considering an example of
   frame overhead, are presented in Appendix A.1.

4.1.2. Frame Sizes to Be Used over SONET

   Based on the recommendations of [RFC5180], the frame sizes for
   SONET traffic SHOULD be: 47, 64, 128, 256, 512, 1024, 1280, 1518,
   2048 and 4096 bytes.

   An example of theoretical maximum frame rate calculation is shown
   in Appendix A.2.

4.2. Protocol Addresses

   The selected protocol addresses should follow the recommendations
   of [RFC5180] (Section 5) for IPv6 and [RFC2544] (Section 12) for
   IPv4.

   Note: testing traffic with extension headers might not be possible
   for the transition technologies which employ translation.

4.3. Traffic Setup

   Following the recommendations of [RFC5180], all tests described
   SHOULD be performed with bi-directional traffic. Uni-directional
   traffic tests MAY also be performed for a fine-grained performance
   assessment.

   Because of the simplicity of UDP, UDP measurements offer a more
   reliable basis for comparison than other transport layer protocols.
   Consequently, for the benchmarking tests described in Section 6 of
   this document, UDP traffic SHOULD be employed.

   Considering that the stateful transition technologies need to
   manage the state table for each connection, a connection-oriented
   transport layer protocol needs to be used with the test traffic.
   Consequently, TCP test traffic SHOULD be employed for the tests
   described in Section 7 of this document.
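   As a non-normative illustration, the following Python sketch uses
   the Scapy packet library to build UDP test frames with addresses
   from the dedicated benchmarking ranges (see Section 10). The
   interface name, ports and exact addresses are placeholders, and
   sending raw frames typically requires administrative privileges.

      # A sketch of UDP test traffic generation with Scapy. Addresses
      # are drawn from the benchmarking ranges 198.18.0.0/15 [RFC6890]
      # and 2001:2::/48 [RFC5180]; "eth0" is an assumed tester port.

      from scapy.all import Ether, IP, IPv6, UDP, Raw, sendp

      FRAME_SIZE = 128   # target Ethernet frame size, FCS included

      def udp_test_frame(v6=False):
          hdr = Ether()
          hdr /= IPv6(src="2001:2::1", dst="2001:2::2") if v6 \
              else IP(src="198.18.0.1", dst="198.19.0.1")
          pkt = hdr / UDP(sport=1024, dport=2048)
          pad = FRAME_SIZE - 4 - len(pkt)   # NIC appends the 4-byte FCS
          return pkt / Raw(b"\x00" * max(pad, 0))

      # Bi-directional traffic would use one such stream per direction.
      sendp(udp_test_frame(v6=False), iface="eth0", count=1000,
            inter=0.001)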
5. Modifiers

   The idea of testing under different operational conditions was
   first introduced in [RFC2544] (Section 11) and represents an
   important aspect of benchmarking network elements, as it emulates
   to some extent the conditions of a production environment.
   [RFC5180] describes complementary testing conditions specific to
   IPv6. Its recommendations can be referred to for IPv6 transition
   technologies testing as well.

6. Benchmarking Tests

   The benchmarking test conditions described in [RFC2544] (Sections
   24, 25 and 26) are also recommended here. The following sub-
   sections contain the list of all recommended benchmarking tests.

6.1. Throughput

   Objective: To determine the DUT throughput as defined in [RFC1242].

   Procedure: As described by [RFC2544].

   Reporting Format: As described by [RFC2544].

6.2. Latency

   Objective: To determine the latency as defined in [RFC1242].

   Procedure: As described by [RFC2544].

   Reporting Format: As described by [RFC2544].

6.3. Packet Delay Variation

   Considering two of the metrics presented in [RFC5481], Packet Delay
   Variation (PDV) and Inter Packet Delay Variation (IPDV), it is
   RECOMMENDED to measure PDV. For a fine-grained analysis of delay
   variation, IPDV measurements MAY be performed as well.

6.3.1. PDV

   Objective: To determine the Packet Delay Variation as defined in
   [RFC5481].

   Procedure: As described by [RFC2544], first determine the
   throughput for the DUT at each of the listed frame sizes. Send a
   stream of frames at a particular frame size through the DUT at the
   determined throughput rate to a specific destination. The stream
   SHOULD be at least 60 seconds in duration. Measure the One-way
   delay as described by [RFC3393] for all frames in the stream.
   Calculate the PDV of the stream using the formula:

      PDV = Avg(D(i) - Dmin)

   Where: D(i) - the One-way delay of the i-th frame in the stream

          Dmin - the minimum One-way delay in the stream

   As recommended in [RFC2544], the test MUST be repeated at least 20
   times, with the reported value being the average of the recorded
   values. Moreover, the margin of error from the average MAY be
   evaluated using the formula:

                       StDev
      MoE = alpha * -----------
                      sqrt(N)

   Where: alpha - critical value; the recommended value is 2.576 for
          a 99% level of confidence

          StDev - standard deviation

          N - number of repetitions

   Reporting Format: The PDV results SHOULD be reported in a table
   with a row for each of the tested frame sizes and columns for the
   frame size and the applied frame rate for the tested media types. A
   column for the margin of error values MAY be displayed as well.

6.3.2. IPDV

   Objective: To determine the Inter Packet Delay Variation as defined
   in [RFC5481].

   Procedure: As described by [RFC2544], first determine the
   throughput for the DUT at each of the listed frame sizes. Send a
   stream of frames at a particular frame size through the DUT at the
   determined throughput rate to a specific destination. The stream
   SHOULD be at least 60 seconds in duration. Measure the One-way
   delay as described by [RFC3393] for all frames in the stream.
   Calculate the IPDV for each of the frames using the formula:

      IPDV(i) = D(i) - D(i-1)

   Where: D(i) - the One-way delay of the i-th frame in the stream

          D(i-1) - the One-way delay of the (i-1)-th frame in the
          stream

   Given the nature of IPDV, reporting a single number might lead to
   over-summarization. In this context, the report for each
   measurement SHOULD include 3 values: Dmin, Davg, and Dmax

   Where: Dmin - the minimum One-way delay in the stream

          Davg - the average One-way delay of the stream

          Dmax - the maximum One-way delay in the stream

   As recommended in [RFC2544], the test MUST be repeated at least 20
   times. The average of the 3 proposed values SHOULD be reported. The
   IPDV results SHOULD be reported in a table with a row for each of
   the tested frame sizes. The columns SHOULD include the frame size
   and associated frame rate for the tested media types and sub-
   columns for the three proposed reported values.
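   The following non-normative Python sketch implements the PDV and
   IPDV calculations defined above, together with the optional margin
   of error. The delay values are illustrative only.

      # PDV, IPDV and margin of error per Sections 6.3.1 and 6.3.2.
      import statistics

      def pdv(delays):
          """PDV = Avg(D(i) - Dmin) over one stream."""
          dmin = min(delays)
          return statistics.mean(d - dmin for d in delays)

      def ipdv(delays):
          """IPDV(i) = D(i) - D(i-1); also return Dmin, Davg, Dmax."""
          deltas = [delays[i] - delays[i - 1]
                    for i in range(1, len(delays))]
          return deltas, (min(delays), statistics.mean(delays),
                          max(delays))

      def margin_of_error(samples, alpha=2.576):
          """MoE = alpha * StDev / sqrt(N); 2.576 for 99% confidence."""
          return alpha * statistics.stdev(samples) / len(samples) ** 0.5

      # Illustrative one-way delays (seconds) for one stream:
      delays = [0.00105, 0.00101, 0.00112, 0.00100, 0.00108]
      print("PDV:", pdv(delays))
      deltas, (dmin, davg, dmax) = ipdv(delays)
      print("Dmin/Davg/Dmax:", dmin, davg, dmax)

      # Across the (at least) 20 required repetitions:
      pdv_runs = [0.000061, 0.000058, 0.000066, 0.000060]
      print("average:", statistics.mean(pdv_runs),
            "MoE:", margin_of_error(pdv_runs))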
6.4. Frame Loss Rate

   Objective: To determine the frame loss rate, as defined in
   [RFC1242], of a DUT throughout the entire range of input data rates
   and frame sizes.

   Procedure: As described by [RFC2544].

   Reporting Format: As described by [RFC2544].

6.5. Back-to-back Frames

   Objective: To characterize the ability of a DUT to process back-
   to-back frames as defined in [RFC1242].

   Procedure: As described by [RFC2544].

   Reporting Format: As described by [RFC2544].

6.6. System Recovery

   Objective: To characterize the speed at which a DUT recovers from
   an overload condition.

   Procedure: As described by [RFC2544].

   Reporting Format: As described by [RFC2544].

6.7. Reset

   Objective: To characterize the speed at which a DUT recovers from a
   device or software reset.

   Procedure: As described by [RFC2544].

   Reporting Format: As described by [RFC2544].

7. Additional Benchmarking Tests for Stateful IPv6 Transition
   Technologies

   This section describes additional tests dedicated to the stateful
   IPv6 transition technologies. For the tests described in this
   section, the DUT devices SHOULD follow the test setup and test
   parameters recommendations presented in [RFC3511] (Sections 4 and
   5).

   In addition to the IPv4/IPv6 transition function, a network node
   can have a firewall function. This document targets only the
   network devices that do not have a firewall function, as this
   function can be benchmarked using the recommendations of [RFC3511].
   Consequently, only the tests described in [RFC3511] (Sections 5.2
   and 5.3) are RECOMMENDED. Namely, the following additional tests
   SHOULD be performed:

7.1. Concurrent TCP Connection Capacity

   Objective: To determine the maximum number of concurrent TCP
   connections supported through or with the DUT, as defined in
   [RFC2647]. This test is intended to find the maximum number of
   entries the DUT can store in its state table.

   Procedure: As described by [RFC3511].

   Reporting Format: As described by [RFC3511].

7.2. Maximum TCP Connection Establishment Rate

   Objective: To determine the maximum TCP connection establishment
   rate through or with the DUT, as defined in [RFC2647]. This test is
   expected to find the maximum rate at which the DUT can update its
   connection table.

   Procedure: As described by [RFC3511].

   Reporting Format: As described by [RFC3511].
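   As a rough, non-normative illustration of the idea behind Section
   7.1, the following Python sketch opens TCP connections until the
   DUT stops accepting new ones. The destination address and port are
   placeholders, the authoritative procedure remains that of
   [RFC3511], and tester-side limits (ephemeral ports, file
   descriptors) must be raised well above the expected capacity for
   the count to reflect the DUT rather than the tester.

      # Probe concurrent TCP connection capacity (illustrative only).
      import socket

      def open_connections(dst=("198.19.0.1", 80), limit=1_000_000,
                           timeout=2.0):
          conns = []
          try:
              while len(conns) < limit:
                  s = socket.create_connection(dst, timeout=timeout)
                  conns.append(s)
          except OSError:
              pass      # refused or timed out: state table is full
          return conns  # length approximates the concurrent capacity

      sockets = open_connections()
      print("concurrent TCP connections:", len(sockets))
      for s in sockets:
          s.close()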
8. Scalability

   Scalability has often been discussed; however, in the context of
   network devices, a formal definition or a measurement method has
   not yet been established.

   Scalability can be defined as the ability of each transition
   technology to accommodate network growth.

   Poor scalability usually leads to poor performance. Considering
   this, scalability can be measured by quantifying the network
   performance degradation while the network grows.

   The following subsections describe how the test setups can be
   modified to create network growth and how the associated
   performance degradation can be quantified.

8.1. Test Setup

   The test setups defined in Section 3 have to be modified to create
   network growth.

8.1.1. Single-stack Transition Technologies

   In the case of single-stack transition technologies, the network
   growth can be generated by increasing the number of network flows
   generated by the tester machine (see Figure 3).

               +-------------------------+
   +-----------|NF1                   NF1|<-------------+
   | +---------|NF2      tester       NF2|<----------+  |
   | |      ...|                         |           |  |
   | |   +-----|NFn                   NFn|<------+   |  |
   | |   |     +-------------------------+      |   |  |
   | |   |                                      |   |  |
   | |   |     +-------------------------+      |   |  |
   | |   +---->|NFn                   NFn|------+   |  |
   | |      ...|          DUT            |          |  |
   | +-------->|NF2   (translator)    NF2|----------+  |
   +---------->|NF1                   NF1|-------------+
               +-------------------------+
               Figure 3. Test setup 3

8.1.2. Encapsulation/Translation Transition Technologies

   Similarly, for the encapsulation/translation based technologies a
   multi-flow setup is recommended. For most transition technologies,
   the provider edge device is designed to support more than one
   customer edge network. Hence, the recommended test setup is an n:1
   design, where n is the number of CE DUTs connected to the same PE
   DUT (see Figure 4).

                    +-------------------------+
   +----------------|NF1                   NF1|<--------------+
   | +--------------|NF2      tester       NF2|<-----------+  |
   | |           ...|                         |            |  |
   | |  +-----------|NFn                   NFn|<-------+   |  |
   | |  |           +-------------------------+        |   |  |
   | |  |                                              |   |  |
   | |  |  +-----------------+    +---------------+    |   |  |
   | |  +->|NFn  DUT CEn  NFn|--->|NFn         NFn|----+   |  |
   | |     +-----------------+    |               |        |  |
   | |            ...             |    DUT PE     |        |  |
   | |     +-----------------+    |               |        |  |
   | +---->|NF2  DUT CE2  NF2|--->|NF2         NF2|--------+  |
   |       +-----------------+    |               |           |
   |       +-----------------+    |               |           |
   +------>|NF1  DUT CE1  NF1|--->|NF1         NF1|-----------+
           +-----------------+    +---------------+
                    Figure 4. Test setup 4

   This test setup can help quantify the scalability of the PE device.
   However, for testing the scalability of the DUT CEs, additional
   recommendations are needed.

   For encapsulation-based transition technologies, an m:n setup can
   be created, where m is the number of flows applied to the same CE
   device and n the number of CE devices connected to the same PE
   device.

   For the translation-based transition technologies, the CE devices
   can be separately tested with n network flows using the test setup
   presented in Figure 3.

8.2. Benchmarking Performance Degradation

   Objective: To quantify the performance degradation introduced by n
   parallel network flows.

   Procedure: First, the benchmarking tests presented in Section 6
   have to be performed for one network flow.

   The same tests have to be repeated for n network flows. The
   performance degradation of the X benchmarking dimension SHOULD be
   calculated as the relative performance change between the 1-flow
   results and the n-flow results, using the following formula:

             Xn - X1
      Xpd = --------- * 100, where: X1 - result for 1 flow
               X1                   Xn - result for n flows

   Reporting Format: The performance degradation SHOULD be expressed
   as a percentage. The number of tested parallel flows n MUST be
   clearly specified. For each of the performed benchmarking tests,
   there SHOULD be a table containing a column for each frame size.
   The table SHOULD also state the applied frame rate.
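   The following non-normative Python helper applies the performance
   degradation formula above; the sample figures are illustrative.

      # Xpd = (Xn - X1) / X1 * 100, per Section 8.2.
      def performance_degradation(x1, xn):
          """Relative performance change, in percent (negative = worse)."""
          return (xn - x1) / x1 * 100.0

      # Illustrative example: throughput drops from 120,192 fps with
      # one flow to 95,000 fps with n flows:
      print(performance_degradation(120192, 95000))  # about -20.96 %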
9. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the
   constraints specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT. Special
   capabilities SHOULD NOT exist in the DUT/SUT specifically for
   benchmarking purposes. Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

10. IANA Considerations

   The IANA has allocated the prefix 2001:0002::/48 [RFC5180] for IPv6
   benchmarking. For IPv4 benchmarking, the 198.18.0.0/15 prefix was
   reserved, as described in [RFC6890]. The two ranges are sufficient
   for benchmarking IPv6 transition technologies.

11. Conclusions

   The methodologies described in [RFC2544] and [RFC5180] can be used
   for benchmarking the performance of IPv4-only, IPv6-only and dual-
   stack supporting network devices. This document presents
   complementary recommendations dedicated to IPv6 transition
   technologies. Furthermore, the methodology includes a tentative
   approach for benchmarking scalability by quantifying the
   performance degradation associated with network growth.

12. References

12.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay
             Variation Metric for IP Performance Metrics (IPPM)",
             RFC 3393, November 2002.

   [RFC4213] Nordmark, E. and R. Gilligan, "Basic Transition
             Mechanisms for IPv6 Hosts and Routers", RFC 4213,
             October 2005.

   [RFC6144] Baker, F., Li, X., Bao, C., and K. Yin, "Framework for
             IPv4/IPv6 Translation", RFC 6144, April 2011.

   [RFC6333] Durand, A., Droms, R., Woodyatt, J., and Y. Lee, "Dual-
             Stack Lite Broadband Deployments Following IPv4
             Exhaustion", RFC 6333, August 2011.

   [RFC6890] Cotton, M., Vegoda, L., Bonica, R., and B. Haberman,
             "Special-Purpose IP Address Registries", BCP 153,
             RFC 6890, April 2013.

12.2. Informative References

   [RFC1242] Bradner, S., "Benchmarking Terminology for Network
             Interconnection Devices", RFC 1242, July 1991.
   [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
             Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2647] Newman, D., "Benchmarking Terminology for Firewall
             Devices", RFC 2647, August 1999.

   [RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
             "Benchmarking Methodology for Firewall Performance",
             RFC 3511, April 2003.

   [RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D.
             Dugatkin, "IPv6 Benchmarking Methodology for Network
             Interconnect Devices", RFC 5180, May 2008.

   [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation
             Applicability Statement", RFC 5481, March 2009.

   [RFC6146] Bagnulo, M., Matthews, P., and I. van Beijnum, "Stateful
             NAT64: Network Address and Protocol Translation from
             IPv6 Clients to IPv4 Servers", RFC 6146, April 2011.

13. Acknowledgments

   The author would like to thank Professor Youki Kadobayashi for his
   constant feedback and support. Thanks should also be extended to
   the NECOMA project members for their continuous support. Helpful
   comments and suggestions were offered by Scott Bradner, Al Morton,
   Bhuvaneswaran Vengainathan, Andrew McGregor, Nalini Elkins, Kaname
   Nishizuka and Yasuhiro Ohara. A special thank you to the RFC Editor
   Team for their thorough editorial review and helpful suggestions.

   This document was prepared using 2-Word-v2.0.template.dot.

Appendix A. Theoretical Maximum Frame Rates

   This appendix describes the recommended calculation formulas for
   the theoretical maximum frame rates to be employed over two types
   of commonly used media. The formulas take into account the frame
   size overhead created by the encapsulation or the translation
   process. For example, the 6in4 encapsulation described in [RFC4213]
   adds 20 bytes of overhead to each frame.

A.1. Ethernet

   Considering X to be the frame size and O to be the frame size
   overhead created by the encapsulation or translation process, the
   maximum theoretical frame rate for Ethernet can be calculated using
   the following formula:

                     Line Rate (bps)
      ----------------------------------------
      (8 bits/byte) * (X + O + 20) bytes/frame

   The calculation is based on the formula recommended by [RFC5180] in
   Appendix A.1. As an example, the frame rate recommended for testing
   a 6in4 implementation over 10 Mb/s Ethernet with 64-byte frames is:

                  10,000,000 (bps)
      ------------------------------------------- = 12,019 fps
      (8 bits/byte) * (64 + 20 + 20) bytes/frame

   The complete list of recommended frame rates for 6in4 encapsulation
   can be found in the following table:

   +------------+---------+----------+-----------+------------+
   | Frame size | 10 Mb/s | 100 Mb/s | 1000 Mb/s | 10000 Mb/s |
   |  (bytes)   |  (fps)  |  (fps)   |   (fps)   |   (fps)    |
   +------------+---------+----------+-----------+------------+
   |     64     |  12,019 | 120,192  | 1,201,923 | 12,019,231 |
   |    128     |   7,440 |  74,405  |   744,048 |  7,440,476 |
   |    256     |   4,223 |  42,230  |   422,297 |  4,222,973 |
   |    512     |   2,264 |  22,645  |   226,449 |  2,264,493 |
   |   1024     |   1,175 |  11,748  |   117,481 |  1,174,812 |
   |   1280     |     947 |   9,470  |    94,697 |    946,970 |
   |   1518     |     802 |   8,023  |    80,231 |    802,311 |
   |   1522     |     800 |   8,003  |    80,026 |    800,256 |
   |   2048     |     599 |   5,987  |    59,866 |    598,659 |
   |   4096     |     302 |   3,022  |    30,222 |    302,224 |
   |   8192     |     152 |   1,518  |    15,185 |    151,846 |
   |   9216     |     135 |   1,350  |    13,505 |    135,048 |
   +------------+---------+----------+-----------+------------+

A.2. SONET

   Similarly for SONET, if X is the target frame size and O the frame
   size overhead, the recommended formula for calculating the maximum
   theoretical frame rate is:

                     Line Rate (bps)
      ---------------------------------------
      (8 bits/byte) * (X + O + 1) bytes/frame

   The calculation formula is based on the recommendation of [RFC5180]
   in Appendix A.2.

   As an example, the frame rate recommended for testing a 6in4
   implementation over a 10 Mb/s PoS interface with 64-byte frames is:

                  10,000,000 (bps)
      ------------------------------------------ = 14,706 fps
      (8 bits/byte) * (64 + 20 + 1) bytes/frame

   The complete list of recommended frame rates for 6in4 encapsulation
   can be found in the following table:

   +------------+---------+----------+-----------+------------+
   | Frame size | 10 Mb/s | 100 Mb/s | 1000 Mb/s | 10000 Mb/s |
   |  (bytes)   |  (fps)  |  (fps)   |   (fps)   |   (fps)    |
   +------------+---------+----------+-----------+------------+
   |     47     |  18,382 | 183,824  | 1,838,235 | 18,382,353 |
   |     64     |  14,706 | 147,059  | 1,470,588 | 14,705,882 |
   |    128     |   8,389 |  83,893  |   838,926 |  8,389,262 |
   |    256     |   4,513 |  45,126  |   451,264 |  4,512,635 |
   |    512     |   2,345 |  23,452  |   234,522 |  2,345,216 |
   |   1024     |   1,196 |  11,962  |   119,617 |  1,196,172 |
   |   2048     |     604 |   6,042  |    60,416 |    604,157 |
   |   4096     |     304 |   3,036  |    30,362 |    303,619 |
   +------------+---------+----------+-----------+------------+
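   The following non-normative Python script reproduces the
   calculations of this appendix for both media types, assuming the
   20-byte 6in4 overhead. The Ethernet formula adds 20 bytes per frame
   (preamble plus inter-frame gap) and the SONET formula adds 1 byte,
   as in the [RFC5180] formulas.

      # Theoretical maximum frame rates per Appendix A.
      def max_frame_rate(line_rate_bps, frame_size, overhead,
                         per_frame):
          """Maximum rate in frames per second for one media type."""
          return line_rate_bps / (8 * (frame_size + overhead
                                       + per_frame))

      # Ethernet column for 10 Mb/s with 6in4 (O = 20, per-frame 20):
      for size in (64, 128, 256, 512, 1024, 1280, 1518, 1522, 2048,
                   4096, 8192, 9216):
          fps = max_frame_rate(10_000_000, size, overhead=20,
                               per_frame=20)
          print(f"Ethernet 10 Mb/s, {size} bytes: {fps:,.0f} fps")

      # SONET example (64-byte frames, 10 Mb/s PoS): about 14,706 fps
      print(max_frame_rate(10_000_000, 64, overhead=20, per_frame=1))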
Authors' Addresses

   Marius Georgescu
   Nara Institute of Science and Technology (NAIST)
   Takayama 8916-5
   Nara
   Japan

   Phone: +81 743 72 5216
   Email: liviumarius-g@is.naist.jp