Benchmarking Working Group                                  M. Georgescu
Internet Draft                                                L. Pislaru
Intended status: Informational                                   RCS&RDS
Expires: October 2017                                          G. Lencse
                                             Szechenyi Istvan University
                                                          April 29, 2017

       Benchmarking Methodology for IPv6 Transition Technologies
            draft-ietf-bmwg-ipv6-tran-tech-benchmarking-07.txt

Abstract

   There are benchmarking methodologies addressing the performance of
   network interconnect devices that are IPv4- or IPv6-capable, but the
   IPv6 transition technologies are outside of their scope. This
   document provides complementary guidelines for evaluating the
   performance of IPv6 transition technologies. More specifically, this
   document targets IPv6 transition technologies that employ
   encapsulation or translation mechanisms, as dual-stack nodes can be
   tested using the recommendations of RFC2544 and RFC5180. The
   methodology also includes a metric for benchmarking load
   scalability.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on October 29, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. IPv6 Transition Technologies
   2. Conventions used in this document
   3. Terminology
   4. Test Setup
      4.1. Single translation Transition Technologies
      4.2. Encapsulation/Double translation Transition Technologies
   5. Test Traffic
      5.1. Frame Formats and Sizes
         5.1.1. Frame Sizes to Be Used over Ethernet
      5.2. Protocol Addresses
      5.3. Traffic Setup
   6. Modifiers
   7. Benchmarking Tests
      7.1. Throughput
      7.2. Latency
      7.3. Packet Delay Variation
         7.3.1. PDV
         7.3.2. IPDV
      7.4. Frame Loss Rate
      7.5. Back-to-back Frames
      7.6. System Recovery
      7.7. Reset
   8. Additional Benchmarking Tests for Stateful IPv6 Transition
      Technologies
      8.1. Concurrent TCP Connection Capacity
      8.2. Maximum TCP Connection Establishment Rate
   9. DNS Resolution Performance
      9.1. Test and Traffic Setup
      9.2. Benchmarking DNS Resolution Performance
         9.2.1. Requirements for the Tester
   10. Overload Scalability
      10.1. Test Setup
         10.1.1. Single Translation Transition Technologies
         10.1.2. Encapsulation/Double Translation Transition
                 Technologies
      10.2. Benchmarking Performance Degradation
         10.2.1. Network performance degradation with simultaneous load
         10.2.2. Network performance degradation with incremental load
   11. NAT44 and NAT66
   12. Summarizing function and variation
   13. Security Considerations
   14. IANA Considerations
   15. References
      15.1. Normative References
      15.2. Informative References
   16. Acknowledgements
   Appendix A. Theoretical Maximum Frame Rates

1. Introduction

   The methodologies described in [RFC2544] and [RFC5180] help vendors
   and network operators alike analyze the performance of IPv4- and
   IPv6-capable network devices. The methodology presented in [RFC2544]
   is mostly IP version independent, while [RFC5180] contains
   complementary recommendations specific to the latest IP version,
   IPv6. However, [RFC5180] does not cover IPv6 transition
   technologies.

   IPv6 is not backwards compatible, which means that IPv4-only nodes
   cannot directly communicate with IPv6-only nodes. To solve this
   issue, IPv6 transition technologies have been proposed and
   implemented.

   This document presents benchmarking guidelines dedicated to IPv6
   transition technologies. The benchmarking tests can provide insights
   about the performance of these technologies, which can act as useful
   feedback for developers, as well as for network operators going
   through the IPv6 transition process.

   The document also includes an approach to quantify performance when
   operating in overload. Overload scalability can be defined as a
   system's ability to gracefully accommodate greater numbers of flows
   than the maximum number the Device Under Test (DUT) can handle under
   normal operation.
   The approach taken here is to quantify overload scalability by
   measuring the performance under an excessive number of network
   flows and comparing it to the non-overloaded case.

1.1. IPv6 Transition Technologies

   Two of the basic transition technologies, dual IP layer (also known
   as dual stack) and encapsulation, are presented in [RFC4213].
   IPv4/IPv6 Translation is presented in [RFC6144]. Most of the
   transition technologies employ at least one variation of these
   mechanisms. In this context, a generic classification of the
   transition technologies can prove useful.

   We can consider a production network transitioning to IPv6 as being
   constructed from the following IP domains:

   o  Domain A: IPvX specific domain

   o  Core domain: which may be IPvY specific or dual-stack (IPvX and
      IPvY)

   o  Domain B: IPvX specific domain

   Note: X and Y are part of the set {4,6}, and X is not equal to Y.

   According to the technology used for the core domain traversal, the
   transition technologies can be categorized as follows:

   1. Dual-stack: the core domain devices implement both IP protocols.

   2. Single translation: In this case, the production network is
      assumed to have only two domains, Domain A and the Core domain.
      The core domain is assumed to be IPvY specific. IPvX packets are
      translated to IPvY at the edge between Domain A and the Core
      domain.

   3. Double translation: The production network is assumed to have
      all three domains; Domains A and B are IPvX specific, while the
      core domain is IPvY specific. A translation mechanism is
      employed for the traversal of the core network. The IPvX packets
      are translated to IPvY packets at the edge between Domain A and
      the Core domain. Subsequently, the IPvY packets are translated
      back to IPvX at the edge between the Core domain and Domain B.

   4. Encapsulation: The production network is assumed to have all
      three domains; Domains A and B are IPvX specific, while the core
      domain is IPvY specific. An encapsulation mechanism is used to
      traverse the core domain. The IPvX packets are encapsulated in
      IPvY packets at the edge between Domain A and the Core domain.
      Subsequently, the IPvY packets are de-encapsulated at the edge
      between the Core domain and Domain B.

   The performance of Dual-stack transition technologies can be fully
   evaluated using the benchmarking methodologies presented by
   [RFC2544] and [RFC5180]. Consequently, this document focuses on the
   other 3 categories: Single translation, Encapsulation, and Double
   translation transition technologies.

   Another important aspect by which the IPv6 transition technologies
   can be categorized is their use of stateful or stateless mapping
   algorithms. The technologies that use stateful mapping algorithms
   (e.g. Stateful NAT64 [RFC6146]) create dynamic correlations between
   IP addresses or {IP address, transport protocol, transport port
   number} tuples, which are stored in a state table. For ease of
   reference, the IPv6 transition technologies which employ stateful
   mapping algorithms will be called stateful IPv6 transition
   technologies. The efficiency with which the state table is managed
   can be an important performance indicator for these technologies.
   Hence, for the stateful IPv6 transition technologies, additional
   benchmarking tests are RECOMMENDED.

   Table 1 contains the generic categories as well as associations
   with some of the IPv6 transition technologies proposed in the IETF.

           Table 1. IPv6 Transition Technologies Categories
   +---+--------------------+-------------------------------------+
   |   | Generic category   | IPv6 Transition Technology          |
   +---+--------------------+-------------------------------------+
   | 1 | Dual-stack         | Dual IP Layer Operations [RFC4213]  |
   +---+--------------------+-------------------------------------+
   | 2 | Single translation | NAT64 [RFC6146], IVI [RFC6219]      |
   +---+--------------------+-------------------------------------+
   | 3 | Double translation | 464XLAT [RFC6877], MAP-T [RFC7599]  |
   +---+--------------------+-------------------------------------+
   | 4 | Encapsulation      | DS-Lite [RFC6333], MAP-E [RFC7597], |
   |   |                    | Lightweight 4over6 [RFC7596],       |
   |   |                    | 6RD [RFC5569], 6PE [RFC4798],       |
   |   |                    | 6VPE [RFC4659]                      |
   +---+--------------------+-------------------------------------+

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS. Lower case uses of these words are not to be
   interpreted as carrying [RFC2119] significance.

   Although these terms are usually associated with protocol
   requirements, in this document the terms are requirements for users
   and systems that intend to implement the test conditions and claim
   conformance with this specification.

3. Terminology

   A number of terms used in this memo have been defined in other
   RFCs. Please refer to those RFCs for definitions, testing
   procedures, and reporting formats.
   Throughput (Benchmark) - [RFC2544]

   Frame Loss Rate (Benchmark) - [RFC2544]

   Back-to-back Frames (Benchmark) - [RFC2544]

   System Recovery (Benchmark) - [RFC2544]

   Reset (Benchmark) - [RFC6201]

   Concurrent TCP Connection Capacity (Benchmark) - [RFC3511]

   Maximum TCP Connection Establishment Rate (Benchmark) - [RFC3511]

4. Test Setup

   The test environment setup options recommended for IPv6 transition
   technologies benchmarking are very similar to the ones presented in
   Section 6 of [RFC2544]. For the tester setup, the options presented
   in [RFC2544] and [RFC5180] can be applied here as well. However,
   the Device Under Test (DUT) setup options should be explained in
   the context of the targeted categories of IPv6 transition
   technologies: Single translation, Double translation, and
   Encapsulation transition technologies.

   Although both single tester and sender/receiver setups are
   applicable to this methodology, the single tester setup will be
   used to describe the DUT setup options.

   For the test setups presented in this memo, dynamic routing SHOULD
   be employed. However, the presence of routing and management frames
   can represent unwanted background data that can affect the
   benchmarking result. To that end, the procedures defined in
   [RFC2544] (Sections 11.2 and 11.3) related to routing and
   management frames SHOULD be used here. Moreover, the "Trial
   description" recommendations presented in [RFC2544] (Section 23)
   are also valid for this memo.

   In terms of route setup, the recommendations of [RFC2544] Section
   13 are valid for this document, assuming that an IPv6 version of
   the routing packets shown in Appendix C.2.6.2 is used.

4.1. Single translation Transition Technologies

   For the evaluation of Single translation transition technologies, a
   single DUT setup (see Figure 1) SHOULD be used.
   The DUT is responsible for translating the IPvX packets into IPvY
   packets. In this context, the tester device SHOULD be configured to
   support both IPvX and IPvY.

                    +--------------------+
                    |                    |
      +-------------|IPvX   tester  IPvY |<------------+
      |             |                    |             |
      |             +--------------------+             |
      |                                                |
      |             +--------------------+             |
      |             |                    |             |
      +------------>|IPvX    DUT    IPvY |-------------+
                    |                    |
                    +--------------------+

                    Figure 1. Test setup 1

4.2. Encapsulation/Double translation Transition Technologies

   For evaluating the performance of Encapsulation and Double
   translation transition technologies, a dual DUT setup (see Figure
   2) SHOULD be employed. The tester creates a network flow of IPvX
   packets. The first DUT is responsible for the encapsulation or
   translation of IPvX packets into IPvY packets. The IPvY packets are
   de-encapsulated/translated back to IPvX packets by the second DUT
   and forwarded to the tester.

                       +--------------------+
                       |                    |
   +-------------------|IPvX   tester  IPvX |<------------------+
   |                   |                    |                   |
   |                   +--------------------+                   |
   |                                                            |
   |    +--------------------+      +--------------------+      |
   |    |                    |      |                    |      |
   +--->|IPvX  DUT 1    IPvY |----->|IPvY  DUT 2    IPvX |------+
        |                    |      |                    |
        +--------------------+      +--------------------+

                       Figure 2. Test setup 2

   One of the limitations of the dual DUT setup is its inability to
   reflect asymmetries in behavior between the DUTs. Considering this,
   additional performance tests SHOULD be performed using the single
   DUT setup.

   Note: For encapsulation IPv6 transition technologies, in the single
   DUT setup, in order to test the de-encapsulation efficiency, the
   tester SHOULD be able to send IPvX packets encapsulated as IPvY.

5. Test Traffic

   The test traffic represents the experimental workload and SHOULD
   meet the requirements specified in this section. The requirements
   are dedicated to unicast IP traffic.
   Multicast IP traffic is outside the scope of this document.

5.1. Frame Formats and Sizes

   [RFC5180] describes the frame size requirements for two commonly
   used media types: Ethernet and SONET (Synchronous Optical Network).
   [RFC2544] also covers other media types, such as token ring and
   FDDI. The recommendations of the two documents can be used for the
   dual-stack transition technologies. For the rest of the transition
   technologies, the frame overhead introduced by translation or
   encapsulation MUST be considered.

   The encapsulation/translation process generates different frame
   sizes on different segments of the test setup. For instance, the
   single translation transition technologies will create different
   frame sizes on the receiving segment of the test setup, as IPvX
   packets are translated to IPvY. This is not a problem if the
   bandwidth of the employed media is not exceeded. To prevent
   exceeding the limitations imposed by the media, the frame size
   overhead needs to be taken into account when calculating the
   maximum theoretical frame rates. The calculation method for
   Ethernet, as well as a calculation example, are detailed in
   Appendix A. The details of the media employed for the benchmarking
   tests MUST be noted in all test reports.

   In the context of frame size overhead, MTU recommendations are
   needed in order to avoid frame loss due to MTU mismatch between the
   virtual encapsulation/translation interfaces and the physical
   network interface controllers (NICs). To avoid this situation, the
   larger MTU between the physical NICs and the virtual
   encapsulation/translation interfaces SHOULD be set for all
   interfaces of the DUT and tester. To be more specific, the minimum
   IPv6 MTU size (1280 bytes) plus the encapsulation/translation
   overhead is the RECOMMENDED value for the physical interfaces as
   well as the virtual ones.

5.1.1. Frame Sizes to Be Used over Ethernet

   Based on the recommendations of [RFC5180], the following frame
   sizes SHOULD be used for benchmarking IPvX/IPvY traffic on Ethernet
   links: 64, 128, 256, 512, 768, 1024, 1280, 1518, 1522, 2048, 4096,
   8192 and 9216 bytes.

   Note: for single translation transition technologies (e.g. NAT64),
   in the IPv6 -> IPv4 translation direction, 64 byte frames SHOULD be
   replaced by 84 byte frames. This allows the frames to be
   transported over media such as the ones described by the IEEE
   802.1Q standard, and it also allows the implementation of a frame
   identifier in the UDP data.

   The theoretical maximum frame rates, considering an example of
   frame overhead, are presented in Appendix A.

5.2. Protocol Addresses

   The selected protocol addresses should follow the recommendations
   of [RFC5180] (Section 5) for IPv6 and [RFC2544] (Section 12) for
   IPv4.

   Note: testing traffic with extension headers might not be possible
   for the transition technologies which employ translation. Proposed
   IPvX/IPvY translation algorithms, such as IP/ICMP translation
   [RFC7915], do not support the use of extension headers.

5.3. Traffic Setup

   Following the recommendations of [RFC5180], all described tests
   SHOULD be performed with bi-directional traffic. Uni-directional
   traffic tests MAY also be performed for a fine grained performance
   assessment.

   Because of the simplicity of UDP, UDP measurements offer a more
   reliable basis for comparison than other transport layer protocols.
   Consequently, for the benchmarking tests described in Section 7 of
   this document, UDP traffic SHOULD be employed.
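The frame-overhead arithmetic discussed above can be sketched as follows. This is illustrative only and not part of the methodology; it assumes Ethernet's 20 bytes of per-frame overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap) and a caller-supplied `overhead` value standing in for the translation/encapsulation growth:

```python
# Illustrative sketch: theoretical maximum Ethernet frame rates.
# The 20 extra bytes per frame are preamble (7) + SFD (1) + inter-
# frame gap (12); the frame size itself already includes the FCS.

def max_frame_rate(frame_size, line_rate_bps=10**9):
    """Theoretical maximum frames/s for an L2 frame of `frame_size` bytes."""
    per_frame_bits = (frame_size + 20) * 8
    return line_rate_bps / per_frame_bits

def max_rate_with_overhead(frame_size, overhead, line_rate_bps=10**9):
    """When translation/encapsulation grows the frame by `overhead` bytes
    on one segment, the slower segment caps the achievable rate."""
    return min(max_frame_rate(frame_size, line_rate_bps),
               max_frame_rate(frame_size + overhead, line_rate_bps))

# e.g. 64-byte frames on Gigabit Ethernet: 10**9 / ((64+20)*8) frames/s
```

The second function makes the point of this section concrete: the maximum theoretical rate for a test MUST be computed from the larger frame that appears on the wire after translation or encapsulation, not from the frame size the tester emits.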
   Considering that a transition technology could process both native
   IPv6 traffic and translated/encapsulated traffic, the following
   traffic setups are recommended:

   i)   IPvX only traffic (where the IPvX traffic is to be
        translated/encapsulated by the DUT)
   ii)  90% IPvX traffic and 10% IPvY native traffic
   iii) 50% IPvX traffic and 50% IPvY native traffic
   iv)  10% IPvX traffic and 90% IPvY native traffic

   For the benchmarks dedicated to stateful IPv6 transition
   technologies, included in Section 8 of this memo (Concurrent TCP
   Connection Capacity and Maximum TCP Connection Establishment Rate),
   the traffic SHOULD follow the recommendations of [RFC3511],
   Sections 5.2.2.2 and 5.3.2.2.

6. Modifiers

   The idea of testing under different operational conditions was
   first introduced in [RFC2544] (Section 11) and represents an
   important aspect of benchmarking network elements, as it emulates,
   to some extent, the conditions of a production environment. Section
   6 of [RFC5180] describes complementary testing conditions specific
   to IPv6. Those recommendations can also be followed for IPv6
   transition technologies testing.

7. Benchmarking Tests

   The following sub-sections contain the list of all recommended
   benchmarking tests.

7.1. Throughput

   Use Section 26.1 of [RFC2544] unmodified.

7.2. Latency

   Objective: To determine the latency. Typical latency is based on
   the definitions of latency from [RFC1242]. However, this memo
   provides a new measurement procedure.

   Procedure: Similar to [RFC2544], the throughput for the DUT at each
   of the listed frame sizes SHOULD be determined. Send a stream of
   frames at a particular frame size through the DUT at the determined
   throughput rate to a specific destination. The stream SHOULD be at
   least 120 seconds in duration.

   Identifying tags SHOULD be included in at least 500 frames after 60
   seconds.
   For each tagged frame, the time at which the frame was fully
   transmitted (timestamp A) and the time at which the frame was
   received (timestamp B) MUST be recorded. The latency is timestamp B
   minus timestamp A, as per the relevant definition from [RFC1242],
   namely latency as defined for store and forward devices or latency
   as defined for bit forwarding devices.

   We recommend encoding the identifying tag in the payload of the
   frame. To be more exact, the identifier SHOULD be inserted after
   the UDP header.

   From the resulting (at least 500) latency values, 2 quantities
   SHOULD be calculated. One is the typical latency, which SHOULD be
   calculated with the following formula:

      TL = Median(Li)

   Where: TL - the reported typical latency of the stream

          Li - the latency of tagged frame i

   The other measure is the worst case latency, which SHOULD be
   calculated with the following formula:

      WCL = L99.9thPercentile

   Where: WCL - the reported worst case latency

          L99.9thPercentile - the 99.9th percentile of the measured
          latencies of the stream

   The test MUST be repeated at least 20 times, with the reported
   value being the median of the recorded values for TL and WCL.

   Reporting Format: The report MUST state which definition of latency
   (from [RFC1242]) was used for this test. The summarized latency
   results SHOULD be reported in the format of a table with a row for
   each of the tested frame sizes. There SHOULD be columns for the
   frame size, the rate at which the latency test was run for that
   frame size, the media types tested, and the resultant typical
   latency and worst case latency values for each type of data stream
   tested. To account for the variation, the 1st and 99th percentiles
   of the 20 iterations MAY be reported in two separate columns. For a
   fine grained analysis, the histogram (as exemplified in [RFC5481]
   Section 4.4) of one of the iterations MAY be displayed.
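A minimal sketch of the summarization above, assuming `latencies` holds the per-tagged-frame values (timestamp B minus timestamp A) of one trial and `trials` holds the (TL, WCL) pairs of at least 20 repetitions. The nearest-rank percentile method is an assumption of the sketch; the memo does not mandate one:

```python
import statistics

def percentile(samples, p):
    """Nearest-rank p-th percentile (assumed method; not mandated here)."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def summarize_trial(latencies):
    """TL = Median(Li), WCL = 99.9th percentile of one stream's latencies."""
    tl = statistics.median(latencies)
    wcl = percentile(latencies, 99.9)
    return tl, wcl

def summarize_test(trials):
    """Report the median TL and median WCL over the >= 20 repetitions."""
    return (statistics.median(t[0] for t in trials),
            statistics.median(t[1] for t in trials))
```

Note how the two summarizing functions mirror the two levels of the procedure: one per stream, one over the repeated trials.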
7.3. Packet Delay Variation

   Considering two of the metrics presented in [RFC5481], Packet Delay
   Variation (PDV) and Inter Packet Delay Variation (IPDV), it is
   RECOMMENDED to measure PDV. For a fine grained analysis of delay
   variation, IPDV measurements MAY be performed as well.

7.3.1. PDV

   Objective: To determine the Packet Delay Variation as defined in
   [RFC5481].

   Procedure: As described by [RFC2544], first determine the
   throughput for the DUT at each of the listed frame sizes. Send a
   stream of frames at a particular frame size through the DUT at the
   determined throughput rate to a specific destination. The stream
   SHOULD be at least 60 seconds in duration. Measure the One-way
   delay, as described by [RFC3393], for all frames in the stream.
   Calculate the PDV of the stream using the formula:

      PDV = D99.9thPercentile - Dmin

   Where: D99.9thPercentile - the 99.9th percentile (as described in
          [RFC5481]) of the One-way delay for the stream

          Dmin - the minimum One-way delay in the stream

   As recommended in [RFC2544], the test MUST be repeated at least 20
   times, with the reported value being the median of the recorded
   values. Moreover, the 1st and 99th percentiles SHOULD be calculated
   to account for the variation of the dataset.

   Reporting Format: The PDV results SHOULD be reported in a table
   with a row for each of the tested frame sizes and columns for the
   frame size and the applied frame rate for the tested media types.
   Two columns for the 1st and 99th percentile values MAY be
   displayed. Following the recommendations of [RFC5481], the
   RECOMMENDED units of measurement are milliseconds.

7.3.2. IPDV

   Objective: To determine the Inter Packet Delay Variation as defined
   in [RFC5481].

   Procedure: As described by [RFC2544], first determine the
   throughput for the DUT at each of the listed frame sizes. Send a
   stream of frames at a particular frame size through the DUT at the
   determined throughput rate to a specific destination. The stream
   SHOULD be at least 60 seconds in duration. Measure the One-way
   delay, as described by [RFC3393], for all frames in the stream.
   Calculate the IPDV for each of the frames using the formula:

      IPDV(i) = D(i) - D(i-1)

   Where: D(i) - the One-way delay of the i-th frame in the stream

          D(i-1) - the One-way delay of the (i-1)-th frame in the
          stream

   Given the nature of IPDV, reporting a single number might lead to
   over-summarization. In this context, the report for each
   measurement SHOULD include 3 values: Dmin, Dmed, and Dmax

   Where: Dmin - the minimum IPDV in the stream

          Dmed - the median IPDV of the stream

          Dmax - the maximum IPDV in the stream

   The test MUST be repeated at least 20 times. To summarize the 20
   repetitions, the median value SHOULD be reported for each of the 3
   quantities (Dmin, Dmed, and Dmax).

   Reporting format: The IPDV results SHOULD be reported in a table
   with a row for each of the tested frame sizes. The columns SHOULD
   include the frame size and associated frame rate for the tested
   media types, with sub-columns for the three proposed reported
   values. Following the recommendations of [RFC5481], the RECOMMENDED
   units of measurement are milliseconds.

7.4. Frame Loss Rate

   Use Section 26.3 of [RFC2544] unmodified.

7.5. Back-to-back Frames

   Use Section 26.4 of [RFC2544] unmodified.

7.6. System Recovery

   Use Section 26.5 of [RFC2544] unmodified.

7.7. Reset

   Use Section 4 of [RFC6201] unmodified.

8. Additional Benchmarking Tests for Stateful IPv6 Transition
   Technologies

   This section describes additional tests dedicated to the stateful
   IPv6 transition technologies.
   For the tests described in this section, the DUT devices SHOULD
   follow the test setup and test parameter recommendations presented
   in [RFC3511] (Sections 5.2 and 5.3).

   The following additional tests SHOULD be performed.

8.1. Concurrent TCP Connection Capacity

   Use Section 5.2 of [RFC3511] unmodified.

8.2. Maximum TCP Connection Establishment Rate

   Use Section 5.3 of [RFC3511] unmodified.

9. DNS Resolution Performance

   This section describes benchmarking tests dedicated to DNS64 (see
   [RFC6147]), used as DNS support for single translation technologies
   such as NAT64.

9.1. Test and Traffic Setup

   The test setup in Figure 3 follows the setup proposed for single
   translation IPv6 transition technologies in Figure 1.

        1:AAAA query     +--------------------+
      +------------------|                    |<-------------+
      |                  |IPv6  Tester   IPv4 |              |
      |    +------------>|                    |----------+   |
      |    |             +--------------------+  3:empty |   |
      |    | 6:synt'd                            AAAA    |   |
      |    |    AAAA     +--------------------+ 5:valid A|   |
      |    +-------------|                    |<---------+   |
      |                  |IPv6   DUT     IPv4 |              |
      +----------------->|      (DNS64)       |--------------+
                         +--------------------+
                           2:AAAA query, 4:A query

                      Figure 3. DNS64 test setup

   The test traffic SHOULD follow these steps:

   1. Query for the AAAA record of a domain name (from client to DNS64
      server)

   2. Query for the AAAA record of the same domain name (from DNS64
      server to authoritative DNS server)

   3. Empty AAAA record answer (from authoritative DNS server to DNS64
      server)

   4. Query for the A record of the same domain name (from DNS64
      server to authoritative DNS server)

   5. Valid A record answer (from authoritative DNS server to DNS64
      server)

   6. Synthesized AAAA record answer (from DNS64 server to client)

   The Tester plays the role of the DNS client as well as the
   authoritative DNS server. It MAY be realized as a single physical
   device, or alternatively, two physical devices MAY be used.
692 Please note that: 694 - If the DNS64 server implements caching and there is a cache 695 hit, then step 1 is followed by step 6 (and steps 2 through 5 696 are omitted). 697 - If the domain name has an AAAA record, then it is returned in 698 step 3 by the authoritative DNS server; steps 4 and 5 are 699 omitted, and the DNS64 server does not synthesize an AAAA 700 record, but returns the received AAAA record to the client. 702 - As for the IP version used between the tester and the DUT, IPv6 703 MUST be used between the client and the DNS64 server (as a 704 DNS64 server provides service for an IPv6-only client), but 705 either IPv4 or IPv6 MAY be used between the DNS64 server and 706 the authoritative DNS server. 708 9.2. Benchmarking DNS Resolution Performance 710 Objective: To determine DNS64 performance by means of the maximum 711 number of successfully processed DNS requests per second. 713 Procedure: Send a specific number of DNS queries at a specific rate 714 to the DUT and then count the replies from the DUT that are received in 715 time (within a predefined timeout period from the sending time of the 716 corresponding query, with a default value of 1 second) and are valid 717 (contain an AAAA record). If the count of sent queries is equal to 718 the count of received replies, the rate of the queries is raised and 719 the test is rerun. If fewer replies are received than queries were 720 sent, the rate of the queries is reduced and the test is rerun. The 721 duration of each trial SHOULD be at least 60 seconds. This will 722 reduce the potential gain of a DNS64 server, which could otherwise 723 exhibit higher performance by queueing the requests and thus 724 also using the timeout period to answer them. For the same 725 reason, no timeout higher than 1 second SHOULD be used. For 726 further considerations, see [Lencse1].
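The iterative rate adjustment described above is, in effect, a search for the highest query rate at which no reply is lost. A binary-search sketch of it, where `run_trial` is a hypothetical stand-in for one trial of at least 60 seconds that returns the count of valid, in-time replies:

```python
def max_query_rate(run_trial, queries_sent, low=1, high=1_000_000):
    """Binary search for the fastest rate (queries/s) at which the
    count of valid, in-time replies equals the count of queries sent.
    run_trial(rate) stands in for one >= 60 s trial at that rate."""
    best = 0
    while low <= high:
        rate = (low + high) // 2
        if run_trial(rate) == queries_sent:  # no loss: try a higher rate
            best, low = rate, rate + 1
        else:                                # loss: back off
            high = rate - 1
    return best

# A mock DUT that answers everything up to 40,000 queries/s:
rate = max_query_rate(lambda r: 1000 if r <= 40_000 else 999, 1000)
```

The search bounds and the mock DUT above are illustrative assumptions; any strategy that raises the rate on success and reduces it on loss, as the procedure specifies, converges on the same result.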
728 The maximum number of processed DNS queries per second is the 729 fastest rate at which the count of DNS replies sent by the DUT is 730 equal to the number of DNS queries sent to it by the test equipment. 732 The test SHOULD be repeated at least 20 times and the median and the 733 1st and 99th percentiles of the number of processed DNS queries per second 734 SHOULD be calculated. 736 Details and parameters: 738 1. Caching 739 First, all the DNS queries MUST contain different domain names (or 740 domain names MUST NOT be repeated before the cache of the DUT is 741 exhausted). Then new tests MAY be executed with domain names, 20%, 742 40%, 60%, 80% and 100% of which are cached. We note that ensuring that a 743 record is cached requires repeating the query both "late enough" after 744 the first query, so that the name has already been resolved and is present 745 in the cache, and "early enough", so that it is still present in the cache. 747 2. Existence of AAAA record 748 First, all the DNS queries MUST contain domain names which do not 749 have an AAAA record and have exactly one A record. 751 Then new tests MAY be executed with domain names, 20%, 40%, 60%, 80% 752 and 100% of which have an AAAA record. 754 Please note that the two conditions above are orthogonal, thus all 755 their combinations are possible and MAY be tested. The testing with 756 0% cached domain names and with 0% existing AAAA record is REQUIRED 757 and the other combinations are OPTIONAL. (When all the domain names 758 are cached, the results do not depend on what percentage of the 759 domain names have AAAA records, thus these combinations are not 760 worth testing one by one.) 762 Reporting format: The primary result of the DNS64 test is the median 763 of the number of processed DNS queries per second measured with the 764 above-mentioned "0% + 0% combination". The median SHOULD be 765 complemented with the 1st and 99th percentiles to show the stability 766 of the result.
If optional tests are done, the median and the 1st 767 and 99th percentiles MAY be presented in a two-dimensional table 768 where the dimensions are the proportion of the repeated domain names 769 and the proportion of the DNS names having AAAA records. The two 770 table headings SHOULD contain these percentage values. 771 Alternatively, the results MAY be presented as the corresponding two- 772 dimensional graph, too. In this case, the graph SHOULD show the 773 median values with the percentiles as error bars. From both the 774 table and the graph, one-dimensional excerpts MAY be made at any 775 given fixed percentage value of the other dimension. In this case, 776 the fixed value MUST be given together with a one-dimensional table 777 or graph. 779 9.2.1. Requirements for the Tester 781 Before a Tester can be used for testing a DUT at rate r queries per 782 second with a timeout of t seconds, it MUST perform a self-test in order 783 to exclude the possibility that the poor performance of the Tester 784 itself influences the results. For performing a self-test, the 785 Tester is looped back (leaving out the DUT) and its authoritative DNS 786 server subsystem is configured to be able to answer all the AAAA 787 record queries. For passing the self-test, the Tester SHOULD be able 788 to answer AAAA record queries at a 2*(r+delta) rate within a 0.25*t 789 timeout, where the value of delta is at least 0.1. 791 Explanation: When performing DNS64 testing, each AAAA record query 792 may result in at most two queries sent by the DUT: the first one 793 is for an AAAA record and the second one is for an A record 794 (they are both sent when there is no cache hit and no AAAA 795 record exists). The parameters above guarantee that the 796 authoritative DNS server subsystem of the Tester is able to answer the 797 queries at the required frequency using not more than half of 798 the timeout.
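The self-test criterion can be illustrated numerically. The sketch below simply mirrors the 2*(r+delta) and 0.25*t expressions above (the function name is ours):

```python
def self_test_targets(r, t, delta=0.1):
    """Rate (queries/s) and timeout (s) the Tester's authoritative
    DNS server subsystem has to sustain in the self-test:
    2*(r+delta) queries/s within a 0.25*t timeout (delta >= 0.1)."""
    return 2 * (r + delta), 0.25 * t

# Testing a DUT at 10,000 queries/s with a 1 s timeout requires the
# Tester to answer about 20,000 queries/s within a 0.25 s timeout.
rate, timeout = self_test_targets(10_000, 1.0)
```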
800 Remark: A sample open-source test program, dns64perf++, is available 801 from [Dns64perf] and it is documented in [Lencse2]. It implements 802 only the client part of the Tester and it should be used together 803 with an authoritative DNS server implementation, e.g., BIND, NSD or 804 YADIFA. Its experimental extension for testing caching is available 805 from [Lencse3] and it is documented in [Lencse4]. 807 10. Overload Scalability 809 Scalability has often been discussed; however, in the context of 810 network devices, a formal definition or a measurement method has not 811 yet been proposed. In this context, we can define overload 812 scalability as the ability of a transition technology to 813 accommodate network growth. Poor scalability usually leads to poor 814 performance. Considering this, overload scalability can be measured 815 by quantifying the network performance degradation associated with 816 an increased number of network flows. 818 The following subsections describe how the test setups can be 819 modified to create network growth and how the associated performance 820 degradation can be quantified. 822 10.1. Test Setup 824 The test setups defined in Section 3 have to be modified to create 825 network growth. 827 10.1.1. Single Translation Transition Technologies 829 In the case of single translation transition technologies, the 830 network growth can be generated by increasing the number of network 831 flows generated by the tester machine (see Figure 4). 833 +-------------------------+ 834 +------------|NF1 NF1|<-------------+ 835 | +---------|NF2 tester NF2|<----------+ | 836 | | ...| | | | 837 | | +-----|NFn NFn|<------+ | | 838 | | | +-------------------------+ | | | 839 | | | | | | 840 | | | +-------------------------+ | | | 841 | | +---->|NFn NFn|-------+ | | 842 | | ...| DUT | | | 843 | +-------->|NF2 (translator) NF2|-----------+ | 844 +----------->|NF1 NF1|--------------+ 845 +-------------------------+ 846 Figure 4.
Test setup 3 848 10.1.2. Encapsulation/Double Translation Transition Technologies 850 Similarly, for the encapsulation/double translation technologies, a 851 multi-flow setup is recommended. Considering a multipoint-to-point 852 scenario, for most transition technologies, one of the edge nodes is 853 designed to support more than one connecting device. Hence, the 854 recommended test setup is an n:1 design, where n is the number of 855 client DUTs connected to the same server DUT (see Figure 5). 857 +-------------------------+ 858 +--------------------|NF1 NF1|<--------------+ 859 | +-----------------|NF2 tester NF2|<-----------+ | 860 | | ...| | | | 861 | | +-------------|NFn NFn|<-------+ | | 862 | | | +-------------------------+ | | | 863 | | | | | | 864 | | | +-----------------+ +---------------+ | | | 865 | | +--->| NFn DUT n NFn |--->|NFn NFn| ---+ | | 866 | | +-----------------+ | | | | 867 | | ... | | | | 868 | | +-----------------+ | DUT n+1 | | | 869 | +------->| NF2 DUT 2 NF2 |--->|NF2 NF2|--------+ | 870 | +-----------------+ | | | 871 | +-----------------+ | | | 872 +---------->| NF1 DUT 1 NF1 |--->|NF1 NF1|-----------+ 873 +-----------------+ +---------------+ 874 Figure 5. Test setup 4 876 This test setup can help to quantify the scalability of the server 877 device. However, for testing the overload scalability of the client 878 DUTs, additional recommendations are needed. 879 For encapsulation transition technologies, an m:n setup can be 880 created, where m is the number of flows applied to the same client 881 device and n is the number of client devices connected to the same 882 server device. 883 For the translation-based transition technologies, the client 884 devices can be separately tested with n network flows using the test 885 setup presented in Figure 4. 887 10.2. Benchmarking Performance Degradation 889 10.2.1.
Network performance degradation with simultaneous load 891 Objective: To quantify the performance degradation introduced by n 892 parallel and simultaneous network flows. 894 Procedure: First, the benchmarking tests presented in Section 7 have 895 to be performed for one network flow. 897 The same tests have to be repeated for n network flows, where the 898 network flows are started simultaneously. The performance 899 degradation of the X benchmarking dimension SHOULD be calculated as the 900 relative performance change between the 1-flow (single flow) results 901 and the n-flow results, using the following formula: 903 Xn - X1 904 Xpd= ----------- * 100, where: X1 - result for 1-flow 905 X1 Xn - result for n-flows 907 This formula SHOULD be applied only for lower-is-better benchmarks 908 (e.g., latency). 909 For higher-is-better benchmarks (e.g., throughput), the following 910 formula is RECOMMENDED. 912 X1 - Xn 913 Xpd= ----------- * 100, where: X1 - result for 1-flow 914 X1 Xn - result for n-flows 916 As a guideline for the maximum number of flows n, the value can be 917 deduced by measuring the Concurrent TCP Connection Capacity as 918 described by [RFC3511], following the test setups specified by 919 Section 4. 921 Reporting Format: The performance degradation SHOULD be expressed as 922 a percentage. The number of tested parallel flows n MUST be clearly 923 specified. For each of the performed benchmarking tests, there 924 SHOULD be a table containing a column for each frame size. The table 925 SHOULD also state the applied frame rate. In the case of benchmarks 926 for which more than one value is reported (e.g., IPDV, Section 7.3.2), 927 a column for each of the values SHOULD be included. 929 10.2.2. Network performance degradation with incremental load 931 Objective: To quantify the performance degradation introduced by n 932 parallel and incrementally started network flows.
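The relative performance change defined by the two formulas of Section 10.2.1, which this test reuses, can be sketched as follows (an illustration only; the function name is ours):

```python
def degradation(x1, xn, higher_is_better):
    """Relative performance change (%) between the 1-flow result x1
    and the n-flow result xn, per the formulas of Section 10.2.1."""
    if higher_is_better:                # e.g., throughput
        return (x1 - xn) / x1 * 100
    return (xn - x1) / x1 * 100         # e.g., latency

# Throughput falling from 1000 to 800 fps: 20% degradation.
# Latency rising from 10 ms to 15 ms: 50% degradation.
```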
934 Procedure: First, the benchmarking tests presented in Section 7 have 935 to be performed for one network flow. 937 The same tests have to be repeated for n network flows, where the 938 network flows are started incrementally in succession, each after 939 time t. In other words, if flow i is started at time x, flow i+1 940 will be started at time x+t. Considering the time t, the time 941 duration of each iteration must be extended by the time necessary 942 to start all the flows, namely (n-1)*t. The measurement for the 943 first flow SHOULD be at least 60 seconds in duration. 945 The performance degradation of the X benchmarking dimension SHOULD 946 be calculated as the relative performance change between the 1-flow 947 results and the n-flow results, using the formula presented in 948 Section 10.2.1. Intermediary degradation measurements for 1/4*n, 1/2*n and 949 3/4*n flows MAY also be performed. 951 Reporting Format: The performance degradation SHOULD be expressed as 952 a percentage. The number of tested parallel flows n MUST be clearly 953 specified. For each of the performed benchmarking tests, there 954 SHOULD be a table containing a column for each frame size. The table 955 SHOULD also state the applied frame rate and the time duration t, used 956 as the increment step between the network flows. The units of 957 measurement for t SHOULD be seconds. A column for the intermediary 958 degradation points MAY also be displayed. In the case of benchmarks 959 for which more than one value is reported (e.g., IPDV, Section 7.3.2), 960 a column for each of the values SHOULD be included. 962 11. NAT44 and NAT66 964 Although these technologies are not within the primary scope of this 965 document, the benchmarking methodology associated with single 966 translation technologies as defined in Section 4.1 can be employed 967 to benchmark NAT44 (as defined by [RFC2663] with the behavior 968 described by [RFC7857]) implementations and NAT66 (as defined by 969 [RFC6296]) implementations. 971 12.
Summarizing function and variation 973 To ensure the stability of the benchmarking scores obtained using 974 the tests presented in Sections 7 through 9, multiple test 975 iterations are RECOMMENDED. Using a summarizing function (or measure 976 of central tendency) can be a simple and effective way to compare 977 the results obtained across different iterations. However, over- 978 summarization is an unwanted effect of reporting a single number. 980 Measuring the variation (dispersion index) can be used to counter 981 the over-summarization effect. Empirical data obtained following the 982 proposed methodology can also offer insights on which summarizing 983 function would fit best. 985 To that end, data presented in [ietf95pres] indicate the median as a 986 suitable summarizing function and the 1st and 99th percentiles as 987 suitable variation measures for DNS Resolution Performance and PDV. The 988 median and percentile calculation functions SHOULD follow the 989 recommendations of [RFC2330], Section 11.3. 991 For a fine-grained analysis of the frequency distribution of the 992 data, histograms or cumulative distribution function plots can be 993 employed. 995 13. Security Considerations 997 Benchmarking activities as described in this memo are limited to 998 technology characterization using controlled stimuli in a laboratory 999 environment, with dedicated address space and the constraints 1000 specified in the sections above. 1002 The benchmarking network topology will be an independent test setup 1003 and MUST NOT be connected to devices that may forward the test 1004 traffic into a production network, or misroute traffic to the test 1005 management network. 1007 Further, benchmarking is performed on a "black-box" basis, relying 1008 solely on measurements observable external to the DUT/SUT. Special 1009 capabilities SHOULD NOT exist in the DUT/SUT specifically for 1010 benchmarking purposes.
Any implications for network security arising 1011 from the DUT/SUT SHOULD be identical in the lab and in production 1012 networks. 1014 14. IANA Considerations 1016 The IANA has allocated the prefix 2001:2::/48 [RFC5180] for IPv6 1017 benchmarking. For IPv4 benchmarking, the 198.18.0.0/15 prefix was 1018 reserved, as described in [RFC6890]. The two ranges are sufficient 1019 for benchmarking IPv6 transition technologies. Thus, no action is 1020 requested. 1022 15. References 1024 15.1. Normative References 1026 [RFC1242] Bradner, S., "Benchmarking Terminology for Network 1027 Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242, 1028 July 1991, . 1030 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1031 Requirement Levels", BCP 14, RFC 2119, DOI 1032 10.17487/RFC2119, March 1997, . 1035 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1036 "Framework for IP performance metrics", RFC 2330, DOI 1037 10.17487/RFC2330, May 1998, . 1040 [RFC2544] Bradner, S., and J. McQuaid, "Benchmarking Methodology for 1041 Network Interconnect Devices", RFC 2544, DOI 1042 10.17487/RFC2544, March 1999, . 1045 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 1046 Metric for IP Performance Metrics (IPPM)", RFC 3393, DOI 1047 10.17487/RFC3393, November 2002, . 1050 [RFC3511] Hickman, B., Newman, D., Tadjudin, S. and T. Martin, 1051 "Benchmarking Methodology for Firewall Performance", RFC 1052 3511, DOI 10.17487/RFC3511, April 2003, . 1055 [RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D. 1056 Dugatkin, "IPv6 Benchmarking Methodology for Network 1057 Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May 1058 2008, . 1060 [RFC5481] Morton, A., and B. Claise, "Packet Delay Variation 1061 Applicability Statement", RFC 5481, DOI 10.17487/RFC5481, 1062 March 2009, . 1064 [RFC6201] Asati, R., Pignataro, C., Calabria, F. and C. Olvera, 1065 "Device Reset Characterization ", RFC 6201, DOI 1066 10.17487/RFC6201, March 2011, . 
1069 15.2. Informative References 1071 [RFC2663] Srisuresh, P., and M. Holdrege. "IP Network Address 1072 Translator (NAT) Terminology and Considerations", RFC2663, 1073 DOI 10.17487/RFC2663, August 1999, . 1076 [RFC4213] Nordmark, E. and R. Gilligan, "Basic Transition Mechanisms 1077 for IPv6 Hosts and Routers", RFC 4213, DOI 1078 10.17487/RFC4213, October 2005, . 1081 [RFC4659] De Clercq, J., Ooms, D., Carugi, M., and F. Le Faucheur, 1082 "BGP-MPLS IP Virtual Private Network (VPN) Extension for 1083 IPv6 VPN", RFC 4659, September 2006, . 1086 [RFC4798] De Clercq, J., Ooms, D., Prevost, S., and F. Le Faucheur, 1087 "Connecting IPv6 Islands over IPv4 MPLS Using IPv6 1088 Provider Edge Routers (6PE)", RFC 4798, February 2007, 1089 1091 [RFC5569] Despres, R., "IPv6 Rapid Deployment on IPv4 1092 Infrastructures (6rd)", RFC 5569, DOI 10.17487/RFC5569, 1093 January 2010, . 1095 [RFC6144] Baker, F., Li, X., Bao, C., and K. Yin, "Framework for 1096 IPv4/IPv6 Translation", RFC 6144, DOI 10.17487/RFC6144, 1097 April 2011, . 1099 [RFC6146] Bagnulo, M., Matthews, P., and I. van Beijnum, "Stateful 1100 NAT64: Network Address and Protocol Translation from IPv6 1101 Clients to IPv4 Servers", RFC 6146, DOI 10.17487/RFC6146, 1102 April 2011, . 1104 [RFC6147] Bagnulo, M., Sullivan, A., Matthews, P., and I. van 1105 Beijnum, "DNS64: DNS Extensions for Network Address 1106 Translation from IPv6 Clients to IPv4 Servers", RFC 6147, 1107 DOI 10.17487/RFC6147, April 2011, . 1110 [RFC6219] Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The 1111 China Education and Research Network (CERNET) IVI 1112 Translation Design and Deployment for the IPv4/IPv6 1113 Coexistence and Transition", RFC 6219, DOI 1114 10.17487/RFC6219, May 2011, . 1117 [RFC6296] Wasserman, M., and F. Baker. "IPv6-to-IPv6 network prefix 1118 translation." RFC6296, DOI 10.17487/RFC6296, June 2011. 1120 [RFC6333] Durand, A., Droms, R., Woodyatt, J., and Y. 
Lee, "Dual- 1121 Stack Lite Broadband Deployments Following IPv4 1122 Exhaustion", RFC 6333, DOI 10.17487/RFC6333, August 2011, 1123 . 1125 [RFC6877] Mawatari, M., Kawashima, M., and C. Byrne, "464XLAT: 1126 Combination of Stateful and Stateless Translation", RFC 1127 6877, DOI 10.17487/RFC6877, April 2013, . 1130 [RFC6890] Cotton, M., Vegoda, L., Bonica, R., and B. Haberman, 1131 "Special-Purpose IP Address Registries", BCP 153, RFC6890, 1132 DOI 10.17487/RFC6890, April 2013, . 1135 [RFC7596] Cui, Y., Sun, Q., Boucadair, M., Tsou, T., Lee, Y., and I. 1136 Farrer, "Lightweight 4over6: An Extension to the Dual- 1137 Stack Lite Architecture", RFC 7596, DOI 10.17487/RFC7596, 1138 July 2015, . 1140 [RFC7597] Troan, O., Ed., Dec, W., Li, X., Bao, C., Matsushima, S., 1141 Murakami, T., and T. Taylor, Ed., "Mapping of Address and 1142 Port with Encapsulation (MAP-E)", RFC 7597, DOI 1143 10.17487/RFC7597, July 2015, . 1146 [RFC7599] Li, X., Bao, C., Dec, W., Ed., Troan, O., Matsushima, S., 1147 and T. Murakami, "Mapping of Address and Port using 1148 Translation (MAP-T)", RFC 7599, DOI 10.17487/RFC7599, July 1149 2015, . 1151 [RFC7857] Penno, R., Perreault, S., Boucadair, M., Sivakumar, S., 1152 and K. Naito, "Updates to Network Address Translation (NAT) 1153 Behavioral Requirements", RFC 7857, DOI 10.17487/RFC7857, 1154 April 2016, . 1156 [RFC7915] Bao, C., Li, X., Baker, F., Anderson, T., and F. Gont, 1157 "IP/ICMP Translation Algorithm", RFC 7915, DOI 1158 10.17487/RFC7915, June 2016, . 1161 [Dns64perf] Bakai, D., "A C++11 DNS64 performance tester", 1162 available: https://github.com/bakaid/dns64perfpp 1164 [ietf95pres] Georgescu, M., "Benchmarking Methodology for IPv6 1165 Transition Technologies", IETF 95, Buenos Aires, 1166 Argentina, April 2016, available: 1167 https://www.ietf.org/proceedings/95/slides/slides-95-bmwg- 1168 2.pdf 1170 [Lencse1] Lencse, G., Georgescu, M. and Y.
Kadobayashi, 1171 "Benchmarking Methodology for DNS64 Servers", unpublished, 1172 revised version is available: 1173 http://www.hit.bme.hu/~lencse/publications/ECC-2017-B-M- 1174 DNS64-revised.pdf 1176 [Lencse2] Lencse, G., Bakai, D, "Design and Implementation of a Test 1177 Program for Benchmarking DNS64 Servers", IEICE 1178 Transactions on Communications, to be published (vol. 1179 E100-B, no. 6. pp. -, June 2017.), advance publication is 1180 available: http://doi.org/10.1587/transcom.2016EBN0007 1181 revised version is freely available: 1182 http://www.hit.bme.hu/~lencse/publications/IEICE-2016- 1183 dns64perfpp-revised.pdf 1185 [Lencse3] http://www.hit.bme.hu/~lencse/dns64perfppc/ 1187 [Lencse4] Lencse, G., "Enabling Dns64perf++ for Benchmarking the 1188 Caching Performance of DNS64 Servers", unpublished, review 1189 version is available: 1190 http://www.hit.bme.hu/~lencse/publications/IEICE-2016- 1191 dns64perfppc-for-review.pdf 1193 16. Acknowledgements 1195 The authors would like to thank Youki Kadobayashi and Hiroaki 1196 Hazeyama for their constant feedback and support. The thanks should 1197 be extended to the NECOMA project members for their continuous 1198 support. The thank you list should also include Emanuel Popa, Ionut 1199 Spirlea and the RCS&RDS IP/MPLS Backbone Team for their support and 1200 insights. We would also like to thank Scott Bradner for the useful 1201 suggestions. We also note that portions of text from Scott's 1202 documents were used in this memo (e.g. Latency section). A big thank 1203 you to Al Morton and Fred Baker for their detailed review of the 1204 draft and very helpful suggestions. Other helpful comments and 1205 suggestions were offered by Bhuvaneswaran Vengainathan, Andrew 1206 McGregor, Nalini Elkins, Kaname Nishizuka, Yasuhiro Ohara, Masataka 1207 Mawatari, Kostas Pentikousis, Bela Almasi, Tim Chown, Paul Emmerich 1208 and Stenio Fernandes. 
A special thank you to the RFC Editor Team for 1209 their thorough editorial review and helpful suggestions. This 1210 document was prepared using 2-Word-v2.0.template.dot. 1212 Appendix A. Theoretical Maximum Frame Rates 1214 This appendix describes the recommended calculation formulas for the 1215 theoretical maximum frame rates to be employed over Ethernet as 1216 example media. The formula takes into account the frame size 1217 overhead created by the encapsulation or the translation process. 1218 For example, the 6in4 encapsulation described in [RFC4213] adds 20 1219 bytes of overhead to each frame. 1221 Considering X to be the frame size and O to be the frame size 1222 overhead created by the encapsulation or translation process, the 1223 maximum theoretical frame rate for Ethernet can be calculated using 1224 the following formula: 1226 Line Rate (bps) 1227 ------------------------------ 1228 (8bits/byte)*(X+O+20)bytes/frame 1230 The calculation is based on the formula recommended by RFC5180 in 1231 Appendix A1.
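The formula can be expressed programmatically; a minimal sketch (the constant 20 covers the Ethernet preamble, start-of-frame delimiter and inter-frame gap, per the RFC 5180 formula):

```python
def max_frame_rate(line_rate_bps, frame_size, overhead):
    """Theoretical maximum Ethernet frame rate (frames/s), where
    frame_size is X, overhead is O (encapsulation/translation bytes)
    and 20 bytes account for preamble, SFD and inter-frame gap."""
    return line_rate_bps / (8 * (frame_size + overhead + 20))

# 6in4 (O = 20 bytes), 64-byte frames, 10 Mb/s Ethernet: ~12,019 fps
rate = max_frame_rate(10_000_000, 64, 20)
```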
As an example, the frame rate recommended for testing a 1232 6in4 implementation over 10 Mb/s Ethernet with 64-byte frames is: 1234 10,000,000(bps) 1235 ------------------------------ = 12,019 fps 1236 (8bits/byte)*(64+20+20)bytes/frame 1238 The complete list of recommended frame rates for 6in4 encapsulation 1239 can be found in the following table: 1241 +------------+---------+----------+-----------+------------+ 1242 | Frame size | 10 Mb/s | 100 Mb/s | 1000 Mb/s | 10000 Mb/s | 1243 | (bytes) | (fps) | (fps) | (fps) | (fps) | 1244 +------------+---------+----------+-----------+------------+ 1245 | 64 | 12,019 | 120,192 | 1,201,923 | 12,019,231 | 1246 | 128 | 7,440 | 74,405 | 744,048 | 7,440,476 | 1247 | 256 | 4,223 | 42,230 | 422,297 | 4,222,973 | 1248 | 512 | 2,264 | 22,645 | 226,449 | 2,264,493 | 1249 | 678 | 1,740 | 17,409 | 174,094 | 1,740,947 | 1250 | 1024 | 1,175 | 11,748 | 117,481 | 1,174,812 | 1251 | 1280 | 947 | 9,470 | 94,697 | 946,970 | 1252 | 1518 | 802 | 8,023 | 80,231 | 802,311 | 1253 | 1522 | 800 | 8,003 | 80,026 | 800,256 | 1254 | 2048 | 599 | 5,987 | 59,866 | 598,659 | 1255 | 4096 | 302 | 3,022 | 30,222 | 302,224 | 1256 | 8192 | 152 | 1,518 | 15,185 | 151,846 | 1257 | 9216 | 135 | 1,350 | 13,505 | 135,048 | 1258 +------------+---------+----------+-----------+------------+ 1260 Authors' Addresses 1261 Marius Georgescu 1262 RCS&RDS 1263 Strada Dr. Nicolae D. Staicovici 71-75 1264 Bucharest 030167 1265 Romania 1267 Phone: +40 31 005 0979 1268 Email: marius.georgescu@rcs-rds.ro 1270 Liviu Pislaru 1271 RCS&RDS 1272 Strada Dr. Nicolae D. Staicovici 71-75 1273 Bucharest 030167 1274 Romania 1276 Phone: +40 31 005 0979 1277 Email: liviu.pislaru@rcs-rds.ro 1279 Gabor Lencse 1280 Szechenyi Istvan University 1281 Egyetem ter 1. 1282 Gyor 1283 Hungary 1285 Phone: +36 20 775 8267 1286 Email: lencse@sze.hu