Internet Engineering Task Force                               L. Avramov
INTERNET-DRAFT, Intended Status: Informational                    Google
Expires: December 1, 2017                                        J. Rapp
May 30, 2017                                                      VMware

                  Data Center Benchmarking Methodology
                 draft-ietf-bmwg-dcbench-methodology-07

Abstract

   The purpose of this informational document is to establish test and
   evaluation methodology and measurement techniques for physical
   network equipment in the data center.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
     1.1. Requirements Language
     1.2. Methodology format and repeatability recommendation
   2. Line Rate Testing
     2.1. Objective
     2.2. Methodology
     2.3. Reporting Format
   3. Buffering Testing
     3.1. Objective
     3.2. Methodology
     3.3. Reporting Format
   4. Microburst Testing
     4.1. Objective
     4.2. Methodology
     4.3. Reporting Format
   5. Head of Line Blocking
     5.1. Objective
     5.2. Methodology
     5.3. Reporting Format
   6. Incast Stateful and Stateless Traffic
     6.1. Objective
     6.2. Methodology
     6.3. Reporting Format
   7. Security Considerations
   8. IANA Considerations
   9. References
     9.1. Normative References
     9.2. Informative References
     9.3. Acknowledgements
   Authors' Addresses

1. Introduction

   Traffic patterns in the data center are not uniform and are
   constantly changing. They are dictated by the nature and variety of
   applications utilized in the data center. They can be largely east-
   west traffic flows in one data center and north-south in another,
   while others may combine both. Traffic patterns can be bursty in
   nature and contain many-to-one, many-to-many, or one-to-many flows.
   Each flow may also be small and latency sensitive or large and
   throughput sensitive while containing a mix of UDP and TCP traffic.
   All of these can coexist in a single cluster and flow through a
   single network device simultaneously. Benchmarking of network
   devices has long used [RFC1242], [RFC2432], [RFC2544], [2] and [3],
   which have largely focused on various latency attributes and the
   Throughput [2] of the Device Under Test (DUT) being benchmarked.
   These standards are good at measuring theoretical Throughput,
   forwarding rates and latency under testing conditions; however, they
   do not represent the real traffic patterns that may affect these
   networking devices.

   The following provides a methodology for benchmarking a data center
   DUT, including congestion scenarios, switch buffer analysis,
   microburst and head of line blocking, while also using a wide mix of
   traffic conditions. The terminology document [1] is a pre-requisite.

1.1. Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.2. Methodology format and repeatability recommendation

   The format used for each section of this document is the following:

   -Objective

   -Methodology

   -Reporting Format

   Additional interpretation of RFC 2119 terms:

   MUST: required metric or benchmark for the scenario described
   (minimum)

   SHOULD or RECOMMENDED: strongly suggested metric for the scenario
   described

   MAY: comprehensive metric for the scenario described

   For each test methodology described, it is critical to obtain
   repeatability in the results. The recommendation is to perform
   enough iterations of the given test and to make sure the results are
   consistent. This is especially important for Section 3, as buffering
   testing has historically been the least reliable. The number of
   iterations SHOULD be explicitly reported. The relative standard
   deviation SHOULD be below 10%.
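   As an illustration of the repeatability recommendation above, the
   following short Python sketch computes the relative standard
   deviation (RSD) of a series of per-iteration results and compares it
   to the 10% guidance. It is illustrative only; the function and
   variable names are not defined by this document, and the example
   values are hypothetical.

      # Illustrative sketch only: compute the relative standard
      # deviation (RSD) across test iterations and compare it to the
      # 10% repeatability guidance.
      from statistics import mean, stdev

      def relative_std_dev(results):
          """Return the RSD (%) of a list of iteration measurements."""
          if len(results) < 2:
              raise ValueError("at least two iterations are needed")
          return (stdev(results) / mean(results)) * 100.0

      # Example: minimum latency (microseconds) over 5 iterations.
      iterations = [12.1, 11.9, 12.3, 12.0, 12.2]
      rsd = relative_std_dev(iterations)
      print("iterations: %d, RSD: %.2f%%, repeatable: %s"
            % (len(iterations), rsd, rsd < 10.0))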
2. Line Rate Testing

2.1 Objective

   Provide a maximum-rate test for the performance values of
   Throughput, latency and jitter. It is meant to provide the tests to
   perform and the methodology to verify that a DUT is capable of
   forwarding packets at line rate under non-congested conditions.

2.2 Methodology

   A traffic generator SHOULD be connected to all ports on the DUT. Two
   tests MUST be conducted: a port-pair test [RFC 2544/3918 section 15
   compliant] and a full-mesh test [RFC 2889/3918 section 16
   compliant].

   For all tests, the percentage of traffic sent per port capacity MUST
   be 99.98% at most, with no PPM adjustment, to ensure stressing the
   DUT in worst-case conditions. Test results at a lower rate MAY be
   provided for a better understanding of the performance increase in
   terms of latency and jitter when the rate is lower than 99.98%. The
   receiving rate of the traffic SHOULD be captured during this test as
   a percentage of line rate.

   The test MUST provide the statistics of minimum, average and maximum
   of the latency distribution, for the exact same iteration of the
   test.

   The test MUST provide the statistics of minimum, average and maximum
   of the jitter distribution, for the exact same iteration of the
   test.

   Alternatively, when a traffic generator CAN NOT be connected to all
   ports on the DUT, a snake test MUST be used for line rate testing,
   excluding latency and jitter, as those then become irrelevant. The
   snake test consists of the following steps:

   -connect the first and last port of the DUT to a traffic generator

   -connect back-to-back sequentially all the ports in between: port 2
   to port 3, port 4 to port 5, etc., up to port n-2 to port n-1, where
   n is the total number of ports of the DUT

   -configure ports 1 and 2 in the same VLAN X, ports 3 and 4 in the
   same VLAN Y, etc., and ports n-1 and n in the same VLAN ZZZ

   This snake test provides the capability to test line rate for Layer
   2 and Layer 3 RFC 2544/3918 in instances where a traffic generator
   with only two ports is available. The latency and jitter are not to
   be considered with this test.
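   The short Python sketch below derives the back-to-back cabling pairs
   and per-pair VLAN assignments for the snake test described above. It
   is illustrative only; the function name and the VLAN ID numbering
   are assumptions made for the example and are not part of this
   methodology.

      # Illustrative sketch only: plan the snake-test cabling and
      # VLAN grouping for an n_ports DUT.
      def snake_test_plan(n_ports):
          """Return (cabling_pairs, vlans) for a DUT with n_ports ports.

          Ports 1 and n_ports go to the traffic generator; the ports
          in between are cabled back to back (2-3, 4-5, ...), and each
          consecutive port pair (1-2, 3-4, ..., n-1 and n) shares a
          VLAN.
          """
          if n_ports < 4 or n_ports % 2:
              raise ValueError("assumes an even port count >= 4")
          cabling = [(p, p + 1) for p in range(2, n_ports - 1, 2)]
          vlans = {100 + i: (p, p + 1)       # VLAN IDs arbitrary here
                   for i, p in enumerate(range(1, n_ports, 2))}
          return cabling, vlans

      cabling, vlans = snake_test_plan(8)
      print("generator ports: 1 and 8")
      print("cabling:", cabling)    # [(2, 3), (4, 5), (6, 7)]
      print("vlans:", vlans)        # {100: (1, 2), 101: (3, 4), ...}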
2.3 Reporting Format

   The report MUST include:

   -physical layer calibration information, as defined in Section 4 of
   the Data Center Benchmarking Terminology document [1]
   (draft-ietf-bmwg-dcbench-terminology)

   -number of ports used

   -reading for "Throughput received in percentage of bandwidth", while
   sending 99.98% of port capacity on each port, for each packet size
   from 64 bytes to 9216 bytes. As guidance, an increment of 64 bytes
   in packet size between each iteration is ideal; 256-byte and
   512-byte increments are also often used. The most common packet
   sizes ordered for the report are: 64 B, 128 B, 256 B, 512 B, 1024 B,
   1518 B, 4096 B, 8000 B and 9216 B.

   The pattern for testing can be expressed using RFC 6985 [IMIX
   Genome: Specification of Variable Packet Sizes for Additional
   Testing].

   -Throughput needs to be expressed as a percentage of total
   transmitted frames

   -packet drops MUST be expressed as a count of packets and SHOULD be
   expressed as a percentage of line rate

   -for latency and jitter, values expressed in units of time [usually
   microseconds or nanoseconds], read across packet sizes from 64 bytes
   to 9216 bytes

   -for latency and jitter, provide minimum, average and maximum
   values. If different iterations are done to gather the minimum,
   average and maximum, it SHOULD be specified in the report, along
   with a justification of why the information could not have been
   gathered in the same test iteration

   -for jitter, a histogram describing the population of packets
   measured per latency bucket is RECOMMENDED

   -the tests for Throughput, latency and jitter MAY be conducted as
   individual independent trials, with proper documentation in the
   report, but SHOULD be conducted at the same time

3. Buffering Testing

3.1 Objective

   To measure the size of the buffer of a DUT under various conditions.
   Buffer architectures between multiple DUTs can differ and include
   egress buffering, shared egress buffering SoC (Switch-on-Chip),
   ingress buffering or a combination. The test methodology covers the
   buffer measurement regardless of the buffer architecture used in the
   DUT.

3.2 Methodology

   A traffic generator MUST be connected to all ports on the DUT.

   The methodology for measuring buffering for a data-center switch is
   based on using known congestion of a known, fixed packet size, along
   with maximum latency value measurements. The maximum latency will
   increase until the first packet drop occurs. At this point, the
   maximum latency value will remain constant. This is the point of
   inflection of this maximum latency change to a constant value. There
   MUST be multiple ingress ports receiving a known amount of frames at
   a known fixed size, destined for the same egress port, in order to
   create a known congestion condition. The total amount of packets
   sent from the oversubscribed port minus one, multiplied by the
   packet size, represents the maximum port buffer size at the measured
   inflection point.
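   As a hedged illustration of the inflection-point arithmetic above,
   the sketch below derives a port buffer estimate from the number of
   oversubscription frames sent up to the first drop. The function name
   and the example values are hypothetical and only serve to show the
   calculation.

      # Illustrative sketch only: estimate the port buffer size at the
      # inflection point as
      # (frames_sent_until_first_drop - 1) * packet_size.
      def port_buffer_size(frames_sent_until_first_drop,
                           packet_size_bytes):
          """Buffer estimate in bytes at the measured inflection point."""
          if frames_sent_until_first_drop < 1:
              raise ValueError("no frames sent before the first drop")
          return (frames_sent_until_first_drop - 1) * packet_size_bytes

      # Example: the first drop is observed after 20,001 extra
      # 208-byte frames from the oversubscribing port.
      print(port_buffer_size(20001, 208), "bytes")   # 4160000 bytes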
   1) Measure the highest buffer efficiency

   First iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of over-subscription
   traffic (1% recommended) with a packet size of 64 bytes to egress
   port 2. Measure the buffer size value as the number of frames sent
   from the port sending the oversubscribed traffic up to the
   inflection point, multiplied by the frame size.

   Second iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of over-subscription
   traffic (1% recommended) with a packet size of 65 bytes to egress
   port 2. Measure the buffer size value as the number of frames sent
   from the port sending the oversubscribed traffic up to the
   inflection point, multiplied by the frame size.

   Last iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of over-subscription
   traffic (1% recommended) with a packet size of B bytes to egress
   port 2. Measure the buffer size value as the number of frames sent
   from the port sending the oversubscribed traffic up to the
   inflection point, multiplied by the frame size.

   When the B value is found to provide the largest buffer size, then
   size B allows the highest buffer efficiency.

   2) Measure maximum port buffer size

   At the fixed packet size B determined in procedure 1), for a fixed
   default DSCP/COS value of 0 and for unicast traffic, proceed with
   the following:

   First iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of over-subscription
   traffic (1% recommended) with the same packet size to egress port 2.
   Measure the buffer size value by multiplying the number of extra
   frames sent by the frame size.

   Second iteration: ingress port 2 sending line rate to egress port 3,
   while port 4 is sending a known low amount of over-subscription
   traffic (1% recommended) with the same packet size to egress port 3.
   Measure the buffer size value by multiplying the number of extra
   frames sent by the frame size.

   Last iteration: ingress port N-2 sending line rate to egress port
   N-1, while port N is sending a known low amount of over-subscription
   traffic (1% recommended) with the same packet size to egress port
   N-1. Measure the buffer size value by multiplying the number of
   extra frames sent by the frame size.

   This test series MAY be repeated using all the different DSCP/COS
   values of traffic, and then using Multicast traffic, in order to
   find out whether there is any DSCP/COS impact on the buffer size.

   3) Measure maximum port pair buffer sizes

   First iteration: ingress port 1 sending line rate to egress port 2;
   ingress port 3 sending line rate to egress port 4, etc. Ingress
   ports N-1 and N will respectively over-subscribe, at 1% of line
   rate, egress port 2 and port 3. Measure the buffer size value by
   multiplying the number of extra frames sent by the frame size for
   each egress port.

   Second iteration: ingress port 1 sending line rate to egress port 2;
   ingress port 3 sending line rate to egress port 4, etc. Ingress
   ports N-1 and N will respectively over-subscribe, at 1% of line
   rate, egress port 4 and port 5. Measure the buffer size value by
   multiplying the number of extra frames sent by the frame size for
   each egress port.

   Last iteration: ingress port 1 sending line rate to egress port 2;
   ingress port 3 sending line rate to egress port 4, etc. Ingress
   ports N-1 and N will respectively over-subscribe, at 1% of line
   rate, egress port N-3 and port N-2. Measure the buffer size value by
   multiplying the number of extra frames sent by the frame size for
   each egress port.

   This test series MAY be repeated using all the different DSCP/COS
   values of traffic and then using Multicast traffic.

   4) Measure maximum DUT buffer size with many-to-one ports

   First iteration: ingress ports 1,2,... N-1 each sending
   [(1/[N-1])*99.98]+[1/[N-1]] % of line rate per port to egress port
   N.

   Second iteration: ingress ports 2,... N each sending
   [(1/[N-1])*99.98]+[1/[N-1]] % of line rate per port to egress port
   1.

   Last iteration: ingress ports N,1,2...N-2 each sending
   [(1/[N-1])*99.98]+[1/[N-1]] % of line rate per port to egress port
   N-1.

   This test series MAY be repeated using all the different COS values
   of traffic and then using Multicast traffic.
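   To make the per-port rate formula used in procedure 4) concrete, the
   sketch below (illustrative only; the function name and the example
   port count are assumptions) evaluates [(1/[N-1])*99.98]+[1/[N-1]] %
   for a given port count and shows the resulting aggregate load
   offered to the single egress port, which works out to slightly above
   100% of line rate.

      # Illustrative sketch only: per-ingress-port rate for the
      # many-to-one buffer test, (1/(N-1))*99.98 + 1/(N-1) percent of
      # line rate, and the aggregate load on the single egress port.
      def many_to_one_rates(n_ports):
          senders = n_ports - 1
          per_port_pct = (1.0 / senders) * 99.98 + (1.0 / senders)
          aggregate_pct = per_port_pct * senders   # slightly over 100%
          return per_port_pct, aggregate_pct

      per_port, aggregate = many_to_one_rates(32)
      print("per ingress port: %.4f%% of line rate" % per_port)
      print("aggregate on egress port: %.2f%%" % aggregate)  # 100.98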
   Unicast traffic and then Multicast traffic SHOULD be used in order
   to determine the proportion of buffer for the documented selection
   of tests. Also, the COS value for the packets SHOULD be provided for
   each test iteration, as the buffer allocation size MAY differ per
   COS value. It is RECOMMENDED that the ingress and egress ports are
   varied in a random, but documented, fashion in multiple tests in
   order to measure the buffer size for each port of the DUT.

3.3 Reporting format

   The report MUST include:

   - the packet size used for the most efficient buffer used, along
   with the DSCP/COS value

   - the maximum port buffer size for each port

   - the maximum DUT buffer size

   - the packet size used in the test

   - the amount of over-subscription, if different than 1%

   - the number of ingress and egress ports, along with their location
   on the DUT

   - the repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   results for each of the tests (min, max, avg)

   The percentage of variation is a metric providing a sense of how big
   the difference is between the measured value and the previous ones.

   For example, for a latency test where the minimum latency is
   measured, the percentage of variation of the minimum latency will
   indicate by how much this value has varied between the current test
   executed and the previous one.

   PV=((x2-x1)/x1)*100, where x2 is the minimum latency value in the
   current test and x1 is the minimum latency value obtained in the
   previous test.

   The same formula is used for the max and avg variations measured.
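   The following short sketch applies the PV formula above to a series
   of iterations, computing the variation of a metric between
   consecutive runs. It is illustrative only; the function name and the
   example latency values are not defined by this document.

      # Illustrative sketch only: percentage of variation (PV) between
      # the previous and current test, PV = ((x2 - x1) / x1) * 100.
      def percentage_of_variation(x1, x2):
          """x1: previous test value, x2: current test value."""
          return ((x2 - x1) / x1) * 100.0

      # Example: minimum latency (microseconds) over three iterations.
      min_latency = [10.0, 10.2, 10.1]
      for prev, curr in zip(min_latency, min_latency[1:]):
          pv = percentage_of_variation(prev, curr)
          print("PV(min latency): %+.2f%%" % pv)
      # PV(min latency): +2.00%
      # PV(min latency): -0.98%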
4. Microburst Testing

4.1 Objective

   To find the maximum amount of packet bursts a DUT can sustain under
   various configurations.

4.2 Methodology

   A traffic generator MUST be connected to all ports on the DUT. In
   order to cause congestion, two or more ingress ports MUST send
   bursts of packets destined for the same egress port. The simplest of
   the setups would be two ingress ports and one egress port (2-to-1).

   The burst MUST be sent with an intensity of 100%, meaning the burst
   of packets will be sent with a minimum inter-packet gap. The number
   of packets contained in the burst is the trial variable and is
   increased until a non-zero packet loss is measured. The aggregate
   number of packets from all the senders is used to calculate the
   maximum amount of microburst the DUT can sustain.

   It is RECOMMENDED that the ingress and egress ports are varied in
   multiple tests to measure the maximum microburst capacity.

   The intensity of a microburst MAY be varied in order to obtain the
   microburst capacity at various ingress rates.

   It is RECOMMENDED that all ports on the DUT be tested simultaneously
   and in various configurations in order to understand all the
   combinations of ingress ports, egress ports and intensities.

   An example would be:

   First Iteration: N-1 Ingress ports sending to 1 Egress Port

   Second Iteration: N-2 Ingress ports sending to 2 Egress Ports

   Last Iteration: 2 Ingress ports sending to N-2 Egress Ports
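   A minimal sketch of the trial-variable search described above is
   shown below: the burst size grows until the first non-zero packet
   loss is observed, and the largest lossless burst is reported. The
   send_burst() and count_received() callables are hypothetical
   traffic-generator hooks assumed for the example; they are not
   defined by this document.

      # Illustrative sketch only: grow the burst size until the first
      # non-zero packet loss, then report the largest lossless burst.
      def max_lossless_burst(send_burst, count_received,
                             start=1, step=1, limit=10**7):
          """Return the largest aggregate burst (packets) with no loss."""
          last_lossless = 0
          burst = start
          while burst <= limit:
              sent = send_burst(burst)        # sent at 100% intensity
              if count_received() < sent:     # first non-zero loss
                  break
              last_lossless = sent
              burst += step
          return last_lossless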
4.3 Reporting Format

   The report MUST include:

   - the maximum number of packets received per ingress port with the
   maximum burst size obtained with zero packet loss

   - the packet size used in the test

   - the number of ingress and egress ports, along with their location
   on the DUT

   - the repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   results (min, max, avg)

5. Head of Line Blocking

5.1 Objective

   Head-of-line blocking (HOL blocking) is a performance-limiting
   phenomenon that occurs when packets are held up by the first packet
   ahead waiting to be transmitted to a different output port. This is
   defined in RFC 2889 section 5.5, Congestion Control. This section
   expands on RFC 2889 in the context of Data Center Benchmarking.

   The objective of this test is to understand the DUT behavior under
   head-of-line blocking scenarios and to measure the packet loss.

5.2 Methodology

   In order to cause congestion in the form of head-of-line blocking,
   groups of four ports are used. A group has 2 ingress and 2 egress
   ports. The first ingress port MUST have two flows configured, each
   going to a different egress port. The second ingress port will
   congest the second egress port by sending line rate. The goal is to
   measure whether there is loss on the flow for the first egress port,
   which is not over-subscribed.

   A traffic generator MUST be connected to at least eight ports on the
   DUT and SHOULD be connected using all the DUT ports.

   1) Measure two groups with eight DUT ports

   First iteration: measure the packet loss for two groups with
   consecutive ports.

   The first group is composed of: ingress port 1 sending 50% of
   traffic to egress port 3 and ingress port 1 sending 50% of traffic
   to egress port 4. Ingress port 2 is sending line rate to egress port
   4. Measure the amount of traffic loss for the traffic from ingress
   port 1 to egress port 3.

   The second group is composed of: ingress port 5 sending 50% of
   traffic to egress port 7 and ingress port 5 sending 50% of traffic
   to egress port 8. Ingress port 6 is sending line rate to egress port
   8. Measure the amount of traffic loss for the traffic from ingress
   port 5 to egress port 7.

   Second iteration: repeat the first iteration by shifting all the
   ports from N to N+1.

   The first group is composed of: ingress port 2 sending 50% of
   traffic to egress port 4 and ingress port 2 sending 50% of traffic
   to egress port 5. Ingress port 3 is sending line rate to egress port
   5. Measure the amount of traffic loss for the traffic from ingress
   port 2 to egress port 4.

   The second group is composed of: ingress port 6 sending 50% of
   traffic to egress port 8 and ingress port 6 sending 50% of traffic
   to egress port 9. Ingress port 7 is sending line rate to egress port
   9. Measure the amount of traffic loss for the traffic from ingress
   port 6 to egress port 8.

   Last iteration: when the first port of the first group is connected
   to the last DUT port and the last port of the second group is
   connected to the seventh port of the DUT.

   Measure the amount of traffic loss for the traffic from ingress port
   N to egress port 2 and from ingress port 4 to egress port 6.

   2) Measure with N/4 groups with N DUT ports

   The traffic from the ingress port is split across 4 egress ports
   (100/4=25%).

   First iteration: expand to fully utilize all the DUT ports in
   increments of four. Repeat the methodology of 1) with all the port
   groups possible to achieve on the device, and measure the amount of
   traffic loss for each port group.

   Second iteration: shift the start of each consecutive port group by
   +1.

   Last iteration: shift the start of each consecutive port group by
   N-1 and measure the traffic loss for each port group.
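   The sketch below enumerates the 4-port groups on an N-port DUT and
   shifts their starting position between iterations, wrapping around
   the last DUT port as in the last iteration above. It is a minimal
   sketch under the assumptions that the port count is divisible by
   four and that the first two ports of each tuple act as ingress and
   the last two as egress; the function name is illustrative only.

      # Illustrative sketch only: 4-port HOLB groups and their shifted
      # variants. In each group (in1, in2, eg1, eg2): in1 splits 50/50
      # to eg1 and eg2, in2 congests eg2 at line rate, and loss is
      # measured on the in1 -> eg1 flow.
      def holb_groups(n_ports, shift=0):
          """Return a list of (in1, in2, eg1, eg2) port groups."""
          if n_ports % 4:
              raise ValueError("assumes a port count divisible by 4")

          def wrap(p):                 # keep ports in 1..n_ports
              return (p - 1) % n_ports + 1

          groups = []
          for start in range(1, n_ports, 4):
              p = [wrap(start + shift + i) for i in range(4)]
              groups.append(tuple(p))
          return groups

      print(holb_groups(8))           # [(1, 2, 3, 4), (5, 6, 7, 8)]
      print(holb_groups(8, shift=1))  # [(2, 3, 4, 5), (6, 7, 8, 1)]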
5.3 Reporting Format

   For each test, the report MUST include:

   - the port configuration, including the number and location of the
   ingress and egress ports located on the DUT

   - whether HOLB was observed, in accordance with the HOLB test in
   section 5

   - the percentage of traffic loss

   - the repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   results (min, max, avg)

6. Incast Stateful and Stateless Traffic

6.1 Objective

   The objective of this test is to measure the values for TCP Goodput
   [4] and latency with a mix of large and small flows. The test is
   designed to simulate a mixed environment of stateful flows that
   require high rates of goodput and stateless flows that require low
   latency.

6.2 Methodology

   In order to simulate the effects of stateless and stateful traffic
   on the DUT, there MUST be multiple ingress ports receiving traffic
   destined for the same egress port. There MAY also be a mix of
   stateful and stateless traffic arriving on a single ingress port.
   The simplest setup would be 2 ingress ports receiving traffic
   destined to the same egress port.

   One ingress port MUST be maintaining a TCP connection through the
   ingress port to a receiver connected to an egress port. Traffic in
   the TCP stream MUST be sent at the maximum rate allowed by the
   traffic generator. At the same time as the TCP traffic is flowing
   through the DUT, the stateless traffic is sent destined to a
   receiver on the same egress port. The stateless traffic MUST be a
   microburst of 100% intensity.

   It is RECOMMENDED that the ingress and egress ports are varied in
   multiple tests to measure the maximum microburst capacity.

   The intensity of a microburst MAY be varied in order to obtain the
   microburst capacity at various ingress rates.

   It is RECOMMENDED that all ports on the DUT be used in the test.

   For example:

   Stateful Traffic port variation:

   During the iterations, the number of egress ports MAY vary as well.

   First Iteration: 1 Ingress port receiving stateful TCP traffic and 1
   Ingress port receiving stateless traffic destined to 1 Egress Port

   Second Iteration: 2 Ingress ports receiving stateful TCP traffic and
   1 Ingress port receiving stateless traffic destined to 1 Egress Port

   Last Iteration: N-2 Ingress ports receiving stateful TCP traffic and
   1 Ingress port receiving stateless traffic destined to 1 Egress Port

   Stateless Traffic port variation:

   During the iterations, the number of egress ports MAY vary as well.

   First Iteration: 1 Ingress port receiving stateful TCP traffic and 1
   Ingress port receiving stateless traffic destined to 1 Egress Port

   Second Iteration: 1 Ingress port receiving stateful TCP traffic and
   2 Ingress ports receiving stateless traffic destined to 1 Egress
   Port

   Last Iteration: 1 Ingress port receiving stateful TCP traffic and
   N-2 Ingress ports receiving stateless traffic destined to 1 Egress
   Port

6.3 Reporting Format

   The report MUST include the following:

   - the number of ingress and egress ports, along with the designation
   of stateful or stateless flow assignment

   - the stateful flow goodput

   - the stateless flow latency

   - the repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   results (min, max, avg)

7. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes. Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

8. IANA Considerations

   No IANA action is requested at this time.

9. References

9.1. Normative References

   [RFC1242] Bradner, S., "Benchmarking Terminology for Network
             Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
             July 1991, <https://www.rfc-editor.org/info/rfc1242>.

   [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
             Network Interconnect Devices", RFC 2544, DOI
             10.17487/RFC2544, March 1999,
             <https://www.rfc-editor.org/info/rfc2544>.

9.2. Informative References

   [1]       Avramov, L. and J. Rapp, "Data Center Benchmarking
             Terminology", draft-ietf-bmwg-dcbench-terminology (work in
             progress), April 2017.

   [2]       Mandeville, R. and J. Perser, "Benchmarking Methodology
             for LAN Switching Devices", RFC 2889, August 2000.

   [3]       Stopp, D. and B. Hickman, "Methodology for IP Multicast
             Benchmarking", RFC 3918, October 2004.

   [4]       Chen, Y., Griffith, R., Liu, J., Katz, R., and A. Joseph,
             "Understanding TCP Incast Throughput Collapse in
             Datacenter Networks",
             <http://yanpeichen.com/professional/usenixLoginIncastReady.pdf>.

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, DOI
             10.17487/RFC2119, March 1997,
             <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2432] Dubray, K., "Terminology for IP Multicast Benchmarking",
             RFC 2432, DOI 10.17487/RFC2432, October 1998,
             <https://www.rfc-editor.org/info/rfc2432>.

9.3. Acknowledgements

   The authors would like to thank Alfred Morton and Scott Bradner for
   their reviews and feedback.

Authors' Addresses

   Lucien Avramov
   Google
   1600 Amphitheatre Parkway
   Mountain View, CA 94043
   United States
   Phone: +1 408 774 9077
   Email: lucienav@google.com

   Jacob Rapp
   VMware
   3401 Hillview Ave
   Palo Alto, CA
   United States
   Phone: +1 650 857 3367
   Email: jrapp@vmware.com