Internet Engineering Task Force                             L. Avramov
Internet-Draft, Intended status: Informational                   Google
Expires July 3, 2017                                             J. Rapp
December 30, 2016                                                 VMware

                  Data Center Benchmarking Methodology
                 draft-ietf-bmwg-dcbench-methodology-03

Abstract

   The purpose of this informational document is to establish test and
   evaluation methodology and measurement techniques for physical
   network equipment in the data center.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Requirements Language
      1.2. Methodology format and repeatability recommendation
   2. Line Rate Testing
      2.1 Objective
      2.2 Methodology
      2.3 Reporting Format
   3. Buffering Testing
      3.1 Objective
      3.2 Methodology
      3.3 Reporting format
   4. Microburst Testing
      4.1 Objective
      4.2 Methodology
      4.3 Reporting Format
   5. Head of Line Blocking
      5.1 Objective
      5.2 Methodology
      5.3 Reporting Format
   6. Incast Stateful and Stateless Traffic
      6.1 Objective
      6.2 Methodology
      6.3 Reporting Format
   7. References
      7.1. Normative References
      7.2. Informative References
   Authors' Addresses

1. Introduction

   Traffic patterns in the data center are not uniform and are
   constantly changing.  They are dictated by the nature and variety of
   applications utilized in the data center.  They can be largely
   east-west traffic flows in one data center and north-south in
   another, while some may combine both.  Traffic patterns can be
   bursty in nature and contain many-to-one, many-to-many, or
   one-to-many flows.  Each flow may also be small and latency
   sensitive or large and throughput sensitive while containing a mix
   of UDP and TCP traffic.  All of these can coexist in a single
   cluster and flow through a single network device at the same time.
   Benchmarking of network devices has long relied on RFC 1242, RFC
   2432, RFC 2544, RFC 2889 and RFC 3918, which have largely focused on
   various latency attributes and Throughput [2] of the Device Under
   Test (DUT) being benchmarked.  These standards are good at measuring
   theoretical Throughput, forwarding rates and latency under test
   conditions; however, they do not represent real traffic patterns
   that may affect these networking devices.

   The following sections provide a methodology for benchmarking a data
   center DUT, including congestion scenarios, switch buffer analysis,
   microbursts and head-of-line blocking, while also using a wide mix
   of traffic conditions.

1.1. Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [6].

1.2. Methodology format and repeatability recommendation

   The format used for each section of this document is the following:

   -Objective

   -Methodology

   -Reporting Format

   Additional interpretation of RFC 2119 terms:

   MUST: required metric or benchmark for the scenario described
   (minimum)

   SHOULD or RECOMMENDED: strongly suggested metric for the scenario
   described

   MAY: comprehensive metric for the scenario described

   For each test methodology described, it is critical to obtain
   repeatability in the results.  The recommendation is to perform
   enough iterations of the given test to make sure the results are
   consistent.  This is especially important for section 3, as the
   buffering testing has historically been the least reliable.  The
   number of iterations SHOULD be explicitly reported.  The relative
   standard deviation SHOULD be below 10%.

2. Line Rate Testing

2.1 Objective

   Provide a maximum-rate test for the performance values of
   Throughput, latency and jitter.  It is meant to provide the tests to
   perform and the methodology to verify that a DUT is capable of
   forwarding packets at line rate under non-congested conditions.

2.2 Methodology

   A traffic generator SHOULD be connected to all ports on the DUT.
   Two tests MUST be conducted: a port-pair test (compliant with RFC
   2544/3918, section 15) and a full-mesh test (compliant with RFC
   2889/3918, section 16).

   For all tests, the percentage of traffic sent per port capacity MUST
   be at most 99.98%, with no PPM adjustment, to ensure stressing the
   DUT in worst-case conditions.  Test results at a lower rate MAY be
   provided for a better understanding of the performance increase in
   terms of latency and jitter when the rate is lower than 99.98%.  The
   receiving rate of the traffic SHOULD be captured during this test as
   a percentage of line rate.

   The test MUST provide the statistics of minimum, average and maximum
   of the latency distribution, for the exact same iteration of the
   test.

   The test MUST provide the statistics of minimum, average and maximum
   of the jitter distribution, for the exact same iteration of the
   test.
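   The following minimal sketch (illustrative only, not part of the
   methodology) shows one way to derive the required minimum, average
   and maximum statistics from per-packet latency samples and to check
   the repeatability recommendation of Section 1.2.  The sample values
   and helper names are hypothetical.

      # Illustrative only: summarizing latency samples and checking
      # repeatability across iterations (see Section 1.2).
      from statistics import mean, pstdev

      def summarize(samples_us):
          """Return (min, avg, max) for one test iteration."""
          return min(samples_us), mean(samples_us), max(samples_us)

      def relative_std_dev(values):
          """Relative standard deviation, in percent."""
          return 100.0 * pstdev(values) / mean(values)

      # One list of per-packet latency readings (microseconds) per
      # iteration; the same approach applies to jitter samples.
      iterations = [
          [10.1, 10.4, 12.0, 10.2],
          [10.0, 10.6, 11.8, 10.3],
          [10.2, 10.5, 12.1, 10.1],
      ]

      stats = [summarize(s) for s in iterations]
      avg_latencies = [avg for _, avg, _ in stats]

      print("min/avg/max per iteration:", stats)
      print("RSD of average latency: %.2f%%"
            % relative_std_dev(avg_latencies))
      # The relative standard deviation SHOULD be below 10%.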
   Alternatively, when a traffic generator cannot be connected to all
   ports on the DUT, a snake test MUST be used for line rate testing,
   excluding latency and jitter, as those become irrelevant in that
   setup.  The snake test consists of the following steps:

   -connect the first and the last port of the DUT to a traffic
   generator

   -connect back to back, sequentially, all the ports in between: port
   2 to port 3, port 4 to port 5, etc., up to port n-2 to port n-1,
   where n is the total number of ports of the DUT

   -configure ports 1 and 2 in the same VLAN X, ports 3 and 4 in the
   same VLAN Y, etc., and ports n-1 and n in the same VLAN ZZZ

   This snake test provides the capability to test line rate for Layer
   2 and Layer 3 (RFC 2544/3918) in instances where a traffic generator
   with only two ports is available.  The latency and jitter are not to
   be considered with this test.

2.3 Reporting Format

   The report MUST include:

   -physical layer calibration information, as defined in Data Center
   Benchmarking Terminology (draft-ietf-bmwg-dcbench-terminology),
   section 4

   -number of ports used

   -reading for Throughput received as a percentage of bandwidth, while
   sending 99.98% of port capacity on each port, for each packet size
   from 64 bytes to 9216 bytes.  As guidance, an increment of 64 bytes
   in packet size between iterations is ideal; 256-byte and 512-byte
   increments are also often used.  The most common packet sizes
   reported are: 64 B, 128 B, 256 B, 512 B, 1024 B, 1518 B, 4096 B,
   8000 B and 9216 B.

   The pattern for testing can be expressed using RFC 6985 [IMIX
   Genome: Specification of Variable Packet Sizes for Additional
   Testing].

   -Throughput needs to be expressed as a percentage of total
   transmitted frames

   -packet drops MUST be expressed as a count of packets and SHOULD be
   expressed as a percentage of line rate

   -for latency and jitter, values expressed in units of time (usually
   microseconds or nanoseconds), read across packet sizes from 64 bytes
   to 9216 bytes

   -for latency and jitter, provide minimum, average and maximum
   values.  If different iterations are done to gather the minimum,
   average and maximum, it SHOULD be specified in the report along with
   a justification of why the information could not have been gathered
   in the same test iteration

   -for jitter, a histogram describing the population of packets
   measured per latency bucket is RECOMMENDED

   -the tests for Throughput, latency and jitter MAY be conducted as
   individual independent trials, with proper documentation in the
   report, but SHOULD be conducted at the same time

3. Buffering Testing

3.1 Objective

   To measure the size of the buffer of a DUT under multiple
   conditions.  Buffer architectures between multiple DUTs can differ
   and include egress buffering, shared egress buffering switch-on-chip
   (SoC), ingress buffering or a combination.  The test methodology
   covers the buffer measurement regardless of the buffer architecture
   used in the DUT.

3.2 Methodology

   A traffic generator MUST be connected to all ports on the DUT.

   The methodology for measuring buffering for a data-center switch is
   based on using known congestion of a known, fixed packet size, along
   with maximum latency value measurements.  The maximum latency will
   increase until the first packet drop occurs.  At this point, the
   maximum latency value will remain constant.  This is the inflection
   point, where the maximum latency changes to a constant value.  There
   MUST be multiple ingress ports receiving a known amount of frames at
   a known fixed size, destined for the same egress port, in order to
   create a known congestion condition.  The total amount of packets
   sent from the oversubscribed port, minus one, multiplied by the
   packet size represents the maximum port buffer size at the measured
   inflection point.
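   As a minimal illustration of this computation (not part of the
   methodology; the frame counts below are hypothetical), the buffer
   size can be derived from the number of frames accepted from the
   oversubscribed port before the inflection point:

      # Illustrative only: buffer size at the inflection point.
      def port_buffer_bytes(frames_at_inflection, frame_size_bytes):
          """(frames sent from the oversubscribed port - 1) multiplied
          by the frame size, measured where the maximum latency stops
          increasing (i.e., at the first packet drop)."""
          return (frames_at_inflection - 1) * frame_size_bytes

      # Example: first drop observed after 12,001 oversubscription
      # frames of 64 bytes -> estimated buffer of 768,000 bytes.
      print(port_buffer_bytes(12001, 64))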
   1) Measure the highest buffer efficiency

   First iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of oversubscription
   traffic (1% recommended) with a packet size of 64 bytes to egress
   port 2.  Measure the buffer size value as the number of frames sent
   from the port sending the oversubscribed traffic up to the
   inflection point, multiplied by the frame size.

   Second iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of oversubscription
   traffic (1% recommended) with a packet size of 65 bytes to egress
   port 2.  Measure the buffer size value as the number of frames sent
   from the port sending the oversubscribed traffic up to the
   inflection point, multiplied by the frame size.

   Last iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of oversubscription
   traffic (1% recommended) with a packet size of B bytes to egress
   port 2.  Measure the buffer size value as the number of frames sent
   from the port sending the oversubscribed traffic up to the
   inflection point, multiplied by the frame size.

   When the value B is found to provide the largest buffer size, then
   size B allows the highest buffer efficiency.

   2) Measure maximum port buffer size

   At the fixed packet size B determined in procedure 1), for a fixed
   default DSCP/COS value of 0 and for unicast traffic, proceed with
   the following:

   First iteration: ingress port 1 sending line rate to egress port 2,
   while port 3 is sending a known low amount of oversubscription
   traffic (1% recommended) with the same packet size to egress port 2.
   Measure the buffer size value by multiplying the number of extra
   frames sent by the frame size.

   Second iteration: ingress port 2 sending line rate to egress port 3,
   while port 4 is sending a known low amount of oversubscription
   traffic (1% recommended) with the same packet size to egress port 3.
   Measure the buffer size value by multiplying the number of extra
   frames sent by the frame size.

   Last iteration: ingress port N-2 sending line rate traffic to egress
   port N-1, while port N is sending a known low amount of
   oversubscription traffic (1% recommended) with the same packet size
   to egress port N-1.  Measure the buffer size value by multiplying
   the number of extra frames sent by the frame size.

   This test series MAY be repeated using all the different DSCP/COS
   values of traffic, and then using Multicast traffic, in order to
   find out whether there is any DSCP/COS impact on the buffer size.
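   As an illustration of procedure 1) (not part of the methodology; the
   measured values below are hypothetical), the most buffer-efficient
   packet size B is simply the size whose sweep measurement yields the
   largest buffer:

      # Illustrative only: procedure 1) sweep.  For each candidate
      # packet size, the number of oversubscription frames accepted
      # before the inflection point is converted to bytes, and the
      # size giving the largest buffer is selected as B.
      frames_at_inflection = {64: 12001, 65: 11463, 128: 5941,
                              256: 2977}

      buffer_bytes = {size: (frames - 1) * size
                      for size, frames in frames_at_inflection.items()}

      best_b = max(buffer_bytes, key=buffer_bytes.get)
      print("Packet size B with highest buffer efficiency:", best_b)
      print("Measured buffer at that size:", buffer_bytes[best_b])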
   3) Measure maximum port pair buffer sizes

   First iteration: ingress port 1 sending line rate to egress port 2;
   ingress port 3 sending line rate to egress port 4, etc.  Ingress
   ports N-1 and N will respectively oversubscribe egress port 2 and
   egress port 3 at 1% of line rate.  Measure the buffer size value by
   multiplying the number of extra frames sent by the frame size for
   each egress port.

   Second iteration: ingress port 1 sending line rate to egress port 2;
   ingress port 3 sending line rate to egress port 4, etc.  Ingress
   ports N-1 and N will respectively oversubscribe egress port 4 and
   egress port 5 at 1% of line rate.  Measure the buffer size value by
   multiplying the number of extra frames sent by the frame size for
   each egress port.

   Last iteration: ingress port 1 sending line rate to egress port 2;
   ingress port 3 sending line rate to egress port 4, etc.  Ingress
   ports N-1 and N will respectively oversubscribe egress port N-3 and
   egress port N-2 at 1% of line rate.  Measure the buffer size value
   by multiplying the number of extra frames sent by the frame size for
   each egress port.

   This test series MAY be repeated using all the different DSCP/COS
   values of traffic and then using Multicast traffic.

   4) Measure maximum DUT buffer size with many-to-one ports

   First iteration: ingress ports 1,2,... N-1 each sending
   [(1/[N-1])*99.98]+[1/[N-1]] percent of line rate per port to egress
   port N.  (Together, the N-1 ingress ports send 99.98% of line rate
   plus 1% of oversubscription toward the egress port.)

   Second iteration: ingress ports 2,... N each sending
   [(1/[N-1])*99.98]+[1/[N-1]] percent of line rate per port to egress
   port 1.

   Last iteration: ingress ports N,1,2...N-2 each sending
   [(1/[N-1])*99.98]+[1/[N-1]] percent of line rate per port to egress
   port N-1.

   This test series MAY be repeated using all the different COS values
   of traffic and then using Multicast traffic.

   Unicast traffic and then Multicast traffic SHOULD be used in order
   to determine the proportion of buffer for the documented selection
   of tests.  Also, the COS value of the packets SHOULD be provided for
   each test iteration, as the buffer allocation size MAY differ per
   COS value.  It is RECOMMENDED that the ingress and egress ports are
   varied in a random, but documented, fashion in multiple tests to
   measure the buffer size for each port of the DUT.

3.3 Reporting format

   The report MUST include:

   - The packet size used for the most efficient buffer measurement,
   along with the DSCP/COS value

   - The maximum port buffer size for each port

   - The maximum DUT buffer size

   - The packet size used in the test

   - The amount of oversubscription, if different from 1%

   - The number of ingress and egress ports, along with their location
   on the DUT

   - The repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   the results for each of the tests (min, max, avg)

   The percentage of variation is a metric providing a sense of how big
   the difference is between the measured value and the previous ones.

   For example, for a latency test where the minimum latency is
   measured, the percentage of variation of the minimum latency will
   indicate by how much this value has varied between the current test
   executed and the previous one.

   PV=((x2-x1)/x1)*100, where x2 is the minimum latency value in the
   current test and x1 is the minimum latency value obtained in the
   previous test.  For example, if the minimum latency measured in the
   previous test was 2.0 us and is 2.1 us in the current test, then
   PV=((2.1-2.0)/2.0)*100=5%.

   The same formula is used for the max and avg variations measured.

4. Microburst Testing

4.1 Objective

   To find the maximum amount of packet bursts that a DUT can sustain
   under various configurations.

4.2 Methodology

   A traffic generator MUST be connected to all ports on the DUT.  In
   order to cause congestion, two or more ingress ports MUST send
   bursts of packets destined for the same egress port.  The simplest
   of the setups would be two ingress ports and one egress port
   (2-to-1).

   The burst MUST be sent with an intensity of 100%, meaning that the
   burst of packets will be sent with a minimum inter-packet gap.  The
   number of packets contained in the burst is the trial variable and
   is increased until a non-zero packet loss is measured.  The
   aggregate amount of packets from all the senders will be used to
   calculate the maximum amount of microburst the DUT can sustain.
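   A minimal sketch of this trial logic follows (illustrative only;
   send_burst() is a hypothetical stand-in for the traffic generator
   and here simply simulates a DUT that can absorb a fixed number of
   packets):

      # Illustrative only: grow the burst size until the first
      # non-zero packet loss, then report the largest burst sustained
      # with zero loss.
      MAX_BURST = 12000  # hypothetical DUT capacity, in packets

      def send_burst(total_packets):
          """Return the number of packets lost for a burst this size."""
          return max(0, total_packets - MAX_BURST)

      def max_microburst(step=100):
          burst = step
          while send_burst(burst) == 0:
              burst += step
          return burst - step  # largest burst size with zero loss

      print("Maximum microburst sustained:", max_microburst(),
            "packets")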
   It is RECOMMENDED that the ingress and egress ports are varied in
   multiple tests to measure the maximum microburst capacity.

   The intensity of a microburst MAY be varied in order to obtain the
   microburst capacity at various ingress rates.

   It is RECOMMENDED that all ports on the DUT be tested simultaneously
   and in various configurations, in order to understand all the
   combinations of ingress ports, egress ports and intensities.

   An example would be:

   First Iteration: N-1 ingress ports sending to 1 egress port

   Second Iteration: N-2 ingress ports sending to 2 egress ports

   Last Iteration: 2 ingress ports sending to N-2 egress ports

4.3 Reporting Format

   The report MUST include:

   - The maximum number of packets received per ingress port with the
   maximum burst size obtained with zero packet loss

   - The packet size used in the test

   - The number of ingress and egress ports, along with their location
   on the DUT

   - The repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   the results (min, max, avg)

5. Head of Line Blocking

5.1 Objective

   Head-of-line blocking (HOLB) is a performance-limiting phenomenon
   that occurs when packets are held up by the first packet ahead
   waiting to be transmitted to a different output port.  This is
   defined in RFC 2889, section 5.5, Congestion Control.  This section
   expands on RFC 2889 in the context of Data Center Benchmarking.

   The objective of this test is to understand the DUT behavior in a
   head-of-line blocking scenario and to measure the packet loss.

5.2 Methodology

   In order to cause congestion in the form of head-of-line blocking,
   groups of four ports are used.  A group has 2 ingress and 2 egress
   ports.  The first ingress port MUST have two flows configured, each
   going to a different egress port.  The second ingress port will
   congest the second egress port by sending line-rate traffic.  The
   goal is to measure whether there is loss on the flow for the first
   egress port, which is not oversubscribed.

   A traffic generator MUST be connected to at least eight ports on the
   DUT and SHOULD be connected using all the DUT ports.

   1) Measure two groups with eight DUT ports

   First iteration: measure the packet loss for two groups with
   consecutive ports.

   The first group is composed of ports 1 through 4: ingress port 1 is
   sending 50% of its traffic to egress port 3 and 50% of its traffic
   to egress port 4.  Ingress port 2 is sending line rate to egress
   port 4.  Measure the amount of traffic loss for the traffic from
   ingress port 1 to egress port 3.

   The second group is composed of ports 5 through 8: ingress port 5 is
   sending 50% of its traffic to egress port 7 and 50% of its traffic
   to egress port 8.  Ingress port 6 is sending line rate to egress
   port 8.  Measure the amount of traffic loss for the traffic from
   ingress port 5 to egress port 7.
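   As an illustration of this port grouping (not part of the
   methodology; the 1..N port numbering convention and group layout
   below are assumptions for illustration), the first-iteration groups
   and flow roles can be enumerated as follows:

      # Illustrative only: first-iteration port grouping for the HOLB
      # test; N is assumed to be a multiple of 4.
      def holb_groups(n_ports):
          groups = []
          for first in range(1, n_ports, 4):
              i1, i2, e1, e2 = first, first + 1, first + 2, first + 3
              groups.append({
                  "split_flows": (i1, e1, e2),  # i1: 50% to e1, 50% to e2
                  "congestor": (i2, e2),        # i2: line rate to e2
                  "measure_loss_on": (i1, e1),  # flow that should not
                                                # experience loss
              })
          return groups

      for group in holb_groups(8):
          print(group)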
   Second iteration: repeat the first iteration by shifting all the
   ports from N to N+1.

   The first group is composed of ports 2 through 5: ingress port 2 is
   sending 50% of its traffic to egress port 4 and 50% of its traffic
   to egress port 5.  Ingress port 3 is sending line rate to egress
   port 5.  Measure the amount of traffic loss for the traffic from
   ingress port 2 to egress port 4.

   The second group is composed of ports 6 through 9: ingress port 6 is
   sending 50% of its traffic to egress port 8 and 50% of its traffic
   to egress port 9.  Ingress port 7 is sending line rate to egress
   port 9.  Measure the amount of traffic loss for the traffic from
   ingress port 6 to egress port 8.

   Last iteration: the first port of the first group is connected to
   the last DUT port and the last port of the second group is connected
   to the seventh port of the DUT.

   Measure the amount of traffic loss for the traffic from ingress port
   N to egress port 2 and from ingress port 4 to egress port 6.

   2) Measure with N/4 groups with N DUT ports

   The traffic from each ingress port is split across 4 egress ports
   (100/4 = 25%).

   First iteration: expand to fully utilize all the DUT ports in
   increments of four.  Repeat the methodology of 1) with all the
   groups of ports possible to achieve on the device, and measure the
   amount of traffic loss for each port group.

   Second iteration: shift the start of each consecutive group of ports
   by +1.

   Last iteration: shift the start of each consecutive group of ports
   by N-1, and measure the traffic loss for each port group.

5.3 Reporting Format

   For each test, the report MUST include:

   - The port configuration, including the number and location of the
   ingress and egress ports on the DUT

   - Whether HOLB was observed, in accordance with the HOLB test in
   section 5

   - The percentage of traffic loss

   - The repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   the results (min, max, avg)

6. Incast Stateful and Stateless Traffic

6.1 Objective

   The objective of this test is to measure the values for TCP Goodput
   and latency with a mix of large and small flows.  The test is
   designed to simulate a mixed environment of stateful flows that
   require high rates of goodput and stateless flows that require low
   latency.

6.2 Methodology

   In order to simulate the effects of stateless and stateful traffic
   on the DUT, there MUST be multiple ingress ports receiving traffic
   destined for the same egress port.  There also MAY be a mix of
   stateful and stateless traffic arriving on a single ingress port.
   The simplest setup would be 2 ingress ports receiving traffic
   destined to the same egress port.

   One ingress port MUST maintain a TCP connection through the DUT to a
   receiver connected to an egress port.  Traffic in the TCP stream
   MUST be sent at the maximum rate allowed by the traffic generator.
   While the TCP traffic is flowing through the DUT, the stateless
   traffic is sent, destined to a receiver on the same egress port.
   The stateless traffic MUST be a microburst of 100% intensity.

   It is RECOMMENDED that the ingress and egress ports are varied in
   multiple tests to measure the maximum microburst capacity.

   The intensity of a microburst MAY be varied in order to obtain the
   microburst capacity at various ingress rates.
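   As a minimal illustration of the metrics reported in Section 6.3
   (not part of the methodology; all input values are hypothetical),
   goodput counts only the application-layer payload delivered over the
   duration of the trial, excluding retransmitted or duplicate data:

      # Illustrative only: stateful-flow goodput and stateless-flow
      # latency summary.  Retransmitted or duplicate TCP data is
      # assumed to be excluded from the payload count beforehand.
      def goodput_bps(app_bytes_received, duration_seconds):
          return 8.0 * app_bytes_received / duration_seconds

      tcp_payload_bytes = 118750000000   # payload delivered in trial
      trial_seconds = 100.0
      stateless_latencies_us = [11.2, 10.9, 84.0, 11.5]

      print("Stateful flow goodput: %.2f Gbit/s"
            % (goodput_bps(tcp_payload_bytes, trial_seconds) / 1e9))
      print("Stateless flow latency (min/avg/max): %.1f/%.1f/%.1f us"
            % (min(stateless_latencies_us),
               sum(stateless_latencies_us) / len(stateless_latencies_us),
               max(stateless_latencies_us)))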
   It is RECOMMENDED that all ports on the DUT be used in the test.

   For example:

   Stateful traffic port variation (the number of egress ports MAY also
   vary across iterations):

   First Iteration: 1 ingress port receiving stateful TCP traffic and 1
   ingress port receiving stateless traffic, destined to 1 egress port

   Second Iteration: 2 ingress ports receiving stateful TCP traffic and
   1 ingress port receiving stateless traffic, destined to 1 egress
   port

   Last Iteration: N-2 ingress ports receiving stateful TCP traffic and
   1 ingress port receiving stateless traffic, destined to 1 egress
   port

   Stateless traffic port variation (the number of egress ports MAY
   also vary across iterations):

   First Iteration: 1 ingress port receiving stateful TCP traffic and 1
   ingress port receiving stateless traffic, destined to 1 egress port

   Second Iteration: 1 ingress port receiving stateful TCP traffic and
   2 ingress ports receiving stateless traffic, destined to 1 egress
   port

   Last Iteration: 1 ingress port receiving stateful TCP traffic and
   N-2 ingress ports receiving stateless traffic, destined to 1 egress
   port

6.3 Reporting Format

   The report MUST include the following:

   - The number of ingress and egress ports, along with the designation
   of stateful or stateless flow assignment

   - The stateful flow goodput

   - The stateless flow latency

   - The repeatability of the test needs to be indicated: the number of
   iterations of the same test and the percentage of variation between
   the results (min, max, avg)

7. References

7.1. Normative References

   [1] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, July 1991.

   [2] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

   [6] Bradner, S., "Key words for use in RFCs to Indicate Requirement
       Levels", BCP 14, RFC 2119, March 1997.

7.2. Informative References

   [3] Avramov, L. and J. Rapp, "Data Center Benchmarking Terminology",
       draft-ietf-bmwg-dcbench-terminology (work in progress), April
       2016.

   [4] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN
       Switching Devices", RFC 2889, August 2000.

   [5] Stopp, D. and B. Hickman, "Methodology for IP Multicast
       Benchmarking", RFC 3918, October 2004.

   [7] Chen, Y., Griffith, R., Liu, J., Katz, R. H., and A. D. Joseph,
       "Understanding TCP Incast Throughput Collapse in Datacenter
       Networks",
       http://www.eecs.berkeley.edu/~ychen2/professional/TCPIncastWREN2009.pdf

Authors' Addresses

   Lucien Avramov
   Google
   1600 Amphitheatre Parkway
   Mountain View, CA 94043
   United States
   Email: lucienav@google.com

   Jacob Rapp
   VMware
   3401 Hillview Ave
   Palo Alto, CA
   United States
   Phone: +1 650 857 3367
   Email: jrapp@vmware.com