Internet Engineering Task Force                                L. Avramov
Internet-Draft, Intended status: Informational              Cisco Systems
Expires April 21, 2016                                            J. Rapp
October 19, 2015                                                   VMware

                  Data Center Benchmarking Methodology
                 draft-ietf-bmwg-dcbench-methodology-01

Abstract

The purpose of this informational document is to establish test and evaluation methodology and measurement techniques for physical network equipment in the data center.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
   1.1. Requirements Language
   1.2. Methodology format and repeatability recommendation
2. Line Rate Testing
   2.1 Objective
   2.2 Methodology
   2.3 Reporting Format
3. Buffering Testing
   3.1 Objective
   3.2 Methodology
   3.3 Reporting Format
4. Microburst Testing
   4.1 Objective
   4.2 Methodology
   4.3 Reporting Format
5. Head of Line Blocking
   5.1 Objective
   5.2 Methodology
   5.3 Reporting Format
6. Incast Stateful and Stateless Traffic
   6.1 Objective
   6.2 Methodology
   6.3 Reporting Format
7. References
   7.1. Normative References
   7.2. Informative References
   7.3. URL References
Authors' Addresses

1. Introduction

Traffic patterns in the data center are not uniform and are constantly changing. They are dictated by the nature and variety of applications utilized in the data center. Traffic may be largely east-west in one data center and north-south in another, while some data centers combine both. Traffic patterns can be bursty in nature and contain many-to-one, many-to-many, or one-to-many flows. Each flow may be small and latency sensitive or large and throughput sensitive, and may carry a mix of UDP and TCP traffic.
All of these can coexist in a single cluster and flow through a single network device at the same time. Benchmarking of network devices has long relied on RFC 1242, RFC 2432, RFC 2544, RFC 2889 and RFC 3918. These benchmarks have largely focused on various latency attributes and the maximum throughput of the Device Under Test (DUT) being benchmarked. These standards are well suited to measuring theoretical maximum throughput, forwarding rates and latency under test conditions; however, they do not represent the real traffic patterns that may affect these networking devices.

The following provides a methodology for benchmarking a data center DUT, including congestion scenarios, switch buffer analysis, microbursts and head-of-line blocking, while also using a wide mix of traffic conditions.

1.1. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [6].

1.2. Methodology format and repeatability recommendation

The format used for each section of this document is the following:

-Objective

-Methodology

-Reporting Format

MUST: minimum test for the scenario described

SHOULD: recommended test for the scenario described

MAY: ideal test for the scenario described

For each test methodology described, it is key to obtain repeatable results. The recommendation is to perform enough iterations of the given test to make sure the result is accurate; this is especially important for Section 3, as buffering testing has historically been the least reliable.

2. Line Rate Testing

2.1 Objective

Provide a maximum-rate test for the performance values of throughput, latency and jitter. It is meant to provide the tests to run and the methodology to verify that a DUT is capable of forwarding packets at line rate under non-congested conditions.

2.2 Methodology

A traffic generator SHOULD be connected to all ports on the DUT. Two tests MUST be conducted: a port-pair test (RFC 2544/3918 compliant) and a full-mesh test of the DUT (RFC 2889/3918 compliant).

For all tests, the percentage of traffic sent per port capacity MUST be at most 99.98%, with no PPM adjustment, to ensure the DUT is stressed under worst-case conditions. Test results at a lower rate MAY be provided to better understand the performance increase in terms of latency and jitter when the rate is lower than 99.98%. The receive rate of the traffic needs to be captured during this test, expressed as a percentage of line rate.

The test MUST provide the latency values for minimum, average and maximum, for the exact same iteration of the test.

The test MUST provide the jitter values for minimum, average and maximum, for the exact same iteration of the test.

Alternatively, when a traffic generator cannot be connected to all ports on the DUT, a snake test MUST be used for line rate testing, excluding latency and jitter, as those then become irrelevant. The snake test consists of the following method:

-connect the first and last port of the DUT to a traffic generator

-connect all the ports in between back to back sequentially: port 2 to port 3, port 4 to port 5, and so on, up to port n-2 to port n-1, where n is the total number of ports of the DUT

-configure ports 1 and 2 in the same VLAN X, ports 3 and 4 in the same VLAN Y, and so on, up to ports n-1 and n in the same VLAN ZZZ

This snake test provides the capability to test line rate for Layer 2 and Layer 3 (RFC 2544/3918) in instances where a traffic generator with only two ports is available. Latency and jitter are not to be considered with this test.
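As an informative illustration only, the following Python sketch derives the back-to-back cabling and VLAN pairing of the snake test for a DUT with n ports. The function name and the VLAN identifiers are hypothetical and carry no normative meaning.

   # Informative sketch: derive the snake-test cabling and VLAN pairing
   # for a DUT with n ports (assumes an even number of ports).
   def snake_plan(n):
       """Return (cabling, vlans) for an n-port snake test."""
       # Ports 1 and n connect to the traffic generator; ports 2..n-1
       # are cabled back to back: 2-3, 4-5, ..., (n-2)-(n-1).
       cabling = [(p, p + 1) for p in range(2, n - 1, 2)]
       # Ports 1 and 2 share the first VLAN, ports 3 and 4 the second,
       # and so on, up to ports n-1 and n sharing the last VLAN.
       vlans = {vlan_id: (p, p + 1)
                for vlan_id, p in enumerate(range(1, n, 2), start=1)}
       return cabling, vlans

   cabling, vlans = snake_plan(8)
   print(cabling)   # [(2, 3), (4, 5), (6, 7)]
   print(vlans)     # {1: (1, 2), 2: (3, 4), 3: (5, 6), 4: (7, 8)}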
2.3 Reporting Format

The report MUST include:

-physical layer calibration information as defined in (placeholder for definitions draft)

-number of ports used

-reading of the throughput received, expressed as a percentage of bandwidth, while sending 99.98% of port capacity on each port, across packet sizes from 64 bytes up to 9216 bytes. As guidance, a 64-byte increment in packet size between iterations is ideal; 256-byte and 512-byte increments are also often used. The most common packet size sequence for the report is: 64 B, 128 B, 256 B, 512 B, 1024 B, 1518 B, 4096 B, 8000 B, 9216 B.

The pattern for testing can be expressed using RFC 6985 (IMIX Genome: Specification of Variable Packet Sizes for Additional Testing).

-throughput needs to be expressed as a percentage of total transmitted frames

-packet drops MUST be expressed as a packet count and SHOULD be expressed as a percentage of line rate

-for latency and jitter, values expressed in units of time (usually microseconds or nanoseconds), read across packet sizes from 64 bytes to 9216 bytes

-for latency and jitter, provide minimum, average and maximum values. If different iterations are done to gather the minimum, average and maximum, this SHOULD be specified in the report along with a justification of why the information could not have been gathered in the same test iteration

-for jitter, a histogram describing the population of packets measured per latency or latency bucket is RECOMMENDED

-the tests for throughput, latency and jitter MAY be conducted as individual independent events, with proper documentation in the report, but SHOULD be conducted at the same time.

3. Buffering Testing

3.1 Objective

To measure the size of the buffer of a DUT under various conditions. Buffer architectures differ between DUTs and can include egress buffering, shared egress buffering on a switch-on-chip (SoC), ingress buffering, or a combination of these. The test methodology covers the buffer measurement regardless of the buffer architecture used in the DUT.

3.2 Methodology

A traffic generator MUST be connected to all ports on the DUT.

The methodology for measuring the buffering of a data center switch is based on creating known congestion with a known fixed packet size and measuring the maximum latency value. The maximum latency will increase until the first packet drop occurs; at that point, the maximum latency value remains constant. This is the inflection point where the maximum latency changes to a constant value. There MUST be multiple ingress ports receiving a known number of frames of a known fixed size, destined for the same egress port, in order to create a known congestion event. The total number of packets sent from the oversubscribed port, minus one, multiplied by the packet size represents the maximum port buffer size at the measured inflection point.
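As an informative sketch only, the following Python fragment expresses the buffer size calculation above. The helper send_and_measure_max_latency() is hypothetical: it is assumed to send the given number of oversubscription frames of the given size and return the measured maximum latency.

   # Informative sketch of the buffer size estimate at the inflection
   # point described in Section 3.2.  send_and_measure_max_latency() is
   # a hypothetical traffic-generator helper, not defined here.
   def port_buffer_bytes(packet_size, send_and_measure_max_latency,
                         max_frames=1000000):
       """Estimate the port buffer size, in bytes."""
       previous = send_and_measure_max_latency(1, packet_size)
       for frames in range(2, max_frames):
           current = send_and_measure_max_latency(frames, packet_size)
           if current <= previous:
               # The maximum latency stopped increasing: the first drop
               # has occurred, so this frame count is the inflection
               # point.  Buffer size = (frames - 1) * packet size.
               return (frames - 1) * packet_size
           previous = current
       raise RuntimeError("inflection point not found")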
1) Measure the highest buffer efficiency

First iteration: ingress port 1 sending line rate to egress port 2, while port 3 sends a known low amount of oversubscription traffic (1% recommended) with a packet size of 64 bytes to egress port 2. Measure the buffer size value as the number of frames sent from the port sending the oversubscribed traffic up to the inflection point, multiplied by the frame size.

Second iteration: ingress port 1 sending line rate to egress port 2, while port 3 sends a known low amount of oversubscription traffic (1% recommended) with a packet size of 65 bytes to egress port 2. Measure the buffer size value as the number of frames sent from the port sending the oversubscribed traffic up to the inflection point, multiplied by the frame size.

Last iteration: ingress port 1 sending line rate to egress port 2, while port 3 sends a known low amount of oversubscription traffic (1% recommended) with a packet size of B bytes to egress port 2. Measure the buffer size value as the number of frames sent from the port sending the oversubscribed traffic up to the inflection point, multiplied by the frame size.

The value of B that is found to provide the largest buffer size identifies the highest buffer efficiency.

2) Measure maximum port buffer size

At the fixed packet size B determined in step 1) of this section, with a fixed default COS value of 0 and unicast traffic, proceed with the following:

First iteration: ingress port 1 sending line rate to egress port 2, while port 3 sends a known low amount of oversubscription traffic (1% recommended) with the same packet size to egress port 2. Measure the buffer size value by multiplying the number of extra frames sent by the frame size.

Second iteration: ingress port 2 sending line rate to egress port 3, while port 4 sends a known low amount of oversubscription traffic (1% recommended) with the same packet size to egress port 3. Measure the buffer size value by multiplying the number of extra frames sent by the frame size.

Last iteration: ingress port N-2 sending line rate traffic to egress port N-1, while port N sends a known low amount of oversubscription traffic (1% recommended) with the same packet size to egress port N-1. Measure the buffer size value by multiplying the number of extra frames sent by the frame size.

This test series MAY be repeated using all the different COS values of traffic, and then using multicast traffic, in order to determine whether there is any COS impact on the buffer size.

3) Measure maximum port pair buffer sizes

First iteration: ingress port 1 sending line rate to egress port 2; ingress port 3 sending line rate to egress port 4, and so on. Ingress ports N-1 and N will respectively oversubscribe egress port 2 and egress port 3 at 1% of line rate. Measure the buffer size value by multiplying the number of extra frames sent by the frame size for each egress port.

Second iteration: ingress port 1 sending line rate to egress port 2; ingress port 3 sending line rate to egress port 4, and so on. Ingress ports N-1 and N will respectively oversubscribe egress port 4 and egress port 5 at 1% of line rate. Measure the buffer size value by multiplying the number of extra frames sent by the frame size for each egress port.

Last iteration: ingress port 1 sending line rate to egress port 2; ingress port 3 sending line rate to egress port 4, and so on. Ingress ports N-1 and N will respectively oversubscribe egress port N-3 and egress port N-2 at 1% of line rate. Measure the buffer size value by multiplying the number of extra frames sent by the frame size for each egress port.

This test series MAY be repeated using all the different COS values of traffic and then using multicast traffic.

4) Measure maximum DUT buffer size with many-to-one ports

First iteration: ingress ports 1, 2, ..., N-1 each sending [(1/(N-1))*99.98]+[1/(N-1)] % of line rate per port to egress port N (a worked example of this rate calculation is given after this test series).

Second iteration: ingress ports 2, ..., N each sending [(1/(N-1))*99.98]+[1/(N-1)] % of line rate per port to egress port 1.

Last iteration: ingress ports N, 1, 2, ..., N-2 each sending [(1/(N-1))*99.98]+[1/(N-1)] % of line rate per port to egress port N-1.

This test series MAY be repeated using all the different COS values of traffic and then using multicast traffic.
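As an informative worked example of the rate calculation above (not a normative value), consider a hypothetical DUT with N = 32 ports:

   # Informative worked example of the many-to-one rate formula for a
   # hypothetical 32-port DUT (N = 32).
   N = 32
   per_port_rate = (1.0 / (N - 1)) * 99.98 + (1.0 / (N - 1))  # % of line rate
   aggregate_rate = per_port_rate * (N - 1)                   # offered to the egress port

   print(round(per_port_rate, 4))    # 3.2574 -> each ingress port sends ~3.26% of line rate
   print(round(aggregate_rate, 2))   # 100.98 -> ~1% oversubscription of the egress port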
Ingress port 316 N-1 and N will respectively over subscribe at 1% of line rate egress 317 port N-3 and port N-2. Measure the buffer size value by multiplying 318 the number of extra frames sent by the frame size for each egress 319 port. 321 This test series MAY be repeated using all different COS values of 322 traffic and then using Multicast type of traffic. 324 4) Measure maximum DUT buffer size with many to one ports 326 First iteration: ingress ports 1,2,... N-1 sending each [(1/[N- 327 1])*99.98]+[1/[N-1]] % of line rate per port to the N egress port. 329 Second iteration: ingress ports 2,... N sending each [(1/[N- 330 1])*99.98]+[1/[N-1]] % of line rate per port to the 1 egress port. 332 Last iteration: ingress ports N,1,2...N-2 sending each [(1/[N- 333 1])*99.98]+[1/[N-1]] % of line rate per port to the N-1 egress port. 335 This test series MAY be repeated using all different COS values of 336 traffic and then using Multicast type of traffic. 338 Unicast traffic and then Multicast traffic SHOULD be used in order to 339 determine the proportion of buffer for documented selection of tests. 341 Also the COS value for the packets SHOULD be provided for each test 342 iteration as the buffer allocation size MAY differ per COS value. It 343 is RECOMMENDED that the ingress and egress ports are varied in a 344 random, but documented fashion in multiple tests to measure the 345 buffer size for each port of the DUT. 347 3.3 Reporting format 349 The report MUST include: 351 - The packet size used for the most efficient buffer used, along 352 with COS value 354 - The maximum port buffer size for each port 356 - The maximum DUT buffer size 358 - The packet size used in the test 360 - The amount of over subscription if different than 1% 362 - The number of ingress and egress ports along with their location 363 on the DUT. 365 - The repeatability of the test needs to be indicated: number of 366 iteration of the same test and percentage of variation between 367 results (min, max, avg) 369 4 Microburst Testing 371 4.1 Objective 373 To find the maximum amount of packet bursts a DUT can sustain under 374 various configurations. 376 4.2 Methodology 378 A traffic generator MUST be connected to all ports on the DUT. In 379 order to cause congestion, two or more ingress ports MUST bursts 380 packets destined for the same egress port. The simplest of the setups 381 would be two ingress ports and one egress port (2-to-1). 383 The burst MUST be measure with an intensity of 100%, meaning the 384 burst of packets will be sent with a minimum inter-packet gap. The 385 amount of packet contained in the burst will be variable and increase 386 until there is a non-zero packet loss measured. The aggregate amount 387 of packets from all the senders will be used to calculate the maximum 388 amount of microburst the DUT can sustain. 390 It is RECOMMENDED that the ingress and egress ports are varied in 391 multiple tests to measure the maximum microburst capacity. 393 The intensity of a microburst MAY be varied in order to obtain the 394 microburst capacity at various ingress rates. 396 It is RECOMMENDED that all ports on the DUT will be tested 397 simultaneously and in various configurations in order to understand 398 all the combinations of ingress ports, egress ports and intensities. 
4.3 Reporting Format

The report MUST include:

- The maximum number of packets received per ingress port with the maximum burst size obtained with zero packet loss

- The packet size used in the test

- The number of ingress and egress ports, along with their location on the DUT

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results (min, max, avg)

5. Head of Line Blocking

5.1 Objective

Head-of-line blocking (HOL blocking) is a performance-limiting phenomenon that occurs when packets are held up by the first packet ahead, which is waiting to be transmitted to a different output port. This is defined in RFC 2889, Section 5.5 (Congestion Control). This section expands on RFC 2889 in the context of data center benchmarking. The objective of this test is to understand the DUT behavior under a head-of-line blocking scenario and to measure the packet loss.

5.2 Methodology

In order to cause congestion in the form of head-of-line blocking, groups of four ports are used. A group has 2 ingress and 2 egress ports. The first ingress port MUST have two flows configured, each going to a different egress port. The second ingress port will congest the second egress port by sending line rate. The goal is to measure whether there is loss on the first egress port, which is not oversubscribed.

A traffic generator MUST be connected to at least eight ports on the DUT and SHOULD be connected using all the DUT ports.

1) Measure two groups with eight DUT ports

First iteration: measure the packet loss for two groups with consecutive ports.

The first group is composed of: ingress port 1 sending 50% of traffic to egress port 3 and ingress port 1 sending 50% of traffic to egress port 4. Ingress port 2 is sending line rate to egress port 4. Measure the amount of traffic loss for the traffic from ingress port 1 to egress port 3.

The second group is composed of: ingress port 5 sending 50% of traffic to egress port 7 and ingress port 5 sending 50% of traffic to egress port 8. Ingress port 6 is sending line rate to egress port 8. Measure the amount of traffic loss for the traffic from ingress port 5 to egress port 7.

Second iteration: repeat the first iteration by shifting all the ports from N to N+1.

The first group is composed of: ingress port 2 sending 50% of traffic to egress port 4 and ingress port 2 sending 50% of traffic to egress port 5. Ingress port 3 is sending line rate to egress port 5. Measure the amount of traffic loss for the traffic from ingress port 2 to egress port 4.

The second group is composed of: ingress port 6 sending 50% of traffic to egress port 8 and ingress port 6 sending 50% of traffic to egress port 9. Ingress port 7 is sending line rate to egress port 9. Measure the amount of traffic loss for the traffic from ingress port 6 to egress port 8.

Last iteration: the first port of the first group is connected to the last DUT port and the last port of the second group is connected to the seventh port of the DUT.

Measure the amount of traffic loss for the traffic from ingress port N to egress port 2 and from ingress port 4 to egress port 6.

2) Measure with N/4 groups with N DUT ports

First iteration: expand to fully utilize all the DUT ports in increments of four. Repeat the methodology of 1) with all the port groups possible on the device and measure the amount of traffic loss for each port group.

Second iteration: shift the starting port of each consecutive group by +1.

Last iteration: shift the starting port of each consecutive group by N-1 and measure the traffic loss for each port group.
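As an informative sketch only, the following Python fragment derives the groups of four ports and the shifted iterations described in this section. The tuple layout (ingress1, ingress2, egress1, egress2) and the wrap-around port numbering are assumptions for illustration.

   # Informative sketch: derive head-of-line blocking port groups for an
   # N-port DUT and a given shift.  Ports are numbered 1..N and wrap
   # around the DUT.
   def holb_groups(n_ports, shift=0):
       """Return a list of (ingress1, ingress2, egress1, egress2) groups."""
       groups = []
       for start in range(1, n_ports + 1, 4):
           ports = [((start + shift + i - 1) % n_ports) + 1 for i in range(4)]
           ingress1, ingress2, egress1, egress2 = ports
           # ingress1 splits 50/50 between egress1 and egress2; ingress2
           # sends line rate to egress2; loss is measured on the
           # ingress1 -> egress1 flow, which is not oversubscribed.
           groups.append((ingress1, ingress2, egress1, egress2))
       return groups

   print(holb_groups(8, shift=0))  # [(1, 2, 3, 4), (5, 6, 7, 8)]
   print(holb_groups(8, shift=1))  # [(2, 3, 4, 5), (6, 7, 8, 1)]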
5.3 Reporting Format

For each test, the report MUST include:

- The port configuration, including the number and location of ingress and egress ports located on the DUT

- Whether HOL blocking was observed

- Percentage of traffic loss

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results (min, max, avg)

6. Incast Stateful and Stateless Traffic

6.1 Objective

The objective of this test is to measure the effect on TCP goodput and latency of a mix of large and small flows. The test is designed to simulate a mixed environment of stateful flows that require high rates of goodput and stateless flows that require low latency.

6.2 Methodology

In order to simulate the effects of stateless and stateful traffic on the DUT, there MUST be multiple ingress ports receiving traffic destined for the same egress port. There also MAY be a mix of stateful and stateless traffic arriving on a single ingress port. The simplest setup would be 2 ingress ports receiving traffic destined to the same egress port.

One ingress port MUST maintain a TCP connection through the DUT to a receiver connected to an egress port. Traffic in the TCP stream MUST be sent at the maximum rate allowed by the traffic generator. While the TCP traffic is flowing through the DUT, the stateless traffic is sent destined to a receiver on the same egress port. The stateless traffic MUST be a microburst of 100% intensity.

It is RECOMMENDED that the ingress and egress ports be varied in multiple tests to measure the maximum microburst capacity.

The intensity of a microburst MAY be varied in order to obtain the microburst capacity at various ingress rates.

It is RECOMMENDED that all ports on the DUT be used in the test.

For example:

Stateful traffic port variation (the number of egress ports MAY vary across iterations as well):

First Iteration: 1 ingress port receiving stateful TCP traffic and 1 ingress port receiving stateless traffic destined to 1 egress port

Second Iteration: 2 ingress ports receiving stateful TCP traffic and 1 ingress port receiving stateless traffic destined to 1 egress port

Last Iteration: N-2 ingress ports receiving stateful TCP traffic and 1 ingress port receiving stateless traffic destined to 1 egress port

Stateless traffic port variation (the number of egress ports MAY vary across iterations as well):

First Iteration: 1 ingress port receiving stateful TCP traffic and 1 ingress port receiving stateless traffic destined to 1 egress port

Second Iteration: 1 ingress port receiving stateful TCP traffic and 2 ingress ports receiving stateless traffic destined to 1 egress port

Last Iteration: 1 ingress port receiving stateful TCP traffic and N-2 ingress ports receiving stateless traffic destined to 1 egress port
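As an informative sketch only, the following Python fragment enumerates the two iteration plans above for a hypothetical N-port DUT. The tuple layout (stateful ingress ports, stateless ingress ports, egress ports) is an assumption for illustration.

   # Informative sketch: enumerate the incast iteration plans for an
   # N-port DUT.  One egress port is used in every iteration shown.
   def incast_iterations(n_ports):
       """Yield (stateful_ingress, stateless_ingress, egress) counts."""
       # Stateful traffic port variation: 1 to N-2 TCP ingress ports,
       # one stateless ingress port, one egress port.
       for stateful in range(1, n_ports - 1):
           yield (stateful, 1, 1)
       # Stateless traffic port variation: one TCP ingress port,
       # 1 to N-2 stateless ingress ports, one egress port.
       for stateless in range(1, n_ports - 1):
           yield (1, stateless, 1)

   for plan in incast_iterations(6):
       print(plan)   # (1, 1, 1), (2, 1, 1), ..., (1, 4, 1)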
6.3 Reporting Format

The report MUST include the following:

- Number of ingress and egress ports, along with their designation as stateful or stateless

- Stateful goodput

- Stateless latency

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results (min, max, avg)

7. References

7.1. Normative References

[1] Bradner, S., "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

[2] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

[6] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

7.2. Informative References

[3] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN Switching Devices", RFC 2889, August 2000.

[4] Stopp, D. and B. Hickman, "Methodology for IP Multicast Benchmarking", RFC 3918, October 2004.

7.3. URL References

[5] Yanpei Chen, Rean Griffith, Junda Liu, Randy H. Katz, Anthony D. Joseph, "Understanding TCP Incast Throughput Collapse in Datacenter Networks", http://www.eecs.berkeley.edu/~ychen2/professional/TCPIncastWREN2009.pdf

Authors' Addresses

Lucien Avramov
Cisco Systems
170 West Tasman Drive
San Jose, CA 95134
United States
Phone: +1 408 526 7686
Email: lavramov@cisco.com

Jacob Rapp
VMware
3401 Hillview Ave
Palo Alto, CA
United States
Phone: +1 650 857 3367
Email: jrapp@vmware.com