Internet Engineering Task Force                              L. Avramov
INTERNET-DRAFT, Intended Status: Informational                   Google
Expires: December 11, 2017                                       J. Rapp
June 9, 2017                                                      VMware

                  Data Center Benchmarking Methodology
                 draft-ietf-bmwg-dcbench-methodology-09

Abstract

The purpose of this informational document is to establish test and
evaluation methodology and measurement techniques for physical network
equipment in the data center.

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF). Note that other groups may also distribute working
documents as Internet-Drafts. The list of current Internet-Drafts is at
http://datatracker.ietf.org/drafts/current.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents carefully,
as they describe your rights and restrictions with respect to this
document. Code Components extracted from this document must include
Simplified BSD License text as described in Section 4.e of the Trust
Legal Provisions and are provided without warranty as described in the
Simplified BSD License.

Table of Contents

1. Introduction
   1.1. Requirements Language
   1.2. Methodology format and repeatability recommendation
2. Line Rate Testing
   2.1 Objective
   2.2 Methodology
   2.3 Reporting Format
3. Buffering Testing
   3.1 Objective
   3.2 Methodology
   3.3 Reporting format
4 Microburst Testing
   4.1 Objective
   4.2 Methodology
   4.3 Reporting Format
5. Head of Line Blocking
   5.1 Objective
   5.2 Methodology
   5.3 Reporting Format
6. Incast Stateful and Stateless Traffic
   6.1 Objective
   6.2 Methodology
   6.3 Reporting Format
7. Security Considerations
8. IANA Considerations
9. References
   9.1. Normative References
   9.2. Informative References
   9.3. Acknowledgements
Authors' Addresses

1. Introduction

Traffic patterns in the data center are not uniform and are constantly
changing. They are dictated by the nature and variety of applications
utilized in the data center. Traffic can be largely east-west in one
data center and north-south in another, while other data centers may
combine both. Traffic patterns can be bursty in nature and contain
many-to-one, many-to-many, or one-to-many flows. Each flow may also be
small and latency sensitive or large and throughput sensitive while
containing a mix of UDP and TCP traffic. All of these can coexist in a
single cluster and flow through a single network device
simultaneously. Benchmarking of network devices has long used
[RFC1242], [RFC2432], [RFC2544], [RFC2889] and [RFC3918], which have
largely been focused on various latency attributes and the Throughput
[RFC2889] of the Device Under Test (DUT) being benchmarked. These
standards are good at measuring theoretical Throughput, forwarding
rates and latency under testing conditions; however, they do not
represent real traffic patterns that may affect these networking
devices.

The following provides a methodology for benchmarking Data Center
physical network equipment DUTs, including congestion scenarios, switch
buffer analysis, microburst, and head of line blocking, while also
using a wide mix of traffic conditions. The terminology document [1] is
a prerequisite.

1.1. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].

1.2. Methodology format and repeatability recommendation
The format used for each section of this document is the following:

-Objective

-Methodology

-Reporting Format

Additional interpretation of RFC 2119 terms:

MUST: required metric or benchmark for the scenario described (minimum)

SHOULD or RECOMMENDED: strongly suggested metric for the scenario
described

MAY: optional metric for the scenario described

For each test methodology described, it is critical to obtain
repeatability in the results. The recommendation is to perform enough
iterations of the given test and to make sure the result is consistent.
This is especially important for section 3, as the buffering testing
has historically been the least reliable. The number of iterations
SHOULD be explicitly reported. The relative standard deviation SHOULD
be below 10%.

2. Line Rate Testing

2.1 Objective

Provide a maximum rate test for the performance values for Throughput,
latency and jitter. It is meant to provide the tests to perform and the
methodology for verifying that a DUT is capable of forwarding packets
at line rate under non-congested conditions.

2.2 Methodology

A traffic generator SHOULD be connected to all ports on the DUT. Two
tests MUST be conducted: a port-pair test (compliant with RFC 2544/3918
section 15) and a full-mesh DUT test (compliant with RFC 2889/3918
section 16).

For all tests, the percentage of traffic per port capacity sent MUST be
99.98% at most, with no PPM adjustment, to ensure stressing the DUT in
worst-case conditions. Test results at a lower rate MAY be provided for
better understanding of the performance increase in terms of latency
and jitter when the rate is lower than 99.98%. The receiving rate of
the traffic SHOULD be captured during this test as a % of line rate.

The test MUST provide the statistics of minimum, average and maximum of
the latency distribution, for the exact same iteration of the test.

The test MUST provide the statistics of minimum, average and maximum of
the jitter distribution, for the exact same iteration of the test.

Alternatively, when a traffic generator cannot be connected to all
ports on the DUT, a snake test MUST be used for line rate testing,
excluding latency and jitter as those become irrelevant. The snake test
consists of the following method:

-connect the first and last port of the DUT to a traffic generator

-connect back to back sequentially all the ports in between: port 2 to
port 3, port 4 to port 5, etc., up to port n-2 to port n-1, where n is
the total number of ports of the DUT

-configure ports 1 and 2 in the same VLAN X, ports 3 and 4 in the same
VLAN Y, etc., and ports n-1 and n in the same VLAN Z.

This snake test provides the capability to test line rate for Layer 2
and Layer 3 RFC 2544/3918 in instances where a traffic generator with
only two ports is available. The latency and jitter are not to be
considered with this test.
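The cabling and VLAN pairing above can be derived mechanically from the
port count. The following sketch is informative only and is not part of
the methodology; it assumes a DUT with an even number of ports numbered
1 to n, and the function name is illustrative.

   # Informative sketch: derive the snake-test cabling and VLAN pairing
   # for a DUT with n ports (n assumed even, ports numbered 1..n).
   def snake_test_plan(n):
       # Ports 1 and n face the traffic generator.
       cabling = [("generator", 1), (n, "generator")]
       # Intermediate ports are cabled back to back: 2-3, 4-5, ..., (n-2)-(n-1).
       cabling += [(p, p + 1) for p in range(2, n - 1, 2)]
       # Ports 1 and 2 share a VLAN, ports 3 and 4 share the next VLAN,
       # and so on up to ports n-1 and n.
       vlans = {p: (p + 1) // 2 for p in range(1, n + 1)}
       return cabling, vlans

   cabling, vlans = snake_test_plan(8)
   print(cabling)  # [('generator', 1), (8, 'generator'), (2, 3), (4, 5), (6, 7)]
   print(vlans)    # {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4}

With this pairing, traffic injected on port 1 is forwarded through
every port pair in sequence before exiting on port n.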
2.3 Reporting Format

The report MUST include:

-physical layer calibration information as defined in [1] section 4.

-number of ports used

-reading for "Throughput received in percentage of bandwidth", while
sending 99.98% of port capacity on each port, for each packet size from
64 bytes to 9216 bytes. As guidance, an increment of 64 bytes in packet
size between each iteration is ideal; increments of 256 bytes and 512
bytes are also often used. The most common packet sizes ordered for the
report are: 64 B, 128 B, 256 B, 512 B, 1024 B, 1518 B, 4096 B, 8000 B,
9216 B.

The pattern for testing can be expressed using [RFC6985].

-Throughput needs to be expressed as a % of total transmitted frames

-Packet drops MUST be expressed as a count of packets and SHOULD be
expressed as a % of line rate

-For latency and jitter, values expressed in units of time (usually
microseconds or nanoseconds), read across packet sizes from 64 bytes to
9216 bytes

-For latency and jitter, provide minimum, average and maximum values.
If different iterations are done to gather the minimum, average and
maximum, it SHOULD be specified in the report along with a
justification of why the information could not have been gathered in
the same test iteration

-For jitter, a histogram describing the population of packets measured
per latency or latency buckets is RECOMMENDED

-The tests for Throughput, latency and jitter MAY be conducted as
individual independent trials, with proper documentation in the report,
but SHOULD be conducted at the same time.

-The methodology assumes that the DUT has at least nine ports, as
certain methodologies require that number of ports or more.

3. Buffering Testing

3.1 Objective

To measure the size of the buffer of a DUT under multiple conditions.
Buffer architectures between multiple DUTs can differ and include
egress buffering, shared egress buffering SoC (Switch-on-Chip), ingress
buffering or a combination. The test methodology covers the buffer
measurement regardless of the buffer architecture used in the DUT.

3.2 Methodology

A traffic generator MUST be connected to all ports on the DUT.

The methodology for measuring buffering for a data-center switch is
based on using known congestion of a known fixed packet size along with
maximum latency value measurements. The maximum latency will increase
until the first packet drop occurs. At this point, the maximum latency
value will remain constant. This is the point of inflection of this
maximum latency change to a constant value. There MUST be multiple
ingress ports receiving a known amount of frames at a known fixed size,
destined for the same egress port in order to create a known congestion
condition. The total amount of packets sent from the oversubscribed
port minus one, multiplied by the packet size, represents the maximum
port buffer size at the measured inflection point.
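As an informative illustration of the calculation above, the following
sketch derives the maximum port buffer size from the trial at which the
first drop is observed; the trial data structure and names are
hypothetical and not part of the methodology.

   # Informative sketch: maximum port buffer size at the inflection
   # point, i.e., (frames sent from the oversubscribing port before the
   # first drop - 1) * packet size.  'trials' is a hypothetical list of
   # (frames_sent_from_oversub_port, packets_dropped) pairs, ordered by
   # increasing number of frames sent.
   def max_port_buffer_bytes(trials, packet_size_bytes):
       for frames_sent, dropped in trials:
           if dropped > 0:
               return (frames_sent - 1) * packet_size_bytes
       return None  # no drop observed: the buffer was never filled

   # Hypothetical example: the first drop occurs at 40961 frames of 64 B.
   print(max_port_buffer_bytes([(40960, 0), (40961, 1)], 64))  # 2621440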
1) Measure the highest buffer efficiency

First iteration: ingress port 1 sending line rate to egress port 2,
while port 3 sends a known low amount of over-subscription traffic (1%
recommended) with a packet size of 64 bytes to egress port 2. Measure
the buffer size value as the number of frames sent from the port
sending the oversubscribed traffic up to the inflection point,
multiplied by the frame size.

Second iteration: ingress port 1 sending line rate to egress port 2,
while port 3 sends a known low amount of over-subscription traffic (1%
recommended) with a packet size of 65 bytes to egress port 2. Measure
the buffer size value as the number of frames sent from the port
sending the oversubscribed traffic up to the inflection point,
multiplied by the frame size.

Last iteration: ingress port 1 sending line rate to egress port 2,
while port 3 sends a known low amount of over-subscription traffic (1%
recommended) with a packet size of B bytes to egress port 2. Measure
the buffer size value as the number of frames sent from the port
sending the oversubscribed traffic up to the inflection point,
multiplied by the frame size.

When the B value is found to provide the largest buffer size, then
packet size B allows the highest buffer efficiency.

2) Measure maximum port buffer size

At the fixed packet size B determined in procedure 1), for a fixed
default Differentiated Services Code Point (DSCP)/Class of Service
(COS) value of 0 and for unicast traffic, proceed with the following:

First iteration: ingress port 1 sending line rate to egress port 2,
while port 3 sends a known low amount of over-subscription traffic (1%
recommended) with the same packet size to egress port 2. Measure the
buffer size value by multiplying the number of extra frames sent by the
frame size.

Second iteration: ingress port 2 sending line rate to egress port 3,
while port 4 sends a known low amount of over-subscription traffic (1%
recommended) with the same packet size to egress port 3. Measure the
buffer size value by multiplying the number of extra frames sent by the
frame size.

Last iteration: ingress port N-2 sending line rate traffic to egress
port N-1, while port N sends a known low amount of over-subscription
traffic (1% recommended) with the same packet size to egress port N-1.
Measure the buffer size value by multiplying the number of extra frames
sent by the frame size.

This test series MAY be repeated using all different DSCP/COS values of
traffic and then using Multicast type of traffic, in order to find out
if there is any DSCP/COS impact on the buffer size.

3) Measure maximum port pair buffer sizes

First iteration: ingress port 1 sending line rate to egress port 2;
ingress port 3 sending line rate to egress port 4, etc. Ingress ports
N-1 and N will respectively over-subscribe, at 1% of line rate, egress
port 2 and egress port 3. Measure the buffer size value by multiplying
the number of extra frames sent by the frame size for each egress port.

Second iteration: ingress port 1 sending line rate to egress port 2;
ingress port 3 sending line rate to egress port 4, etc. Ingress ports
N-1 and N will respectively over-subscribe, at 1% of line rate, egress
port 4 and egress port 5. Measure the buffer size value by multiplying
the number of extra frames sent by the frame size for each egress port.

Last iteration: ingress port 1 sending line rate to egress port 2;
ingress port 3 sending line rate to egress port 4, etc. Ingress ports
N-1 and N will respectively over-subscribe, at 1% of line rate, egress
port N-3 and egress port N-2. Measure the buffer size value by
multiplying the number of extra frames sent by the frame size for each
egress port.

This test series MAY be repeated using all different DSCP/COS values of
traffic and then using Multicast type of traffic.
4) Measure maximum DUT buffer size with many-to-one ports

First iteration: ingress ports 1, 2, ..., N-1, each sending
[(1/[N-1])*99.98]+[1/[N-1]] % of line rate per port to egress port N.

Second iteration: ingress ports 2, ..., N, each sending
[(1/[N-1])*99.98]+[1/[N-1]] % of line rate per port to egress port 1.

Last iteration: ingress ports N, 1, 2, ..., N-2, each sending
[(1/[N-1])*99.98]+[1/[N-1]] % of line rate per port to egress port N-1.

This test series MAY be repeated using all different COS values of
traffic and then using Multicast type of traffic.

Unicast traffic and then Multicast traffic SHOULD be used in order to
determine the proportion of buffer for the documented selection of
tests. Also, the COS value for the packets SHOULD be provided for each
test iteration, as the buffer allocation size MAY differ per COS value.
It is RECOMMENDED that the ingress and egress ports are varied in a
random, but documented, fashion in multiple tests to measure the buffer
size for each port of the DUT.

3.3 Reporting format

The report MUST include:

- The packet size used for the most efficient buffer used, along with
the DSCP/COS value

- The maximum port buffer size for each port

- The maximum DUT buffer size

- The packet size used in the test

- The amount of over-subscription, if different than 1%

- The number of ingress and egress ports, along with their location on
the DUT

- The repeatability of the test needs to be indicated: number of
iterations of the same test and percentage of variation between results
for each of the tests (min, max, avg)

The percentage of variation is a metric providing a sense of how big
the difference is between the measured value and the previous ones.

For example, for a latency test where the minimum latency is measured,
the percentage of variation of the minimum latency will indicate by how
much this value has varied between the current test executed and the
previous one.

PV = ((x2-x1)/x1)*100, where x2 is the minimum latency value in the
current test and x1 is the minimum latency value obtained in the
previous test.

The same formula is used for the maximum and average variations
measured.
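The percentage of variation above, together with the relative standard
deviation recommended in section 1.2, can be computed as in the
following informative sketch; the example values and function names are
hypothetical.

   # Informative sketch: repeatability metrics across test iterations.
   import statistics

   def percentage_of_variation(x1, x2):
       # PV = ((x2 - x1) / x1) * 100, where x1 is the value from the
       # previous test run and x2 the value from the current test run.
       return ((x2 - x1) / x1) * 100.0

   def relative_standard_deviation_pct(samples):
       # Relative standard deviation in %, which SHOULD stay below 10%
       # per section 1.2.
       return statistics.stdev(samples) / statistics.mean(samples) * 100.0

   # Hypothetical minimum-latency readings (microseconds) per iteration.
   min_latency_us = [10.2, 10.4, 10.1, 10.3]
   print(percentage_of_variation(min_latency_us[0], min_latency_us[1]))
   print(relative_standard_deviation_pct(min_latency_us))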
4 Microburst Testing

4.1 Objective

To find the maximum amount of packet bursts a DUT can sustain under
various configurations.

4.2 Methodology

A traffic generator MUST be connected to all ports on the DUT. In order
to cause congestion, two or more ingress ports MUST send bursts of
packets destined for the same egress port. The simplest of the setups
would be two ingress ports and one egress port (2-to-1).

The burst MUST be sent with an intensity of 100%, meaning the burst of
packets will be sent with a minimum inter-packet gap. The amount of
packets contained in the burst will be the trial variable and will be
increased until a non-zero packet loss is measured. The aggregate
amount of packets from all the senders will be used to calculate the
maximum amount of microburst the DUT can sustain.

It is RECOMMENDED that the ingress and egress ports are varied in
multiple tests to measure the maximum microburst capacity.

The intensity of a microburst MAY be varied in order to obtain the
microburst capacity at various ingress rates. The intensity of a
microburst is defined in [1].

It is RECOMMENDED that all ports on the DUT be tested simultaneously
and in various configurations in order to understand all the
combinations of ingress ports, egress ports and intensities.

An example would be:

First Iteration: N-1 Ingress ports sending to 1 Egress Port

Second Iteration: N-2 Ingress ports sending to 2 Egress Ports

Last Iteration: 2 Ingress ports sending to N-2 Egress Ports

4.3 Reporting Format

The report MUST include:

- The maximum number of packets received per ingress port with the
maximum burst size obtained with zero packet loss

- The packet size used in the test

- The number of ingress and egress ports, along with their location on
the DUT

- The repeatability of the test needs to be indicated: number of
iterations of the same test and percentage of variation between results
(min, max, avg)

5. Head of Line Blocking

5.1 Objective

Head-of-line blocking (HOL blocking) is a performance-limiting
phenomenon that occurs when packets are held up by the first packet
ahead waiting to be transmitted to a different output port. This is
defined in RFC 2889 section 5.5, Congestion Control. This section
expands on RFC 2889 in the context of Data Center Benchmarking.

The objective of this test is to understand the DUT behavior under a
head-of-line blocking scenario and to measure the packet loss.

5.2 Methodology

In order to cause congestion in the form of head-of-line blocking,
groups of four ports are used. A group has 2 ingress and 2 egress
ports. The first ingress port MUST have two flows configured, each
going to a different egress port. The second ingress port will congest
the second egress port by sending line rate. The goal is to measure if
there is loss on the flow for the first egress port, which is not
over-subscribed.

A traffic generator MUST be connected to at least eight ports on the
DUT and SHOULD be connected using all the DUT ports.

1) Measure two groups with eight DUT ports

First iteration: measure the packet loss for two groups with
consecutive ports.

The first group is composed of: ingress port 1 is sending 50% of
traffic to egress port 3 and ingress port 1 is sending 50% of traffic
to egress port 4. Ingress port 2 is sending line rate to egress port 4.
Measure the amount of traffic loss for the traffic from ingress port 1
to egress port 3.

The second group is composed of: ingress port 5 is sending 50% of
traffic to egress port 7 and ingress port 5 is sending 50% of traffic
to egress port 8. Ingress port 6 is sending line rate to egress port 8.
Measure the amount of traffic loss for the traffic from ingress port 5
to egress port 7.

Second iteration: repeat the first iteration by shifting all the ports
from N to N+1.

The first group is composed of: ingress port 2 is sending 50% of
traffic to egress port 4 and ingress port 2 is sending 50% of traffic
to egress port 5. Ingress port 3 is sending line rate to egress port 5.
Measure the amount of traffic loss for the traffic from ingress port 2
to egress port 4.

The second group is composed of: ingress port 6 is sending 50% of
traffic to egress port 8 and ingress port 6 is sending 50% of traffic
to egress port 9. Ingress port 7 is sending line rate to egress port 9.
Measure the amount of traffic loss for the traffic from ingress port 6
to egress port 8.

Last iteration: when the first port of the first group is connected to
the last DUT port and the last port of the second group is connected to
the seventh port of the DUT.

Measure the amount of traffic loss for the traffic from ingress port N
to egress port 2 and from ingress port 4 to egress port 6.
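The group construction and per-iteration port shift described above can
be expressed compactly. The following sketch is informative only; it
assumes ports numbered 1 to N, wraps around past port N in the later
iterations, and uses illustrative names.

   # Informative sketch: build the two HOLB port groups for a given
   # iteration of test 1), shifting every port by 'shift' and wrapping
   # around after port n_ports.
   def holb_groups(n_ports, shift):
       def port(p):
           return (p - 1 + shift) % n_ports + 1
       groups = []
       for base in (1, 5):  # two groups of four consecutive ports
           ingress_a, ingress_b = port(base), port(base + 1)
           egress_a, egress_b = port(base + 2), port(base + 3)
           groups.append({
               # Ingress A splits its traffic 50/50 across both egress ports.
               "split_flows": [(ingress_a, egress_a), (ingress_a, egress_b)],
               # Ingress B congests the second egress port at line rate.
               "line_rate_flow": (ingress_b, egress_b),
               # Loss is measured on the non-oversubscribed egress port.
               "measure_loss_on": (ingress_a, egress_a),
           })
       return groups

   print(holb_groups(8, shift=0))  # first iteration: ports 1-4 and 5-8
   print(holb_groups(8, shift=1))  # second iteration, wrapping after port 8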
2) Measure with N/4 groups with N DUT ports

The traffic from the ingress port is split across 4 egress ports
(100/4 = 25%).

First iteration: Expand to fully utilize all the DUT ports in
increments of four. Repeat the methodology of 1) with all the groups of
ports possible to achieve on the device and measure the amount of
traffic loss for each port group.

Second iteration: Shift the starting port of each consecutive port
group by +1.

Last iteration: Shift the starting port of each consecutive port group
by N-1 and measure the traffic loss for each port group.

5.3 Reporting Format

For each test the report MUST include:

- The port configuration, including the number and location of ingress
and egress ports located on the DUT

- If HOLB was observed in accordance with the HOLB test in section 5

- Percent of traffic loss

- The repeatability of the test needs to be indicated: number of
iterations of the same test and percentage of variation between results
(min, max, avg)

6. Incast Stateful and Stateless Traffic

6.1 Objective

The objective of this test is to measure the values for TCP Goodput [4]
and latency with a mix of large and small flows. The test is designed
to simulate a mixed environment of stateful flows that require high
rates of goodput and stateless flows that require low latency.

6.2 Methodology

In order to simulate the effects of stateless and stateful traffic on
the DUT, there MUST be multiple ingress ports receiving traffic
destined for the same egress port. There also MAY be a mix of stateful
and stateless traffic arriving on a single ingress port. The simplest
setup would be 2 ingress ports receiving traffic destined to the same
egress port.

One ingress port MUST be maintaining a TCP connection through the
ingress port to a receiver connected to an egress port. Traffic in the
TCP stream MUST be sent at the maximum rate allowed by the traffic
generator. At the same time as the TCP traffic is flowing through the
DUT, the stateless traffic is sent destined to a receiver on the same
egress port. The stateless traffic MUST be a microburst of 100%
intensity.

It is RECOMMENDED that the ingress and egress ports are varied in
multiple tests to measure the maximum microburst capacity.

The intensity of a microburst MAY be varied in order to obtain the
microburst capacity at various ingress rates.

It is RECOMMENDED that all ports on the DUT be used in the test.
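As an informative aid for the goodput reporting in section 6.3, the
following sketch shows one way the stateful flow's goodput could be
computed from the application-layer bytes delivered; the input values
and function name are hypothetical.

   # Informative sketch: TCP goodput of the stateful flow, i.e., the
   # application-layer bytes delivered to the receiver divided by the
   # measurement interval.  Retransmitted bytes are implicitly excluded
   # because only the bytes handed to the application are counted.
   def goodput_bps(app_bytes_received, duration_seconds):
       return (app_bytes_received * 8) / duration_seconds

   # Hypothetical example: 1.2 GB delivered to the application in 10 s.
   print(goodput_bps(1.2e9, 10.0) / 1e9, "Gbit/s")  # 0.96 Gbit/s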
For example, the port assignment can be varied as follows:

Stateful Traffic port variation:

During the iterations, the number of Egress ports MAY vary as well.

First Iteration: 1 Ingress port receiving stateful TCP traffic and 1
Ingress port receiving stateless traffic destined to 1 Egress Port

Second Iteration: 2 Ingress ports receiving stateful TCP traffic and 1
Ingress port receiving stateless traffic destined to 1 Egress Port

Last Iteration: N-2 Ingress ports receiving stateful TCP traffic and 1
Ingress port receiving stateless traffic destined to 1 Egress Port

Stateless Traffic port variation:

During the iterations, the number of Egress ports MAY vary as well.

First Iteration: 1 Ingress port receiving stateful TCP traffic and 1
Ingress port receiving stateless traffic destined to 1 Egress Port

Second Iteration: 1 Ingress port receiving stateful TCP traffic and 2
Ingress ports receiving stateless traffic destined to 1 Egress Port

Last Iteration: 1 Ingress port receiving stateful TCP traffic and N-2
Ingress ports receiving stateless traffic destined to 1 Egress Port

6.3 Reporting Format

The report MUST include the following:

- Number of ingress and egress ports, along with the designation of
stateful or stateless flow assignment

- Stateful flow goodput

- Stateless flow latency

- The repeatability of the test needs to be indicated: number of
iterations of the same test and percentage of variation between results
(min, max, avg)

7. Security Considerations

Benchmarking activities as described in this memo are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the constraints specified
in the sections above.

The benchmarking network topology will be an independent test setup and
MUST NOT be connected to devices that may forward the test traffic into
a production network, or misroute traffic to the test management
network.

Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.

8. IANA Considerations

No IANA action is requested at this time.

9. References

9.1. Normative References

[RFC1242] Bradner, S., "Benchmarking Terminology for Network
Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242, July 1991,
<https://www.rfc-editor.org/info/rfc1242>.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, DOI 10.17487/RFC2544, March
1999, <https://www.rfc-editor.org/info/rfc2544>.

9.2. Informative References

[1] Avramov, L. and J. Rapp, "Data Center Benchmarking Terminology",
April 2017.

[RFC2889] Mandeville, R. and J. Perser, "Benchmarking Methodology for
LAN Switching Devices", RFC 2889, DOI 10.17487/RFC2889, August 2000,
<https://www.rfc-editor.org/info/rfc2889>.

[RFC3918] Stopp, D. and B. Hickman, "Methodology for IP Multicast
Benchmarking", RFC 3918, DOI 10.17487/RFC3918, October 2004,
<https://www.rfc-editor.org/info/rfc3918>.

[RFC6985] Morton, A., "IMIX Genome: Specification of Variable Packet
Sizes for Additional Testing", RFC 6985, DOI 10.17487/RFC6985, July
2013, <https://www.rfc-editor.org/info/rfc6985>.

[4] Chen, Y., Griffith, R., Liu, J., Katz, R. H., and A. D. Joseph,
"Understanding TCP Incast Throughput Collapse in Datacenter Networks",
<http://yanpeichen.com/professional/usenixLoginIncastReady.pdf>.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March
1997, <https://www.rfc-editor.org/info/rfc2119>.

[RFC2432] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC
2432, DOI 10.17487/RFC2432, October 1998,
<https://www.rfc-editor.org/info/rfc2432>.

9.3. Acknowledgements

The authors would like to thank Alfred Morton and Scott Bradner for
their reviews and feedback.
Authors' Addresses

Lucien Avramov
Google
1600 Amphitheatre Parkway
Mountain View, CA 94043
United States
Phone: +1 408 774 9077
Email: lucienav@google.com

Jacob Rapp
VMware
3401 Hillview Ave
Palo Alto, CA
United States
Phone: +1 650 857 3367
Email: jrapp@vmware.com