Internet Engineering Task Force                               L. Avramov
Internet-Draft, Intended status: Informational                    Google
Expires: October 29, 2017                                        J. Rapp
April 27, 2017                                                    VMware

                  Data Center Benchmarking Methodology
                  draft-ietf-bmwg-dcbench-methodology-04

Abstract

The purpose of this informational document is to establish test and evaluation methodology and measurement techniques for physical network equipment in the data center.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Requirements Language
      1.2. Methodology format and repeatability recommendation
   2. Line Rate Testing
      2.1 Objective
      2.2 Methodology
      2.3 Reporting Format
   3. Buffering Testing
      3.1 Objective
      3.2 Methodology
      3.3 Reporting Format
   4. Microburst Testing
      4.1 Objective
      4.2 Methodology
      4.3 Reporting Format
   5. Head of Line Blocking
      5.1 Objective
      5.2 Methodology
      5.3 Reporting Format
   6. Incast Stateful and Stateless Traffic
      6.1 Objective
      6.2 Methodology
      6.3 Reporting Format
   7. Security Considerations
   8. IANA Considerations
   9. References
      9.1. Normative References
      9.2. Informative References
      9.3. Acknowledgements
   Authors' Addresses

1. Introduction

Traffic patterns in the data center are not uniform and are constantly changing. They are dictated by the nature and variety of applications utilized in the data center. Traffic can be largely east-west in one data center and north-south in another, while some data centers combine both. Traffic patterns can be bursty in nature and contain many-to-one, many-to-many, or one-to-many flows. Each flow may also be small and latency sensitive or large and throughput sensitive while containing a mix of UDP and TCP traffic, all of which can coexist in a single cluster and flow through a single network device at the same time. Benchmarking of network devices has long used RFC1242, RFC2432, RFC2544, RFC2889 and RFC3918, which have largely been focused on various latency attributes and Throughput [2] of the Device Under Test [DUT] being benchmarked.
These standards are good at measuring theoretical Throughput, forwarding rates and latency under testing conditions; however, they do not represent real traffic patterns that may affect these networking devices.

The following sections provide a methodology for benchmarking data center DUTs, including congestion scenarios, switch buffer analysis, microburst and head-of-line blocking, while also using a wide mix of traffic conditions.

1.1. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

1.2. Methodology format and repeatability recommendation

The format used for each section of this document is the following:

-Objective

-Methodology

-Reporting Format

Additional interpretation of RFC2119 terms:

MUST: required metric or benchmark for the scenario described (minimum)

SHOULD or RECOMMENDED: strongly suggested metric for the scenario described

MAY: comprehensive metric for the scenario described

For each test methodology described, it is critical to obtain repeatability of the results. The recommendation is to perform enough iterations of the given test to make sure the results are consistent. This is especially important for section 3, as buffering testing has historically been the least reliable. The number of iterations SHOULD be explicitly reported. The relative standard deviation SHOULD be below 10%.

2. Line Rate Testing

2.1 Objective

Provide a maximum-rate test for the performance values of Throughput, latency and jitter. It is meant to provide the tests to perform and the methodology to verify that a DUT is capable of forwarding packets at line rate under non-congested conditions.

2.2 Methodology

A traffic generator SHOULD be connected to all ports on the DUT. Two tests MUST be conducted: a port-pair test [RFC 2544/3918 section 15 compliant] and a full-mesh test [RFC 2889/3918 section 16 compliant].

For all tests, the percentage of traffic sent per port capacity MUST be 99.98% at most, with no PPM adjustment, to ensure stressing the DUT in worst-case conditions. Test results at a lower rate MAY be provided for better understanding of the performance increase in terms of latency and jitter when the rate is lower than 99.98%. The receiving rate of the traffic SHOULD be captured during this test, in % of line rate.

The test MUST provide the statistics of minimum, average and maximum of the latency distribution, for the exact same iteration of the test.

The test MUST provide the statistics of minimum, average and maximum of the jitter distribution, for the exact same iteration of the test.

Alternatively, when a traffic generator cannot be connected to all ports on the DUT, a snake test MUST be used for line rate testing, excluding latency and jitter, as those measurements become irrelevant. The snake test consists of the following method:

-connect the first and last port of the DUT to a traffic generator

-connect back to back, sequentially, all the ports in between: port 2 to port 3, port 4 to port 5, and so on up to port n-2 to port n-1, where n is the total number of ports of the DUT

-configure ports 1 and 2 in the same VLAN X, ports 3 and 4 in the same VLAN Y, and so on, with ports n-1 and n in the same VLAN ZZZ

This snake test provides a capability to test line rate for Layer 2 and Layer 3 RFC 2544/3918 in instances where a traffic generator with only two ports is available. The latency and jitter are not to be considered with this test.
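As a non-normative illustration, the port pairing and VLAN assignment for the snake test can be derived mechanically. The short Python sketch below assumes a DUT with an even number of ports numbered 1 through n; the function name and the example port count are hypothetical and not part of the methodology.

   # Non-normative sketch: derive the snake-test cabling and VLAN plan
   # for an n-port DUT where port 1 and port n are attached to the
   # two-port traffic generator.

   def snake_plan(n):
       """Return (cabling, vlans) for a DUT with ports 1..n, n even."""
       assert n >= 4 and n % 2 == 0, "sketch assumes an even port count"
       # Ports 2..n-1 are looped back to back in consecutive pairs:
       # port 2 to port 3, port 4 to port 5, ..., port n-2 to port n-1.
       cabling = [(p, p + 1) for p in range(2, n - 1, 2)]
       # VLANs pair the ports so traffic snakes through every port:
       # ports 1-2 in VLAN 1, ports 3-4 in VLAN 2, ..., ports n-1 and n
       # in VLAN n/2.
       vlans = {v: (2 * v - 1, 2 * v) for v in range(1, n // 2 + 1)}
       return cabling, vlans

   cabling, vlans = snake_plan(8)
   print(cabling)   # [(2, 3), (4, 5), (6, 7)]
   print(vlans)     # {1: (1, 2), 2: (3, 4), 3: (5, 6), 4: (7, 8)}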
2.3 Reporting Format

The report MUST include:

-physical layer calibration information, as defined in Data Center Benchmarking Terminology (draft-ietf-bmwg-dcbench-terminology) section 4

-number of ports used

-reading for Throughput received, in percentage of bandwidth, while sending 99.98% of port capacity on each port, for each packet size from 64 bytes to 9216 bytes. As guidance, an increment of 64 bytes between each packet size is ideal; 256-byte and 512-byte increments are also often used. The most common packet sizes for the report are: 64 B, 128 B, 256 B, 512 B, 1024 B, 1518 B, 4096 B, 8000 B and 9216 B.

The pattern for testing can be expressed using RFC 6985 [IMIX Genome: Specification of Variable Packet Sizes for Additional Testing].

-Throughput needs to be expressed in % of total transmitted frames

-packet drops MUST be expressed as a count of packets and SHOULD be expressed in % of line rate

-latency and jitter values, expressed in units of time [usually microseconds or nanoseconds], read across packet sizes from 64 bytes to 9216 bytes

-for latency and jitter, provide minimum, average and maximum values. If different iterations are done to gather the minimum, average and maximum, this SHOULD be specified in the report, along with a justification of why the information could not have been gathered in the same test iteration

-for jitter, a histogram describing the population of packets measured per latency or latency buckets is RECOMMENDED

-the tests for Throughput, latency and jitter MAY be conducted as individual independent trials, with proper documentation in the report, but SHOULD be conducted at the same time

3. Buffering Testing

3.1 Objective

To measure the size of the buffer of a DUT under typical/many/multiple conditions. Buffer architectures between multiple DUTs can differ and include egress buffering, shared egress buffering switch-on-chip [SoC], ingress buffering or a combination. The test methodology covers the buffer measurement regardless of the buffer architecture used in the DUT.

3.2 Methodology

A traffic generator MUST be connected to all ports on the DUT.

The methodology for measuring buffering for a data-center switch is based on using known congestion of a known, fixed packet size, along with maximum latency value measurements. The maximum latency will increase until the first packet drop occurs. At this point, the maximum latency value will remain constant. This is the inflexion point, where the maximum latency stops changing and settles at a constant value. There MUST be multiple ingress ports receiving a known amount of frames at a known fixed size, destined for the same egress port, in order to create a known congestion condition. The total amount of packets sent from the oversubscribed port minus one, multiplied by the packet size, represents the maximum port buffer size at the measured inflexion point.
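As a non-normative illustration of the calculation above, the following Python sketch derives the maximum port buffer size from a hypothetical measured frame count at the inflexion point; the function name and the example values are assumptions, not part of the methodology.

   # Non-normative sketch: maximum port buffer size at the inflexion
   # point, i.e. the last trial before the first packet drop, where
   # the maximum latency stops increasing.

   def port_buffer_bytes(frames_sent_at_inflexion, frame_size_bytes):
       """(frames sent from the oversubscribed port - 1) * frame size."""
       return (frames_sent_at_inflexion - 1) * frame_size_bytes

   # Hypothetical example: 12,001 64-byte frames were sent from the
   # oversubscribed port before the maximum latency stopped growing.
   print(port_buffer_bytes(12001, 64))   # 768000 bytes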
1) Measure the highest buffer efficiency

First iteration: ingress port 1 sending line rate to egress port 2, while port 3 sending a known low amount of over-subscription traffic (1% recommended) with a packet size of 64 bytes to egress port 2. Measure the buffer size value as the number of frames sent from the port sending the oversubscribed traffic up to the inflexion point, multiplied by the frame size.

Second iteration: ingress port 1 sending line rate to egress port 2, while port 3 sending a known low amount of over-subscription traffic (1% recommended) with a packet size of 65 bytes to egress port 2. Measure the buffer size value as the number of frames sent from the port sending the oversubscribed traffic up to the inflexion point, multiplied by the frame size.

Last iteration: ingress port 1 sending line rate to egress port 2, while port 3 sending a known low amount of over-subscription traffic (1% recommended) with a packet size of B bytes to egress port 2. Measure the buffer size value as the number of frames sent from the port sending the oversubscribed traffic up to the inflexion point, multiplied by the frame size.

The value of B found to provide the largest buffer size is the packet size with the highest buffer efficiency.

2) Measure maximum port buffer size

At the fixed packet size B determined in procedure 1), for a fixed default DSCP/COS value of 0 and for unicast traffic, proceed with the following:

First iteration: ingress port 1 sending line rate to egress port 2, while port 3 sending a known low amount of over-subscription traffic (1% recommended) with the same packet size to egress port 2. Measure the buffer size value by multiplying the number of extra frames sent by the frame size.

Second iteration: ingress port 2 sending line rate to egress port 3, while port 4 sending a known low amount of over-subscription traffic (1% recommended) with the same packet size to egress port 3. Measure the buffer size value by multiplying the number of extra frames sent by the frame size.

Last iteration: ingress port N-2 sending line rate traffic to egress port N-1, while port N sending a known low amount of over-subscription traffic (1% recommended) with the same packet size to egress port N-1. Measure the buffer size value by multiplying the number of extra frames sent by the frame size.

This test series MAY be repeated using all the different DSCP/COS values of traffic, and then using Multicast traffic, in order to find out whether there is any DSCP/COS impact on the buffer size.

3) Measure maximum port pair buffer sizes

First iteration: ingress port 1 sending line rate to egress port 2; ingress port 3 sending line rate to egress port 4, etc. Ingress ports N-1 and N will respectively oversubscribe, at 1% of line rate, egress port 2 and egress port 3. Measure the buffer size value by multiplying the number of extra frames sent by the frame size for each egress port.

Second iteration: ingress port 1 sending line rate to egress port 2; ingress port 3 sending line rate to egress port 4, etc. Ingress ports N-1 and N will respectively oversubscribe, at 1% of line rate, egress port 4 and egress port 5. Measure the buffer size value by multiplying the number of extra frames sent by the frame size for each egress port.
Last iteration: ingress port 1 sending line rate to egress port 2; ingress port 3 sending line rate to egress port 4, etc. Ingress ports N-1 and N will respectively oversubscribe, at 1% of line rate, egress port N-3 and egress port N-2. Measure the buffer size value by multiplying the number of extra frames sent by the frame size for each egress port.

This test series MAY be repeated using all the different DSCP/COS values of traffic, and then using Multicast traffic.

4) Measure maximum DUT buffer size with many-to-one ports

First iteration: ingress ports 1, 2, ..., N-1 each sending [(1/(N-1))*99.98 + (1/(N-1))] % of line rate to egress port N (so that the N-1 ingress ports together offer 100.98% of the egress line rate, i.e., the 99.98% line-rate load plus 1% of over-subscription).

Second iteration: ingress ports 2, ..., N each sending [(1/(N-1))*99.98 + (1/(N-1))] % of line rate to egress port 1.

Last iteration: ingress ports N, 1, 2, ..., N-2 each sending [(1/(N-1))*99.98 + (1/(N-1))] % of line rate to egress port N-1.

This test series MAY be repeated using all the different COS values of traffic, and then using Multicast traffic.

Unicast traffic and then Multicast traffic SHOULD be used in order to determine the proportion of buffer for the documented selection of tests. Also, the COS value for the packets SHOULD be provided for each test iteration, as the buffer allocation size MAY differ per COS value. It is RECOMMENDED that the ingress and egress ports are varied in a random, but documented, fashion in multiple tests to measure the buffer size for each port of the DUT.

3.3 Reporting Format

The report MUST include:

- The packet size used for the most efficient buffer used, along with the DSCP/COS value

- The maximum port buffer size for each port

- The maximum DUT buffer size

- The packet size used in the test

- The amount of over-subscription, if different than 1%

- The number of ingress and egress ports, along with their location on the DUT

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results for each of the tests (min, max, avg)

The percentage of variation is a metric providing a sense of how large the difference is between the measured value and the previous ones.

For example, for a latency test where the minimum latency is measured, the percentage of variation of the minimum latency will indicate by how much this value has varied between the current test executed and the previous one.

PV = ((x2 - x1) / x1) * 100, where x2 is the minimum latency value in the current test and x1 is the minimum latency value obtained in the previous test. For example, if the minimum latency was 5.0 microseconds in the previous test and 5.2 microseconds in the current test, then PV = ((5.2 - 5.0) / 5.0) * 100 = 4%.

The same formula is used for max and avg variations measured.

4. Microburst Testing

4.1 Objective

To find the maximum amount of packet bursts a DUT can sustain under various configurations.

4.2 Methodology

A traffic generator MUST be connected to all ports on the DUT. In order to cause congestion, two or more ingress ports MUST send bursts of packets destined for the same egress port. The simplest of the setups would be two ingress ports and one egress port (2-to-1).

The burst MUST be sent with an intensity of 100%, meaning the burst of packets will be sent with a minimum inter-packet gap. The number of packets contained in the burst is the trial variable and is increased until a non-zero packet loss is measured. The aggregate number of packets from all the senders will be used to calculate the maximum microburst amount the DUT can sustain.
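As a non-normative illustration of this search, the following Python sketch increases the burst size until loss is observed; send_burst() and packets_lost() are hypothetical placeholders for traffic-generator control calls and are not defined by this document.

   # Non-normative sketch: grow the burst size (at 100% intensity,
   # i.e. minimum inter-packet gap) until a non-zero packet loss is
   # measured, then report the largest lossless burst.

   def max_lossless_burst(send_burst, packets_lost, start=1, step=1):
       burst = start
       while True:
           send_burst(burst)            # one trial at 100% intensity
           if packets_lost() > 0:
               return burst - step      # last burst size with zero loss
           burst += step

   # Example with a simulated DUT that can absorb 4096 packets:
   _sent = {"burst": 0}
   send = lambda b: _sent.update(burst=b)
   lost = lambda: max(0, _sent["burst"] - 4096)
   print(max_lossless_burst(send, lost, start=1024, step=64))  # 4096

A larger step, or a binary search over the burst size, would serve the same purpose, as long as the reported value is the largest burst forwarded with zero loss.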
It is RECOMMENDED that the ingress and egress ports are varied in multiple tests to measure the maximum microburst capacity.

The intensity of a microburst MAY be varied in order to obtain the microburst capacity at various ingress rates.

It is RECOMMENDED that all ports on the DUT be tested simultaneously, and in various configurations, in order to understand all the combinations of ingress ports, egress ports and intensities.

An example would be:

First Iteration: N-1 Ingress ports sending to 1 Egress Port

Second Iteration: N-2 Ingress ports sending to 2 Egress Ports

Last Iteration: 2 Ingress ports sending to N-2 Egress Ports

4.3 Reporting Format

The report MUST include:

- The maximum number of packets received per ingress port with the maximum burst size obtained with zero packet loss

- The packet size used in the test

- The number of ingress and egress ports, along with their location on the DUT

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results (min, max, avg)

5. Head of Line Blocking

5.1 Objective

Head-of-line blocking (HOL blocking) is a performance-limiting phenomenon that occurs when packets are held up by the first packet ahead waiting to be transmitted to a different output port. This is defined in RFC 2889 section 5.5, Congestion Control. This section expands on RFC 2889 in the context of Data Center Benchmarking.

The objective of this test is to understand the DUT behavior in a head-of-line blocking scenario and to measure the packet loss.

5.2 Methodology

In order to cause congestion in the form of head-of-line blocking, groups of four ports are used. A group has 2 ingress and 2 egress ports. The first ingress port MUST have two flows configured, each going to a different egress port. The second ingress port will congest the second egress port by sending line rate. The goal is to measure whether there is loss on the flow to the first egress port, which is not over-subscribed.

A traffic generator MUST be connected to at least eight ports on the DUT and SHOULD be connected using all the DUT ports.

1) Measure two groups with eight DUT ports

First iteration: measure the packet loss for two groups with consecutive ports.

The first group is composed of: ingress port 1 sending 50% of traffic to egress port 3 and ingress port 1 sending 50% of traffic to egress port 4. Ingress port 2 is sending line rate to egress port 4. Measure the amount of traffic loss for the traffic from ingress port 1 to egress port 3.

The second group is composed of: ingress port 5 sending 50% of traffic to egress port 7 and ingress port 5 sending 50% of traffic to egress port 8. Ingress port 6 is sending line rate to egress port 8. Measure the amount of traffic loss for the traffic from ingress port 5 to egress port 7.
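The flow layout for one group of four consecutive ports can be written down mechanically. The following Python sketch is a non-normative illustration of the first iteration above; the function name and the printed format are assumptions, not part of the methodology.

   # Non-normative sketch: flows for one head-of-line-blocking group
   # of four consecutive DUT ports starting at port p.  For the first
   # iteration above, the two groups start at p = 1 and p = 5.

   def holb_group(p):
       """(ingress, egress, % of line rate) for ports p, p+1, p+2, p+3."""
       return [
           (p,     p + 2,  50.0),   # monitored flow: measure its loss
           (p,     p + 3,  50.0),   # flow sharing the congested egress
           (p + 1, p + 3, 100.0),   # congesting flow at line rate
       ]

   for start in (1, 5):
       for ingress, egress, load in holb_group(start):
           print("port %d -> port %d at %.1f%% of line rate"
                 % (ingress, egress, load))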
Second iteration: repeat the first iteration by shifting all the ports from N to N+1.

The first group is composed of: ingress port 2 sending 50% of traffic to egress port 4 and ingress port 2 sending 50% of traffic to egress port 5. Ingress port 3 is sending line rate to egress port 5. Measure the amount of traffic loss for the traffic from ingress port 2 to egress port 4.

The second group is composed of: ingress port 6 sending 50% of traffic to egress port 8 and ingress port 6 sending 50% of traffic to egress port 9. Ingress port 7 is sending line rate to egress port 9. Measure the amount of traffic loss for the traffic from ingress port 6 to egress port 8.

Last iteration: when the first port of the first group is connected to the last DUT port and the last port of the second group is connected to the seventh port of the DUT.

Measure the amount of traffic loss for the traffic from ingress port N to egress port 2 and from ingress port 4 to egress port 6.

2) Measure with N/4 groups with N DUT ports

The traffic from the ingress port is split across 4 egress ports (100/4 = 25%).

First iteration: Expand to fully utilize all the DUT ports in increments of four. Repeat the methodology of 1) with all the groups of ports possible to achieve on the device, and measure the amount of traffic loss for each port group.

Second iteration: Shift the start of each consecutive port group by +1.

Last iteration: Shift the start of each consecutive port group by N-1 and measure the traffic loss for each port group.

5.3 Reporting Format

For each test, the report MUST include:

- The port configuration, including the number and location of ingress and egress ports located on the DUT

- If HOLB was observed in accordance with the HOLB test in section 5

- Percent of traffic loss

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results (min, max, avg)

6. Incast Stateful and Stateless Traffic

6.1 Objective

The objective of this test is to measure the values for TCP Goodput and latency with a mix of large and small flows. The test is designed to simulate a mixed environment of stateful flows that require high rates of goodput and stateless flows that require low latency.

6.2 Methodology

In order to simulate the effects of stateless and stateful traffic on the DUT, there MUST be multiple ingress ports receiving traffic destined for the same egress port. There also MAY be a mix of stateful and stateless traffic arriving on a single ingress port. The simplest setup would be 2 ingress ports receiving traffic destined to the same egress port.

One ingress port MUST maintain a TCP connection through the DUT to a receiver connected to an egress port. Traffic in the TCP stream MUST be sent at the maximum rate allowed by the traffic generator. At the same time as the TCP traffic is flowing through the DUT, the stateless traffic is sent destined to a receiver on the same egress port. The stateless traffic MUST be a microburst of 100% intensity.

It is RECOMMENDED that the ingress and egress ports are varied in multiple tests to measure the maximum microburst capacity.

The intensity of a microburst MAY be varied in order to obtain the microburst capacity at various ingress rates.

It is RECOMMENDED that all ports on the DUT be used in the test.
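The quantities reported in the Reporting Format below (Section 6.3) are the goodput of the stateful flow and the latency of the stateless flow. The following Python sketch is a non-normative illustration of how those report values can be derived; the function names and the example figures are hypothetical, and goodput is taken here simply as application-layer bytes successfully delivered per unit of time.

   # Non-normative sketch: report helpers for the incast test.

   def goodput_bps(payload_bytes_delivered, duration_seconds):
       """TCP goodput of the stateful flow, in bits per second."""
       return (payload_bytes_delivered * 8) / duration_seconds

   def latency_summary(samples_us):
       """(min, avg, max) latency of the stateless flow, microseconds."""
       return (min(samples_us),
               sum(samples_us) / len(samples_us),
               max(samples_us))

   # Hypothetical measured values, for illustration only.
   print(goodput_bps(1250000000, 1.0))        # 10000000000.0 bits/s
   print(latency_summary([12.0, 15.5, 48.7])) # (12.0, ~25.4, 48.7)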
For example:

Stateful Traffic port variation:

During iterations, the number of Egress ports MAY vary as well.

First Iteration: 1 Ingress port receiving stateful TCP traffic and 1 Ingress port receiving stateless traffic, destined to 1 Egress Port

Second Iteration: 2 Ingress ports receiving stateful TCP traffic and 1 Ingress port receiving stateless traffic, destined to 1 Egress Port

Last Iteration: N-2 Ingress ports receiving stateful TCP traffic and 1 Ingress port receiving stateless traffic, destined to 1 Egress Port

Stateless Traffic port variation:

During iterations, the number of Egress ports MAY vary as well.

First Iteration: 1 Ingress port receiving stateful TCP traffic and 1 Ingress port receiving stateless traffic, destined to 1 Egress Port

Second Iteration: 1 Ingress port receiving stateful TCP traffic and 2 Ingress ports receiving stateless traffic, destined to 1 Egress Port

Last Iteration: 1 Ingress port receiving stateful TCP traffic and N-2 Ingress ports receiving stateless traffic, destined to 1 Egress Port

6.3 Reporting Format

The report MUST include the following:

- Number of ingress and egress ports, along with the designation of stateful or stateless flow assignment

- Stateful flow goodput

- Stateless flow latency

- The repeatability of the test needs to be indicated: number of iterations of the same test and percentage of variation between results (min, max, avg)

7. Security Considerations

Benchmarking activities as described in this memo are limited to technology characterization using controlled stimuli in a laboratory environment, with dedicated address space and the constraints specified in the sections above.

The benchmarking network topology will be an independent test setup and MUST NOT be connected to devices that may forward the test traffic into a production network, or misroute traffic to the test management network.

Further, benchmarking is performed on a "black-box" basis, relying solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for benchmarking purposes. Any implications for network security arising from the DUT/SUT SHOULD be identical in the lab and in production networks.

8. IANA Considerations

No IANA action is requested at this time.

9. References

9.1. Normative References

[RFC1242] Bradner, S., "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

9.2. Informative References

[1] Avramov, L. and J. Rapp, "Data Center Benchmarking Terminology", draft-ietf-bmwg-dcbench-terminology, April 2017.

[2] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN Switching Devices", RFC 2889, August 2000.

[3] Stopp, D. and B. Hickman, "Methodology for IP Multicast Benchmarking", RFC 3918, October 2004.

[4] Chen, Y., Griffith, R., Liu, J., Katz, R., and A. Joseph, "Understanding TCP Incast Throughput Collapse in Datacenter Networks", http://www.eecs.berkeley.edu/~ychen2/professional/TCPIncastWREN2009.pdf.

9.3. Acknowledgements

The authors would like to thank Alfred Morton and Scott Bradner for their reviews and feedback.
Authors' Addresses

Lucien Avramov
Google
1600 Amphitheatre Parkway
Mountain View, CA 94043
United States

Email: lucienav@google.com

Jacob Rapp
VMware
3401 Hillview Ave
Palo Alto, CA
United States

Phone: +1 650 857 3367
Email: jrapp@vmware.com