1 Network Working Group B. Constantine 2 Internet Draft JDSU 3 Intended status: Informational T. Copley 4 Expires: May 2015 Level-3 5 November 12, 2014 R. Krishnan 6 Brocade Communications 8 Traffic Management Benchmarking 9 draft-ietf-bmwg-traffic-management-01.txt 11 Status of this Memo 13 This Internet-Draft is submitted in full conformance with the 14 provisions of BCP 78 and BCP 79. 16 Internet-Drafts are working documents of the Internet Engineering 17 Task Force (IETF). Note that other groups may also distribute 18 working documents as Internet-Drafts. The list of current Internet- 19 Drafts is at http://datatracker.ietf.org/drafts/current/. 21 Internet-Drafts are draft documents valid for a maximum of six months 22 and may be updated, replaced, or obsoleted by other documents at any 23 time. It is inappropriate to use Internet-Drafts as reference 24 material or to cite them other than as "work in progress." 26 This Internet-Draft will expire on May 12, 2015. 28 Copyright Notice 30 Copyright (c) 2014 IETF Trust and the persons identified as the 31 document authors. All rights reserved.
33 This document is subject to BCP 78 and the IETF Trust's Legal 34 Provisions Relating to IETF Documents 35 (http://trustee.ietf.org/license-info) in effect on the date of 36 publication of this document. Please review these documents 37 carefully, as they describe your rights and restrictions with respect 38 to this document. Code Components extracted from this document must 39 include Simplified BSD License text as described in Section 4.e of 40 the Trust Legal Provisions and are provided without warranty as 41 described in the Simplified BSD License. 43 Abstract 45 This framework describes a practical methodology for benchmarking the 46 traffic management capabilities of networking devices (i.e. policing, 47 shaping, etc.). The goal is to provide a repeatable test method that 48 objectively compares performance of the device's traffic management 49 capabilities and to specify the means to benchmark traffic management 50 with representative application traffic. 52 Table of Contents 54 1. Introduction...................................................4 55 1.1. Traffic Management Overview...............................4 56 1.2. DUT Lab Configuration and Testing Overview................5 57 2. Conventions used in this document..............................7 58 3. Scope and Goals................................................8 59 4. Traffic Benchmarking Metrics...................................9 60 4.1. Metrics for Stateless Traffic Tests.......................9 61 4.2. Metrics for Stateful Traffic Tests.......................11 62 5. Tester Capabilities...........................................11 63 5.1. Stateless Test Traffic Generation........................11 64 5.2. Stateful Test Pattern Generation.........................12 65 5.2.1. TCP Test Pattern Definitions........................13 66 6. Traffic Benchmarking Methodology..............................15 67 6.1. Policing Tests...........................................15 68 6.1.1 Policer Individual Tests................................15 69 6.1.2 Policer Capacity Tests..............................16 70 6.1.2.1 Maximum Policers on Single Physical Port..........16 71 6.1.2.2 Single Policer on All Physical Ports..............17 72 6.1.2.3 Maximum Policers on All Physical Ports............17 73 6.2. Queue/Scheduler Tests....................................17 74 6.2.1 Queue/Scheduler Individual Tests........................17 75 6.2.1.1 Testing Queue/Scheduler with Stateless Traffic....17 76 6.2.1.2 Testing Queue/Scheduler with Stateful Traffic.....18 77 6.2.2 Queue / Scheduler Capacity Tests......................19 78 6.2.2.1 Multiple Queues / Single Port Active..............19 79 6.2.2.1.1 Strict Priority on Egress Port..................19 80 6.2.2.1.2 Strict Priority + Weighted Fair Queue (WFQ).....19 81 6.2.2.2 Single Queue per Port / All Ports Active..........19 82 6.2.2.3 Multiple Queues per Port, All Ports Active........20 83 6.3. Shaper tests.............................................20 84 6.3.1 Shaper Individual Tests...............................20 85 6.3.1.1 Testing Shaper with Stateless Traffic.............20 86 6.3.1.2 Testing Shaper with Stateful Traffic..............21 87 6.3.2 Shaper Capacity Tests.................................22 88 6.3.2.1 Single Queue Shaped, All Physical Ports Active....22 89 6.3.2.2 All Queues Shaped, Single Port Active.............22 90 6.3.2.3 All Queues Shaped, All Ports Active...............22 91 6.4. Concurrent Capacity Load Tests...........................24 92 7. 
Security Considerations.......................................24 93 8. IANA Considerations...........................................24 94 9. Conclusions...................................................24 95 10. References...................................................24 96 10.1. Normative References....................................25 97 10.2. Informative References..................................25 98 11. Acknowledgments..............................................25 100 1. Introduction 102 Traffic management (i.e. policing, shaping, etc.) is an increasingly 103 important component when implementing network Quality of Service 104 (QoS). 106 There is currently no framework to benchmark these features, 107 although some standards address specific areas, which are described 108 in Section 1.1. 110 This draft provides a framework to conduct repeatable traffic 111 management benchmarks for devices and systems in a lab environment. 113 Specifically, this framework defines the methods to characterize 114 the capacity of the following traffic management features in network 115 devices: classification, policing, queuing / scheduling, and 116 traffic shaping. 118 This benchmarking framework can also be used as a test procedure to 119 assist in the tuning of traffic management parameters before service 120 activation. In addition to Layer 2/3 (Ethernet / IP) benchmarking, 121 Layer 4 (TCP) test patterns are proposed by this draft in order to 122 more realistically benchmark end-user traffic. 124 1.1. Traffic Management Overview 126 In general, a device with traffic management capabilities performs 127 the following functions: 129 - Traffic classification: identifies traffic according to various 130 configuration rules, for example IEEE 802.1Q Virtual LAN (VLAN), 131 Differentiated Services Code Point (DSCP), etc., and marks this traffic 132 internally to the network device. Multiple external priorities 133 (DSCP, 802.1p, etc.) can map to the same priority in the device. 134 - Traffic policing: limits the rate of traffic that enters a network 135 device according to the traffic classification. If the traffic 136 exceeds the provisioned limits, the traffic is either dropped or 137 remarked and forwarded on to the next network device 138 - Traffic Scheduling: provides traffic classification within the 139 network device by directing packets to various types of queues and 140 applies a dispatching algorithm to assign the forwarding sequence 141 of packets 142 - Traffic shaping: a traffic control technique that actively buffers 143 and smooths the output rate in an attempt to adapt bursty traffic 144 to the configured limits 145 - Active Queue Management (AQM): 146 AQM involves monitoring the status of internal queues and proactively 147 dropping (or remarking) packets, which causes hosts using 148 congestion-aware protocols to back off and in turn alleviate queue 149 congestion [AQM-RECO]. On the other hand, classic traffic management 150 techniques reactively drop (or remark) packets based on a queue-full 151 condition. The benchmarking scenarios for AQM are different and are 152 outside the scope of this testing framework. 154 The following diagram is a generic model of the traffic management 155 capabilities within a network device. It is not intended to 156 represent all variations of manufacturer traffic management 157 capabilities, but to provide context for this test framework.
159 |----------| |----------------| |--------------| |----------| 160 | | | | | | | | 161 |Interface | |Ingress Actions | |Egress Actions| |Interface | 162 |Input | |(classification,| |(scheduling, | |Output | 163 |Queues | | marking, | | shaping, | |Queues | 164 | |-->| policing or |-->| active queue |-->| | 165 | | | shaping) | | management | | | 166 | | | | | remarking) | | | 167 |----------| |----------------| |--------------| |----------| 169 Figure 1: Generic Traffic Management capabilities of a Network Device 171 Ingress actions such as classification are defined in RFC 4689 [RFC4689] 172 and include IP addresses, port numbers, DSCP, etc. In terms of marking, 173 RFC 2697 [RFC2697] and RFC 2698 [RFC2698] define a single rate and dual 174 rate, three color marker, respectively. 176 The Metro Ethernet Forum (MEF) specifies policing and shaping in terms 177 of Ingress and Egress Subscriber/Provider Conditioning Functions in 178 MEF12.1 [MEF-12.1]; Ingress and Bandwidth Profile attributes in MEF10.2 179 [MEF-10.2] and MEF 26 [MEF-26]. 181 1.2 Lab Configuration and Testing Overview 183 The following is the description of the lab set-up for the traffic 184 management tests: 186 +--------------+ +-------+ +----------+ +-----------+ 187 | Transmitting | | | | | | Receiving | 188 | Test Host | | | | | | Test Host | 189 | |-----| Device|---->| Network |--->| | 190 | | | Under | | Delay | | | 191 | | | Test | | Emulator | | | 192 | |<----| |<----| |<---| | 193 | | | | | | | | 194 +--------------+ +-------+ +----------+ +-----------+ 196 As shown in the test diagram, the framework supports uni-directional 197 and bi-directional traffic management tests (where the transmitting 198 and receiving roles would be reversed on the return path). 200 This testing framework describes the tests and metrics for each of 201 the following traffic management functions: 202 - Policing 203 - Queuing / Scheduling 204 - Shaping 206 The tests are divided into individual and rated capacity tests. 207 The individual tests are intended to benchmark the traffic management 208 functions according to the metrics defined in Section 4. The 209 capacity tests verify traffic management functions under the load of 210 many simultaneous individual tests and their flows. 212 This involves concurrent testing of multiple interfaces with the 213 specific traffic management function enabled, and increasing load to 214 the capacity limit of each interface. 216 As an example: a device is specified to be capable of shaping on all 217 of its egress ports. The individual test would first be conducted to 218 benchmark the specified shaping function against the metrics defined 219 in section 4. Then the capacity test would be executed to test the 220 shaping function concurrently on all interfaces and with maximum 221 traffic load. 223 The Network Delay Emulator (NDE) is required for TCP stateful tests 224 in order to allow TCP to utilize a significant size TCP window in its 225 control loop. 227 Also note that the Network Delay Emulator (NDE) should be passive in 228 nature such as a fiber spool. This is recommended to eliminate the 229 potential effects that an active delay element (i.e. test impairment 230 generator) may have on the test flows. In the case where a fiber 231 spool is not practical due to the desired latency, an active NDE must 232 be independently verified to be capable of adding the configured delay 233 without loss. In other words, the DUT would be removed and the NDE 234 performance benchmarked independently. 
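As an illustrative, non-normative example of sizing the emulated delay: using the BDP relationship referenced later in section 6.2.1.2 (BDP = BB * RTT / 8), a 64 KB TCP window only becomes the limiting factor at a 100 Mbps bottleneck when the round-trip delay is at least 64,000 bytes x 8 / 100,000,000 bps = 5.12 milliseconds, so the NDE would be configured to add at least that much delay. These numbers are placeholders; the same arithmetic applies to the actual link speed and window sizes under test.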
236 Note the NDE should be used in "full pipe" delay mode. Most NDEs 237 allow for per flow delay actions, emulating QoS prioritization. For 238 this framework, the NDE's sole purpose is simply to add delay to all 239 packets (emulate network latency). So to benchmark the performance of 240 the NDE, maximum offered load should be tested against the following 241 frame sizes: 128, 256, 512, 768, 1024, 1500, and 9600 bytes. The delay 242 accuracy at each of these packet sizes can then be used to calibrate 243 the range of expected Bandwidth Delay Product (BDP) for the TCP stateful 244 tests. 246 2. Conventions used in this document 248 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 249 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 250 document are to be interpreted as described in RFC 2119 [RFC2119]. 252 The following acronyms are used: 254 AQM: Active Queue Management 256 BB: Bottleneck Bandwidth 258 BDP: Bandwidth Delay Product 260 BSA: Burst Size Achieved 262 CBS: Committed Burst Size 264 CIR: Committed Information Rate 266 DUT: Device Under Test 268 EBS: Excess Burst Size 270 EIR: Excess Information Rate 272 NDE: Network Delay Emulator 274 SP: Strict Priority Queuing 276 QL: Queue Length 278 QoS: Quality of Service 280 RTH: Receiving Test Host 282 RTT: Round Trip Time 284 SBB: Shaper Burst Bytes 286 SBI: Shaper Burst Interval 288 SR: Shaper Rate 290 SSB: Send Socket Buffer 292 Tc: CBS Time Interval 294 Te: EBS Time Interval 296 Ti: Transmission Interval 298 TTH: Transmitting Test Host 299 TTP: TCP Test Pattern 301 TTPET: TCP Test Pattern Execution Time 303 3. Scope and Goals 305 The scope of this work is to develop a framework for benchmarking and 306 testing the traffic management capabilities of network devices in the 307 lab environment. These network devices may include but are not 308 limited to: 309 - Switches (including Layer 2/3 devices) 310 - Routers 311 - Firewalls 312 - General Layer 4-7 appliances (Proxies, WAN Accelerators, etc.) 314 Essentially, any network device that performs traffic management as 315 defined in section 1.1 can be benchmarked or tested with this 316 framework. 318 The primary goal is to assess the maximum forwarding performance deemed 319 to be within the provisioned traffic limits that a network device can 320 sustain without dropping or impairing packets, or compromising the 321 accuracy of multiple instances of traffic management functions. This 322 is the benchmark for comparison between devices. 324 Within this framework, the metrics are defined for each traffic 325 management test but do not include pass / fail criteria, which is 326 not within the charter of BMWG. This framework provides the test 327 methods and metrics to conduct repeatable testing, which will 328 provide the means to compare measured performance between DUTs. 330 As mentioned in section 1.2, these methods describe the individual 331 tests and metrics for several management functions. It is also within 332 scope that this framework will benchmark each function in terms of 333 overall rated capacity. This involves concurrent testing of multiple 334 interfaces with the specific traffic management function enabled, up 335 to the capacity limit of each interface. 337 It is not within the scope of this framework to specify the 338 procedure for testing multiple configurations of traffic management 339 functions concurrently.
The multitudes of possible combinations is 340 almost unbounded and the ability to identify functional "break points" 341 would be almost impossible. 343 However, section 6.4 provides suggestions for some profiles of 344 concurrent functions that would be useful to benchmark. The key 345 requirement for any concurrent test function is that tests must 346 produce reliable and repeatable results. 348 Also, it is not within scope to perform conformance testing. Tests 349 defined in this framework benchmark the traffic management functions 350 according to the metrics defined in section 4 and do not address any 351 conformance to standards related to traffic management. The current 352 specifications don't specify exact behavior or implementation and the 353 specifications that do exist (cited in section 1.1) allow 354 implementations to vary w.r.t. short term rate accuracy and other 355 factors. This is a primary driver for this framework with the key 356 goal to provide an objective means to compare vendor traffic 357 management functions. 359 Another goal is to devise methods that utilize flows with 360 congestion-aware transport (TCP) as part of the traffic load and 361 still produce repeatable results in the isolated test environment. 362 This framework will derive stateful test patterns (TCP or 363 application layer) that can also be used to further benchmark the 364 performance of applicable traffic management techniques such as 365 queuing / scheduling and traffic shaping. In cases where the 366 network device is stateful in nature (i.e. firewall, etc.), 367 stateful test pattern traffic is important to test along with 368 stateless, UDP traffic in specific test scenarios (i.e. 369 applications using TCP transport and UDP VoIP, etc.) 371 As mentioned earlier in the document, repeatability of test results 372 is critical, especially considering the nature of stateful TCP traffic. 373 To this end, the stateful tests will use TCP test patterns to emulate 374 applications. This framework also provides guidelines for application 375 modeling and open source tools to achieve the repeatable stimulus. 376 And finally, TCP metrics from RFC 6349 are specified to report for 377 each stateful test and provide the means to compare each repeated 378 test. 380 4. Traffic Benchmarking Metrics 382 The metrics to be measured during the benchmarks are divided into two 383 (2) sections: packet layer metrics used for the stateless traffic 384 testing and TCP layer metrics used for the stateful traffic 385 testing. 387 4.1. Metrics for Stateless Traffic Tests 389 Stateless traffic measurements require that sequence number and 390 time-stamp be inserted into the payload for lost packet analysis. 391 Delay analysis may be achieved by insertion of timestamps directly 392 into the packets or timestamps stored elsewhere (packet captures). 393 This framework does not specify the packet format to carry sequence 394 number or timing information. 396 However, RFC 4737 [RFC4737] and RFC 4689 provide recommendations 397 for sequence tracking along with definitions of in-sequence and 398 out-of-order packets. 400 The following are the metrics to be used during the stateless traffic 401 benchmarking components of the tests: 403 - Burst Size Achieved (BSA): for the traffic policing and network 404 queue tests, the tester will be configured to send bursts to test 405 either the Committed Burst Size (CBS) or Excess Burst Size (EBS) of 406 a policer or the queue / buffer size configured in the DUT. 
The 407 Burst Size Achieved metric is a measure of the actual burst size 408 received at the egress port of the DUT with no lost packets. As an 409 example, if the configured CBS of a DUT is 64 KB and after the burst test 410 only 63 KB can be achieved without packet loss, then 63 KB is the 411 BSA. Also, the average Packet Delay Variation (PDV, see below) as 412 experienced by the packets sent at the BSA burst size should be 413 recorded. 415 - Lost Packets (LP): For all traffic management tests, the tester will 416 transmit the test packets into the DUT ingress port and the number of 417 packets received at the egress port will be measured. The difference 418 between packets transmitted into the ingress port and received at the 419 egress port is the number of lost packets as measured at the egress 420 port. These packets must have unique identifiers such that only the 421 test packets are measured. For cases where multiple flows are 422 transmitted from ingress to egress port (e.g. IP conversations), each 423 flow must have sequence numbers within the test packet stream. 425 RFC 4737 and RFC 2680 [RFC2680] describe the need to establish the 426 time threshold to wait before a packet is declared lost, and this 427 threshold MUST be reported with the results. 429 - Out of Sequence (OOS): in addition to the LP metric, the test 430 packets must be monitored for sequence, and out-of-sequence (OOS) 431 packets recorded. RFC 4689 defines the general function of sequence tracking, as 432 well as definitions for in-sequence and out-of-order packets. Out-of- 433 order packets will be counted per RFC 4737 and RFC 2680. 435 - Packet Delay (PD): the Packet Delay metric is the difference between 436 the timestamp of the received egress port packets and the packets 437 transmitted into the ingress port, as specified in RFC 2285. The 438 transmitting host and receiving host time must be in time sync using 439 NTP, GPS, etc. 441 - Packet Delay Variation (PDV): the Packet Delay Variation metric is 442 the variation between the timestamps of the received egress port 443 packets, as specified in RFC 5481. Note that per RFC 5481, this PDV 444 is the variation of one-way delay across many packets in the traffic 445 flow. 447 - Shaper Rate (SR): the Shaper Rate is only applicable to the 448 traffic shaping tests. The SR represents the average egress output 449 rate (bps) over the test interval. 451 - Shaper Burst Bytes (SBB): the Shaper Burst Bytes is only applicable 452 to the traffic shaping tests. A traffic shaper will emit packets in 453 different size "trains" (bytes back-to-back). This metric 454 characterizes the method by which the shaper emits traffic. Some 455 shapers transmit larger bursts per interval, and a burst of 1 packet 456 would apply to the extreme case of a shaper sending a CBR stream of 457 single packets. 459 - Shaper Burst Interval (SBI): the interval is only applicable to the 460 traffic shaping tests and is the time between shaper emitted 461 bursts. 463 4.2. Metrics for Stateful Traffic Tests 465 The stateful metrics will be based on RFC 6349 [RFC6349] TCP metrics and will 466 include: 468 - TCP Test Pattern Execution Time (TTPET): RFC 6349 defined the TCP 469 Transfer Time for bulk transfers, which is simply the measured time 470 to transfer bytes across single or concurrent TCP connections. The 471 TCP test patterns used in traffic management tests will include bulk 472 transfer and interactive applications.
The interactive patterns include 473 instances such as HTTP business applications, database applications, 474 etc. The TTPET will be the measure of the time for a single execution 475 of a TCP Test Pattern (TTP). Average, minimum, and maximum times will 476 be measured or calculated. 478 An example would be an interactive HTTP TTP session which should take 479 5 seconds on a GigE network with 0.5 millisecond latency. During ten (10) 480 executions of this TTP, the TTPET results might be: average of 6.5 481 seconds, minimum of 5.0 seconds, and maximum of 7.9 seconds. 483 - TCP Efficiency: after the execution of the TCP Test Pattern, TCP 484 Efficiency represents the percentage of Bytes that were not 485 retransmitted. 487 Transmitted Bytes - Retransmitted Bytes 489 TCP Efficiency % = --------------------------------------- X 100 491 Transmitted Bytes 493 Transmitted Bytes are the total number of TCP Bytes to be transmitted 494 including the original and the retransmitted Bytes. These retransmitted 495 bytes should be recorded from the sender's TCP/IP stack perspective, 496 to avoid any misinterpretation that a reordered packet is a retransmitted 497 packet (as may be the case with packet decode interpretation). 499 - Buffer Delay: represents the increase in RTT during a TCP test 500 versus the baseline DUT RTT (non congested, inherent latency). RTT 501 and the technique to measure RTT (average versus baseline) are defined 502 in RFC 6349. Referencing RFC 6349, the average RTT is derived from 503 the total of all measured RTTs during the actual test sampled at every 504 second divided by the test duration in seconds. 506 Total RTTs during transfer 507 Average RTT during transfer = ----------------------------- 508 Transfer duration in seconds 510 Average RTT during Transfer - Baseline RTT 511 Buffer Delay % = ------------------------------------------ X 100 512 Baseline RTT 514 Note that even though this was not explicitly stated in RFC 6349, 515 retransmitted packets should not be used in RTT measurements. 517 Also, the test results should record the average RTT in millisecond 518 across the entire test duration and number of samples. 520 5. Tester Capabilities 522 The testing capabilities of the traffic management test environment 523 are divided into two (2) sections: stateless traffic testing and 524 stateful traffic testing 526 5.1. Stateless Test Traffic Generation 528 The test device must be capable of generating traffic at up to the 529 link speed of the DUT. The test device must be calibrated to verify 530 that it will not drop any packets. The test device's inherent PD and 531 PDV must also be calibrated and subtracted from the PD and PDV metrics. 532 The test device must support the encapsulation to be tested such as 533 IEEE 802.1Q VLAN, IEEE 802.1ad Q-in-Q, Multiprotocol Label Switching 534 (MPLS), etc. Also, the test device must allow control of the 535 classification techniques defined in RFC 4689 (i.e. IP address, DSCP, 536 TOS, etc classification). 538 The open source tool "iperf" can be used to generate stateless UDP 539 traffic and is discussed in Appendix A. Since iperf is a software 540 based tool, there will be performance limitations at higher link 541 speeds (e.g. GigE, 10 GigE, etc.). Careful calibration of any test 542 environment using iperf is important. At higher link speeds, it is 543 recommended to use hardware based packet test equipment. 
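As an illustration only (not a normative part of this framework), a stateless UDP flow at a controlled rate can be generated with iperf version 2.x using commands similar to the following; the IP address is from the documentation range, and the rate, datagram size, and duration are placeholder values to be adapted to the DUT and link under test:

   # Receiving Test Host (RTH)
   iperf -s -u -i 1

   # Transmitting Test Host (TTH): 100 Mbps of 1400-byte UDP datagrams
   # for 60 seconds toward the RTH
   iperf -c 198.51.100.10 -u -b 100M -l 1400 -t 60 -i 1

As noted above, any software-based generator must be calibrated against the link speed before its results are trusted.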
545 5.1.1 Burst Hunt with Stateless Traffic 547 A central theme for the traffic management tests is to benchmark the 548 specified burst parameter of traffic management function, since burst 549 parameters of SLAs are specified in bytes. For testing efficiency, 550 it is recommended to include a burst hunt feature, which automates 551 the manual process of determining the maximum burst size which can 552 be supported by a traffic management function. 554 The burst hunt algorithm should start at the target burst size (maximum 555 burst size supported by the traffic management function) and will send 556 single bursts until it can determine the largest burst that can pass 557 without loss. If the target burst size passes, then the test is 558 complete. The hunt aspect occurs when the target burst size is not 559 achieved; the algorithm will drop down to a configured minimum burst 560 size and incrementally increase the burst until the maximum burst 561 supported by the DUT is discovered. The recommended granularity 562 of the incremental burst size increase is 1 KB. 564 Optionally for a policer function and if the burst size passes, the burst 565 should be increased by increments of 1 KB to verify that the policer is 566 truly configured properly (or enabled at all). 568 5.2. Stateful Test Pattern Generation 570 The TCP test host will have many of the same attributes as the TCP test 571 host defined in RFC 6349. The TCP test device may be a standard 572 computer or a dedicated communications test instrument. In both cases, 573 it must be capable of emulating both a client and a server. 575 For any test using stateful TCP test traffic, the Network Delay Emulator 576 (NDE function from the lab set-up diagram) must be used in order to 577 provide a meaningful BDP. As referenced in section 2, the target 578 traffic rate and configured RTT must be verified independently using 579 just the NDE for all stateful tests (to ensure the NDE can delay without 580 loss). 582 The TCP test host must be capable to generate and receive stateful TCP 583 test traffic at the full link speed of the DUT. As a general rule of 584 thumb, testing TCP Throughput at rates greater than 500 Mbps may require 585 high performance server hardware or dedicated hardware based test tools. 587 The TCP test host must allow adjusting both Send and Receive Socket 588 Buffer sizes. The Socket Buffers must be large enough to fill the BDP 589 for bulk transfer TCP test application traffic. 591 Measuring RTT and retransmissions per connection will generally require 592 a dedicated communications test instrument. In the absence of 593 dedicated hardware based test tools, these measurements may need to be 594 conducted with packet capture tools, i.e. conduct TCP Throughput 595 tests and analyze RTT and retransmissions in packet captures. 597 The TCP implementation used by the test host must be specified in the 598 test results (e.g. TCP New Reno, 599 TCP options supported, etc.). 601 While RFC 6349 defined the means to conduct throughput tests of TCP bulk 602 transfers, the traffic management framework will extend TCP test 603 execution into interactive TCP application traffic. Examples include 604 email, HTTP, business applications, etc. This interactive traffic is 605 bi-directional and can be chatty. 607 The test device must not only support bulk TCP transfer application 608 traffic but also chatty traffic. A valid stress test SHOULD include 609 both traffic types. 
This is due to the non-uniform, bursty nature of 610 chatty applications versus the relatively uniform nature of bulk 611 transfers (the bulk transfer smoothly stabilizes to equilibrium state 612 under lossless conditions). 614 While iperf is an excellent choice for TCP bulk transfer testing, the 615 netperf open source tool provides the ability to control the client 616 and server request / response behavior. The netperf-wrapper tool is 617 a Python wrapper to run multiple simultaneous netperf instances and 618 aggregate the results. Appendix A provides an overview of netperf / 619 netperf-wrapper and another open source application emulation, 620 Flowgrind. As with any software based tool, the performance must be 621 qualified to the link speed to be tested. Hardware-based test 622 equipment should be considered for reliable results at higher links 623 speeds (e.g. 1 GigE, 10 GigE). 625 5.2.1. TCP Test Pattern Definitions 627 As mentioned in the goals of this framework, techniques are defined 628 to specify TCP traffic test patterns to benchmark traffic 629 management technique(s) and produce repeatable results. Some 630 network devices such as firewalls, will not process stateless test 631 traffic which is another reason why stateful TCP test traffic must 632 be used. 634 An application could be fully emulated up to Layer 7, however this 635 framework proposes that stateful TCP test patterns be used in order 636 to provide granular and repeatable control for the benchmarks. The 637 following diagram illustrates a simple Web Browsing application 638 (HTTP). 640 GET url 642 Client ------------------------> Web 644 Web 200 OK 100ms | 646 Browser <------------------------ Server 647 In this example, the Client Web Browser (Client) requests a URL and 648 then the Web Server delivers the web page content to the Client 649 (after a Server delay of 100 millisecond). This asynchronous, 650 "request/response" behavior is intrinsic to most TCP based 651 applications such as Email (SMTP), File Transfers (FTP and SMB), 652 Database (SQL), Web Applications (SOAP), REST, etc. The impact to 653 the network elements is due to the multitudes of Clients and the 654 variety of bursty traffic, which stresses traffic management functions. 655 The actual emulation of the specific application protocols is not 656 required and TCP test patterns can be defined to mimic the 657 application network traffic flows and produce repeatable results. 659 Application modeling techniques have been proposed in 660 "3GPP2 C.R1002-0 v1.0" and provides examples to model the behavior of 661 HTTP, FTP, and WAP applications at the TCP layer. The models have 662 been defined with various mathematical distributions for the 663 Request/Response bytes and inter-request gap times. 665 This framework does not specify a fixed set of TCP test patterns, but 666 does provide recommended test cases in Appendix B. Some of these 667 examples reflect those specified in "draft-ietf-bmwg-ca-bench-meth-04" 668 which suggests traffic mixes for a variety of representative 669 application profiles. Other examples are simply well-known 670 application traffic types such as HTTP. 672 6. Traffic Benchmarking Methodology 674 The traffic benchmarking methodology uses the test set-up from 675 section 2 and metrics defined in section 4. 677 Each test should compare the network device's internal statistics 678 (available via command line management interface, SNMP, etc.) to the 679 measured metrics defined in section 4. 
This evaluates the accuracy 680 of the internal traffic management counters under individual test 681 conditions and capacity test conditions that are defined in each 682 subsection. 684 From a device configuration standpoint, scheduling and shaping 685 functionality can be applied to logical ports such as Link Aggregation 686 (LAG) groups. This would result in the same scheduling and shaping 687 configuration applied to all the member physical ports. The focus of 688 this draft is only on tests at a physical port level. 690 The following sections provide the objective, procedure, metrics, and 691 reporting format for each test. For all test steps, the following 692 global parameters must be specified: 694 Test Runs (Tr). Defines the number of times the test needs to be run 695 to ensure accurate and repeatable results. The recommended value is 3. 697 Test Duration (Td). Defines the duration of a test iteration, expressed 698 in seconds. The recommended value is 60 seconds. 700 6.1. Policing Tests 702 A policer is defined as the entity performing the policing function. The 703 intent of the policing tests is to verify the policer performance 704 (i.e. CIR-CBS and EIR-EBS parameters). The tests will verify that the 705 network device can handle the CIR with CBS and the EIR with EBS and 706 will use back-to-back packet testing concepts from RFC 2544 (but adapted 707 to burst size algorithms and terminology). Also, MEF-14, 19, and 37 provide 708 some basis for specific components of this test. The burst hunt 709 algorithm defined in section 5.1.1 can also be used to automate the 710 measurement of the CBS value. 712 The tests are divided into two (2) sections: individual policer 713 tests and then full capacity policing tests. It is important to 714 benchmark the basic functionality of the individual policer and then 715 proceed to the fully rated capacity of the device. This capacity may 716 include the number of policing policies per device and the number of 717 policers simultaneously active across all ports. 719 6.1.1 Policer Individual Tests 721 Objective: 722 Test a policer as defined by RFC 4115 or MEF 10.2, depending upon the 723 equipment's specification. In addition to verifying that the policer 724 allows the specified CBS and EBS bursts to pass, the policer test MUST 725 verify that the policer will remark or drop excess traffic and pass traffic at 726 the specified CBS/EBS values. 728 Test Summary: 729 Policing tests should use stateless traffic. Stateful TCP test traffic 730 will generally be adversely affected by a policer in the absence of 731 traffic shaping. So while TCP traffic could be used, it is more 732 accurate to benchmark a policer with stateless traffic. 734 As an example for RFC 4115, consider a CBS and EBS of 64KB and CIR and 735 EIR of 100 Mbps on a 1GigE physical link (in color-blind mode). A 736 stateless traffic burst of 64KB would be sent into the policer at the 737 GigE rate. This equates to approximately a 0.512 millisecond burst 738 time (64 KB at 1 GigE). The traffic generator must space these bursts 739 to ensure that the aggregate throughput does not exceed the CIR. The 740 Ti between the bursts would equal CBS * 8 / CIR = 5.12 milliseconds 741 in this example. 743 Test Metrics: 744 The metrics defined in section 4.1 (BSA, LP, OOS, PD, and PDV) SHALL 745 be measured at the egress port and recorded. 747 Procedure: 748 1. Configure the DUT policing parameters for the desired CIR/EIR and 749 CBS/EBS values to be tested 751 2.
Configure the tester to generate a stateless traffic burst equal 752 to CBS and an interval equal to Ti (CBS in bits / CIR) 754 3. Compliant Traffic Step: Generate bursts of CBS + EBS traffic into 755 the policer ingress port and measure the metrics defined in 756 section 4.1 (BSA, LP. OOS, PD, and PDV) at the egress port and across 757 the entire Td (default 60 seconds duration) 759 4. Excess Traffic Test: Generate bursts of greater than CBS + EBS limit 760 traffic into the policer ingress port and verify that the policer 761 only allowed the BSA bytes to exit the egress. The excess burst MUST 762 be recorded and the recommended value is 1000 bytes. Additional tests 763 beyond the simple color-blind example might include: color-aware mode, 764 configurations where EIR is greater than CIR, etc. 766 Reporting Format: 767 The policer individual report MUST contain all results for each 768 CIR/EIR/CBS/EBS test run and a recommended format is as follows: 770 ******************************************************** 771 Test Configuration Summary: Tr, Td 773 DUT Configuration Summary: CIR, EIR, CBS, EBS 775 The results table should contain entries for each test run, (Test #1 776 to Test #Tr). 778 Compliant Traffic Test: BSA, LP, OOS, PD, and PDV 780 Excess Traffic Test: BSA 781 ******************************************************** 783 6.1.2 Policer Capacity Tests 785 Objective: 786 The intent of the capacity tests is to verify the policer performance 787 in a scaled environment with multiple ingress customer policers on 788 multiple physical ports. This test will benchmark the maximum number 789 of active policers as specified by the device manufacturer. 791 Test Summary: 792 The specified policing function capacity is generally expressed in 793 terms of the number of policers active on each individual physical 794 port as well as the number of unique policer rates that are utilized. 795 For all of the capacity tests, the benchmarking test procedure and 796 report format described in Section 6.1.1 for a single policer MUST 797 be applied to each of the physical port policers. 799 As an example, a Layer 2 switching device may specify that each of the 800 32 physical ports can be policed using a pool of policing service 801 policies. The device may carry a single customer's traffic on each 802 physical port and a single policer is instantiated per physical port. 803 Another possibility is that a single physical port may carry multiple 804 customers, in which case many customer flows would be policed 805 concurrently on an individual physical port (separate policers per 806 customer on an individual port). 808 Test Metrics: 809 The metrics defined in section 4.1 (BSA, LP, OOS, PD, and PDV) SHALL 810 be measured at the egress port and recorded. 812 The following sections provide the specific test scenarios, 813 procedures, and reporting formats for each policer capacity test. 815 6.1.2.1 Maximum Policers on Single Physical Port Test 817 Test Summary: 818 The first policer capacity test will benchmark a single physical port, 819 maximum policers on that physical port. 821 Assume multiple categories of ingress policers at rates r1, r2,...rn. 822 There are multiple customers on a single physical port. Each customer 823 could be represented by a single tagged vlan, double tagged vlan, 824 VPLS instance etc. Each customer is mapped to a different policer. 825 Each of the policers can be of rates r1, r2,..., rn. 
827 An example configuration would be: 828 - Y1 customers, policer rate r1 829 - Y2 customers, policer rate r2 830 - Y3 customers, policer rate r3 831 ... 832 - Yn customers, policer rate rn 833 Some bandwidth on the physical port is dedicated to other traffic (non 834 customer traffic); this includes network control protocol traffic. There 835 is a separate policer for the other traffic. Typical deployments have 3 836 categories of policers; there may be some deployments with more or fewer 837 than 3 categories of ingress policers. 839 Test Procedure: 840 1. Configure the DUT policing parameters for the desired CIR/EIR and 841 CBS/EBS values for each policer rate (r1-rn) to be tested 843 2. Configure the tester to generate a stateless traffic burst equal to 844 CBS and an interval equal to Ti (CBS in bits/CIR) for each customer 845 stream (Y1 - Yn). The encapsulation for each customer must also be 846 configured according to the service tested (VLAN, VPLS, IP mapping, 847 etc.). 849 3. Compliant Traffic Step: Generate bursts of CBS + EBS traffic into the 850 policer ingress port for each customer traffic stream and measure the 851 metrics defined in section 4.1 (BSA, LP, OOS, PD, and PDV) at the 852 egress port for each stream and across the entire Td (default 30 853 seconds duration) 855 4. Excess Traffic Test: Generate bursts of greater than CBS + EBS limit 856 traffic into the policer ingress port for each customer traffic 857 stream and verify that the policer only allowed the BSA bytes to exit 858 the egress for each stream. The excess burst MUST be recorded; the 859 recommended value is 1000 bytes. 861 Reporting Format: 862 The policer individual report MUST contain all results for each 863 CIR/EIR/CBS/EBS test run, per customer traffic stream. 865 A recommended format is as follows: 867 ******************************************************** 868 Test Configuration Summary: Tr, Td 870 Customer traffic stream Encapsulation: Map each stream to VLAN, 871 VPLS, IP address 873 DUT Configuration Summary per Customer Traffic Stream: CIR, EIR, 874 CBS, EBS 876 The results table should contain entries for each test run (Test #1 877 to Test #Tr). 879 Customer Stream Y1-Yn (see note), Compliant Traffic Test: BSA, LP, 880 OOS, PD, and PDV 882 Customer Stream Y1-Yn (see note), Excess Traffic Test: BSA 883 ******************************************************** 885 Note: For each test run, there will be two (2) rows for each 886 customer stream: the compliant traffic result and the excess traffic 887 result. 889 6.1.2.2 Single Policer on All Physical Ports 891 Test Summary: 892 The second policer capacity test involves a single policer function per 893 physical port with all physical ports active. In this test, there is a 894 single policer per physical port. The policer can have one of the rates 895 r1, r2,.., rn. All the physical ports in the networking device are 896 active. 898 Procedure: 899 The procedure is identical to 6.1.1; the configured parameters must be 900 reported per port, and the test report must include results per 901 measured egress port. 903 6.1.2.3 Maximum Policers on All Physical Ports 905 Finally, the third policer capacity test involves a combination of the 906 first and second capacity tests, namely maximum policers active per 907 physical port and all physical ports active.
909 Procedure: 910 Use the procedural method from 6.1.2.1; the configured parameters 911 must be reported per port, and the test report must include per-stream 912 results per measured egress port. 914 6.2. Queue and Scheduler Tests 916 Queues and traffic scheduling are closely related in that a queue's 917 priority dictates the manner in which the traffic scheduler 918 transmits packets out of the egress port. 920 Since device queues / buffers are generally an egress function, this 921 test framework will discuss testing at the egress (although the 922 technique can be applied to ingress side queues). 924 Similar to the policing tests, the tests are divided into two 925 sections: individual queue/scheduler function tests and then full 926 capacity tests. 928 6.2.1 Queue/Scheduler Individual Tests Overview 930 The various types of scheduling techniques include FIFO, Strict 931 Priority (SP), and Weighted Fair Queueing (WFQ), along with other 932 variations. This test framework recommends testing a minimum 933 of these three techniques, although it is at the discretion of the tester 934 to benchmark other device scheduling algorithms. 936 6.2.1.1 Queue/Scheduler with Stateless Traffic Test 938 Objective: 939 Verify that the configured queue and scheduling technique can 940 handle stateless traffic bursts up to the queue depth. 942 Test Summary: 943 A network device queue is memory based, unlike a policing function, 944 which is token or credit based. However, the same concepts from 945 section 6.1 can be applied to testing network device queues. 947 The device's network queue should be configured to the desired size 948 in KB (queue length, QL) and then stateless traffic should be 949 transmitted to test this QL. 951 A queue should be able to handle repetitive bursts with the 952 transmission gaps proportional to the bottleneck bandwidth. This 953 gap is referred to as the transmission interval (Ti). Ti can 954 be defined for the traffic bursts and is based on the QL and 955 Bottleneck Bandwidth (BB) of the egress interface. 957 Ti = QL * 8 / BB 959 Note that this equation is similar to the Ti required for transmission 960 into a policer (QL = CBS, BB = CIR). Also note that the burst hunt 961 algorithm defined in section 5.1.1 can also be used to automate the 962 measurement of the queue value. 964 The stateless traffic burst shall be transmitted at the link speed 965 and spaced within the Ti time interval. The metrics defined in section 966 4.1 shall be measured at the egress port and recorded; the primary 967 result is to verify the BSA and that no packets are dropped. 969 The scheduling function must also be characterized to benchmark the 970 device's ability to schedule the queues according to the priority. 971 An example would be 2 levels of priority including SP and FIFO 972 queueing. Under a flow load greater than the egress port speed, the 973 higher priority packets should be transmitted without drops (and 974 also maintain low latency), while packets from the lower priority (or best 975 effort) queue may be dropped. 977 Test Metrics: 978 The metrics defined in section 4.1 (BSA, LP, OOS, PD, and PDV) SHALL 979 be measured at the egress port and recorded. 981 Procedure: 982 1. Configure the DUT queue length (QL) and scheduling technique 983 (FIFO, SP, etc.) parameters 985 2. Configure the tester to generate a stateless traffic burst equal 986 to QL and an interval equal to Ti (QL in bits/BB) 988 3.
Generate bursts of QL traffic into the DUT and measure the 989 metrics defined in section 4.1 (LP, OOS, PD, and PDV) at the egress 990 port and across the entire Td (default 30 seconds duration) 992 Report Format: 993 The Queue/Scheduler Stateless Traffic individual report MUST contain 994 all results for each QL/BB test run and a recommended format is as 995 follows: 997 ******************************************************** 998 Test Configuration Summary: Tr, Td 1000 DUT Configuration Summary: Scheduling technique, BB and QL 1002 The results table should contain entries for each test run as follows 1003 (Test #1 to Test #Tr). 1005 - LP, OOS, PD, and PDV 1006 ******************************************************** 1008 6.2.1.2 Testing Queue/Scheduler with Stateful Traffic 1010 Objective: 1011 Verify that the configured queue and scheduling technique can handle 1012 stateful traffic bursts up to the queue depth. 1014 Test Background and Summary: 1015 To provide a more realistic benchmark and to test queues in layer 4 1016 devices such as firewalls, stateful traffic testing is recommended 1017 for the queue tests. Stateful traffic tests will also utilize the 1018 Network Delay Emulator (NDE) from the network set-up configuration in 1019 section 2. 1021 The BDP of the TCP test traffic must be calibrated to the QL of the 1022 device queue. Referencing RFC 6349, the BDP is equal to: 1024 BB * RTT / 8 (in bytes) 1026 The NDE must be configured to an RTT value which is large enough to 1027 allow the BDP to be greater than QL. An example test scenario is 1028 defined below: 1030 - Ingress link = GigE 1031 - Egress link = 100 Mbps (BB) 1032 - QL = 32KB 1034 RTT(min) = QL * 8 / BB and would equal 2.56 milliseconds (and the 1035 BDP = 32KB) 1037 In this example, one (1) TCP connection with a window size / SSB of 1038 32KB would be required to test the QL of 32KB. This Bulk Transfer 1039 Test can be accomplished using iperf as described in Appendix A. 1041 Two types of TCP tests must be performed: Bulk Transfer test and Micro 1042 Burst Test Pattern as documented in Appendix B. The Bulk Transfer 1043 Test only bursts during the TCP Slow Start (or Congestion Avoidance) 1044 state, while the Micro Burst test emulates application layer bursting 1045 which may occur any time during the TCP connection. 1047 Other test types should include: Simple Web Site, Complex Web Site, 1048 Business Applications, Email, SMB/CIFS File Copy (which are also 1049 documented in Appendix B). 1051 Test Metrics: 1052 The test results will be recorded per the stateful metrics defined in 1053 section 4.2, primarily the TCP Test Pattern Execution Time (TTPET), 1054 TCP Efficiency, and Buffer Delay. 1056 Procedure: 1058 1. Configure the DUT queue length (QL) and scheduling technique 1059 (FIFO, SP, etc.) parameters 1061 2. Configure the tester* to generate a profile of an emulated 1062 application traffic mixture 1064 - The application mixture MUST be defined in terms of percentage 1065 of the total bandwidth to be tested 1067 - The rate of transmission for each application within the mixture 1068 MUST also be configurable 1070 * The tester MUST be capable of generating precise TCP test 1071 patterns for each application specified, to ensure repeatable results. 1073 3.
Generate application traffic between the ingress (client side) and 1074 egress (server side) ports of the DUT and measure the metrics (TTPET, 1075 TCP Efficiency, and Buffer Delay) per application stream and at the 1076 ingress and egress port (across the entire Td, default 60 seconds 1077 duration). 1079 Reporting Format: 1080 The Queue/Scheduler Stateful Traffic individual report MUST contain all 1081 results for each traffic scheduler and QL/BB test run and a recommended 1082 format is as follows: 1084 ******************************************************** 1085 Test Configuration Summary: Tr, Td 1087 DUT Configuration Summary: Scheduling technique, BB and QL 1089 Application Mixture and Intensities: the configured percentage of 1090 each application type 1092 The results table should contain entries for each test run as follows 1093 (Test #1 to Test #Tr). 1095 - Per Application Throughput (bps) and TTPET 1096 - Per Application Bytes In and Bytes Out 1097 - Per Application TCP Efficiency and Buffer Delay 1098 ******************************************************** 1100 6.2.2 Queue / Scheduler Capacity Tests 1102 Objective: 1103 The intent of these capacity tests is to benchmark queue/scheduler 1104 performance in a scaled environment with multiple queues/schedulers 1105 active on multiple egress physical ports. This test will benchmark 1106 the maximum number of queues and schedulers as specified by the 1107 device manufacturer. Each priority in the system will map to a 1108 separate queue. 1110 Test Metrics: 1111 The metrics defined in section 4.1 (BSA, LP, OOS, PD, and PDV) SHALL 1112 be measured at the egress port and recorded. 1114 The following sections provide the specific test scenarios, procedures, 1115 and reporting formats for each queue / scheduler capacity test. 1117 6.2.2.1 Multiple Queues / Single Port Active 1119 For the first scheduler / queue capacity test, multiple queues per 1120 port will be tested on a single physical port. In this case, 1121 all the queues (typically 8) are active on a single physical port. 1122 Traffic from multiple ingress physical ports is directed to the 1123 same egress physical port, which will cause oversubscription on the 1124 egress physical port. 1126 There are many types of priority schemes and combinations of 1127 priorities that are managed by the scheduler. The following 1128 sections specify the priority schemes that should be tested. 1130 6.2.2.1.1 Strict Priority on Egress Port 1132 Test Summary: 1133 For this test, Strict Priority (SP) scheduling on the egress 1134 physical port should be tested and the benchmarking methodology 1135 specified in section 6.2.1.1 and 6.2.1.2 (procedure, metrics, 1136 and reporting format) should be applied here. For a given 1137 priority, each ingress physical port should get a fair share of 1138 the egress physical port bandwidth. 1140 TBD: RAMKI, do we need a concrete example? 1142 Since this is a capacity test, the configuration and report 1143 results format from 6.2.1.1 and 6.2.1.2 MUST also include: 1145 Configuration: 1146 - The number of physical ingress ports active during the test 1147 - The classification marking (DSCP, VLAN, etc.)
for each physical 1148 ingress port 1149 - The traffic rate for stateful traffic and the traffic rate 1150 / mixture for stateful traffic for each physical ingress port 1152 Report results: 1153 - For each ingress port traffic stream, the achieved throughput 1154 rate and metrics at the egress port 1156 6.2.2.1.2 Strict Priority + Weighted Fair Queue (WFQ) on Egress Port 1158 Test Summary: 1159 For this test, Strict Priority (SP) and Weighted Fair Queue (WFQ) 1160 should be enabled simultaneously in the scheduler but on a single 1161 egress port. The benchmarking methodology specified in Section 1162 6.2.1.1 and 6.2.1.2 (procedure, metrics, and reporting format) 1163 should be applied here. Additionally, the egress port bandwidth 1164 sharing among weighted queues should be proportional to the assigned 1165 weights. For a given priority, each ingress physical port should get 1166 a fair share of the egress physical port bandwidth. 1168 TBD: RAMKI, do we need a concrete example? 1170 Since this is a capacity test, the configuration and report results 1171 format from 6.2.1.1 and 6.2.1.2 MUST also include: 1173 Configuration: 1174 - The number of physical ingress ports active during the test 1175 - The classification marking (DSCP, VLAN, etc.) for each physical 1176 ingress port 1177 - The traffic rate for stateful traffic and the traffic rate / 1178 mixture for stateful traffic for each physical ingress port 1180 Report results: 1181 - For each ingress port traffic stream, the achieved throughput rate 1182 and metrics at each queue of the egress port (both the SP 1183 and WFQ queues). 1185 Example: 1186 - Egress Port SP Queue: throughput and metrics for ingress streams 1-n 1187 - Egress Port WFQ Queue: throughput and metrics for ingress streams 1-n 1189 6.2.2.2 Single Queue per Port / All Ports Active 1191 Test Summary: 1192 Traffic from multiple ingress physical ports is directed to the 1193 same egress physical port, which will cause oversubscription on the 1194 egress physical port. Also, the same amount of traffic is directed 1195 to each egress physical port. 1197 The benchmarking methodology specified in Section 6.2.1.1 1198 and 6.2.1.2 (procedure, metrics, and reporting format) should be 1199 applied here. Each ingress physical port should get a fair share of 1200 the egress physical port bandwidth. Additionally, each egress 1201 physical port should receive the same amount of traffic. 1203 Since this is a capacity test, the configuration and report results 1204 format from 6.2.1.1 and 6.2.1.2 MUST also include: 1206 Configuration: 1207 - The number of ingress ports active during the test 1208 - The number of egress ports active during the test 1209 - The classification marking (DSCP, VLAN, etc.) for each physical 1210 ingress port 1211 - The traffic rate for stateful traffic and the traffic rate / 1212 mixture for stateful traffic for each physical ingress port 1214 Report results: 1215 - For each egress port, the achieved throughput rate and metrics at 1216 the egress port queue for each ingress port stream. 1218 Example: 1219 - Egress Port 1: throughput and metrics for ingress streams 1-n 1220 - Egress Port n: throughput and metrics for ingress streams 1-n 1222 6.2.2.3 Multiple Queues per Port, All Ports Active 1224 Traffic from multiple ingress physical ports is directed to all 1225 queues of each egress physical port, which will cause 1226 oversubscription on the egress physical ports. Also, the same 1227 amount of traffic is directed to each egress physical port.
The benchmarking methodology specified in sections 6.2.1.1
and 6.2.1.2 (procedure, metrics, and reporting format) should be
applied here. For a given priority, each ingress physical port
should get a fair share of the egress physical port bandwidth.
Additionally, each egress physical port should receive the same
amount of traffic.

Since this is a capacity test, the configuration and report results
format from 6.2.1.1 and 6.2.1.2 MUST also include:

Configuration:
- The number of physical ingress ports active during the test
- The classification marking (DSCP, VLAN, etc.) for each physical
  ingress port
- The traffic rate for stateless traffic and the traffic rate /
  mixture for stateful traffic for each physical ingress port

Report results:
- For each egress port, the achieved throughput rate and metrics at
  each egress port queue for each ingress port stream.

Example:
- Egress Port 1, SP Queue: throughput and metrics for ingress streams 1-n
- Egress Port 2, WFQ Queue: throughput and metrics for ingress streams 1-n
.
.
- Egress Port n, SP Queue: throughput and metrics for ingress streams 1-n
- Egress Port n, WFQ Queue: throughput and metrics for ingress streams 1-n

6.3. Shaper tests

A traffic shaper is memory based like a queue, but with the added
intelligence of an active traffic scheduler. The same concepts from
section 6.2 (Queue testing) can be applied to testing a network
device's shaper.

Again, the tests are divided into two sections: individual shaper
benchmark tests and then full capacity shaper benchmark tests.

6.3.1 Shaper Individual Tests Overview

A traffic shaper generally has three (3) components that can be
configured:

- Ingress Queue bytes
- Shaper Rate, bps
- Burst Committed (Bc) and Burst Excess (Be), bytes

The Ingress Queue holds burst traffic and the shaper then meters
traffic out of the egress port according to the Shaper Rate and
Bc/Be parameters. Shapers generally transmit into policers, so
the idea is for the emitted traffic to conform to the policer's
limits.

6.3.1.1 Testing Shaper with Stateless Traffic

Objective:
Test a shaper by transmitting stateless traffic bursts into the
shaper ingress port and verifying that the egress traffic is shaped
according to the shaper traffic profile.

Test Summary:
The stateless traffic must be burst into the DUT ingress port and
not exceed the Ingress Queue. The burst can be a single burst or
multiple bursts. If multiple bursts are transmitted, then the
Ti (Time interval) must be large enough so that the Shaper Rate is
not exceeded. An example will clarify single and multiple burst
test cases.

In the example, the shaper's ingress and egress ports are both full
duplex Gigabit Ethernet. The Ingress Queue is configured to be
512,000 bytes, the Shaper Rate (SR) = 50 Mbps, and both Bc and Be are
configured to be 32,000 bytes. For a single burst test, the
transmitting test device would burst 512,000 bytes maximum into the
ingress port and then stop transmitting.

If a multiple burst test is to be conducted, then the burst bytes
divided by the time interval between the 512,000 byte bursts must
not exceed the Shaper Rate.
The time interval (Ti) must adhere to a formula similar to the one
described in section 6.2.1.1 for queues, namely:

Ti = Ingress Queue x 8 / Shaper Rate

So for the example from the previous paragraph, Ti between bursts must
be greater than 82 milliseconds (512,000 bytes x 8 / 50,000,000 bps).
This yields an average rate of 50 Mbps so that the Ingress Queue
does not overflow.

Test Metrics:
- The metrics defined in section 4.1 (LP, OOS, PDV, SR, SBB, SBI) SHALL
  be measured at the egress port and recorded.

Procedure:
1. Configure the DUT shaper ingress queue length (QL) and shaper
egress rate parameters (SR, Bc, Be)

2. Configure the tester to generate a stateless traffic burst equal
to QL and an interval equal to Ti (QL in bits / SR)

3. Generate bursts of QL traffic into the DUT and measure the metrics
defined in section 4.1 (LP, OOS, PDV, SR, SBB, SBI) at the egress
port and across the entire Td (default 30 seconds duration)

Report Format:
The Shaper Stateless Traffic individual report MUST contain all results
for each QL/SR test run and a recommended format is as follows:
********************************************************
Test Configuration Summary: Tr, Td

DUT Configuration Summary: Ingress Burst Rate, QL, SR

The results table should contain entries for each test run as follows
(Test #1 to Test #Tr):

- LP, OOS, PDV, SR, SBB, SBI
********************************************************

6.3.1.2 Testing Shaper with Stateful Traffic

Objective:
Test a shaper by transmitting stateful traffic bursts into the shaper
ingress port and verifying that the egress traffic is shaped according
to the shaper traffic profile.

Test Summary:
To provide a more realistic benchmark and to test queues in layer 4
devices such as firewalls, stateful traffic testing is also
recommended for the shaper tests. Stateful traffic tests will also
utilize the Network Delay Emulator (NDE) from the network set-up
configuration in section 2.

The BDP of the TCP test traffic must be calculated as described in
section 6.2.1.2. To properly stress network buffers and the traffic
shaping function, the cumulative TCP window should exceed the BDP,
which will stress the shaper. BDP factors of 1.1 to 1.5 are
recommended, but the values are at the discretion of the tester and
should be documented.

The cumulative TCP Window Sizes* (RWND at the receiving end & CWND
at the transmitting end) equate to:

TCP window size* for each connection x number of connections

* as described in section 3 of RFC6349, the SSB MUST be large
enough to fill the BDP

For example, if the BDP is equal to 256 KBytes and a window size of
64 KBytes is used for each connection, then it would require four (4)
connections to fill the BDP and 5-6 connections (oversubscribing the
BDP) to stress test the traffic shaping function.

Two types of TCP tests must be performed: Bulk Transfer test and Micro
Burst Test Pattern as documented in Appendix B. The Bulk Transfer
Test only bursts during the TCP Slow Start (or Congestion Avoidance)
state, while the Micro Burst test emulates application layer bursting
which may occur at any time during the TCP connection.
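Returning to the BDP example above, the connection-count arithmetic can
be expressed as a short sketch (Python; the BDP and per-connection
window values simply repeat the example in this section, and any other
values are at the discretion of the tester and should be documented):

    import math

    def connections_to_fill_bdp(bdp_bytes, window_bytes):
        # Minimum number of concurrent TCP connections whose cumulative
        # window fills the BDP.
        return math.ceil(bdp_bytes / window_bytes)

    bdp = 256 * 1024      # 256 KBytes, from the example above
    window = 64 * 1024    # 64 KBytes window per connection
    print(connections_to_fill_bdp(bdp, window))             # 4 connections fill the BDP
    print(connections_to_fill_bdp(int(1.5 * bdp), window))  # 6 connections at a 1.5 x BDP factor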
Other test types should include: Simple Web Site, Complex Web Site,
Business Applications, Email, SMB/CIFS File Copy (which are also
documented in Appendix B).

Test Metrics:
The test results will be recorded per the stateful metrics defined in
section 4.2, primarily the TCP Test Pattern Execution Time (TTPET),
TCP Efficiency, and Buffer Delay.

Procedure:
1. Configure the DUT shaper ingress queue length (QL) and shaper
egress rate parameters (SR, Bc, Be)

2. Configure the tester* to generate a profile of an emulated
application traffic mixture

- The application mixture MUST be defined in terms of percentage
of the total bandwidth to be tested

- The rate of transmission for each application within the mixture
MUST also be configurable

*The tester MUST be capable of generating precise TCP test patterns for
each application specified, to ensure repeatable results.

3. Generate application traffic between the ingress (client side) and
egress (server side) ports of the DUT and measure the metrics (TTPET,
TCP Efficiency, and Buffer Delay) per application stream and at the
ingress and egress port (across the entire Td, default 30 seconds
duration).

Reporting Format:
The Shaper Stateful Traffic individual report MUST contain all results
for each traffic scheduler and QL/SR test run and a recommended format
is as follows:

********************************************************
Test Configuration Summary: Tr, Td

DUT Configuration Summary: Ingress Burst Rate, QL, SR

Application Mixture and Intensities: this is the percent configured of
each application type

The results table should contain entries for each test run as follows
(Test #1 to Test #Tr):

- Per Application Throughput (bps) and TTPET
- Per Application Bytes In and Bytes Out
- Per Application TCP Efficiency, and Buffer Delay
********************************************************

6.3.2 Shaper Capacity Tests

Objective:
The intent of these scalability tests is to verify shaper performance
in a scaled environment with shapers active on multiple queues on
multiple egress physical ports. This test will benchmark the maximum
number of shapers as specified by the device manufacturer.

The following sections provide the specific test scenarios, procedures,
and reporting formats for each shaper capacity test.

6.3.2.1 Single Queue Shaped, All Physical Ports Active

Test Summary:
The first shaper capacity test involves per-port shaping, all physical
ports active. Traffic from multiple ingress physical ports is directed
to the same egress physical port, which will cause oversubscription
on the egress physical port. Also, the same amount of traffic is
directed to each egress physical port.

The benchmarking methodology specified in Section 6.3.1 (procedure,
metrics, and reporting format) should be applied here. Since this is a
capacity test, the configuration and report results format from 6.3.1
MUST also include:

Configuration:
- The number of physical ingress ports active during the test
- The classification marking (DSCP, VLAN, etc.)
  for each physical ingress port
- The traffic rate for stateless traffic and the traffic rate / mixture
  for stateful traffic for each physical ingress port
- The shaper parameters (QL, SR, Bc, Be) for each shaped egress port

Report results:
- For each active egress port, the achieved throughput rate and shaper
  metrics for each ingress port traffic stream

Example:
- Egress Port 1: throughput and metrics for ingress streams 1-n
- Egress Port n: throughput and metrics for ingress streams 1-n

6.3.2.2 All Queues Shaped, Single Port Active

Test Summary:
The second shaper capacity test is conducted with all queues actively
shaping on a single physical port. The benchmarking methodology
described in the per-port shaping test (previous section) serves as
the foundation for this test. Additionally, each of the SP queues on
the egress physical port is configured with a shaper. For the highest
priority queue, the maximum amount of bandwidth available is limited
by the bandwidth of the shaper. For the lower priority queues, the
maximum amount of bandwidth available is limited by the bandwidth of
the shaper and traffic in higher priority queues.

The benchmarking methodology specified in Section 6.3.1 (procedure,
metrics, and reporting format) should be applied here. Since this is
a capacity test, the configuration and report results format from
6.3.1 MUST also include:

Configuration:
- The number of physical ingress ports active during the test
- The classification marking (DSCP, VLAN, etc.) for each physical
  ingress port
- The traffic rate for stateless traffic and the traffic rate/mixture
  for stateful traffic for each physical ingress port
- For the active egress port, the shaper parameters (QL, SR, Bc, Be)
  of each queue

Report results:
- For each queue of the active egress port, the achieved throughput
  rate and shaper metrics for each ingress port traffic stream

Example:
- Egress Port High Priority Queue: throughput and metrics for ingress streams 1-n
- Egress Port Lower Priority Queue: throughput and metrics for ingress streams 1-n

6.3.2.3 All Queues Shaped, All Ports Active

Test Summary:
The third shaper capacity test is a combination of the tests in the
previous two sections: all queues will be actively shaping and all
physical ports will be active.

The benchmarking methodology specified in Section 6.3.1 (procedure,
metrics, and reporting format) should be applied here. Since this is
a capacity test, the configuration and report results format from
6.3.1 MUST also include:

Configuration:
- The number of physical ingress ports active during the test
- The classification marking (DSCP, VLAN, etc.) for each physical
  ingress port
- The traffic rate for stateless traffic and the traffic rate / mixture
  for stateful traffic for each physical ingress port
- For each of the active egress ports, the per-port and per-queue
  shaper parameters (QL, SR, Bc, Be)

Report results:
- For each queue of each active egress port, the achieved throughput
  rate and shaper metrics for each ingress port traffic stream

Example:
- Egress Port 1 High Priority Queue: throughput and metrics for ingress streams 1-n
- Egress Port 1 Lower Priority Queue: throughput and metrics for ingress streams 1-n
.
.
- Egress Port n High Priority Queue: throughput and metrics for ingress streams 1-n
- Egress Port n Lower Priority Queue: throughput and metrics for ingress streams 1-n

6.4 Concurrent Capacity Load Tests

As mentioned in the scope of this document, it is impossible to
specify the various permutations of concurrent traffic management
functions that should be tested in a device for capacity testing.
However, some profiles are listed below which may be useful
to test under capacity as well:

- Policers on ingress and queuing on egress
- Policers on ingress and shapers on egress (not intended for a
  flow to be policed and then shaped; these would be two different
  flows tested at the same time)
- etc.

The test procedures and reporting formats from the previous sections
may be modified to accommodate the capacity test profile.

Appendix A: Open Source Tools for Traffic Management Testing

This framework specifies that stateless and stateful behaviors should
both be tested. Three (3) open source tools that can be used to
accomplish many of the tests proposed in this framework are iperf,
netperf (with netperf-wrapper), and Flowgrind.

Iperf can generate UDP- or TCP-based traffic; a client and server must
both run the iperf software in the same traffic mode. The server is
set up to listen and then the test traffic is controlled from the
client. Both uni-directional and bi-directional concurrent testing
are supported.

The UDP mode can be used for the stateless traffic testing. The
target bandwidth, packet size, UDP port, and test duration can be
controlled. A report of bytes transmitted, packets lost, and delay
variation is provided by the iperf receiver.

The TCP mode can be used for stateful traffic testing to test bulk
transfer traffic. The TCP Window size (which is actually the SSB),
the number of connections, the packet size, the TCP port, and the test
duration can be controlled. A report of bytes transmitted and
throughput achieved is provided by the iperf sender.

Netperf is a software application that provides network bandwidth
testing between two hosts on a network. It supports Unix domain
sockets, TCP, SCTP, DLPI, and UDP via BSD Sockets. Netperf provides
a number of predefined tests, e.g. to measure bulk (unidirectional)
data transfer or request/response performance
(see http://en.wikipedia.org/wiki/Netperf). Netperf-wrapper is a
Python script that runs multiple simultaneous netperf instances and
aggregates the results.

Flowgrind is a distributed network performance measurement tool.
Using the flowgrind controller, tests can be set up between hosts
running flowgrind. For the purposes of this traffic management
testing framework, the key benefit of Flowgrind is that it can
emulate non-bulk transfer applications such as HTTP, Email, etc.
Traffic generation options include the request size, response size,
inter-request gap, and response time gap. Additionally, various
distribution types are supported, including constant, normal,
exponential, Pareto, etc.

The traffic generation parameters of both netperf-wrapper and
Flowgrind facilitate the emulation of the TCP test patterns which are
discussed in Appendix B.
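To make the link to Appendix B concrete, the following is a minimal
sketch (Python, standard library only) of a configurable micro-burst
TCP sender in the spirit of the Micro Burst test pattern described
there. The address, port, burst size, and interval shown are
hypothetical placeholders; this sketch is illustrative only and is not
a substitute for the tools described above.

    import socket
    import time

    def micro_burst_sender(host="192.0.2.10", port=5001,
                           burst_bytes=64000, burst_interval=0.5, bursts=20):
        # Open one TCP connection and send fixed-size application-layer
        # bursts separated by a fixed time interval.
        payload = b"\x00" * burst_bytes
        with socket.create_connection((host, port)) as sock:
            for _ in range(bursts):
                sock.sendall(payload)        # application-layer burst
                time.sleep(burst_interval)   # gap between bursts

    if __name__ == "__main__":
        micro_burst_sender()

A corresponding receiver only needs to accept the connection and read
the data; the metrics of interest are then measured as described in
this framework.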
Appendix B: Stateful TCP Test Patterns

This framework recommends at a minimum the following TCP test patterns
since they are representative of real-world application traffic (section
5.2.1 describes some methods to derive other application-based TCP test
patterns).

- Bulk Transfer: generate concurrent TCP connections whose aggregate
  number of in-flight data bytes would fill the BDP. Guidelines
  from RFC 6349 are used to create this TCP traffic pattern.

- Micro Burst: generate precise burst patterns within a single or multiple
  TCP connection(s). The idea is for TCP to establish equilibrium and then
  burst application bytes at defined sizes. The test tool must allow the
  burst size and burst time interval to be configurable.

- Web Site Patterns: The HTTP traffic model from "3GPP2 C.R1002-0 v1.0"
  is referenced (Table 4.1.3.2-1) to develop these TCP test patterns. In
  summary, the HTTP traffic model consists of the following parameters:
  - Main object size (Sm)
  - Embedded object size (Se)
  - Number of embedded objects per page (Nd)
  - Client processing time (Tcp)
  - Server processing time (Tsp)

Web site test patterns are illustrated with the following examples:

- Simple Web Site: mimic the request / response and object download
  behavior of a basic web site (small company).
- Complex Web Site: mimic the request / response and object download
  behavior of a complex web site (ecommerce site).

Referencing the HTTP traffic model parameters, the following table
was derived (by analysis and experimentation) for Simple and Complex
Web site TCP test patterns:

                        Simple         Complex
Parameter               Web Site       Web Site
-----------------------------------------------------
Main object             Ave. = 10KB    Ave. = 300KB
size (Sm)               Min. = 100B    Min. = 50KB
                        Max. = 500KB   Max. = 2MB

Embedded object         Ave. = 7KB     Ave. = 10KB
size (Se)               Min. = 50B     Min. = 100B
                        Max. = 350KB   Max. = 1MB

Number of embedded      Ave. = 5       Ave. = 25
objects per page (Nd)   Min. = 2       Min. = 10
                        Max. = 10      Max. = 50

Client processing       Ave. = 3s      Ave. = 10s
time (Tcp)*             Min. = 1s      Min. = 3s
                        Max. = 10s     Max. = 30s

Server processing       Ave. = 5s      Ave. = 8s
time (Tsp)*             Min. = 1s      Min. = 2s
                        Max. = 15s     Max. = 30s

* The client and server processing time is distributed across the
  transmission / receipt of all of the main and embedded objects.

To be clear, the parameters in this table are reasonable guidelines
for the TCP test pattern traffic generation. The test tool can use
fixed parameters for simpler tests and mathematical distributions for
more complex tests. However, the test pattern must be repeatable to
ensure that the benchmark results can be reliably compared.

- Inter-active Patterns: While Web site patterns are inter-active
  to a degree, they mainly emulate the downloading of web sites of
  various complexity. Inter-active patterns are more chatty in nature
  since there is a lot of user interaction with the servers. Examples
  include business applications such as PeopleSoft and Oracle, and
  consumer applications such as Facebook, IM, etc. For the inter-active
  patterns, the packet capture technique was used to characterize some
  business applications and also the email application.
In summary, an inter-active application can be described by the
following parameters:
- Client message size (Scm)
- Number of client messages (Nc)
- Server response size (Srs)
- Number of server messages (Ns)
- Client processing time (Tcp)
- Server processing time (Tsp)
- File size upload (Su)*
- File size download (Sd)*

* The file size parameters account for attachments uploaded or
  downloaded and may not be present in all inter-active applications.

Again using packet capture as a means to characterize, the following
table reflects the guidelines for Simple Business Application, Complex
Business Application, eCommerce, and Email Send / Receive:

                   Simple          Complex
Parameter          Biz. App.       Biz. App.       eCommerce*     Email
--------------------------------------------------------------------
Client message     Ave. = 450B     Ave. = 2KB      Ave. = 1KB     Ave. = 200B
size (Scm)         Min. = 100B     Min. = 500B     Min. = 100B    Min. = 100B
                   Max. = 1.5KB    Max. = 100KB    Max. = 50KB    Max. = 1KB

Number of client   Ave. = 10       Ave. = 100      Ave. = 20      Ave. = 10
messages (Nc)      Min. = 5        Min. = 50       Min. = 10      Min. = 5
                   Max. = 25       Max. = 250      Max. = 100     Max. = 25

Client processing  Ave. = 10s      Ave. = 30s      Ave. = 15s     Ave. = 5s
time (Tcp)**       Min. = 3s       Min. = 3s       Min. = 5s      Min. = 3s
                   Max. = 30s      Max. = 60s      Max. = 120s    Max. = 45s

Server response    Ave. = 2KB      Ave. = 5KB      Ave. = 8KB     Ave. = 200B
size (Srs)         Min. = 500B     Min. = 1KB      Min. = 100B    Min. = 150B
                   Max. = 100KB    Max. = 1MB      Max. = 50KB    Max. = 750B

Number of server   Ave. = 50       Ave. = 200      Ave. = 100     Ave. = 15
messages (Ns)      Min. = 10       Min. = 25       Min. = 15      Min. = 5
                   Max. = 200      Max. = 1000     Max. = 500     Max. = 40

Server processing  Ave. = 0.5s     Ave. = 1s       Ave. = 2s      Ave. = 4s
time (Tsp)**       Min. = 0.1s     Min. = 0.5s     Min. = 1s      Min. = 0.5s
                   Max. = 5s       Max. = 20s      Max. = 10s     Max. = 15s

File size          Ave. = 50KB     Ave. = 100KB    Ave. = N/A     Ave. = 100KB
upload (Su)        Min. = 2KB      Min. = 10KB     Min. = N/A     Min. = 20KB
                   Max. = 200KB    Max. = 2MB      Max. = N/A     Max. = 10MB

File size          Ave. = 50KB     Ave. = 100KB    Ave. = N/A     Ave. = 100KB
download (Sd)      Min. = 2KB      Min. = 10KB     Min. = N/A     Min. = 20KB
                   Max. = 200KB    Max. = 2MB      Max. = N/A     Max. = 10MB

* eCommerce used a combination of packet capture techniques and
  reference traffic flows from "SPECweb2009" (need proper reference)
** The client and server processing time is distributed across the
   transmission / receipt of all of the messages. Client processing
   time consists mainly of the delay between user interactions (not
   machine processing).

And again, the parameters in this table are the guidelines for the
TCP test pattern traffic generation. The test tool can use fixed
parameters for simpler tests and mathematical distributions for more
complex tests. However, the test pattern must be repeatable to ensure
that the benchmark results can be reliably compared.

- SMB/CIFS File Copy: mimic a network file copy, both read and write.
  As opposed to FTP, which is a bulk transfer and is only flow-controlled
  via TCP, SMB/CIFS divides a file into application blocks and utilizes
  application-level handshaking in addition to TCP flow control.
In summary, an SMB/CIFS file copy can be described by the following
parameters:
- Client message size (Scm)
- Number of client messages (Nc)
- Server response size (Srs)
- Number of server messages (Ns)
- Client processing time (Tcp)
- Server processing time (Tsp)
- Block size (Sb)

The client and server messages are SMB control messages. The Block
size is the data portion of the file transfer.

Again using packet capture as a means to characterize, the following
table reflects the guidelines for SMB/CIFS file copy:

                   SMB
Parameter          File Copy
------------------------------
Client message     Ave. = 450B
size (Scm)         Min. = 100B
                   Max. = 1.5KB
Number of client   Ave. = 10
messages (Nc)      Min. = 5
                   Max. = 25
Client processing  Ave. = 1ms
time (Tcp)         Min. = 0.5ms
                   Max. = 2ms
Server response    Ave. = 2KB
size (Srs)         Min. = 500B
                   Max. = 100KB
Number of server   Ave. = 10
messages (Ns)      Min. = 10
                   Max. = 200
Server processing  Ave. = 1ms
time (Tsp)         Min. = 0.5ms
                   Max. = 2ms
Block              Ave. = N/A
Size (Sb)*         Min. = 16KB
                   Max. = 128KB

*Depending upon the tested file size, the block size will be
transferred n times to complete the file transfer. An example would
be a 10 MB file test and a 64KB block size. In this case, 160 blocks
would be transferred after the control channel is opened between the
client and server.

7. Security Considerations

8. IANA Considerations

9. Conclusions

10. References

10.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2234] Crocker, D. and Overell, P. (Editors), "Augmented BNF for
          Syntax Specifications: ABNF", RFC 2234, Internet Mail
          Consortium and Demon Internet Ltd., November 1997.

[RFC2680] G. Almes et al., "A One-way Packet Loss Metric for IPPM,"
          RFC 2680, September 1999.

[RFC2697] J. Heinanen et al., "A Single Rate Three Color Marker,"
          RFC 2697, September 1999.

[RFC2698] J. Heinanen et al., "A Two Rate Three Color Marker,"
          RFC 2698, September 1999.

[RFC4689] S. Poretsky et al., "Terminology for Benchmarking
          Network-layer Traffic Control Mechanisms," RFC 4689,
          October 2006.

[RFC4737] A. Morton et al., "Packet Reordering Metrics," RFC 4737,
          November 2006.

[RFC6349] Barry Constantine et al., "Framework for TCP Throughput
          Testing," RFC 6349, August 2011.

[AQM-RECO] Fred Baker et al., "IETF Recommendations Regarding
          Active Queue Management," August 2014,
          https://datatracker.ietf.org/doc/draft-ietf-aqm-recommendation/

[MEF-10.2] "MEF 10.2: Ethernet Services Attributes Phase 2," October 2009,
          http://metroethernetforum.org/PDF_Documents/technical-
          specifications/MEF10.2.pdf

[MEF-12.1] "MEF 12.1: Carrier Ethernet Network Architecture Framework --
          Part 2: Ethernet Services Layer - Base Elements," April 2010,
          https://www.metroethernetforum.org/Assets/Technical_Specifications
          /PDF/MEF12.1.pdf

[MEF-26] "MEF 26: External Network Network Interface (ENNI) - Phase 1,"
          January 2010, http://www.metroethernetforum.org/PDF_Documents
          /technical-specifications/MEF26.pdf

10.2. Informative References
11. Acknowledgments

Authors' Addresses

   Barry Constantine
   JDSU, Test and Measurement Division
   Germantown, MD 20876-7100, USA
   Phone: +1 240 404 2227
   Email: barry.constantine@jdsu.com

   Timothy Copley
   Level 3 Communications
   14605 S 50th Street
   Phoenix, AZ 85044
   Email: Timothy.copley@level3.com

   Ram Krishnan
   Brocade Communications
   San Jose, 95134, USA
   Phone: +001-408-406-7890
   Email: ramk@brocade.com