Network Working Group                                     B. Constantine
Internet-Draft                                                      JDSU
Intended status: Informational                                 G. Forget
Expires: August 23, 2011                   Bell Canada (Ext. Consultant)
                                                           Ruediger Geib
                                                        Deutsche Telekom
                                                        Reinhard Schrage
                                                      Schrage Consulting

                                                       February 23, 2011

                  Framework for TCP Throughput Testing
                draft-ietf-ippm-tcp-throughput-tm-12.txt

Abstract

   This framework describes a practical methodology for measuring end-
   to-end TCP Throughput in a managed IP network.  The goal is to
   provide a better indication in regards to user experience.  In this
   framework, TCP and IP parameters are specified and should be
   configured as recommended.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 23, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
Code Components extracted from this document must 57 include Simplified BSD License text as described in Section 4.e of 58 the Trust Legal Provisions and are provided without warranty as 59 described in the Simplified BSD License. 61 Table of Contents 63 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 64 1.1 Terminology. . . . . . . . . . . . . . . . . . . . . . . . 4 65 1.2 TCP Equilibrium . . . . . . . . . . . . . . . . . . . . . 5 66 2. Scope and Goals . . . . . . . . . . . . . . . . . . . . . . . 6 67 3. Methodology. . . . . . . . . . . . . . . . . . . . . . . . . . 7 68 3.1 Path MTU . . . . . . . . . . . . . . . . . . . . . . . . . 9 69 3.2 RTT and Bandwidth . . . . . . . . . . . . . . . . . . . . 9 70 3.2.1 Measuring RTT . . . . . . . . . . . . . . . . . . . . 9 71 3.2.2 Measuring Bandwidth . . . . . . . . . . . . . . . . . 10 72 3.3. Measuring TCP Throughput . . . . . . . . . . . . . . . . . 11 73 3.3.1 Minimum TCP RWND . . . . . . . . . . . . . . . . . . . 11 74 4. TCP Metrics . . . . . . . . . . . . . . . . . . . . . . . . . 14 75 4.1 Transfer Time Ratio. . . . . . . . . . . . . . . . . . . . 14 76 4.1.1 Maximum Achievable TCP Throughput calculation . . . . 15 77 4.1.2 Transfer Time and Transfer Time Ratio calculation. . . 16 78 4.2 TCP Efficiency . . . . . . . . . . . . . . . . . . . . . . 16 79 4.2.1 TCP Efficiency Percentage calculation . . . . . . . . 17 80 4.3 Buffer Delay . . . . . . . . . . . . . . . . . . . . . . . 17 81 4.3.1 Buffer Delay Percentage calculation. . . . . . . . . . 17 82 5. Conducting TCP Throughput Tests. . . . . . . . . . . . . . . . 18 83 5.1 Single versus Multiple Connections . . . . . . . . . . . . 18 84 5.2 Results Interpretation . . . . . . . . . . . . . . . . . . 19 85 6. Security Considerations . . . . . . . . . . . . . . . . . . . 21 86 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 87 8. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 21 88 9. References . . . . . . . . . . . . . . . . . . . . . . . . . . 22 89 9.1 Normative References . . . . . . . . . . . . . . . . . . . 22 90 9.2 Informative References . . . . . . . . . . . . . . . . . . 22 92 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 23 94 1. Introduction 96 In the network industry, the SLA (Service Level Agreement) provided 97 to business class customers is generally based upon Layer 2/3 98 criteria such as: Bandwidth, latency, packet loss and delay 99 variations (jitter). Network providers are coming to the realization 100 that Layer 2/3 testing is not enough to adequately ensure end-user's 101 satisfaction. In addition to Layer 2/3 testing, this framework 102 recommends a methodology for measuring TCP Throughput in order to 103 provide meaningful results with respect to user experience. 105 Additionally, business class customers seek to conduct repeatable TCP 106 Throughput tests between locations. Since these organizations rely on 107 the networks of the providers, a common test methodology with 108 predefined metrics would benefit both parties. 110 Note that the primary focus of this methodology is managed business 111 class IP networks; i.e. those Ethernet terminated services for which 112 organizations are provided an SLA from the network provider. Because 113 of the SLA, the expectation is that the TCP Throughput should achieve 114 the guaranteed bandwidth. 
   End-users with "best effort" access could use this methodology, but
   this framework and its metrics are intended to be used in a
   predictable managed IP network.  No end-to-end performance can be
   guaranteed when only the access portion is being provisioned to a
   specific bandwidth capacity.

   The intent behind this document is to define a methodology for
   testing sustained TCP Layer performance.  In this document, the
   achievable TCP Throughput is that amount of data per unit time that
   TCP transports when in the TCP Equilibrium state.  (See Section 1.2
   for the TCP Equilibrium definition.)  Throughout this document,
   maximum achievable throughput refers to the theoretical achievable
   throughput when TCP is in the Equilibrium state.

   TCP is connection oriented, and at the transmitting side it uses a
   congestion window (TCP CWND).  At the receiving end, TCP uses a
   receive window (TCP RWND) to inform the transmitting end of how
   many Bytes it is capable of accepting at a given time.

   Derived from the Round Trip Time (RTT) and network path bandwidth,
   the Bandwidth Delay Product (BDP) determines the Send and Receive
   Socket Buffer sizes required to achieve the maximum TCP Throughput.
   Then, with the help of the slow start and congestion avoidance
   algorithms, a TCP CWND is calculated based on the IP network path
   loss rate.  Finally, the minimum value between the calculated TCP
   CWND and the TCP RWND advertised by the opposite end will determine
   how many Bytes can actually be sent by the transmitting side at a
   given time.

   Both TCP Window sizes (RWND and CWND) may vary during any given TCP
   session, although, up to the bandwidth limits, a larger RWND and a
   larger CWND will achieve higher throughputs by permitting more
   in-flight Bytes.

   At both ends of the TCP connection and for each socket, there are
   default buffer sizes.  There are also kernel-enforced maximum
   buffer sizes.  These buffer sizes can be adjusted at both ends
   (transmitting and receiving).  Some TCP/IP stack implementations
   use Receive Window Auto-Tuning, although, in order to obtain the
   maximum throughput, it is critical to use large enough TCP Send and
   Receive Socket Buffer sizes.  In fact, they should be equal to or
   greater than the BDP.

   Many variables are involved in TCP Throughput performance, but this
   methodology focuses on:

   - BB (Bottleneck Bandwidth)
   - RTT (Round Trip Time)
   - Send and Receive Socket Buffers
   - Minimum TCP RWND
   - Path MTU (Maximum Transmission Unit)

   This methodology proposes TCP testing that should be performed in
   addition to traditional Layer 2/3 type tests.  In fact, Layer 2/3
   tests are required to verify the integrity of the network before
   conducting TCP tests.  Examples include iperf (UDP mode) and manual
   packet layer test techniques where packet throughput, loss, and
   delay measurements are conducted.  When available, standardized
   testing similar to [RFC2544] but adapted for use in operational
   networks may be used.

   Note: [RFC2544] was never meant to be used outside a lab
   environment.

   Sections 2 and 3 of this document provide a general overview of the
   proposed methodology.  Section 4 defines the metrics, while Section
   5 explains how to conduct the tests and interpret the results.
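   As an informative illustration of how these variables combine, the
   following minimal sketch (Python is assumed here purely for
   illustration; the bandwidth, RTT, and window values are
   hypothetical examples, not requirements of this methodology)
   computes the BDP and the throughput ceiling imposed by the
   effective window, i.e. the minimum of the TCP CWND and TCP RWND:

      # Minimal illustration of the BDP and window-limited throughput
      # relationships described in this Introduction (example values).

      def bdp_bits(bottleneck_bw_bps, rtt_sec):
          # Bandwidth Delay Product in bits: BB x RTT
          return bottleneck_bw_bps * rtt_sec

      def window_limited_throughput_bps(window_bytes, rtt_sec):
          # At most one full window can be in flight per RTT.
          return (window_bytes * 8) / rtt_sec

      bb_bps = 100e6        # example Bottleneck Bandwidth: 100 Mbps
      rtt    = 0.005        # example baseline RTT: 5 ms
      rwnd   = 64 * 1024    # example advertised TCP RWND: 64 KBytes
      cwnd   = 48 * 1024    # example sender TCP CWND: 48 KBytes

      bdp = bdp_bits(bb_bps, rtt)
      effective_window = min(rwnd, cwnd)   # smaller window governs in-flight Bytes
      ceiling = min(window_limited_throughput_bps(effective_window, rtt), bb_bps)

      print(f"BDP              : {bdp:,.0f} bits ({bdp / 8 / 1000:.2f} KBytes)")
      print(f"Effective window : {effective_window / 1024:.0f} KBytes")
      print(f"Throughput limit : {ceiling / 1e6:.1f} Mbps")

   With an effective window equal to or greater than the BDP
   (62.50 KBytes in this example), the throughput limit becomes the
   Bottleneck Bandwidth itself, which is the condition the remainder
   of this methodology aims to verify.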
177 1.1 Terminology 179 The common definitions used in this methodology are: 181 - TCP Throughput Test Device (TCP TTD), refers to compliant TCP 182 host that generates traffic and measures metrics as defined in 183 this methodology. i.e. a dedicated communications test instrument. 184 - Customer Provided Equipment (CPE), refers to customer owned 185 equipment (routers, switches, computers, etc.) 186 - Customer Edge (CE), refers to provider owned demarcation device. 187 - Provider Edge (PE), refers to provider's distribution equipment. 188 - Bottleneck Bandwidth (BB), lowest bandwidth along the complete 189 path. Bottleneck Bandwidth and Bandwidth are used synonymously 190 in this document. Most of the time the Bottleneck Bandwidth is 191 in the access portion of the wide area network (CE - PE). 192 - Provider (P), refers to provider core network equipment. 193 - Network Under Test (NUT), refers to the tested IP network path. 194 - Round Trip Time (RTT), refers to Layer 4 back and forth delay. 196 Figure 1.1 Devices, Links and Paths 198 +----+ +----+ +----+ +----+ +---+ +---+ +----+ +----+ +----+ +----+ 199 | TCP|-| CPE|-| CE |--| PE |-| P |--| P |-| PE |--| CE |-| CPE|-| TCP| 200 | TTD| | | | |BB| | | | | | | |BB| | | | | TTD| 201 +----+ +----+ +----+ +----+ +---+ +---+ +----+ +----+ +----+ +----+ 202 <------------------------ NUT -------------------------> 203 R >-----------------------------------------------------------| 204 T | 205 T <-----------------------------------------------------------| 207 Note that the NUT may be built with of a variety of devices including 208 but not limited to, load balancers, proxy servers or WAN acceleration 209 appliances. The detailed topology of the NUT should be well known 210 when conducting the TCP Throughput tests, although this methodology 211 makes no attempt to characterize specific network architectures. 213 1.2 TCP Equilibrium 215 TCP connections have three (3) fundamental congestion window phases, 216 which are depicted in Figure 1.2. 218 1 - The Slow Start phase, which occurs at the beginning of a TCP 219 transmission or after a retransmission time out. 221 2 - The Congestion Avoidance phase, during which TCP ramps up to 222 establish the maximum achievable throughput. It is important to note 223 that retransmissions are a natural by-product of the TCP congestion 224 avoidance algorithm as it seeks to achieve maximum throughput. 226 3 - The Loss Recovery phase, which could include Fast Retransmit 227 (Tahoe) or Fast Recovery (Reno & New Reno). When packet loss occurs, 228 Congestion Avoidance phase transitions either to Fast Retransmission 229 or Fast Recovery depending upon the TCP implementation. If a Time-Out 230 occurs, TCP transitions back to the Slow Start phase. 232 Figure 1.2 TCP CWND Phases 234 /\ | 235 /\ |High ssthresh TCP CWND TCP 236 /\ |Loss Event * halving 3-Loss Recovery Equilibrium 237 /\ | * \ upon loss 238 /\ | * \ / \ Time-Out Adjusted 239 /\ | * \ / \ +--------+ * ssthresh 240 /\ | * \/ \ / Multiple| * 241 /\ | * 2-Congestion\ / Loss | * 242 /\ | * Avoidance \/ Event | * 243 TCP | * Half | * 244 Through- | * TCP CWND | * 1-Slow Start 245 put | * 1-Slow Start Min TCP CWND after T-O 246 +----------------------------------------------------------- 247 Time > > > > > > > > > > > > > > > > > > > > > > > > > > > 249 Note: ssthresh = Slow Start threshold. 
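   The following sketch (Python, purely illustrative) mimics the three
   phases of Figure 1.2 with a highly simplified Reno-like model.  It
   is not a model of any particular TCP implementation; the round
   count, ssthresh, and loss placement are arbitrary assumptions used
   only to reproduce the general shape of the figure:

      # Highly simplified Reno-like CWND evolution, in segments per RTT
      # round, only to illustrate the phases of Figure 1.2.  A
      # retransmission Time-Out (not modeled here) would reset the CWND
      # to its minimum and re-enter Slow Start.

      def cwnd_phases(rounds=40, ssthresh=32, loss_rounds=(20,)):
          cwnd, history = 1, []
          for r in range(rounds):
              history.append(cwnd)
              if r in loss_rounds:              # 3 - Loss Recovery (loss event)
                  ssthresh = max(cwnd // 2, 2)  # halve the window upon loss
                  cwnd = ssthresh
              elif cwnd < ssthresh:             # 1 - Slow Start: exponential growth
                  cwnd = min(cwnd * 2, ssthresh)
              else:                             # 2 - Congestion Avoidance: linear growth
                  cwnd += 1
          return history

      print(cwnd_phases())

   Plotting the returned list reproduces the general rise, halving,
   and recovery pattern sketched in Figure 1.2.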
251 A well tuned and managed IP network with appropriate TCP adjustments 252 in the IP hosts and applications should perform very close to the 253 BB (Bottleneck Bandwidth) when TCP is in the Equilibrium state. 255 This TCP methodology provides guidelines to measure the maximum 256 achievable TCP Throughput when TCP is in the Equilibrium state. 257 All maximum achievable TCP Throughputs specified in Section 3.3 are 258 with respect to this condition. 260 It is important to clarify the interaction between the sender's Send 261 Socket Buffer and the receiver's advertised TCP RWND Size. TCP test 262 programs such as iperf, ttcp, etc. allows the sender to control the 263 quantity of TCP Bytes transmitted and unacknowledged (in-flight), 264 commonly referred to as the Send Socket Buffer. This is done 265 independently of the TCP RWND Size advertised by the receiver. 267 2. Scope and Goals 269 Before defining the goals, it is important to clearly define the 270 areas that are out-of-scope. 272 - This methodology is not intended to predict the TCP Throughput 273 during the transient stages of a TCP connection, such as during the 274 slow start phase. 276 - This methodology is not intended to definitively benchmark TCP 277 implementations of one OS to another, although some users may find 278 value in conducting qualitative experiments. 280 - This methodology is not intended to provide detailed diagnosis 281 of problems within end-points or within the network itself as 282 related to non-optimal TCP performance, although results 283 interpretation for each test step may provide insights to potential 284 issues. 286 - This methodology does not propose to operate permanently with high 287 measurement loads. TCP performance and optimization within 288 operational networks may be captured and evaluated by using data 289 from the "TCP Extended Statistics MIB" [RFC4898]. 291 - This methodology is not intended to measure TCP Throughput as part 292 of an SLA, or to compare the TCP performance between service 293 providers or to compare between implementations of this methodology 294 in dedicated communications test instruments. 296 In contrast to the above exclusions, the primary goal is to define a 297 method to conduct a practical end-to-end assessment of sustained 298 TCP performance within a managed business class IP network. Another 299 key goal is to establish a set of "best practices" that a non-TCP 300 expert should apply when validating the ability of a managed IP 301 network to carry end-user TCP applications. 303 Specific goals are to: 305 - Provide a practical test approach that specifies tunable parameters 306 (such as MTU (Maximum Transmit Unit) and Socket Buffer sizes) and how 307 these affect the outcome of TCP performances over an IP network. 309 - Provide specific test conditions like link speed, RTT, MTU, Socket 310 Buffer sizes and achievable TCP Throughput when TCP is in the 311 Equilibrium state. For guideline purposes, provide examples of 312 test conditions and their maximum achievable TCP Throughput. 313 Section 1.2 provides specific details concerning the definition of 314 TCP Equilibrium within this methodology while Section 3 provides 315 specific test conditions with examples. 317 - Define three (3) basic metrics to compare the performance of TCP 318 connections under various network conditions. See Section 4. 
   - In test situations where the recommended procedure does not yield
   the maximum achievable TCP Throughput, this methodology provides
   some possible areas within the end host or the network that should
   be considered for investigation.  Once again, though, this
   methodology is not intended to provide a detailed diagnosis of
   these issues.  See Section 5.2.

3. Methodology

   This methodology is intended for operational and managed IP
   networks.  A multitude of network architectures and topologies can
   be tested.  The diagram in Figure 1.1 is very general and is only
   there to illustrate typical segmentation within end-user and
   network provider domains.

   Also, as stated earlier in Section 1, it is considered best
   practice to verify the integrity of the network by conducting Layer
   2/3 tests such as [RFC2544] or other methods of network stress
   tests.  It is important to mention here, though, that [RFC2544] was
   never meant to be used outside a lab environment.

   If the network is not performing properly in terms of packet loss,
   jitter, etc., then TCP Layer testing will not be meaningful.  A
   dysfunctional network will not achieve optimal TCP Throughputs with
   respect to the available bandwidth.

   TCP Throughput testing may require cooperation between the end-user
   customer and the network provider.  As an example, in an MPLS
   (Multi-Protocol Label Switching) network architecture, the testing
   should be conducted either on the CPE or on the CE device and not
   on the PE (Provider Edge) router.

   The following represents the sequential order of steps for this
   testing methodology:

   1 - Identify the Path MTU.  Packetization Layer Path MTU Discovery,
   or PLPMTUD [RFC4821], MUST be conducted.  It is important to
   identify the path MTU so that the TCP TTD is configured properly to
   avoid fragmentation.

   2 - Baseline Round Trip Time and Bandwidth.  This step establishes
   the inherent, non-congested Round Trip Time (RTT) and the
   Bottleneck Bandwidth of the end-to-end network path.  These
   measurements are used to provide estimates of the TCP RWND and Send
   Socket Buffer Sizes that SHOULD be used during subsequent test
   steps.  These measurements refer to [RFC2681] and [RFC4898] for
   measuring the RTD and the associated RTT.

   3 - TCP Connection Throughput Tests.  With baseline measurements of
   Round Trip Time and Bottleneck Bandwidth, single and multiple TCP
   connection throughput tests SHOULD be conducted to baseline network
   performance.

   Important to note are some of the key characteristics and
   considerations for the TCP test instrument.  The test host may be a
   standard computer or a dedicated communications test instrument.
   In both cases, it must be capable of emulating both a client and a
   server.

   The following criteria should be considered when selecting whether
   the TCP test host can be a standard computer or has to be a
   dedicated communications test instrument:

   - TCP implementation used by the test host, OS version, e.g. LINUX
   OS kernel using TCP New Reno, TCP options supported, etc.  These
   will obviously be more important when using dedicated
   communications test instruments where the TCP implementation may be
   customized or tuned to run on higher-performance hardware.  When a
   compliant TCP TTD is used, the TCP implementation MUST be
   identified in the test results.
   The compliant TCP TTD should be usable for complete end-to-end
   testing through network security elements and should also be usable
   for testing network sections.

   - More importantly, the TCP test host MUST be capable of generating
   and receiving stateful TCP test traffic at the full link speed of
   the network under test.  Stateful TCP test traffic means that the
   test host MUST fully implement a TCP/IP stack; this is generally a
   comment aimed at dedicated communications test equipment which
   sometimes "blasts" packets with TCP headers.  As a general rule of
   thumb, testing TCP Throughput at rates greater than 100 Mbit/sec
   MAY require high-performance server hardware or dedicated
   hardware-based test tools.

   - A compliant TCP Throughput Test Device MUST allow adjusting both
   Send and Receive Socket Buffer sizes.  The Socket Buffers MUST be
   large enough to fill the BDP.

   - Measuring RTT and retransmissions per connection will generally
   require a dedicated communications test instrument.  In the absence
   of dedicated hardware-based test tools, these measurements may need
   to be conducted with packet capture tools, i.e. conduct TCP
   Throughput tests and analyze RTT and retransmissions in packet
   captures.  Another option may be to use the "TCP Extended
   Statistics MIB" per [RFC4898].

   - The [RFC4821] PLPMTUD test SHOULD be conducted with a dedicated
   tester which exposes the ability to run the PLPMTUD algorithm
   independently from the OS stack.

3.1. Path MTU

   TCP implementations should use Path MTU Discovery techniques
   (PMTUD).  PMTUD relies on ICMP 'need to frag' messages to learn the
   path MTU.  When a device has a packet to send which has the Don't
   Fragment (DF) bit in the IP header set and the packet is larger
   than the Maximum Transmission Unit (MTU) of the next hop, the
   packet is dropped and the device sends an ICMP 'need to frag'
   message back to the host that originated the packet.  The ICMP
   'need to frag' message includes the next hop MTU, which PMTUD uses
   to adjust itself.  Unfortunately, because many network managers
   completely disable ICMP, this technique does not always prove
   reliable.

   Packetization Layer Path MTU Discovery, or PLPMTUD [RFC4821], MUST
   then be conducted to verify the network path MTU.  PLPMTUD can be
   used with or without ICMP.  [RFC4821] specifies search_high and
   search_low parameters for the MTU, and their use is recommended.
   The goal is to avoid fragmentation during all subsequent tests.

3.2. RTT and Bandwidth

   Before stateful TCP testing can begin, it is important to determine
   the baseline Round Trip Time (i.e. non-congested inherent delay)
   and Bottleneck Bandwidth of the end-to-end network to be tested.
   These measurements are used to calculate the BDP and to provide
   estimates of the TCP RWND and Send Socket Buffer Sizes that SHOULD
   be used in subsequent test steps.

3.2.1 Measuring RTT

   Complementing the definition from Section 1.1, Round Trip Time
   (RTT) is the elapsed time between the clocking in of the first bit
   of a TCP segment sent and the receipt of the last bit of the
   corresponding TCP Acknowledgment.  Round Trip Delay (RTD) is used
   synonymously with twice the Link Latency.  RTT measurements SHOULD
   use techniques defined in [RFC2681] or statistics available from
   MIBs defined in [RFC4898].
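   Where neither a dedicated tester nor the [RFC4898] MIB statistics
   are available, a rough in-band Layer 4 RTT sample can be taken with
   a few lines of code.  The sketch below (Python; the echo responder
   address and port are hypothetical examples, and a production
   measurement would rather rely on [RFC2681] / [RFC5357] capable
   tooling) paces small probes over an already established connection,
   deliberately avoiding the SYN -> SYN-ACK exchange, and keeps the
   minimum sample, consistent with the baselining guidance that
   follows:

      # Rough in-band RTT sampler against a TCP echo responder.
      # HOST and PORT are hypothetical; port 7 is the classic echo
      # service, which is rarely enabled on modern hosts.
      import socket
      import time

      HOST, PORT, SAMPLES = "192.0.2.10", 7, 20   # 192.0.2.0/24: documentation prefix

      def sample_rtts(host, port, samples):
          rtts = []
          with socket.create_connection((host, port), timeout=2) as s:
              for _ in range(samples):
                  t0 = time.perf_counter()
                  s.sendall(b"x")                 # 1-Byte probe
                  s.recv(1)                       # wait for the echoed Byte
                  rtts.append(time.perf_counter() - t0)
                  time.sleep(0.5)                 # pace the probes
          return rtts

      rtts = sample_rtts(HOST, PORT, SAMPLES)
      print(f"baseline RTT ~ {min(rtts) * 1000:.2f} ms, "
            f"average {sum(rtts) / len(rtts) * 1000:.2f} ms "
            f"over {SAMPLES} samples")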
456 The RTT SHOULD be baselined during off-peak hours in order to obtain 457 a reliable figure of the inherent network latency. Otherwise, 458 additional delay caused by network buffering can occur. Also, when 459 sampling RTT values over a given test interval, the minimum 460 measured value SHOULD be used as the baseline RTT. This will most 461 closely estimate the real inherent RTT. This value is also used to 462 determine the Buffer Delay Percentage metric defined in Section 4.3. 464 The following list is not meant to be exhaustive, although it 465 summarizes some of the most common ways to determine Round Trip Time. 466 The desired measurement precision (i.e. msec versus usec) may dictate 467 whether the RTT measurement can be achieved with ICMP pings or by a 468 dedicated communications test instrument with precision timers. 470 The objective in this section is to list several techniques 471 in order of decreasing accuracy. 473 - Use test equipment on each end of the network, "looping" the 474 far-end tester so that a packet stream can be measured back and forth 475 from end-to-end. This RTT measurement may be compatible with delay 476 measurement protocols specified in [RFC5357]. 478 - Conduct packet captures of TCP test sessions using "iperf" or FTP, 479 or other TCP test applications. By running multiple experiments, 480 packet captures can then be analyzed to estimate RTT. It is 481 important to note that results based upon the SYN -> SYN-ACK at the 482 beginning of TCP sessions should be avoided since Firewalls might 483 slow down 3 way handshakes. Also, at the senders side, Ostermann's 484 LINUX TCPTRACE utility with -l -r arguments can be used to extract 485 the RTT results directly from the packet captures. 487 - ICMP pings may also be adequate to provide Round Trip Time 488 estimates, provided that the packet size is factored into the 489 estimates (i.e. pings with different packet sizes might be required). 490 Some limitations with ICMP Ping may include msec resolution and 491 whether the network elements are responding to pings or not. Also, 492 ICMP is often rate-limited or segregated into different buffer 493 queues. ICMP might not work if QoS (Quality of Service) 494 reclassification is done at any hop. ICMP is not as reliable and 495 accurate as in-band measurements. 497 3.2.2 Measuring Bandwidth 499 Before any TCP Throughput test can be conducted, bandwidth 500 measurement tests MUST be run with stateless IP streams (i.e. not 501 stateful TCP) in order to determine the available path bandwidth. 502 These measurements SHOULD be conducted in both directions, 503 especially in asymmetrical access networks (e.g. ADSL access). 504 These tests should obviously be performed at various intervals 505 throughout a business day or even across a week. Ideally, the 506 bandwidth tests should produce logged outputs of the achieved 507 bandwidths across the complete test duration. 509 There are many well established techniques available to provide 510 estimated measures of bandwidth over a network. It is a common 511 practice for network providers to conduct Layer 2/3 bandwidth 512 capacity tests using [RFC2544], although it is understood that 513 [RFC2544] was never meant to be used outside a lab environment. 514 Ideally, these bandwidth measurements SHOULD use network capacity 515 techniques as defined in [RFC5136]. 517 3.3. 
Measuring TCP Throughput

   This methodology specifically defines TCP Throughput measurement
   techniques to verify the maximum achievable TCP performance in a
   managed business class IP network.

   With the baseline measurements of Round Trip Time and bandwidth
   from Section 3.2, a series of single and/or multiple TCP connection
   throughput tests SHOULD be conducted.

   The number of trials and the choice between single and multiple TCP
   connections will be based on the intention of the test.  A single
   TCP connection test might be enough to measure the achievable
   throughput of a Metro Ethernet connection.  However, it is
   important to note that various traffic management techniques can be
   used in an IP network and that some of those can only be tested
   with multiple connections.  As an example, multiple TCP sessions
   might be required to detect traffic shaping versus policing.
   Multiple sessions might also be needed to measure Active Queue
   Management performance.  However, traffic management testing is not
   within the scope of this test methodology.

   In all circumstances, it is RECOMMENDED to run the tests in each
   direction independently first and then to run them in both
   directions simultaneously.  It is also RECOMMENDED to run the tests
   at different times of day.

   In each case, the TCP Transfer Time Ratio, the TCP Efficiency
   Percentage, and the Buffer Delay Percentage MUST be measured in
   each direction.  These 3 metrics are defined in Section 4.

3.3.1 Minimum TCP RWND

   The TCP TTD MUST allow the Send Buffer and Receive Window sizes to
   be set higher than the BDP; otherwise TCP performance will be
   limited.

   In the business customer environment, these settings are not
   generally adjustable by the user.  They are either hard coded in
   the application or configured within the OS as part of a corporate
   image.  In many cases, the user's host Send Buffer and Receive
   Window size settings are not optimal.

   This section provides derivations of BDPs under various network
   conditions.  It also provides examples of achievable TCP Throughput
   with various TCP RWND sizes.  This provides important guidelines
   showing what can be achieved with settings higher than the BDP,
   versus what would be achieved in a variety of real world
   conditions.

   The minimum required TCP RWND Size can be calculated from the
   Bandwidth Delay Product (BDP), which is:

   BDP (bits) = RTT (sec) x Bandwidth (bps)

   Note that the RTT is being used as the "Delay" variable for the
   BDP.

   Then, by dividing the BDP by 8, we obtain the minimum required TCP
   RWND Size in Bytes.  For optimal results, the Send Socket Buffer
   must be adjusted to the same value at the opposite end of the
   network.

   Minimum required TCP RWND = BDP / 8

   An example would be a T3 link with 25 msec RTT.  The BDP would
   equal ~1,105,000 bits and the minimum required TCP RWND would be
   ~138 KB.

   Note that separate calculations are required on asymmetrical paths.
   An asymmetrical path example would be a 90 msec RTT ADSL line with
   5 Mbps downstream and 640 Kbps upstream.  The downstream BDP would
   equal ~450,000 bits while the upstream one would be only ~57,600
   bits.

   The following table provides some representative network Link
   Speeds, RTT, BDP, and their associated minimum required TCP RWND
   Sizes.

   Table 3.3.1: Link Speed, RTT, calculated BDP & min.
TCP RWND 591 Link Minimum required 592 Speed* RTT BDP TCP RWND 593 (Mbps) (ms) (bits) (KBytes) 594 --------------------------------------------------------------------- 595 1.536 20.00 30,720 3.84 596 1.536 50.00 76,800 9.60 597 1.536 100.00 153,600 19.20 598 44.210 10.00 442,100 55.26 599 44.210 15.00 663,150 82.89 600 44.210 25.00 1,105,250 138.16 601 100.000 1.00 100,000 12.50 602 100.000 2.00 200,000 25.00 603 100.000 5.00 500,000 62.50 604 1,000.000 0.10 100,000 12.50 605 1,000.000 0.50 500,000 62.50 606 1,000.000 1.00 1,000,000 125.00 607 10,000.000 0.05 500,000 62.50 608 10,000.000 0.30 3,000,000 375.00 610 * Note that link speed is the Bottleneck Bandwidth (BB) for the NUT 611 In the above table, the following serial link speeds are used: 612 - T1 = 1.536 Mbps (for a B8ZS line encoding facility) 613 - T3 = 44.21 Mbps (for a C-Bit Framing facility) 614 The previous table illustrates the minimum required TCP RWND. 615 If a smaller TCP RWND Size is used, then the TCP Throughput 616 can not be optimal. To calculate the TCP Throughput, the following 617 formula is used: TCP Throughput = TCP RWND X 8 / RTT 619 An example could be a 100 Mbps IP path with 5 ms RTT and a TCP RWND 620 of 16KB, then: 622 TCP Throughput = 16 KBytes X 8 bits / 5 ms. 623 TCP Throughput = 128,000 bits / 0.005 sec. 624 TCP Throughput = 25.6 Mbps. 626 Another example for a T3 using the same calculation formula is 627 illustrated in Figure 3.3.1a: 629 TCP Throughput = 16 KBytes X 8 bits / 10 ms. 630 TCP Throughput = 128,000 bits / 0.01 sec. 631 TCP Throughput = 12.8 Mbps. * 633 When the TCP RWND Size exceeds the BDP (T3 link and 64 KBytes TCP 634 RWND on a 10 ms RTT path), the maximum frames per second limit of 635 3664 is reached and then the formula is: 637 TCP Throughput = Max FPS X (MTU - 40) X 8. 638 TCP Throughput = 3664 FPS X 1460 Bytes X 8 bits. 639 TCP Throughput = 42.8 Mbps. ** 641 The following diagram compares achievable TCP Throughputs on a T3 642 with Send Socket Buffer & TCP RWND Sizes of 16KB vs. 64KB. 644 Figure 3.3.1a TCP Throughputs on a T3 at different RTTs 646 45| 647 | _______**42.8 648 40| |64KB | 649 TCP | | | 650 Throughput 35| | | 651 in Mbps | | | +-----+34.1 652 30| | | |64KB | 653 | | | | | 654 25| | | | | 655 | | | | | 656 20| | | | | _______20.5 657 | | | | | |64KB | 658 15| | | | | | | 659 |*12.8+-----| | | | | | 660 10| |16KB | | | | | | 661 | | | |8.5 +-----| | | | 662 5| | | | |16KB | |5.1 +-----| | 663 |_____|_____|_____|____|_____|_____|____|16KB |_____|_____ 664 10 15 25 665 RTT in milliseconds 667 The following diagram shows the achievable TCP Throughput on a 25ms 668 T3 when Send Socket Buffer & TCP RWND Sizes are increased. 670 Figure 3.3.1b TCP Throughputs on a T3 with different TCP RWND 672 45| 673 | 674 40| +-----+40.9 675 TCP | | | 676 Throughput 35| | | 677 in Mbps | | | 678 30| | | 679 | | | 680 25| | | 681 | | | 682 20| +-----+20.5 | | 683 | | | | | 684 15| | | | | 685 | | | | | 686 10| +-----+10.2 | | | | 687 | | | | | | | 688 5| +-----+5.1 | | | | | | 689 |_____|_____|______|_____|______|_____|_______|_____|_____ 690 16 32 64 128* 691 TCP RWND Size in KBytes 693 * Note that 128KB requires [RFC1323] TCP Window scaling option. 695 4. TCP Metrics 697 This methodology focuses on a TCP Throughput and provides 3 basic 698 metrics that can be used for better understanding of the results. 699 It is recognized that the complexity and unpredictability of TCP 700 makes it very difficult to develop a complete set of metrics that 701 accounts for the myriad of variables (i.e. 
RTT variations, loss conditions, TCP implementations, etc.).  However,
   these 3 metrics facilitate TCP Throughput comparisons under varying
   network conditions and host buffer size / RWND settings.

4.1 Transfer Time Ratio

   The first metric is the TCP Transfer Time Ratio, which is simply
   the ratio between the Actual and the Ideal TCP Transfer Times.

                             Actual TCP Transfer Time
   TCP Transfer Time Ratio = -------------------------
                             Ideal TCP Transfer Time

   The Ideal TCP Transfer Time is derived from the Maximum Achievable
   TCP Throughput, which is related to the Bottleneck Bandwidth and
   the Layer 1/2/3/4 overheads associated with the network path.  The
   following sections provide derivations for the Maximum Achievable
   TCP Throughput and example calculations for the TCP Transfer Time
   Ratio.

4.1.1 Maximum Achievable TCP Throughput calculation

   This section provides formulas to calculate the Maximum Achievable
   TCP Throughput, with examples for T3 (44.21 Mbps) and Ethernet.

   All calculations are based on an MTU of 1500 Bytes and TCP/IP
   headers of 20 Bytes each (20 Bytes for TCP + 20 Bytes for IP).

   First, the maximum achievable Layer 2 throughput of a T3 interface
   is limited by the maximum quantity of Frames Per Second (FPS)
   permitted by the actual physical layer (Layer 1) speed.

   The calculation formula is:

   FPS = T3 Physical Speed / ((MTU + PPP + Flags + CRC16) X 8)
   FPS = 44.21 Mbps / ((1500 Bytes + 4 Bytes + 2 Bytes + 2 Bytes) X 8)
   FPS = 44.21 Mbps / (1508 Bytes X 8)
   FPS = 44.21 Mbps / 12064 bits
   FPS = 3664

   Then, to obtain the Maximum Achievable TCP Throughput (Layer 4), we
   simply use:  (MTU - 40) in Bytes X 8 bits X max FPS.

   For a T3, the Maximum TCP Throughput = 1460 Bytes X 8 bits X 3664 FPS
                                        = 11680 bits X 3664 FPS
                                        = 42.8 Mbps.

   On Ethernet, the maximum achievable Layer 2 throughput is limited
   by the maximum Frames Per Second permitted by the IEEE 802.3
   standard.

   The maximum FPS for 100 Mbps Ethernet is 8127, and the calculation
   is:

   FPS = 100 Mbps / (1538 Bytes X 8 bits)

   The maximum FPS for GigE is 81274, and the calculation formula is:

   FPS = 1 Gbps / (1538 Bytes X 8 bits)

   The maximum FPS for 10GigE is 812743, and the calculation formula
   is:

   FPS = 10 Gbps / (1538 Bytes X 8 bits)

   The 1538 Bytes equates to:

   MTU + Ethernet + CRC32 + IFG + Preamble + SFD
   (IFG = Inter-Frame Gap and SFD = Start of Frame Delimiter)

   where the MTU is 1500 Bytes, the Ethernet header is 14 Bytes, the
   CRC32 is 4 Bytes, the IFG is 12 Bytes, the Preamble is 7 Bytes, and
   the SFD is 1 Byte.

   Then, to obtain the Maximum Achievable TCP Throughput (Layer 4), we
   simply use:  (MTU - 40) in Bytes X 8 bits X max FPS.

   For 100 Mbps Ethernet,
   the Maximum TCP Throughput = 1460 Bytes X 8 bits X 8127 FPS
                              = 11680 bits X 8127 FPS
                              = 94.9 Mbps.

   It is important to note that better results could be obtained with
   jumbo frames on Gigabit and 10 Gigabit Ethernet interfaces.

4.1.2 TCP Transfer Time and Transfer Time Ratio calculation

   The following table illustrates the Ideal TCP Transfer Time of a
   single TCP connection when its TCP RWND and Send Socket Buffer
   Sizes equal or exceed the BDP.
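   As an informative cross-check of the formulas above and of Table
   4.1.1 below, the following sketch (Python, illustrative only)
   reproduces the Maximum Achievable TCP Throughput, the Ideal TCP
   Transfer Time for a 100 MB file, and the TCP Transfer Time Ratio of
   the 5-connection example given later in this section:

      # Layer 4 calculations of Sections 4.1.1 / 4.1.2 for an MTU of
      # 1500 Bytes and 40 Bytes of TCP/IP header overhead.

      MTU, L4_OVERHEAD = 1500, 40

      def max_fps(link_bps, frame_overhead_bytes):
          # Frames Per Second permitted by the Layer 1/2 framing.
          return link_bps / ((MTU + frame_overhead_bytes) * 8)

      def max_tcp_throughput_bps(link_bps, frame_overhead_bytes):
          # (MTU - 40) payload Bytes per frame X 8 bits X maximum FPS.
          return (MTU - L4_OVERHEAD) * 8 * max_fps(link_bps, frame_overhead_bytes)

      def ideal_transfer_time_sec(file_bytes, throughput_bps):
          return (file_bytes * 8) / throughput_bps

      # T3: PPP + Flags + CRC16 = 8 Bytes of framing overhead.
      # Ethernet: header + CRC32 + IFG + Preamble + SFD = 38 Bytes.
      t3 = max_tcp_throughput_bps(44.21e6, 8)
      fe = max_tcp_throughput_bps(100e6, 38)
      print(f"T3 : {t3 / 1e6:.1f} Mbps, 100 MB in "
            f"{ideal_transfer_time_sec(100e6, t3):.1f} s")
      print(f"FE : {fe / 1e6:.1f} Mbps, 100 MB in "
            f"{ideal_transfer_time_sec(100e6, fe):.1f} s")

      # TCP Transfer Time Ratio for the 5-connection example below:
      actual_sec, ideal_sec = 12.0, 8.0
      print(f"TCP Transfer Time Ratio = {actual_sec / ideal_sec:.1f}")

   The transfer times printed by this sketch differ slightly from
   those in Table 4.1.1, which are rounded for simplicity.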
780 Table 4.1.1: Link Speed, RTT, BDP, TCP Throughput, and 781 Ideal TCP Transfer time for a 100 MB File 783 Link Maximum Ideal TCP 784 Speed BDP Achievable TCP Transfer time 785 (Mbps) RTT (ms) (KBytes) Throughput(Mbps) (seconds)* 786 -------------------------------------------------------------------- 787 1.536 50.00 9.6 1.4 571.0 788 44.210 25.00 138.2 42.8 18.0 789 100.000 2.00 25.0 94.9 9.0 790 1,000.000 1.00 125.0 949.2 1.0 791 10,000.000 0.05 62.5 9,492.0 0.1 793 * Transfer times are rounded for simplicity. 795 For a 100MB file (100 x 8 = 800 Mbits), the Ideal TCP Transfer Time 796 is derived as follows: 798 800 Mbits 799 Ideal TCP Transfer Time = ----------------------------------- 800 Maximum Achievable TCP Throughput 802 To illustrate the TCP Transfer Time Ratio, an example would be the 803 bulk transfer of 100 MB over 5 simultaneous TCP connections (each 804 connection transferring 100 MB). In this example, the Ethernet 805 service provides a Committed Access Rate (CAR) of 500 Mbit/s. Each 806 connection may achieve different throughputs during a test and the 807 overall throughput rate is not always easy to determine (especially 808 as the number of connections increases). 810 The ideal TCP Transfer Time would be ~8 seconds, but in this example, 811 the actual TCP Transfer Time was 12 seconds. The TCP Transfer Time 812 Ratio would then be 12/8 = 1.5, which indicates that the transfer 813 across all connections took 1.5 times longer than the ideal. 815 4.2 TCP Efficiency 817 This second metric represents the percentage of Bytes that were not 818 retransmitted. 820 Transmitted Bytes - Retransmitted Bytes 821 TCP Efficiency % = --------------------------------------- X 100 822 Transmitted Bytes 824 Transmitted Bytes are the total number of TCP Bytes to be transmitted 825 including the original and the retransmitted Bytes. 827 4.2.1 TCP Efficiency Percentage calculation 829 As an example, if 100,000 Bytes were sent and 2,000 had to be 830 retransmitted, the TCP Efficiency Percentage should be calculated as: 832 102,000 - 2,000 833 TCP Efficiency % = ----------------- x 100 = 98.03% 834 102,000 836 Note that the Retransmitted Bytes may have occurred more than once, 837 if so, then these multiple retransmissions are added to the 838 Retransmitted Bytes and to the Transmitted Bytes counts. 840 4.3 Buffer Delay 842 The third metric is the Buffer Delay Percentage, which represents 843 the increase in RTT during a TCP Throughput test versus the inherent 844 or baseline RTT. The baseline RTT is the Round Trip Time inherent to 845 the network path under non-congested conditions as defined in Section 846 3.2.1. The average RTT is derived from the total of all measured 847 RTTs during the actual test at every second divided by the test 848 duration in seconds. 850 Total RTTs during transfer 851 Average RTT during transfer = ----------------------------- 852 Transfer duration in seconds 854 Average RTT during Transfer - Baseline RTT 855 Buffer Delay % = ------------------------------------------ X 100 856 Baseline RTT 858 4.3.1 Buffer Delay calculation 860 As an example, consider a network path with a baseline RTT of 25 861 msec. During the course of a TCP transfer, the average RTT across 862 the entire transfer increases to 32 msec. 
Then, the Buffer Delay 863 Percentage would be calculated as: 865 32 - 25 866 Buffer Delay % = ------- x 100 = 28% 867 25 869 Note that the TCP Transfer Time Ratio, TCP Efficiency Percentage, and 870 the Buffer Delay Percentage MUST all be measured during each 871 throughput test. Poor TCP Transfer Time Ratio (i.e. TCP Transfer 872 Time greater than the Ideal TCP Transfer Time) may be diagnosed by 873 correlating with sub-optimal TCP Efficiency Percentage and/or Buffer 874 Delay Percentage metrics. 876 5. Conducting TCP Throughput Tests 878 Several TCP tools are currently used in the network world and one of 879 the most common is "iperf". With this tool, hosts are installed at 880 each end of the network path; one acts as client and the other as 881 a server. The Send Socket Buffer and the TCP RWND Sizes of both 882 client and server can be manually set. The achieved throughput can 883 then be measured, either uni-directionally or bi-directionally. For 884 higher BDP situations in lossy networks (Long Fat Networks (LFNs) or 885 satellite links, etc.), TCP options such as Selective Acknowledgment 886 SHOULD be considered and become part of the window size / throughput 887 characterization. 889 Host hardware performance must be well understood before conducting 890 the tests described in the following sections. A dedicated 891 communications test instrument will generally be required, especially 892 for line rates of GigE and 10 GigE. A compliant TCP TTD SHOULD 893 provide a warning message when the expected test throughput will 894 exceed 10% of the network bandwidth capacity. If the throughput test 895 is expected to exceed 10% of the provider bandwidth, then the test 896 should be coordinated with the network provider. This does not 897 include the customer premise bandwidth, the 10% refers directly to 898 the provider's bandwidth (Provider Edge to Provider router). 900 The TCP Throughput test should be run over a long enough duration 901 to properly exercise network buffers (i.e. greater than 30 seconds) 902 and should also characterize performance at different times of day. 904 5.1 Single versus Multiple TCP Connections 906 The decision whether to conduct single or multiple TCP connection 907 tests depends upon the size of the BDP in relation to the TCP RWND 908 configured in the end-user environment. For example, if the BDP for 909 a Long Fat Network (LFN) turns out to be 2MB, then it is probably 910 more realistic to test this network path with multiple connections. 911 Assuming typical host TCP RWND Sizes of 64 KB (i.e. Windows XP), 912 using 32 TCP connections would emulate a small office scenario. 914 The following table is provided to illustrate the relationship 915 between the TCP RWND and the number of TCP connections required to 916 fill the available capacity of a given BDP. For this example, the 917 network bandwidth is 500 Mbps and the RTT is 5 ms, then the BDP 918 equates to 312.5 KBytes. 920 Table 5.1 Number of TCP connections versus TCP RWND 922 Number of TCP Connections 923 TCP RWND to fill available bandwidth 924 ------------------------------------- 925 16KB 20 926 32KB 10 927 64KB 5 928 128KB 3 930 The TCP Transfer Time Ratio metric is useful when conducting multiple 931 connection tests. Each connection should be configured to transfer 932 payloads of the same size (i.e. 100 MB), then the TCP Transfer Time 933 Ratio provides a simple metric to verify the actual versus expected 934 results. 
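   The connection counts of Table 5.1 and the per-connection metrics
   of Section 4 can be reproduced with a short sketch such as the
   following (Python, illustrative only; the efficiency counters and
   RTT figures shown are hypothetical examples rather than measured
   values):

      # Number of TCP connections needed to fill a given BDP (as in
      # Table 5.1), plus the per-connection metrics of Section 4.
      import math

      def bdp_bytes(bw_bps, rtt_sec):
          return bw_bps * rtt_sec / 8

      def connections_to_fill(bw_bps, rtt_sec, rwnd_bytes):
          return math.ceil(bdp_bytes(bw_bps, rtt_sec) / rwnd_bytes)

      def tcp_efficiency_pct(transmitted_bytes, retransmitted_bytes):
          return (transmitted_bytes - retransmitted_bytes) / transmitted_bytes * 100

      def buffer_delay_pct(avg_rtt_sec, baseline_rtt_sec):
          return (avg_rtt_sec - baseline_rtt_sec) / baseline_rtt_sec * 100

      bw, rtt = 500e6, 0.005                 # 500 Mbps CAR, 5 ms RTT (Table 5.1)
      for rwnd_kb in (16, 32, 64, 128):
          n = connections_to_fill(bw, rtt, rwnd_kb * 1024)
          print(f"TCP RWND {rwnd_kb:>3} KB -> {n} connections")

      # Hypothetical per-connection counters and RTT figures:
      print(f"TCP Efficiency : {tcp_efficiency_pct(1_000_000, 5_000):.2f} %")
      print(f"Buffer Delay   : {buffer_delay_pct(0.032, 0.025):.0f} %")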
   Note that the TCP Transfer Time is the time required for each
   connection to complete the transfer of the predetermined payload
   size.  From the previous table, the 64 KB window case is
   considered.  Each of the 5 TCP connections would be configured to
   transfer 100 MB, and each one should obtain a maximum of 100 Mbps.
   So, for this example, the 100 MB payload should be transferred
   across the connections in approximately 8 seconds (which would be
   the Ideal TCP Transfer Time under these conditions).

   Additionally, the TCP Efficiency Percentage metric MUST be computed
   for each connection, as defined in Section 4.2.

5.2 Results Interpretation

   At the end, a TCP Throughput Test Device (TCP TTD) should generate
   a report with the calculated BDP and a set of Window Size
   experiments.  Window Size refers to the minimum of the Send Socket
   Buffer and TCP RWND.  The report should include TCP Throughput
   results for each TCP Window Size tested.  The goal is to provide
   clear achievable versus actual TCP Throughput results with respect
   to the TCP Window Size when no fragmentation occurs.  The report
   should also include the results for the 3 metrics defined in
   Section 4.  The goal is to provide a clear relationship between
   these 3 metrics and user experience.  As an example, for the same
   Transfer Time Ratio, a better TCP Efficiency could be obtained at
   the cost of higher Buffer Delays.

   For cases where the test results are not equal to the ideal values,
   some possible causes are:

   - Network congestion causing packet loss, which MAY be inferred
   from a poor TCP Efficiency % (i.e., higher TCP Efficiency % = less
   packet loss).

   - Network congestion causing an increase in RTT, which MAY be
   inferred from the Buffer Delay Percentage (i.e., 0% = no increase
   in RTT over baseline).

   - Intermediate network devices which actively regenerate the TCP
   connection and can alter TCP RWND Size, MTU, etc.

   - Rate limiting by policing instead of shaping.

   - Maximum TCP Buffer space.  All operating systems have a global
   mechanism to limit the quantity of system memory to be used by TCP
   connections.  On some systems, each connection is subject to a
   memory limit that is applied to the total memory used for input
   data, output data and controls.  On other systems, there are
   separate limits for input and output buffer spaces per connection.
   Client/server IP hosts might be configured with Maximum Buffer
   Space limits that are far too small for high performance networks.

   - Socket Buffer Sizes.  Most operating systems support separate
   per-connection send and receive buffer limits that can be adjusted
   as long as they stay within the maximum memory limits.  These
   socket buffers must be large enough to hold a full BDP of TCP Bytes
   plus some overhead.  There are several methods that can be used to
   adjust socket buffer sizes, but TCP Auto-Tuning automatically
   adjusts these as needed to optimally balance TCP performance and
   memory usage.

   It is important to note that Auto-Tuning is enabled by default in
   LINUX since kernel release 2.6.6 and in UNIX since FreeBSD 7.0.  It
   is also enabled by default in Windows since Vista and in MAC since
   OS X version 10.5 (Leopard).  Over-buffering can cause some
   applications to behave poorly, typically causing sluggish
   interactive response and risking running the system out of memory.
Large default 1002 socket buffers have to be considered carefully on multi-user systems. 1004 - TCP Window Scale Option, [RFC1323]. This option enables TCP to 1005 support large BDP paths. It provides a scale factor which is 1006 required for TCP to support window sizes larger than 64KB. Most 1007 systems automatically request WSCALE under some conditions, such as 1008 when the receive socket buffer is larger than 64KB or when the other 1009 end of the TCP connection requests it first. WSCALE can only be 1010 negotiated during the 3 way handshake. If either end fails to 1011 request WSCALE or requests an insufficient value, it cannot be 1012 renegotiated. Different systems use different algorithms to select 1013 WSCALE, but it is very important to have large enough buffer 1014 sizes. Note that under these constraints, a client application 1015 wishing to send data at high rates may need to set its own receive 1016 buffer to something larger than 64K Bytes before it opens the 1017 connection to ensure that the server properly negotiates WSCALE. 1018 A system administrator might have to explicitly enable [RFC1323] 1019 extensions. Otherwise, the client/server IP host would not support 1020 TCP window sizes (BDP) larger than 64KB. Most of the time, 1021 performance gains will be obtained by enabling this option in LFNs. 1023 - TCP Timestamps Option, [RFC1323]. This feature provides better 1024 measurements of the Round Trip Time and protects TCP from data 1025 corruption that might occur if packets are delivered so late that the 1026 sequence numbers wrap before they are delivered. Wrapped sequence 1027 numbers do not pose a serious risk below 100 Mbps, but the risk 1028 increases at higher data rates. Most of the time, performance gains 1029 will be obtained by enabling this option in Gigabit bandwidth 1030 networks. 1032 - TCP Selective Acknowledgments Option (SACK), [RFC2018]. This allows 1033 a TCP receiver to inform the sender about exactly which data segment 1034 is missing and needs to be retransmitted. Without SACK, TCP has to 1035 estimate which data segment is missing, which works just fine if all 1036 losses are isolated (i.e. only one loss in any given round trip). 1037 Without SACK, TCP takes a very long time to recover after multiple 1038 and consecutive losses. SACK is now supported by most operating 1039 systems, but it may have to be explicitly enabled by the system 1040 administrator. In networks with unknown load and error patterns, TCP 1041 SACK will improve throughput performances. On the other hand, 1042 security appliances vendors might have implemented TCP randomization 1043 without considering TCP SACK and under such circumstances, SACK might 1044 need to be disabled in the client/server IP hosts until the vendor 1045 corrects the issue. Also, poorly implemented SACK algorithms might 1046 cause extreme CPU loads and might need to be disabled. 1048 - Path MTU. The client/server IP host system must use the largest 1049 possible MTU for the path. This may require enabling Path MTU 1050 Discovery [RFC1191] & [RFC4821]. Since [RFC1191] is flawed, it is 1051 sometimes not enabled by default and may need to be explicitly 1052 enabled by the system administrator. [RFC4821] describes a new, more 1053 robust algorithm for MTU discovery and ICMP black hole recovery. 1055 - TOE (TCP Offload Engine). Some recent Network Interface Cards (NIC) 1056 are equipped with drivers that can do part or all of the TCP/IP 1057 protocol processing. 
TOE implementations require additional work 1058 (i.e. hardware-specific socket manipulation) to set up and tear down 1059 connections. Because TOE NICs configuration parameters are vendor 1060 specific and not necessarily RFC-compliant, they are poorly 1061 integrated with UNIX & LINUX. Occasionally, TOE might need to be 1062 disabled in a server because its NIC does not have enough memory 1063 resources to buffer thousands of connections. 1065 Note that both ends of a TCP connection must be properly tuned. 1067 6. Security Considerations 1069 The security considerations that apply to any active measurement of 1070 live networks are relevant here as well. See [RFC4656] and 1071 [RFC5357]. 1073 7. IANA Considerations 1075 This document does not REQUIRE an IANA registration for ports 1076 dedicated to the TCP testing described in this document. 1078 8. Acknowledgments 1080 Thanks to Lars Eggert, Al Morton, Matt Mathis, Matt Zekauskas, 1081 Yaakov Stein, and Loki Jorgenson for many good comments and for 1082 pointing us to great sources of information pertaining to past works 1083 in the TCP capacity area. 1085 9. References 1087 9.1 Normative References 1089 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1090 Requirement Levels", BCP 14, RFC 2119, March 1997. 1092 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. 1093 Zekauskas, "A One-way Active Measurement Protocol 1094 (OWAMP)", RFC 4656, September 2006. 1096 [RFC2544] Bradner, S., McQuaid, J., "Benchmarking Methodology for 1097 Network Interconnect Devices", RFC 2544, June 1999 1099 [RFC5357] Hedayat, K., Krzanowski, R., Morton, A., Yum, K., Babiarz, 1100 J., "A Two-Way Active Measurement Protocol (TWAMP)", 1101 RFC 5357, October 2008 1103 [RFC4821] Mathis, M., Heffner, J., "Packetization Layer Path MTU 1104 Discovery", RFC 4821, June 2007 1106 draft-ietf-ippm-btc-cap-00.txt Allman, M., "A Bulk 1107 Transfer Capacity Methodology for Cooperating Hosts", 1108 August 2001 1110 [RFC2681] Almes G., Kalidindi S., Zekauskas, M., "A Round-trip Delay 1111 Metric for IPPM", RFC 2681, September, 1999 1113 [RFC4898] Mathis, M., Heffner, J., Raghunarayan, R., "TCP Extended 1114 Statistics MIB", May 2007 1116 [RFC5136] Chimento P., Ishac, J., "Defining Network Capacity", 1117 February 2008 1119 [RFC1323] Jacobson, V., Braden, R., Borman D., "TCP Extensions for 1120 High Performance", May 1992 1122 [RFC2018] Mathis, M., Mahdavi, J., Floyd, S., Romanow, A., "TCP 1123 Selective Acknowledgment Options", 1996 1125 [RFC1191] Mogul, A., Deering, S., "Path MTU Discovery", 1990 1127 9.2. Informative References 1128 Authors' Addresses 1130 Barry Constantine 1131 JDSU, Test and Measurement Division 1132 One Milesone Center Court 1133 Germantown, MD 20876-7100 1134 USA 1136 Phone: +1 240 404 2227 1137 barry.constantine@jdsu.com 1139 Gilles Forget 1140 Independent Consultant to Bell Canada. 1141 308, rue de Monaco, St-Eustache 1142 Qc. CANADA, Postal Code: J7P-4T5 1144 Phone: (514) 895-8212 1145 gilles.forget@sympatico.ca 1147 Ruediger Geib 1148 Heinrich-Hertz-Strasse (Number: 3-7) 1149 Darmstadt, Germany, 64295 1151 Phone: +49 6151 6282747 1152 Ruediger.Geib@telekom.de 1154 Reinhard Schrage 1155 Schrage Consulting 1157 Phone: +49 (0) 5137 909540 1158 reinhard@schrageconsult.com