Network Working Group                                          A. Morton
Internet-Draft                                                 AT&T Labs
Intended status: Standards Track                                 R. Geib
Expires: October 28, 2021                               Deutsche Telekom
                                                           L. Ciavattone
                                                               AT&T Labs
                                                          April 26, 2021

             Metrics and Methods for One-way IP Capacity
                draft-ietf-ippm-capacity-metric-method-10

Abstract

   This memo revisits the problem of Network Capacity metrics first
   examined in RFC 5136.  The memo specifies a more practical Maximum
   IP-Layer Capacity metric definition catering for measurement
   purposes, and outlines the corresponding methods of measurement.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on October 28, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.
   Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Scope, Goals, and Applicability
   3.  Motivation
   4.  General Parameters and Definitions
   5.  IP-Layer Capacity Singleton Metric Definitions
     5.1.  Formal Name
     5.2.  Parameters
     5.3.  Metric Definitions
     5.4.  Related Round-Trip Delay and One-way Loss Definitions
     5.5.  Discussion
     5.6.  Reporting the Metric
   6.  Maximum IP-Layer Capacity Metric Definitions (Statistic)
     6.1.  Formal Name
     6.2.  Parameters
     6.3.  Metric Definitions
     6.4.  Related Round-Trip Delay and One-way Loss Definitions
     6.5.  Discussion
     6.6.  Reporting the Metric
   7.  IP-Layer Sender Bit Rate Singleton Metric Definitions
     7.1.  Formal Name
     7.2.  Parameters
     7.3.  Metric Definition
     7.4.  Discussion
     7.5.  Reporting the Metric
   8.  Method of Measurement
     8.1.  Load Rate Adjustment Algorithm
     8.2.  Measurement Qualification or Verification
     8.3.  Measurement Considerations
     8.4.  Running Code
   9.  Reporting Formats
     9.1.  Configuration and Reporting Data Formats
   10. Security Considerations
   11. IANA Considerations
   12. Acknowledgments
   13. Appendix A - Load Rate Adjustment Pseudo Code
   14. Appendix B - RFC 8085 UDP Guidelines Check
     14.1.  Assessment of Mandatory Requirements
     14.2.  Assessment of Recommendations
   15. References
     15.1.  Normative References
     15.2.  Informative References
   Authors' Addresses

1.  Introduction

   The IETF's efforts to define Network and Bulk Transport Capacity have
   been chartered and progressed for over twenty years.  Over that time,
   the performance community has seen development of Informative
   definitions in [RFC3148] for Framework for Bulk Transport Capacity
   (BTC), [RFC5136] for Network Capacity and Maximum IP-Layer Capacity,
   and the Experimental metric definitions and methods in [RFC8337],
   Model-Based Metrics for BTC.

   This memo revisits the problem of Network Capacity metrics examined
   first in [RFC3148] and later in [RFC5136].  Maximum IP-Layer Capacity
   and [RFC3148] Bulk Transfer Capacity (goodput) are different metrics:
   Maximum IP-Layer Capacity is, in effect, the theoretical goal for
   goodput.
   There are many metrics in [RFC5136], such as Available Capacity.
   Measurements depend on the network path under test and the use case.
   Here, the main use case is to assess the maximum capacity of the
   access network, with specific performance criteria used in the
   measurement.

   This memo recognizes the importance of a definition of a Maximum IP-
   Layer Capacity Metric at a time when access speeds have increased
   dramatically; a definition that is both practical and effective for
   the performance community's needs, including Internet users.  The
   metric definition is intended to use Active Methods of Measurement
   [RFC7799], and a method of measurement is included.

   The most direct active measurement of IP-Layer Capacity would use IP
   packets, but in practice a transport header is needed to traverse
   address and port translators.  UDP offers the most direct assessment
   possibility, and in the [copycat] measurement study to investigate
   whether UDP is viable as a general Internet transport protocol, the
   authors found that a high percentage of paths tested support UDP
   transport.  A number of liaisons have been exchanged on this topic
   [LS-SG12-A] [LS-SG12-B], discussing the laboratory and field tests
   that support the UDP-based approach to IP-Layer Capacity measurement.

   This memo also recognizes the many updates to the IP Performance
   Metrics Framework [RFC2330] published over twenty years, and makes
   use of [RFC7312] for Advanced Stream and Sampling Framework, and
   [RFC8468] with IPv4, IPv6, and IPv4-IPv6 Coexistence Updates.

   Appendix A describes the load rate adjustment algorithm in pseudo-
   code.  Appendix B discusses the algorithm's compliance with
   [RFC8085].

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2.  Scope, Goals, and Applicability

   The scope of this memo is to define a metric and corresponding method
   to unambiguously perform Active measurements of Maximum IP-Layer
   Capacity, along with related metrics and methods.

   Another goal is to harmonize the specified metric and method across
   the industry, and this memo is the vehicle that captures IETF
   consensus, possibly resulting in changes to the specifications of
   other Standards Development Organizations (SDOs) (through each SDO's
   normal contribution process, or through liaison exchange).

   A local goal is to aid efficient test procedures where possible, and
   to recommend reporting with additional interpretation of the results.
   Fostering the development of protocol support for this metric and
   method of measurement is also a goal of this memo (all active testing
   protocols currently defined by the IPPM WG are UDP-based, meeting a
   key requirement of these methods).  The supporting protocol
   development to measure this metric according to the specified method
   is a key future contribution to Internet measurement.

   The load rate adjustment algorithm's scope is limited to helping
   determine the Maximum IP-Layer Capacity in the context of an
   infrequent, diagnostic, short-term measurement.  It is RECOMMENDED to
   discontinue non-measurement traffic that shares a subscriber's
   dedicated resources while testing: measurements may not be accurate,
   and the throughput of competing elastic traffic may be greatly
   reduced.
   The primary application of the metric and method of measurement
   described here is the same as in Section 2 of [RFC7497] where:

   o  The access portion of the network is the focus of this problem
      statement.  The user typically subscribes to a service with
      bidirectional access partly described by rates in bits per second.

   In addition, the use of the load rate adjustment algorithm described
   in Section 8.1 has the following additional applicability
   limitations:

   -  It MUST only be used in the application of diagnostic and
      operations measurements as described in this memo.

   -  It MUST only be used in circumstances consistent with Section 10,
      Security Considerations.

   -  If a network operator is certain of the access capacity to be
      validated, then testing MAY start with a fixed-rate test at the
      access capacity and avoid activating the load adjustment
      algorithm.  However, the stimulus for a diagnostic test (such as a
      subscriber request) strongly implies that there is no certainty,
      and the load adjustment algorithm will be needed.

   Further, the metric and method of measurement are intended for use
   where specific exact path information is unknown within a range of
   possible values:

   -  the subscriber's exact Maximum IP-Layer Capacity is unknown (which
      is sometimes the case; service rates can be increased due to
      upgrades without a subscriber's request, or to provide a surplus
      to compensate for possible underestimates of TCP-based testing).

   -  the size of the access bottleneck buffer is unknown.

   Finally, the measurement system's load rate adjustment algorithm
   SHALL NOT be provided with the exact capacity value to be validated a
   priori.  This restriction fosters a fair result, and removes an
   opportunity for bad actors to operate with knowledge of the "right
   answer".

3.  Motivation

   As with any problem that has been worked for many years in various
   SDOs without any special attempts at coordination, various solutions
   for metrics and methods have emerged.

   There are five factors that have changed (or begun to change) in the
   2013-2019 time frame, and the presence of any one of them on the path
   requires features in the measurement design to account for the
   changes:

   1.  Internet access is no longer the bottleneck for many users.

   2.  Both transfer rate and latency are important to users'
       satisfaction.

   3.  UDP's growing role in Transport, in areas where TCP once
       dominated.

   4.  Content and applications are moving physically closer to users.

   5.  There is less emphasis on ISP gateway measurements, possibly due
       to less traffic crossing ISP gateways in the future.

4.  General Parameters and Definitions

   This section lists the REQUIRED input factors to specify a Sender or
   Receiver metric.

   o  Src, the address of a host (such as the globally routable IP
      address).

   o  Dst, the address of a host (such as the globally routable IP
      address).

   o  MaxHops, the limit on the number of Hops a specific packet may
      visit as it traverses from the host at Src to the host at Dst
      (implemented in the TTL or Hop Limit).

   o  T0, the time at the start of the measurement interval, when
      packets are first transmitted from the Source.

   o  I, the nominal duration of a measurement interval at the
      destination (default 10 sec).

   o  dt, the nominal duration of m equal sub-intervals in I at the
      destination (default 1 sec).

   o  dtn, the beginning boundary of a specific sub-interval, n, one of
      m sub-intervals in I.

   o  FT, the feedback time interval between status feedback messages
      communicating measurement results, sent from the receiver to
      control the sender.
      The results are evaluated throughout the test
      to determine how to adjust the current offered load rate at the
      sender (default 50 ms).

   o  Tmax, a maximum waiting time for test packets to arrive at the
      destination, set sufficiently long to disambiguate packets with
      long delays from packets that are discarded (lost), such that the
      distribution of one-way delay is not truncated.

   o  F, the number of different flows synthesized by the method
      (default 1 flow).

   o  flow, the stream of packets with the same n-tuple of designated
      header fields that (when held constant) result in identical
      treatment in a multi-path decision (such as the decision taken in
      load balancing).  Note: The IPv6 flow label MAY be included in the
      flow definition when routers have complied with [RFC6438]
      guidelines.

   o  Type-P, the complete description of the test packets for which
      this assessment applies (including the flow-defining fields).
      Note that the UDP transport layer is one requirement for test
      packets specified below.  Type-P is a parallel concept to
      "population of interest" defined in clause 6.1.1 of [Y.1540].

   o  PM, a list of fundamental metrics, such as loss, delay, and
      reordering, and corresponding target performance thresholds.  At
      least one fundamental metric and target performance threshold MUST
      be supplied (such as One-way IP Packet Loss [RFC7680] equal to
      zero).

   A non-Parameter which is required for several metrics is defined
   below:

   o  T, the host time of the *first* test packet's *arrival* as
      measured at the destination Measurement Point, or MP(Dst).  There
      may be other packets sent between Source and Destination hosts
      that are excluded, so this is the time of arrival of the first
      packet used for measurement of the metric.

   Note that time stamp format and resolution, sequence numbers, etc.,
   will be established by the chosen test protocol standard or
   implementation.
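   As an illustration only (the memo does not define a configuration
   format), the parameters and defaults listed above might be collected
   in a small container like the following sketch.  All names are
   hypothetical, and the Tmax value shown is an assumed deployment
   choice, not a default from this memo:

```python
# Hypothetical container for the Section 4 parameters, using the
# defaults given in the text (I = 10 s, dt = 1 s, FT = 50 ms, F = 1).
from dataclasses import dataclass

@dataclass
class CapacityTestParams:
    I: float = 10.0    # nominal measurement interval at destination, sec
    dt: float = 1.0    # duration of each of the m equal sub-intervals, sec
    FT: float = 0.050  # feedback time interval, sec (default 50 ms)
    F: int = 1         # number of flows synthesized by the method
    Tmax: float = 3.0  # ASSUMED waiting-time bound; set per deployment

    @property
    def m(self) -> int:
        # Number of sub-intervals, from T + I = T + m*dt.
        return round(self.I / self.dt)

params = CapacityTestParams()
assert params.m == 10  # default 10-second test with 1-second sub-intervals
```

   With the defaults, a test produces m = 10 Singletons per measurement
   interval, matching the ten sub-intervals shown in the Maximum
   Capacity figure in Section 6.3.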
5.  IP-Layer Capacity Singleton Metric Definitions

   This section sets requirements for the singleton metric that supports
   the Maximum IP-Layer Capacity Metric definition in Section 6.

5.1.  Formal Name

   Type-P-One-way-IP-Capacity, or informally called IP-Layer Capacity.

   Note that Type-P depends on the chosen method.

5.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters are needed.

5.3.  Metric Definitions

   This section defines the REQUIRED aspects of the measurable IP-Layer
   Capacity metric (unless otherwise indicated) for measurements between
   specified Source and Destination hosts:

   Define the IP-Layer Capacity, C(T,dt,PM), to be the number of IP-
   Layer bits (including header and data fields) in packets that can be
   transmitted from the Src host and correctly received by the Dst host
   during one contiguous sub-interval, dt in length.  The IP-Layer
   Capacity depends on the Src and Dst hosts, the host addresses, and
   the path between the hosts.

   The number of these IP-Layer bits is designated n0[dtn,dtn+1] for a
   specific dt.

   When the packets are of a known, fixed size, the packet count during
   a single sub-interval dt multiplied by the total bits in the IP
   header and data fields is equal to n0[dtn,dtn+1].

   Anticipating a Sample of Singletons, the number of sub-intervals with
   duration dt MUST be set to a natural number m, so that T+I = T + m*dt
   with dtn+1 - dtn = dt for 1 <= n <= m.

   Parameter PM represents other performance metrics (see Section 5.4
   below); their measurement results SHALL be collected during
   measurement of IP-Layer Capacity and associated with the
   corresponding dtn for further evaluation and reporting.  Users SHALL
   specify the parameter Tmax as required by each metric's reference
   definition.
   Mathematically, this definition is represented as (for each n):

                      ( n0[dtn,dtn+1] )
      C(T,dt,PM) = -------------------------
                             dt

                Equation for IP-Layer Capacity

   and:

   o  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets [RFC8468] from the
      Src host and correctly received by the Dst host during one
      contiguous sub-interval, dt in length, during the interval [T,
      T+I],

   o  C(T,dt,PM), the IP-Layer Capacity, corresponds to the value of n0
      measured in any sub-interval beginning at dtn, divided by the
      length of the sub-interval, dt.

   o  PM represents other performance metrics (see Section 5.4 below);
      their measurement results SHALL be collected during measurement of
      IP-Layer Capacity and associated with the corresponding dtn for
      further evaluation and reporting.

   o  all sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   o  The bit rate of the physical interface of the measurement devices
      MUST be higher than the smallest of the links on the path whose
      C(T,I,PM) is to be measured (the bottleneck link).

   Measurements according to these definitions SHALL use the UDP
   transport layer.  Standard-formed packets are specified in Section 5
   of [RFC8468].  The measurement SHOULD use a randomized Source port or
   equivalent technique, and SHOULD send responses from the Source
   address matching the test packet destination address.

   Some compression effects on measurement are discussed in Section 6 of
   [RFC8468].

5.4.  Related Round-Trip Delay and One-way Loss Definitions

   RTD[dtn,dtn+1] is defined as a Sample of the [RFC2681] Round-trip
   Delay between the Src host and the Dst host over the interval [T,T+I]
   (that contains equal non-overlapping intervals of dt).
   The "reasonable period of time" in [RFC2681] is the parameter Tmax in
   this memo.  The statistics used to summarize RTD[dtn,dtn+1] MAY
   include the minimum, maximum, median, and mean, and the range =
   (maximum - minimum) is referred to below in Section 8.1 for load
   adjustment purposes.

   OWL[dtn,dtn+1] is defined as a Sample of the [RFC7680] One-way Loss
   between the Src host and the Dst host over the interval [T,T+I] (that
   contains equal non-overlapping intervals of dt).  The statistics used
   to summarize OWL[dtn,dtn+1] MAY include the lost packet count and the
   lost packet ratio.

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

5.5.  Discussion

   See the corresponding section for Maximum IP-Layer Capacity.

5.6.  Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single-Megabit
   resolution, in units of Megabits per second (Mbps) (which is
   1,000,000 bits per second, to avoid any confusion).

   The related One-way Loss metric and Round-Trip Delay measurements for
   the same Singleton SHALL be reported, also with meaningful resolution
   for the values measured.

   Individual Capacity measurements MAY be reported in a manner
   consistent with the Maximum IP-Layer Capacity; see Section 9.

6.  Maximum IP-Layer Capacity Metric Definitions (Statistic)

   This section sets requirements for the following components to
   support the Maximum IP-Layer Capacity Metric.

6.1.  Formal Name

   Type-P-One-way-Max-IP-Capacity, or informally called Maximum IP-Layer
   Capacity.

   Note that Type-P depends on the chosen method.

6.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters or definitions are needed.

6.3.  Metric Definitions

   This section defines the REQUIRED aspects of the Maximum IP-Layer
   Capacity metric (unless otherwise indicated) for measurements between
   specified Source and Destination hosts:

   Define the Maximum IP-Layer Capacity, Maximum_C(T,I,PM), to be the
   maximum number of IP-Layer bits n0[dtn,dtn+1] divided by dt that can
   be transmitted in packets from the Src host and correctly received by
   the Dst host, over all dt-length intervals in [T, T+I], and meeting
   the PM criteria.  Equivalently, it is the Maximum of a Sample of size
   m of C(T,I,PM) collected during the interval [T, T+I] and meeting the
   PM criteria.

   The number of sub-intervals with duration dt MUST be set to a natural
   number m, so that T+I = T + m*dt with dtn+1 - dtn = dt for 1 <= n <=
   m.

   Parameter PM represents the other performance metrics (see
   Section 6.4 below) and their measurement results for the Maximum IP-
   Layer Capacity.  At least one target performance threshold (PM
   criterion) MUST be defined.  If more than one metric and target
   performance threshold are defined, then the sub-interval with the
   maximum number of bits transmitted MUST meet all the target
   performance thresholds.  Users SHALL specify the parameter Tmax as
   required by each metric's reference definition.
   Mathematically, this definition can be represented as:

                             max ( n0[dtn,dtn+1] )
                            [T,T+I]
      Maximum_C(T,I,PM) = -------------------------
                                     dt
      where:

      T                                       T+I
      _________________________________________
      |   |   |   |   |   |   |   |   |   |   |
    dtn=1   2   3   4   5   6   7   8   9   10
                                              n+1
                                              n=m

                Equation for Maximum Capacity

   and:

   o  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets from the Src host
      and correctly received by the Dst host during one contiguous sub-
      interval, dt in length, during the interval [T, T+I],

   o  Maximum_C(T,I,PM), the Maximum IP-Layer Capacity, corresponds to
      the maximum value of n0 measured in any sub-interval beginning at
      dtn, divided by the constant length of all sub-intervals, dt.

   o  PM represents the other performance metrics (see Section 5.4) and
      their measurement results for the Maximum IP-Layer Capacity.  At
      least one target performance threshold (PM criterion) MUST be
      defined.

   o  all sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   o  The bit rate of the physical interface of the measurement systems
      MUST be higher than the smallest of the links on the path whose
      Maximum_C(T,I,PM) is to be measured (the bottleneck link).

   In this definition, the m sub-intervals can be viewed as trials when
   the Src host varies the transmitted packet rate, searching for the
   maximum n0 that meets the PM criteria measured at the Dst host in a
   test of duration, I.  When the transmitted packet rate is held
   constant at the Src host, the m sub-intervals may also be viewed as
   trials to evaluate the stability of n0 and metric(s) in the PM list
   over all dt-length intervals in I.

   Measurements according to these definitions SHALL use the UDP
   transport layer.

6.4.  Related Round-Trip Delay and One-way Loss Definitions

   RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4.  Here,
   the test intervals are increased to match the capacity Samples,
   RTD[T,I] and OWL[T,I].

   The interval dtn,dtn+1 where Maximum_C(T,I,PM) occurs is the
   reporting sub-interval within RTD[T,I] and OWL[T,I].

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

6.5.  Discussion

   If traffic conditioning (e.g., shaping, policing) applies along a
   path for which Maximum_C(T,I,PM) is to be determined, different
   values for dt SHOULD be picked and measurements executed during
   multiple intervals [T, T+I].  Each duration dt SHOULD be chosen as an
   integer multiple of increasing values k times the serialization delay
   of a path MTU at the physical interface speed where traffic
   conditioning is expected.  This should avoid taking configured burst
   tolerance singletons as a valid Maximum_C(T,I,PM) result.

   A Maximum_C(T,I,PM) without any indication of bottleneck congestion,
   be that increasing latency, packet loss, or ECN marks during a
   measurement interval I, is likely to underestimate Maximum_C(T,I,PM).

6.6.  Reporting the Metric

   The Maximum IP-Layer Capacity SHOULD be reported with at least
   single-Megabit resolution, in units of Megabits per second (Mbps)
   (which is 1,000,000 bits per second, to avoid any confusion).

   The related One-way Loss metric and Round-Trip Delay measurements for
   the same Singleton SHALL be reported, also with meaningful resolution
   for the values measured.

   When there are demonstrated and repeatable Capacity modes in the
   Sample, then the Maximum IP-Layer Capacity SHALL be reported for each
   mode, along with the relative time from the beginning of the stream
   that the mode was observed to be present.
   Bimodal Maximum IP-Layer
   Capacities have been observed with some services, sometimes called a
   "turbo mode" intending to deliver short transfers more quickly, or to
   reduce the initial buffering time for some video streams.  Note that
   modes lasting less than dt duration will not be detected.

   Some transmission technologies have multiple methods of operation
   that may be activated when channel conditions degrade or improve, and
   these transmission methods may determine the Maximum IP-Layer
   Capacity.  Examples include line-of-sight microwave modulator
   constellations, or cellular modem technologies where the changes may
   be initiated by a user moving from one coverage area to another.
   Operation in the different transmission methods may be observed over
   time, but the modes of Maximum IP-Layer Capacity will not be
   activated deterministically, as with the "turbo mode" described in
   the paragraph above.

7.  IP-Layer Sender Bit Rate Singleton Metric Definitions

   This section sets requirements for the following components to
   support the IP-Layer Sender Bitrate Metric.  This metric helps to
   check that the sender actually generated the desired rates during a
   test, and measurement takes place at the Src host to network path
   interface (or as close as practical within the Src host).  It is not
   a metric for path performance.

7.1.  Formal Name

   Type-P-IP-Sender-Bit-Rate, or informally called IP-Layer Sender
   Bitrate.

   Note that Type-P depends on the chosen method.

7.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.
   o  S, the duration of the measurement interval at the Source.

   o  st, the nominal duration of N sub-intervals in S (default st =
      0.05 seconds).

   o  stn, the beginning boundary of a specific sub-interval, n, one of
      N sub-intervals in S.

   S SHALL be longer than I, primarily to account for on-demand
   activation of the path, any preamble to testing required, and the
   delay of the path.

   st SHOULD be much smaller than the sub-interval dt and on the same
   order as FT; otherwise, the rate measurement will include many rate
   adjustments and more time smoothing, thus missing the Maximum
   IP-Layer Capacity.  The st parameter is not relevant when the Source
   is transmitting at a fixed rate throughout S.

7.3.  Metric Definition

   This section defines the REQUIRED aspects of the IP-Layer Sender
   Bitrate metric (unless otherwise indicated) for measurements at the
   specified Source on packets addressed for the intended Destination
   host and matching the required Type-P:

   Define the IP-Layer Sender Bit Rate, B(S,st), to be the number of IP-
   Layer bits (including header and data fields) that are transmitted
   from the Source with address pair Src and Dst during one contiguous
   sub-interval, st, during the test interval S (where S SHALL be longer
   than I), and where the fixed-size packet count during that single
   sub-interval st also provides the number of IP-Layer bits in any
   interval, [stn,stn+1].

   Measurements according to these definitions SHALL use the UDP
   transport layer.  Any feedback from the Dst host received by the Src
   host during an interval [stn,stn+1] SHOULD NOT result in an
   adaptation of the Src host traffic conditioning during this interval
   (rate adjustment occurs on st interval boundaries).

7.4.  Discussion

   Both the Sender and Receiver (or Source and Destination) bit rates
   SHOULD be assessed as part of an IP-Layer Capacity measurement.
   Otherwise, an unexpected sending rate limitation could produce an
   erroneous Maximum IP-Layer Capacity measurement.

7.5.  Reporting the Metric

   The IP-Layer Sender Bit Rate SHALL be reported with meaningful
   resolution, in units of Megabits per second (which is 1,000,000 bits
   per second, to avoid any confusion).

   Individual IP-Layer Sender Bit Rate measurements are discussed
   further in Section 9.

8.  Method of Measurement

   The architecture of the method REQUIRES two cooperating hosts
   operating in the roles of Src (test packet sender) and Dst
   (receiver), with a measured path and return path between them.

   The duration of a test, parameter I, MUST be constrained in a
   production network, since this is an active test method and it will
   likely cause congestion on the Src to Dst host path during a test.

8.1.  Load Rate Adjustment Algorithm

   The algorithm described in this section MUST NOT be used as a general
   Congestion Control Algorithm (CCA).  As stated in Section 2 (Scope),
   the load rate adjustment algorithm's goal is to help determine the
   Maximum IP-Layer Capacity in the context of an infrequent,
   diagnostic, short-term measurement.  There is a tradeoff between test
   duration (also the test data volume) and algorithm aggressiveness
   (speed of ramp-up and down to the Maximum IP-Layer Capacity).  The
   parameter values chosen below strike a well-tested balance among
   these factors.

   A table SHALL be pre-built defining all the offered load rates that
   will be supported (R1 through Rn, in ascending order, corresponding
   to indexed rows in the table).  It is RECOMMENDED that rates begin
   with 0.5 Mbps at index zero, use 1 Mbps at index one, and then
   continue in 1 Mbps increments to 1 Gbps.  Above 1 Gbps, and up to 10
   Gbps, it is RECOMMENDED that 100 Mbps increments be used.  Above 10
   Gbps, increments of 1 Gbps are RECOMMENDED.
A higher initial IP- 705 Layer Sender Bitrate might be configured when the test operator is 706 certain that the Maximum IP-Layer Capacity is well above the initial 707 IP-Layer Sender Bitrate and factors such as test duration and total 708 test traffic play an important role. 710 Each rate is defined as datagrams of size ss, sent as a burst of 711 count cc, each time interval tt (default for tt is 1ms, a likely 712 system tick-interval). While it is advantageous to use datagrams of 713 as large a size as possible, it may be prudent to use a slightly 714 smaller maximum that allows for secondary protocol headers and/or 715 tunneling without resulting in IP-Layer fragmentation. Selection of 716 a new rate is indicated by a calculation on the current row, Rx. For 717 example: 719 "Rx+1": the sender uses the next higher rate in the table. 721 "Rx-10": the sender uses the rate 10 rows lower in the table. 723 At the beginning of a test, the sender begins sending at rate R1 and 724 the receiver starts a feedback timer of duration FT (while awaiting 725 inbound datagrams). As datagrams are received, they are checked for 726 sequence number anomalies (loss, out-of-order, duplication, etc.) and 727 the delay range is measured (one-way or round-trip). This 728 information is accumulated until the feedback timer FT expires and a 729 status feedback message is sent from the receiver back to the sender, 730 to communicate this information. The accumulated statistics are then 731 reset by the receiver for the next feedback interval. As feedback 732 messages are received back at the sender, they are evaluated to 733 determine how to adjust the current offered load rate (Rx). 735 If the feedback indicates that no sequence number anomalies were 736 detected AND the delay range was below the lower threshold, the 737 offered load rate is increased. If congestion has not been confirmed 738 up to this point, the offered load rate is increased by more than one 739 rate (e.g., Rx+10).
This allows the offered load to quickly reach a 740 near-maximum rate. Conversely, if congestion has been previously 741 confirmed, the offered load rate is only increased by one (Rx+1). 742 However, if a rate threshold between high and very high sending rates 743 (such as 1 Gbps) is exceeded, the offered load rate is only increased 744 by one (Rx+1) above the rate threshold in any congestion state. 746 If the feedback indicates that sequence number anomalies were 747 detected OR the delay range was above the upper threshold, the 748 offered load rate is decreased. The RECOMMENDED values are 0 for 749 sequence number gaps, 30 ms for the lower delay threshold, and 90 ms 750 for the upper delay threshold. Also, if congestion is now confirmed for 751 the first time by the current feedback message being processed, then 752 the offered load rate is decreased by more than one rate (e.g., Rx- 753 30). This one-time reduction is intended to compensate for the fast 754 initial ramp-up. In all other cases, the offered load rate is only 755 decreased by one (Rx-1). 757 If the feedback indicates that there were no sequence number 758 anomalies AND the delay range was above the lower threshold, but 759 below the upper threshold, the offered load rate is not changed. 760 This allows time for recent changes in the offered load rate to 761 stabilize, and the feedback to represent current conditions more 762 accurately. 764 Lastly, the method for inferring congestion is that there were 765 sequence number anomalies AND/OR the delay range was above the upper 766 threshold for two consecutive feedback intervals. The algorithm 767 described above is also illustrated in ITU-T Rec. Y.1540, 2020 768 version [Y.1540], in Annex B, and implemented in the Appendix on Load 769 Rate Adjustment Pseudo Code in this memo. 771 The load rate adjustment algorithm MUST include timers that stop the 772 test when received packet streams cease unexpectedly.
The timeout 773 thresholds are provided in the table below, along with values for all 774 other parameters and variables described in this section. The operation 775 of non-obvious parameters is described below: 777 load packet timeout Operation: The load packet timeout SHALL be 778 reset to the configured value each time a load packet is received. 779 If the timeout expires, the receiver SHALL be closed and no 780 further feedback sent. 782 feedback message timeout Operation: The feedback message timeout 783 SHALL be reset to the configured value each time a feedback 784 message is received. If the timeout expires, the sender SHALL be 785 closed and no further load packets sent. 787 +-------------+-------------+---------------+-----------------------+ 788 | Parameter | Default | Tested Range | Expected Safe Range | 789 | | | or values | (not entirely tested, | 790 | | | | other values NOT | 791 | | | | RECOMMENDED) | 792 +-------------+-------------+---------------+-----------------------+ 793 | FT, | 50ms | 20ms, 50ms, | 20ms <= FT <= 250ms | 794 | feedback | | 100ms | Larger values may | 795 | time | | | slow the rate | 796 | interval | | | increase and fail to | 797 | | | | find the max | 798 +-------------+-------------+---------------+-----------------------+ 799 | Feedback | L*FT, L=20 | L=100 with | 0.5sec <= L*FT <= | 800 | message | (1sec with | FT=50ms | 30sec Upper limit for | 801 | timeout | FT=50ms) | (5sec) | very unreliable test | 802 | (stop test) | | | paths only | 803 +-------------+-------------+---------------+-----------------------+ 804 | load packet | 1sec | 5sec | 0.250sec - 30sec | 805 | timeout | | | Upper limit for very | 806 | (stop test) | | | unreliable test paths | 807 | | | | only | 808 +-------------+-------------+---------------+-----------------------+ 809 | table index | 0.5Mbps | 0.5Mbps | when testing <=10Gbps | 810 | 0 | | | | 811 +-------------+-------------+---------------+-----------------------+ 812 | table index | 1Mbps | 1Mbps |
when testing <=10Gbps | 813 | 1 | | | | 814 +-------------+-------------+---------------+-----------------------+ 815 | table index | 1Mbps | 1Mbps<=rate<= | same as tested | 816 | (step) size | | 1Gbps | | 817 +-------------+-------------+---------------+-----------------------+ 818 | table index | 100Mbps | 1Gbps<=rate<= | same as tested | 819 | (step) | | 10Gbps | | 820 | size, | | | | 821 | rate>1Gbps | | | | 822 +-------------+-------------+---------------+-----------------------+ 823 | table index | 1Gbps | untested | >10Gbps | 824 | (step) | | | | 825 | size, | | | | 826 | rate>10Gbps | | | | 827 +-------------+-------------+---------------+-----------------------+ 828 | ss, UDP | none | <=1222 | Recommend max at | 829 | payload | | | largest value that | 830 | size, bytes | | | avoids fragmentation; | 831 | | | | use of too-small | 832 | | | | payload size might | 833 | | | | result in unexpected | 834 | | | | sender limitations. | 835 +-------------+-------------+---------------+-----------------------+ 836 | cc, burst | none | 1<=cc<= 100 | same as tested. Vary | 837 | count | | | cc as needed to | 838 | | | | create the desired | 839 | | | | maximum sending rate. | 840 | | | | Sender buffer size | 841 | | | | may limit cc in | 842 | | | | implementation. 
| 843 +-------------+-------------+---------------+-----------------------+ 844 | tt, burst | 100microsec | 100microsec, | available range of | 845 | interval | | 1msec | "tick" values (HZ | 846 | | | | param) | 847 +-------------+-------------+---------------+-----------------------+ 848 | low delay | 30ms | 5ms, 30ms | same as tested | 849 | range | | | | 850 | threshold | | | | 851 +-------------+-------------+---------------+-----------------------+ 852 | high delay | 90ms | 10ms, 90ms | same as tested | 853 | range | | | | 854 | threshold | | | | 855 +-------------+-------------+---------------+-----------------------+ 856 | sequence | 0 | 0, 100 | same as tested | 857 | error | | | | 858 | threshold | | | | 859 +-------------+-------------+---------------+-----------------------+ 860 | consecutive | 2 | 2 | Use values >1 to | 861 | errored | | | avoid misinterpreting | 862 | status | | | transient loss | 863 | report | | | | 864 | threshold | | | | 865 +-------------+-------------+---------------+-----------------------+ 866 | Fast mode | 10 | 10 | 2 <= steps <= 30 | 867 | increase, | | | | 868 | in table | | | | 869 | index steps | | | | 870 +-------------+-------------+---------------+-----------------------+ 871 | Fast mode | 3 * Fast | 3 * Fast mode | same as tested | 872 | decrease, | mode | increase | | 873 | in table | increase | | | 874 | index steps | | | | 875 +-------------+-------------+---------------+-----------------------+ 877 Parameters for Load Rate Adjustment Algorithm 879 As a consequence of default parameterization, the Number of table 880 steps in total for rates <10Gbps is 2000 (excluding index 0). 882 A related sender backoff response to network conditions occurs when 883 one or more status feedback messages fail to arrive at the sender. 
885 If no status feedback messages arrive at the sender for an interval 886 greater than the Lost Status Backoff timeout: 888 UDRT + (2+w)*FT = Lost Status Backoff timeout 890 where: 891 UDRT = upper delay range threshold (default 90ms) 892 FT = feedback time interval (default 50ms) 893 w = number of repeated timeouts (w=0 initially, w++ on each 894 timeout, and reset to 0 when a message is received) 896 (with the interval beginning when the last message of any type was 897 successfully received at the sender), 899 then the offered load SHALL be decreased, following the same process 900 as when the feedback indicates presence of one or more sequence 901 number anomalies OR the delay range was above the upper threshold (as 902 described above), with the same load rate adjustment algorithm 903 variables in their current state. This means that rate reduction and 904 congestion confirmation can result from a three-way OR that includes 905 lost status feedback messages, sequence errors, or delay variation. 907 The RECOMMENDED initial value for w is 0, taking Round Trip Time 908 (RTT) less than FT into account. A test with RTT longer than FT is a 909 valid reason to increase the initial value of w appropriately. 910 Variable w SHALL be incremented by 1 whenever the Lost Status Backoff 911 timeout is exceeded. So, with FT = 50ms and UDRT = 90ms, a status 912 feedback message loss would be declared at 190ms following a 913 successful message, again at 50ms after that (240ms total), and so 914 on. 916 Also, if congestion is now confirmed for the first time by a Lost 917 Status Backoff timeout, then the offered load rate is decreased by 918 more than one rate (e.g., Rx-30). This one-time reduction is 919 intended to compensate for the fast initial ramp-up. In all other 920 cases, the offered load rate is only decreased by one (Rx-1).
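The timeout arithmetic above can be checked with a short sketch (illustrative only; the function name is an assumption of this example):

```python
def lost_status_backoff_ms(w, udrt_ms=90, ft_ms=50):
    # Lost Status Backoff timeout = UDRT + (2 + w) * FT,
    # measured from the last successfully received message.
    return udrt_ms + (2 + w) * ft_ms

# With the defaults (FT = 50ms, UDRT = 90ms): loss of status feedback is
# declared 190ms after the last successful message (w=0), then at 240ms
# total (w=1), 290ms total (w=2), and so on, 50ms apart.
timeouts = [lost_status_backoff_ms(w) for w in range(3)]   # [190, 240, 290]
```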
922 Appendix B discusses compliance with the applicable mandatory 923 requirements of [RFC8085], consistent with the goals of the IP-Layer 924 Capacity Metric and Method, including the load rate adjustment 925 algorithm described in this section. 927 8.2. Measurement Qualification or Verification 929 It is of course necessary to calibrate the equipment performing the 930 IP-Layer Capacity measurement, to ensure that the expected capacity 931 can be measured accurately, and that equipment choices (processing 932 speed, interface bandwidth, etc.) are suitably matched to the 933 measurement range. 935 When assessing a Maximum rate as the metric specifies, artificially 936 high (optimistic) values might be measured until some buffer on the 937 path is filled. Other causes include bursts of back-to-back packets 938 with idle intervals delivered by a path, while the measurement 939 interval (dt) is small and aligned with the bursts. The artificial 940 values might result in an unsustainable Maximum Capacity being observed 941 when the method of measurement is searching for the Maximum, which 942 must be avoided. This situation is different from the bi-modal service 943 rates (discussed under Reporting), which are characterized by a 944 multi-second duration (much longer than the measured RTT) and 945 repeatable behavior. 947 There are many ways that the Method of Measurement could handle this 948 false-max issue. The default value for measurement of singletons (dt 949 = 1 second) has proven to be of practical value during tests of 950 this method; it allows the bimodal service rates to be characterized 951 and has an obvious alignment with the reporting units (Mbps). 953 Another approach comes from Section 24 of RFC 2544 [RFC2544] and its 954 discussion of Trial duration, where relatively short trials conducted 955 as part of the search are followed by longer trials to make the final 956 determination.
In the production network, measurements of Singletons 957 and Samples (the terms for trials and tests of Lab Benchmarking) must 958 be limited in duration because they may be service-affecting. But 959 there is sufficient value in repeating a Sample with a fixed sending 960 rate determined by the previous search for the Maximum IP-Layer 961 Capacity, to qualify the result in terms of the other performance 962 metrics measured at the same time. 964 A qualification measurement for the search result is a subsequent 965 measurement, sending at a fixed 99.x % of the Maximum IP-Layer 966 Capacity for I, or an indefinite period. The same Maximum Capacity 967 Metric is applied, and the Qualification for the result is a Sample 968 without packet loss or a growing minimum delay trend in subsequent 969 singletons (or each dt of the measurement interval, I). Samples 970 exhibiting losses or increasing queue occupation require a repeated 971 search and/or test at a reduced fixed sender rate for qualification. 973 Here, as with any Active Capacity test, the test duration must be 974 kept short. 10-second tests for each direction of transmission are 975 common today. The default measurement interval specified here is I = 976 10 seconds. The combination of a fast and congestion-aware search 977 method and user-network coordination makes a unique contribution to 978 production testing. The Maximum IP Capacity metric and method for 979 assessing performance is very different from the classic [RFC2544] 980 Throughput metric and methods: it uses near-real-time load 981 adjustments that are sensitive to loss and delay, similar to other 982 congestion control algorithms used on the Internet every day, along 983 with limited duration. On the other hand, [RFC2544] Throughput 984 measurements can produce sustained overload conditions for extended 985 periods of time.
Individual trials in a test governed by a binary 986 search can last 60 seconds for each step, and the final confirmation 987 trial may be even longer. This is very different from "normal" 988 traffic levels, but overload conditions are not a concern in the 989 isolated test environment. The concerns raised in [RFC6815] were 990 that [RFC2544] methods would be let loose on production networks, and 991 instead the authors challenged the standards community to develop 992 metrics and methods like those described in this memo. 994 8.3. Measurement Considerations 996 In general, the widespread measurements that this memo encourages 997 will encounter widespread behaviors. The bimodal IP Capacity 998 behaviors already discussed in Section 6.6 are good examples. 1000 In general, it is RECOMMENDED to locate test endpoints as close to 1001 the intended measured link(s) as practical (this is not always 1002 possible for reasons of scale; there is a limit on the number of test 1003 endpoints from many perspectives, management and measurement 1004 traffic for example). The testing operator MUST set a value for the 1005 MaxHops parameter, based on the expected path length. This parameter 1006 can keep measurement traffic from straying too far beyond the 1007 intended path. 1009 The path measured may be stateful based on many factors, and the 1010 Parameter "Time of day" when a test starts may not be enough 1011 information. Repeatable testing may require recording the time from 1012 the beginning of a measured flow and how the flow is constructed, 1013 including how much traffic has already been sent on that flow when a 1014 state-change is observed, because the state-change may be based on 1015 time, on bytes sent, or on both. Both load packets and status feedback 1016 messages MUST contain sequence numbers, which helps with measurements 1017 based on those packets.
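As a non-normative sketch of how a receiver might use those sequence numbers, the following flags the anomaly classes named earlier (loss, out-of-order, duplication); the function and its counting policy are assumptions of this example, not part of the method:

```python
def classify_seq(next_expected, seq):
    """Classify one arriving sequence number; returns (anomaly_count,
    new next_expected). A jump of k past the expected value counts k
    presumed losses (some may later prove to be reordering)."""
    if seq == next_expected:
        return 0, seq + 1                     # in order
    if seq > next_expected:
        return seq - next_expected, seq + 1   # gap: presumed loss
    return 1, next_expected                   # duplicate or late (reordered)

next_expected, anomalies = 0, 0
for s in [0, 1, 3, 2, 4]:                     # packet 3 arrives before packet 2
    a, next_expected = classify_seq(next_expected, s)
    anomalies += a
# anomalies == 2: the gap at 3, plus the late arrival of 2
```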
1019 Many different traffic shapers and on-demand access technologies may 1020 be encountered, as anticipated in [RFC7312], and play a key role in 1021 measurement results. Methods MUST be prepared to provide a short 1022 preamble transmission to activate on-demand access, and to discard 1023 the preamble from subsequent test results. 1025 Conditions which might be encountered during measurement, where 1026 packet losses may occur independently from the measurement sending 1027 rate: 1029 1. Congestion of an interconnection or backbone interface may appear 1030 as packet losses distributed over time in the test stream, due to 1031 much higher rate interfaces in the backbone. 1033 2. Packet loss due to use of Random Early Detection (RED) or other 1034 active queue management may or may not affect the measurement 1035 flow if competing background traffic (other flows) are 1036 simultaneously present. 1038 3. There may be only small delay variation independent of sending 1039 rate under these conditions, too. 1041 4. Persistent competing traffic on measurement paths that include 1042 shared transmission media may cause random packet losses in the 1043 test stream. 1045 It is possible to mitigate these conditions using the flexibility of 1046 the load-rate adjusting algorithm described in Section 8.1 above 1047 (tuning specific parameters). 1049 If the measurement flow burst duration happens to be on the order of 1050 or smaller than the burst size of a shaper or a policer in the path, 1051 then the line rate might be measured rather than the bandwidth limit 1052 imposed by the shaper or policer. If this condition is suspected, 1053 alternate configurations SHOULD be used. 1055 In general, results depend on the sending stream characteristics; the 1056 measurement community has known this for a long time, and needs to 1057 keep it front of mind. 
Although the default is a single flow (F=1) 1058 for testing, use of multiple flows may be advantageous for the 1059 following reasons: 1061 1. the test hosts may be able to create higher load than with a 1062 single flow, or parallel test hosts may be used to generate 1 1063 flow each. 1065 2. there may be link aggregation present (flow-based load balancing) 1066 and multiple flows are needed to occupy each member of the 1067 aggregate. 1069 3. access policies may limit the IP-Layer Capacity depending on the 1070 Type-P of packets, possibly reserving capacity for various stream 1071 types. 1073 Each flow would be controlled using its own implementation of the 1074 load rate adjustment (search) algorithm. 1076 As testing continues, implementers should expect some evolution in 1077 the methods. The ITU-T has published a Supplement (60) to the 1078 Y-series of Recommendations, "Interpreting ITU-T Y.1540 Maximum IP- 1079 Layer Capacity measurements", [Y.Sup60], which is the result of 1080 continued testing with the metric, and those results have improved 1081 the method described here. 1083 8.4. Running Code 1085 This section is for the benefit of the Document Shepherd's form, and 1086 will be deleted prior to final review. 1088 Much of the development of the method, and comparisons with existing 1089 methods conducted at IETF Hackathons and elsewhere, have been based on 1090 the example udpst Linux measurement tool (which is a working 1091 reference for further development) [udpst]. The current project: 1093 o is a utility that can function as a client or server daemon 1095 o requires a successful client-initiated setup handshake between 1096 cooperating hosts and allows firewalls to control inbound 1097 unsolicited UDP traffic, which either goes to a control port [expected and w/ 1098 authentication] or to ephemeral ports that are only created as 1099 needed. Firewalls protecting each host can both continue to do 1100 their job normally.
This aspect is similar to many other test 1101 utilities available. 1103 o is written in C, and built with gcc (release 9.3) and its standard 1104 run-time libraries 1106 o allows configuration of most of the parameters described in 1107 Sections 4 and 7. 1109 o supports IPv4 and IPv6 address families. 1111 o supports IP-Layer packet marking. 1113 9. Reporting Formats 1115 The singleton IP-Layer Capacity results SHOULD be accompanied by the 1116 context under which they were measured. 1118 o timestamp (especially the time when the maximum was observed in 1119 dtn) 1121 o Source and Destination (by IP or other meaningful ID) 1123 o other inner parameters of the test case (Section 4) 1125 o outer parameters, such as "test conducted in motion" or other 1126 factors belonging to the context of the measurement 1128 o result validity (indicating cases where the process was somehow 1129 interrupted or the attempt failed) 1131 o a field where unusual circumstances could be documented, and 1132 another one for "ignore/mask out" purposes in further processing 1134 The Maximum IP-Layer Capacity results SHOULD be reported in the 1135 format of a table with a row for each of the test Phases and Number 1136 of Flows. There SHOULD be columns for the phases with number of 1137 flows, and for the resultant Maximum IP-Layer Capacity results for 1138 the aggregate and each flow tested. 1140 As mentioned in Section 6.6, bi-modal (or multi-modal) maxima SHALL 1141 be reported for each mode separately. 
1143 +-------------+-------------------------+----------+----------------+ 1144 | Phase, # | Maximum IP-Layer | Loss | RTT min, max, | 1145 | Flows | Capacity, Mbps | Ratio | msec | 1146 +-------------+-------------------------+----------+----------------+ 1147 | Search,1 | 967.31 | 0.0002 | 30, 58 | 1148 +-------------+-------------------------+----------+----------------+ 1149 | Verify,1 | 966.00 | 0.0000 | 30, 38 | 1150 +-------------+-------------------------+----------+----------------+ 1152 Maximum IP-layer Capacity Results 1154 Static and configuration parameters: 1156 The sub-interval time, dt, MUST accompany a report of Maximum IP- 1157 Layer Capacity results, and the remaining Parameters from Section 4, 1158 General Parameters. 1160 The PM list metrics corresponding to the sub-interval where the 1161 Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer 1162 Capacity results, for each test phase. 1164 The IP-Layer Sender Bit rate results SHOULD be reported in the format 1165 of a table with a row for each of the test phases, sub-intervals (st) 1166 and number of flows. There SHOULD be columns for the phases with 1167 number of flows, and for the resultant IP-Layer Sender Bit rate 1168 results for the aggregate and each flow tested. 1170 +--------------------------+-------------+----------------------+ 1171 | Phase, Flow or Aggregate | st, sec | Sender Bitrate, Mbps | 1172 +--------------------------+-------------+----------------------+ 1173 | Search,1 | 0.00 - 0.05 | 345 | 1174 +--------------------------+-------------+----------------------+ 1175 | Search,2 | 0.00 - 0.05 | 289 | 1176 +--------------------------+-------------+----------------------+ 1177 | Search,Agg | 0.00 - 0.05 | 634 | 1178 +--------------------------+-------------+----------------------+ 1180 IP-layer Sender Bit Rate Results 1182 Static and configuration parameters: 1184 The subinterval time, st, MUST accompany a report of Sender IP-Layer 1185 Bit Rate results. 
1187 Also, the values of the remaining Parameters from Section 4, General 1188 Parameters, MUST be reported. 1190 9.1. Configuration and Reporting Data Formats 1192 As a part of the multi-Standards Development Organization (SDO) 1193 harmonization of this metric and method of measurement, one of the 1194 areas where the Broadband Forum (BBF) contributed its expertise was 1195 in the definition of an information model and data model for 1196 configuration and reporting. These models are consistent with the 1197 metric parameters and default values specified as lists in this memo. 1198 [TR-471] provides the Information model that was used to prepare a 1199 full data model in related BBF work. The BBF has also carefully 1200 considered topics within its purview, such as placement of 1201 measurement systems within the access architecture. For example, 1202 timestamp resolution requirements that influence the choice of the 1203 test protocol are provided in Table 2 of [TR-471]. 1205 10. Security Considerations 1207 Active metrics and measurements have a long history of security 1208 considerations. The security considerations that apply to any active 1209 measurement of live paths are relevant here. See [RFC4656] and 1210 [RFC5357]. 1212 When considering privacy of those involved in measurement or those 1213 whose traffic is measured, the sensitive information available to 1214 potential observers is greatly reduced when using active techniques 1215 within the scope of this work. Passive observations of user 1216 traffic for measurement purposes raise many privacy issues. We refer 1217 the reader to the privacy considerations described in the Large Scale 1218 Measurement of Broadband Performance (LMAP) Framework [RFC7594], 1219 which covers active and passive techniques. 1221 There are some new considerations for Capacity measurement as 1222 described in this memo. 1224 1.
Cooperating Source and Destination hosts and agreements to test 1225 the path between the hosts are REQUIRED. Hosts perform in either 1226 the Src or Dst roles. 1228 2. It is REQUIRED to have a user client-initiated setup handshake 1229 between cooperating hosts that allows firewalls to control 1230 inbound unsolicited UDP traffic which either goes to a control 1231 port [expected and w/authentication] or to ephemeral ports that 1232 are only created as needed. Firewalls protecting each host can 1233 both continue to do their job normally. 1235 3. Client-server authentication and integrity protection for 1236 feedback messages conveying measurements is RECOMMENDED. 1238 4. Hosts MUST limit the number of simultaneous tests to avoid 1239 resource exhaustion and inaccurate results. 1241 5. Senders MUST be rate-limited. This can be accomplished using a 1242 pre-built table defining all the offered load rates that will be 1243 supported (Section 8.1). The recommended load-control search 1244 algorithm results in "ramp-up" from the lowest rate in the table. 1246 6. Service subscribers with limited data volumes who conduct 1247 extensive capacity testing might experience the effects of 1248 Service Provider controls on their service. Testing with the 1249 Service Provider's measurement hosts SHOULD be limited in 1250 frequency and/or overall volume of test traffic (for example, the 1251 range of I duration values SHOULD be limited). 1253 The exact specification of these features is left for the future 1254 protocol development. 1256 11. IANA Considerations 1258 This memo makes no requests of IANA. 1260 12. Acknowledgments 1262 Thanks to Joachim Fabini, Matt Mathis, J.Ignacio Alvarez-Hamelin, 1263 Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray 1264 Kucherawy, and Benjamin Kaduk for their extensive comments on the 1265 memo and related topics. 1267 13. 
Appendix A - Load Rate Adjustment Pseudo Code 1269 The following is a pseudo-code implementation of the algorithm 1270 described in Section 8.1. 1272 Rx = 0 # The current sending rate (equivalent to a row of the table) 1273 seqErr = 0 # Measured count of any of Loss or Reordering impairments 1274 delay = 0 # Measured Range of Round Trip Delay, RTD, ms 1275 lowThresh = 30 # Low threshold on the Range of RTD, ms 1276 upperThresh = 90 # Upper threshold on the Range of RTD, ms 1277 hSpeedThresh = 1 Gbps # Threshold for transition between sending rate step 1278 sizes (such as 1 Mbps and 100 Mbps) 1279 slowAdjCount = 0 # Measured Number of consecutive status reports 1280 indicating loss and/or delay variation above upperThresh 1281 slowAdjThresh = 2 # Threshold on slowAdjCount used to infer congestion. 1282 Use values >1 to avoid misinterpreting transient loss 1283 highSpeedDelta = 10 # The number of rows to move in a single adjustment 1284 when initially increasing offered load (to ramp up quickly) 1285 maxLoadRates = 2000 # Maximum table index (rows) 1287 if ( seqErr == 0 && delay < lowThresh ) { 1288 if ( Rx < hSpeedThresh && slowAdjCount < slowAdjThresh ) { 1289 Rx += highSpeedDelta; 1290 slowAdjCount = 0; 1291 } else { 1292 if ( Rx < maxLoadRates - 1 ) 1293 Rx++; 1294 } 1295 } else if ( seqErr > 0 || delay > upperThresh ) { 1296 slowAdjCount++; 1297 if ( Rx < hSpeedThresh && slowAdjCount == slowAdjThresh ) { 1298 if ( Rx > highSpeedDelta * 3 ) 1299 Rx -= highSpeedDelta * 3; 1300 else 1301 Rx = 0; 1302 } else { 1303 if ( Rx > 0 ) 1304 Rx--; 1305 } 1306 } 1308 14. Appendix B - RFC 8085 UDP Guidelines Check 1310 The BCP on UDP usage guidelines [RFC8085] focuses primarily on 1311 congestion control in section 3.1. The Guidelines appear in 1312 mandatory (MUST) and recommendation (SHOULD) categories. 1314 14.1.
Assessment of Mandatory Requirements 1316 The mandatory requirements in Section 3 of [RFC8085] include: 1318 Internet paths can have widely varying characteristics, ... 1319 Consequently, applications that may be used on the Internet MUST 1320 NOT make assumptions about specific path characteristics. They 1321 MUST instead use mechanisms that let them operate safely under 1322 very different path conditions. Typically, this requires 1323 conservatively probing the current conditions of the Internet path 1324 they communicate over to establish a transmission behavior that it 1325 can sustain and that is reasonably fair to other traffic sharing 1326 the path. 1328 The purpose of the load rate adjustment algorithm in Section 8.1 is 1329 to probe the network and enable Maximum IP-Layer Capacity 1330 measurements with as few assumptions about the measured path as 1331 possible, and within the range of application described in Section 2. 1332 The degree of probing conservatism is in tension with the need to 1333 minimize both the traffic dedicated to testing (especially with 1334 Gigabit rate measurements) and the duration of the test (which is one 1335 contributing factor to the overall algorithm fairness). 1337 The text of Section 3 of [RFC8085] goes on to recommend alternatives 1338 to UDP to meet the mandatory requirements, but none are suitable for 1339 the scope and purpose of the metrics and methods in this memo. In 1340 fact, ad hoc TCP-based methods fail to achieve the measurement 1341 accuracy repeatedly proven in comparison measurements with the 1342 running code [LS-SG12-A] [LS-SG12-B] [Y.Sup60]. Also, the UDP aspect 1343 of these methods is present primarily to support modern Internet 1344 transmission where a transport protocol is required [copycat]; the 1345 metric is based on the IP-Layer, and UDP allows simple correlation to 1346 the IP-Layer.
1348 Section 3.1.1 of [RFC8085] discusses protocol timer guidelines: 1350 Latency samples MUST NOT be derived from ambiguous transactions. 1351 The canonical example is in a protocol that retransmits data, but 1352 subsequently cannot determine which copy is being acknowledged. 1354 Both load packets and status feedback messages MUST contain sequence 1355 numbers, which helps with measurements based on those packets, and 1356 no retransmissions are needed. 1358 When a latency estimate is used to arm a timer that provides loss 1359 detection -- with or without retransmission -- expiry of the timer 1360 MUST be interpreted as an indication of congestion in the network, 1361 causing the sending rate to be adapted to a safe conservative 1362 rate... 1364 The method described in this memo uses timers for sending rate 1365 backoff when status feedback messages are lost (Lost Status Backoff 1366 timeout), and for stopping a test when connectivity is lost for a 1367 longer interval (Feedback message or load packet timeouts). 1369 There is no specific benefit foreseen by using Explicit Congestion 1370 Notification (ECN) in this memo. 1372 Section 3.2 of [RFC8085] discusses message size guidelines: 1374 To determine an appropriate UDP payload size, applications MUST 1375 subtract the size of the IP header (which includes any IPv4 1376 optional headers or IPv6 extension headers) as well as the length 1377 of the UDP header (8 bytes) from the PMTU size. 1379 The method uses a sending rate table with a maximum UDP payload size 1380 that anticipates significant header overhead and avoids 1381 fragmentation. 1383 Section 3.3 of [RFC8085] provides reliability guidelines: 1385 Applications that do require reliable message delivery MUST 1386 implement an appropriate mechanism themselves. 1388 The IP-Layer Capacity Metric and Method do not require reliable 1389 delivery. 1391 Applications that require ordered delivery MUST reestablish 1392 datagram ordering themselves.
   The IP-Layer Capacity Metric and Method do not need to reestablish
   packet order; it is preferable to measure packet reordering if it
   occurs [RFC4737].

14.2.  Assessment of Recommendations

   The load rate adjustment algorithm's goal is to determine the
   Maximum IP-Layer Capacity in the context of an infrequent,
   diagnostic, short-term measurement.  This goal is a global
   exception to many [RFC8085] SHOULD-level requirements, many of
   which are intended for long-lived flows that must coexist with
   other traffic in a more-or-less fair way.  However, the algorithm
   (as specified in Section 8.1 and Appendix A above) reacts to
   indications of congestion in clearly defined ways.

   A specific recommendation is provided as an example.  Section 3.1.5
   of [RFC8085], on the implications of RTT and loss measurements on
   congestion control, says:

      A congestion control designed for UDP SHOULD respond as quickly
      as possible when it experiences congestion, and it SHOULD take
      into account both the loss rate and the response time when
      choosing a new rate.

   The load rate adjustment algorithm responds to loss and RTT
   measurements with a clear and concise rate reduction when
   warranted, and the response makes use of direct measurements (more
   exact than can be inferred from TCP ACKs).

   Section 3.1.5 of [RFC8085] goes on to specify:

      The implemented congestion control scheme SHOULD result in
      bandwidth (capacity) use that is comparable to that of TCP
      within an order of magnitude, so that it does not starve other
      flows sharing a common bottleneck.

   This is a requirement for coexistent streams, and not for
   diagnostic and infrequent measurements using short durations.  The
   rate oscillations during short tests allow other packets to pass,
   and don't starve other flows.
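   A minimal sketch of this kind of rate response (with hypothetical
   names, thresholds, and step sizes; the normative load rate
   adjustment algorithm is specified in Section 8.1) could look like:

```python
# Illustrative sketch only: a sender stepping through a sending-rate
# table, backing off promptly when a status feedback message reports
# sequence errors (loss) or increased delay.  All names and constants
# here are hypothetical, not the normative algorithm of Section 8.1.

HIGH_SPEED_DELTA = 10   # assumed: table indexes to drop on congestion

def next_rate_index(index, seq_errors, delay_increase_ms,
                    max_index, delay_threshold_ms=30):
    """Return the next sending-rate table index for one feedback msg."""
    congested = seq_errors > 0 or delay_increase_ms > delay_threshold_ms
    if congested:
        # Clear, concise rate reduction on direct loss/delay measurement
        return max(0, index - HIGH_SPEED_DELTA)
    # Otherwise, continue probing upward one step
    return min(max_index, index + 1)
```

   Unlike loss inferred from TCP ACK behavior, the sequence-error and
   delay figures here arrive directly in status feedback messages, so
   the reduction can be applied immediately and exactly.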
   Ironically, ad hoc TCP-based measurements of "Internet Speed" are
   also designed to work around this SHOULD-level requirement, by
   launching many flows (9, for example) to increase the outstanding
   data dedicated to testing.

   The load rate adjustment algorithm cannot become a TCP-like
   congestion control, or it will have the same weaknesses as TCP when
   trying to make a Maximum IP-Layer Capacity measurement, and will
   not achieve the goal.  The results of the referenced testing
   [LS-SG12-A] [LS-SG12-B] [Y.Sup60] supported this statement hundreds
   of times, with comparisons to multi-connection TCP-based
   measurements.

   A brief review of some of the other SHOULD-level requirements
   follows (Yes or Not applicable = NA):

   +--+---------------------------------------------------------+---------+
   |Y?| RFC 8085 Recommendation                                 | Section |
   +--+---------------------------------------------------------+---------+
   Yes| MUST tolerate a wide range of Internet path conditions  | 3       |
   NA | SHOULD use a full-featured transport (e.g., TCP)        |         |
      |                                                         |         |
   Yes| SHOULD control rate of transmission                     | 3.1     |
   NA | SHOULD perform congestion control over all traffic      |         |
      |                                                         |         |
      | for bulk transfers,                                     | 3.1.2   |
   NA | SHOULD consider implementing TFRC                       |         |
   NA | else, SHOULD in other ways use bandwidth similar to TCP |         |
      |                                                         |         |
      | for non-bulk transfers,                                 | 3.1.3   |
   NA | SHOULD measure RTT and transmit max. 1 datagram/RTT     | 3.1.1   |
   NA | else, SHOULD send at most 1 datagram every 3 seconds    |         |
   NA | SHOULD back-off retransmission timers following loss    |         |
      |                                                         |         |
   Yes| SHOULD provide mechanisms to regulate the bursts of     | 3.1.6   |
      | transmission                                            |         |
      |                                                         |         |
   NA | MAY implement ECN; a specific set of application        | 3.1.7   |
      | mechanisms are REQUIRED if ECN is used.                 |         |
      |                                                         |         |
   Yes| for DiffServ, SHOULD NOT rely on implementation of PHBs | 3.1.8   |
      |                                                         |         |
   Yes| for QoS-enabled paths, MAY choose not to use CC         | 3.1.9   |
      |                                                         |         |
   Yes| SHOULD NOT rely solely on QoS for their capacity        | 3.1.10  |
      | non-CC controlled flows SHOULD implement a transport    |         |
      | circuit breaker                                         |         |
      | MAY implement a circuit breaker for other applications  |         |
      |                                                         |         |
      | for tunnels carrying IP traffic,                        | 3.1.11  |
   NA | SHOULD NOT perform congestion control                   |         |
   NA | MUST correctly process the IP ECN field                 |         |
      |                                                         |         |
      | for non-IP tunnels or rate not determined by traffic,   |         |
   NA | SHOULD perform CC or use circuit breaker                | 3.1.11  |
   NA | SHOULD restrict types of traffic transported by the     |         |
      | tunnel                                                  |         |
      |                                                         |         |
   Yes| SHOULD NOT send datagrams that exceed the PMTU, i.e.,   | 3.2     |
   Yes| SHOULD discover PMTU or send datagrams < minimum PMTU;  |         |
   NA | Specific application mechanisms are REQUIRED if PLPMTUD |         |
      | is used.                                                |         |
      |                                                         |         |
   Yes| SHOULD handle datagram loss, duplication, reordering    | 3.3     |
   NA | SHOULD be robust to delivery delays up to 2 minutes     |         |
      |                                                         |         |
   Yes| SHOULD enable IPv4 UDP checksum                         | 3.4     |
   Yes| SHOULD enable IPv6 UDP checksum; Specific application   | 3.4.1   |
      | mechanisms are REQUIRED if a zero IPv6 UDP checksum is  |         |
      | used.                                                   |         |
      |                                                         |         |
   NA | SHOULD provide protection from off-path attacks         | 5.1     |
      | else, MAY use UDP-Lite with suitable checksum coverage  | 3.4.2   |
      |                                                         |         |
   NA | SHOULD NOT always send middlebox keep-alive messages    | 3.5     |
   NA | MAY use keep-alives when needed (min. interval 15 sec)  |         |
      |                                                         |         |
   Yes| Applications specified for use in limited use (or       | 3.6     |
      | controlled environments) SHOULD identify equivalent     |         |
      | mechanisms and describe their use case.                 |         |
      |                                                         |         |
   NA | Bulk-multicast apps SHOULD implement congestion control | 4.1.1   |
      |                                                         |         |
   NA | Low volume multicast apps SHOULD implement congestion   | 4.1.2   |
      | control                                                 |         |
      |                                                         |         |
   NA | Multicast apps SHOULD use a safe PMTU                   | 4.2     |
      |                                                         |         |
   Yes| SHOULD avoid using multiple ports                       | 5.1.2   |
   Yes| MUST check received IP source address                   |         |
      |                                                         |         |
   NA | SHOULD validate payload in ICMP messages                | 5.2     |
      |                                                         |         |
   Yes| SHOULD use a randomized source port or equivalent       | 6       |
      | technique, and, for client/server applications, SHOULD  |         |
      | send responses from source address matching request     | 5.1     |
   NA | SHOULD use standard IETF security protocols when needed | 6       |
   +--+---------------------------------------------------------+---------+

15.  References

15.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              DOI 10.17487/RFC2330, May 1998,
              <https://www.rfc-editor.org/info/rfc2330>.

   [RFC2681]  Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-
              trip Delay Metric for IPPM", RFC 2681,
              DOI 10.17487/RFC2681, September 1999,
              <https://www.rfc-editor.org/info/rfc2681>.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and
              M. Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, DOI 10.17487/RFC4656, September
              2006, <https://www.rfc-editor.org/info/rfc4656>.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics",
              RFC 4737, DOI 10.17487/RFC4737, November 2006,
              <https://www.rfc-editor.org/info/rfc4737>.

   [RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J.
              Babiarz, "A Two-Way Active Measurement Protocol
              (TWAMP)", RFC 5357, DOI 10.17487/RFC5357, October 2008,
              <https://www.rfc-editor.org/info/rfc5357>.

   [RFC6438]  Carpenter, B. and S. Amante, "Using the IPv6 Flow Label
              for Equal Cost Multipath Routing and Link Aggregation in
              Tunnels", RFC 6438, DOI 10.17487/RFC6438, November 2011,
              <https://www.rfc-editor.org/info/rfc6438>.

   [RFC7497]  Morton, A., "Rate Measurement Test Protocol Problem
              Statement and Requirements", RFC 7497,
              DOI 10.17487/RFC7497, April 2015,
              <https://www.rfc-editor.org/info/rfc7497>.

   [RFC7680]  Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
              Ed., "A One-Way Loss Metric for IP Performance Metrics
              (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January
              2016, <https://www.rfc-editor.org/info/rfc7680>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8468]  Morton, A., Fabini, J., Elkins, N., Ackermann, M., and
              V. Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence:
              Updates for the IP Performance Metrics (IPPM)
              Framework", RFC 8468, DOI 10.17487/RFC8468, November
              2018, <https://www.rfc-editor.org/info/rfc8468>.

15.2.  Informative References

   [copycat]  Edeline, K., Kuhlewind, M., Trammell, B., and B. Donnet,
              "copycat: Testing Differential Treatment of New
              Transport Protocols in the Wild (ANRW '17)", July 2017.

   [LS-SG12-A]
              ITU-T Study Group 12, "LS - Harmonization of IP Capacity
              and Latency Parameters: Revision of Draft Rec. Y.1540 on
              IP packet transfer performance parameters and New Annex
              A with Lab Evaluation Plan", May 2019.

   [LS-SG12-B]
              ITU-T Study Group 12, "LS on harmonization of IP
              Capacity and Latency Parameters: Consent of Draft Rec.
              Y.1540 on IP packet transfer performance parameters and
              New Annex A with Lab & Field Evaluation Plans", March
              2019.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999,
              <https://www.rfc-editor.org/info/rfc2544>.

   [RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
              Empirical Bulk Transfer Capacity Metrics", RFC 3148,
              DOI 10.17487/RFC3148, July 2001,
              <https://www.rfc-editor.org/info/rfc3148>.
   [RFC5136]  Chimento, P. and J. Ishac, "Defining Network Capacity",
              RFC 5136, DOI 10.17487/RFC5136, February 2008,
              <https://www.rfc-editor.org/info/rfc5136>.

   [RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
              "Applicability Statement for RFC 2544: Use on Production
              Networks Considered Harmful", RFC 6815,
              DOI 10.17487/RFC6815, November 2012,
              <https://www.rfc-editor.org/info/rfc6815>.

   [RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
              Framework for IP Performance Metrics (IPPM)", RFC 7312,
              DOI 10.17487/RFC7312, August 2014,
              <https://www.rfc-editor.org/info/rfc7312>.

   [RFC7594]  Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
              Aitken, P., and A. Akhter, "A Framework for Large-Scale
              Measurement of Broadband Performance (LMAP)", RFC 7594,
              DOI 10.17487/RFC7594, September 2015,
              <https://www.rfc-editor.org/info/rfc7594>.

   [RFC7799]  Morton, A., "Active and Passive Metrics and Methods
              (with Hybrid Types In-Between)", RFC 7799,
              DOI 10.17487/RFC7799, May 2016,
              <https://www.rfc-editor.org/info/rfc7799>.

   [RFC8085]  Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage
              Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085,
              March 2017, <https://www.rfc-editor.org/info/rfc8085>.

   [RFC8337]  Mathis, M. and A. Morton, "Model-Based Metrics for Bulk
              Transport Capacity", RFC 8337, DOI 10.17487/RFC8337,
              March 2018, <https://www.rfc-editor.org/info/rfc8337>.

   [TR-471]   Morton, A., "Broadband Forum TR-471: IP Layer Capacity
              Metrics and Measurement", July 2020.

   [udpst]    udpst Project Collaborators, "UDP Speed Test Open
              Broadband project", December 2020.

   [Y.1540]   ITU-T Recommendation Y.1540, "Internet protocol data
              communication service - IP packet transfer and
              availability performance parameters", December 2019.

   [Y.Sup60]  Morton, A., "Recommendation Y.Sup60, (09/20)
              Interpreting ITU-T Y.1540 maximum IP-layer capacity
              measurements, and Errata", September 2020.
Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acm@research.att.com

   Ruediger Geib
   Deutsche Telekom
   Heinrich Hertz Str. 3-7
   Darmstadt 64295
   Germany

   Phone: +49 6151 5812747
   Email: Ruediger.Geib@telekom.de

   Len Ciavattone
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Email: lencia@att.com