Network Working Group                                          A. Morton
Internet-Draft                                                  AT&T Labs
Intended status: Standards Track                                  R. Geib
Expires: December 3, 2021                                Deutsche Telekom
                                                            L. Ciavattone
                                                                AT&T Labs
                                                             June 1, 2021

             Metrics and Methods for One-way IP Capacity
              draft-ietf-ippm-capacity-metric-method-11

Abstract

   This memo revisits the problem of Network Capacity metrics first
   examined in RFC 5136.  The memo specifies a more practical Maximum
   IP-Layer Capacity metric definition catering for measurement
   purposes, and outlines the corresponding methods of measurement.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 3, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.
All rights reserved. 42 This document is subject to BCP 78 and the IETF Trust's Legal 43 Provisions Relating to IETF Documents 44 (https://trustee.ietf.org/license-info) in effect on the date of 45 publication of this document. Please review these documents 46 carefully, as they describe your rights and restrictions with respect 47 to this document. Code Components extracted from this document must 48 include Simplified BSD License text as described in Section 4.e of 49 the Trust Legal Provisions and are provided without warranty as 50 described in the Simplified BSD License. 52 Table of Contents 54 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 55 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4 56 2. Scope, Goals, and Applicability . . . . . . . . . . . . . . . 4 57 3. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . 5 58 4. General Parameters and Definitions . . . . . . . . . . . . . 6 59 5. IP-Layer Capacity Singleton Metric Definitions . . . . . . . 7 60 5.1. Formal Name . . . . . . . . . . . . . . . . . . . . . . . 8 61 5.2. Parameters . . . . . . . . . . . . . . . . . . . . . . . 8 62 5.3. Metric Definitions . . . . . . . . . . . . . . . . . . . 8 63 5.4. Related Round-Trip Delay and One-way Loss Definitions . . 9 64 5.5. Discussion . . . . . . . . . . . . . . . . . . . . . . . 10 65 5.6. Reporting the Metric . . . . . . . . . . . . . . . . . . 10 66 6. Maximum IP-Layer Capacity Metric Definitions (Statistic) . . 10 67 6.1. Formal Name . . . . . . . . . . . . . . . . . . . . . . . 10 68 6.2. Parameters . . . . . . . . . . . . . . . . . . . . . . . 10 69 6.3. Metric Definitions . . . . . . . . . . . . . . . . . . . 11 70 6.4. Related Round-Trip Delay and One-way Loss Definitions . . 12 71 6.5. Discussion . . . . . . . . . . . . . . . . . . . . . . . 12 72 6.6. Reporting the Metric . . . . . . . . . . . . . . . . . . 13 73 7. IP-Layer Sender Bit Rate Singleton Metric Definitions . . . . 13 74 7.1. Formal Name . . . . . . . . . . . . . . . . . . . . . . . 14 75 7.2. Parameters . . . . . . . . . . . . . . . . . . . . . . . 14 76 7.3. Metric Definition . . . . . . . . . . . . . . . . . . . . 14 77 7.4. Discussion . . . . . . . . . . . . . . . . . . . . . . . 15 78 7.5. Reporting the Metric . . . . . . . . . . . . . . . . . . 15 79 8. Method of Measurement . . . . . . . . . . . . . . . . . . . . 15 80 8.1. Load Rate Adjustment Algorithm . . . . . . . . . . . . . 15 81 8.2. Measurement Qualification or Verification . . . . . . . . 20 82 8.3. Measurement Considerations . . . . . . . . . . . . . . . 22 83 8.4. Running Code . . . . . . . . . . . . . . . . . . . . . . 24 84 9. Reporting Formats . . . . . . . . . . . . . . . . . . . . . . 24 85 9.1. Configuration and Reporting Data Formats . . . . . . . . 26 86 10. Security Considerations . . . . . . . . . . . . . . . . . . . 26 87 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 27 88 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 28 89 13. Appendix A - Load Rate Adjustment Pseudo Code . . . . . . . . 28 90 14. Appendix B - RFC 8085 UDP Guidelines Check . . . . . . . . . 29 91 14.1. Assessment of Mandatory Requirements . . . . . . . . . . 29 92 14.2. Assessment of Recommendations . . . . . . . . . . . . . 31 93 15. References . . . . . . . . . . . . . . . . . . . . . . . . . 34 94 15.1. Normative References . . . . . . . . . . . . . . . . . . 34 95 15.2. Informative References . . . . . . . . . . . . . . . . . 35 96 Authors' Addresses . . . . . . . . . . . . . . 
. . . . . . . . .  37

1.  Introduction

   The IETF's efforts to define Network and Bulk Transport Capacity have
   been chartered and have progressed for over twenty years.  Over that
   time, the performance community has seen development of Informative
   definitions in [RFC3148] for Framework for Bulk Transport Capacity
   (BTC), [RFC5136] for Network Capacity and Maximum IP-Layer Capacity,
   and the Experimental metric definitions and methods in [RFC8337],
   Model-Based Metrics for BTC.

   This memo revisits the problem of Network Capacity metrics examined
   first in [RFC3148] and later in [RFC5136].  Maximum IP-Layer Capacity
   and [RFC3148] Bulk Transfer Capacity (goodput) are different metrics.
   Maximum IP-Layer Capacity can be viewed as the theoretical upper
   limit on goodput.  There are many metrics in [RFC5136], such as
   Available Capacity.  Measurements depend on the network path under
   test and the use case.  Here, the main use case is to assess the
   maximum capacity of one or more networks where the subscriber
   receives specific performance assurances, sometimes referred to as
   the Internet access, or where a limit of the technology used on a
   path is being tested.  For example, when a user subscribes to a
   1 Gbps service, the user, the service provider, and possibly other
   parties want to ensure that this performance level is delivered.
   When a test confirms the subscribed performance level, a tester can
   seek the location of a bottleneck elsewhere.

   This memo recognizes the importance of a definition of a Maximum IP-
   Layer Capacity Metric at a time when Internet subscription speeds
   have increased dramatically; a definition that is both practical and
   effective for the performance community's needs, including Internet
   users.  The metric definition is intended to use Active Methods of
   Measurement [RFC7799], and a method of measurement is included.

   The most direct active measurement of IP-Layer Capacity would use IP
   packets, but in practice a transport header is needed to traverse
   address and port translators.  UDP offers the most direct assessment
   possibility, and in the [copycat] measurement study to investigate
   whether UDP is viable as a general Internet transport protocol, the
   authors found that a high percentage of paths tested support UDP
   transport.  A number of liaisons have been exchanged on this topic
   [LS-SG12-A] [LS-SG12-B], discussing the laboratory and field tests
   that support the UDP-based approach to IP-Layer Capacity measurement.

   This memo also recognizes the many updates to the IP Performance
   Metrics Framework [RFC2330] published over twenty years, and makes
   use of [RFC7312] for Advanced Stream and Sampling Framework, and
   [RFC8468] with IPv4, IPv6, and IPv4-IPv6 Coexistence Updates.

   Appendix A describes the load rate adjustment algorithm in pseudo-
   code.  Appendix B discusses the algorithm's compliance with
   [RFC8085].

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2.
Scope, Goals, and Applicability 160 The scope of this memo is to define Active Measurement metrics and 161 corresponding methods to unambiguously determine Maximum IP-Layer 162 Capacity and useful secondary metrics. 164 Another goal is to harmonize the specified metric and method across 165 the industry, and this memo is the vehicle that captures IETF 166 consensus, possibly resulting in changes to the specifications of 167 other Standards Development Organizations (SDO) (through each SDO's 168 normal contribution process, or through liaison exchange). 170 Secondary goals are to add considerations for test procedures, and to 171 provide interpretation of the Maximum IP-Layer Capacity results (to 172 identify cases where more testing is warranted, possibly with 173 alternate configurations). Fostering the development of protocol 174 support for this metric and method of measurement is also a goal of 175 this memo (all active testing protocols currently defined by the IPPM 176 WG are UDP-based, meeting a key requirement of these methods). The 177 supporting protocol development to measure this metric according to 178 the specified method is a key future contribution to Internet 179 measurement. 181 The load rate adjustment algorithm's scope is limited to helping 182 determine the Maximum IP-Layer Capacity in the context of an 183 infrequent, diagnostic, short term measurement. It is RECOMMENDED to 184 discontinue non-measurement traffic that shares a subscriber's 185 dedicated resources while testing: measurements may not be accurate 186 and throughput of competing elastic traffic may be greatly reduced. 188 The primary application of the metric and method of measurement 189 described here is the same as in Section 2 of [RFC7497] where: 191 o The access portion of the network is the focus of this problem 192 statement. The user typically subscribes to a service with 193 bidirectional Internet access partly described by rates in bits 194 per second. 196 In addition, the use of the load rate adjustment algorithm described 197 in section 8.1 has the following additional applicability 198 limitations: 200 - MUST only be used in the application of diagnostic and operations 201 measurements as described in this memo 203 - MUST only be used in circumstances consistent with Section 10, 204 Security Considerations 206 - If a network operator is certain of the IP-layer capacity to be 207 validated, then testing MAY start with a fixed rate test at the IP- 208 layer capacity and avoid activating the load adjustment algorithm. 209 However, the stimulus for a diagnostic test (such as a subscriber 210 request) strongly implies that there is no certainty and the load 211 adjustment algorithm is RECOMMENDED. 213 Further, the metric and method of measurement are intended for use 214 where specific exact path information is unknown within a range of 215 possible values: 217 - the subscriber's exact Maximum IP-Layer Capacity is unknown (which 218 is sometimes the case; service rates can be increased due to upgrades 219 without a subscriber's request, or to provide a surplus to compensate 220 for possible underestimates of TCP-based testing). 222 - the size of the bottleneck buffer is unknown. 224 Finally, the measurement system's load rate adjustment algorithm 225 SHALL NOT be provided with the exact capacity value to be validated a 226 priori. This restriction fosters a fair result, and removes an 227 opportunity for bad actors to operate with knowledge of the "right 228 answer". 230 3. 
Motivation

   As with any problem that has been worked for many years in various
   SDOs without any special attempts at coordination, various solutions
   for metrics and methods have emerged.

   There are five factors that have changed (or begun to change) in the
   2013-2019 time frame, and the presence of any one of them on the path
   requires features in the measurement design to account for the
   changes:

   1.  Internet access is no longer the bottleneck for many users (but
       subscribers expect network providers to honor contracted
       performance).

   2.  Both transfer rate and latency are important to users'
       satisfaction.

   3.  UDP's growing role in Transport, in areas where TCP once
       dominated.

   4.  Content and applications are moving physically closer to users.

   5.  There is less emphasis on ISP gateway measurements, possibly due
       to less traffic crossing ISP gateways in the future.

4.  General Parameters and Definitions

   This section lists the REQUIRED input factors to specify a Sender or
   Receiver metric.

   o  Src, the address of a host (such as the globally routable IP
      address).

   o  Dst, the address of a host (such as the globally routable IP
      address).

   o  MaxHops, the limit on the number of Hops a specific packet may
      visit as it traverses from the host at Src to the host at Dst
      (implemented in the TTL or Hop Limit).

   o  T0, the time at the start of measurement interval, when packets
      are first transmitted from the Source.

   o  I, the nominal duration of a measurement interval at the
      destination (default 10 sec).

   o  dt, the nominal duration of m equal sub-intervals in I at the
      destination (default 1 sec).

   o  dtn, the beginning boundary of a specific sub-interval, n, one of
      m sub-intervals in I.

   o  FT, the feedback time interval between status feedback messages
      communicating measurement results, sent from the receiver to
      control the sender.  The results are evaluated throughout the
      test to determine how to adjust the current offered load rate at
      the sender (default 50ms).

   o  Tmax, a maximum waiting time for test packets to arrive at the
      destination, set sufficiently long to disambiguate packets with
      long delays from packets that are discarded (lost), such that the
      distribution of one-way delay is not truncated.

   o  F, the number of different flows synthesized by the method
      (default 1 flow).

   o  flow, the stream of packets with the same n-tuple of designated
      header fields that (when held constant) result in identical
      treatment in a multi-path decision (such as the decision taken in
      load balancing).  Note: The IPv6 flow label MAY be included in the
      flow definition when routers have complied with [RFC6438]
      guidelines.

   o  Type-P, the complete description of the test packets for which
      this assessment applies (including the flow-defining fields).
      Note that the UDP transport layer is one requirement for test
      packets specified below.  Type-P is a parallel concept to
      "population of interest" defined in clause 6.1.1 of [Y.1540].

   o  PM, a list of fundamental metrics, such as loss, delay, and
      reordering, and the corresponding target performance thresholds.
      At least one fundamental metric and target performance threshold
      MUST be supplied (such as One-way IP Packet Loss [RFC7680] equal
      to zero).
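   As an informative illustration only (not part of the metric
   specification), the parameters above might be grouped in an
   implementation roughly as follows.  The structure and field names
   are hypothetical and do not correspond to any particular test
   protocol or tool:

      /* Hypothetical grouping of the Section 4 parameters (sketch). */
      #include <sys/socket.h>   /* struct sockaddr_storage */
      #include <stdint.h>

      struct capacity_test_params {
          struct sockaddr_storage src; /* Src, sender address          */
          struct sockaddr_storage dst; /* Dst, receiver address        */
          uint8_t  max_hops;           /* MaxHops, sets TTL/Hop Limit  */
          double   t0;                 /* T0, start of measurement     */
          double   i_dur;              /* I, measurement interval (s)  */
          double   dt;                 /* dt, sub-interval (s)         */
          double   ft;                 /* FT, feedback interval (s)    */
          double   tmax;               /* Tmax, max waiting time (s)   */
          unsigned flows;              /* F, number of flows           */
          /* Type-P and the PM list (metrics and their target
           * thresholds) would be carried in additional,
           * implementation-specific fields.                           */
      };

      /* Defaults from Section 4: I = 10 s, dt = 1 s, FT = 50 ms, F = 1.
       * MaxHops has no default; the operator sets it (Section 8.3).   */
      static const struct capacity_test_params param_defaults = {
          .i_dur = 10.0, .dt = 1.0, .ft = 0.050, .flows = 1
      };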
   A non-Parameter which is required for several metrics is defined
   below:

   o  T, the host time of the *first* test packet's *arrival* as
      measured at the destination Measurement Point, or MP(Dst).  There
      may be other packets sent between Source and Destination hosts
      that are excluded, so this is the time of arrival of the first
      packet used for measurement of the metric.

   Note that time stamp format and resolution, sequence numbers, etc.
   will be established by the chosen test protocol standard or
   implementation.

5.  IP-Layer Capacity Singleton Metric Definitions

   This section sets requirements for the singleton metric that supports
   the Maximum IP-Layer Capacity Metric definition in Section 6.

5.1.  Formal Name

   Type-P-One-way-IP-Capacity, or informally called IP-Layer Capacity.

   Note that Type-P depends on the chosen method.

5.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters are needed.

5.3.  Metric Definitions

   This section defines the REQUIRED aspects of the measurable IP-Layer
   Capacity metric (unless otherwise indicated) for measurements between
   specified Source and Destination hosts:

   Define the IP-Layer Capacity, C(T,dt,PM), to be the number of IP-
   Layer bits (including header and data fields) in packets that can be
   transmitted from the Src host and correctly received by the Dst host
   during one contiguous sub-interval, dt in length.  The IP-Layer
   Capacity depends on the Src and Dst hosts, the host addresses, and
   the path between the hosts.

   The number of these IP-Layer bits is designated n0[dtn,dtn+1] for a
   specific dt.

   When the packet size is known and of fixed size, the packet count
   during a single sub-interval dt multiplied by the total bits in IP
   header and data fields is equal to n0[dtn,dtn+1].

   Anticipating a Sample of Singletons, the number of sub-intervals with
   duration dt MUST be set to a natural number m, so that T+I = T + m*dt
   with dtn+1 - dtn = dt for 1 <= n <= m.

   Parameter PM represents other performance metrics [see section 5.4
   below]; their measurement results SHALL be collected during
   measurement of IP-Layer Capacity and associated with the
   corresponding dtn for further evaluation and reporting.  Users SHALL
   specify the parameter Tmax as required by each metric's reference
   definition.

   Mathematically, this definition is represented as (for each n):

                     ( n0[dtn,dtn+1] )
        C(T,dt,PM) = -----------------
                            dt

                Equation for IP-Layer Capacity

   and:

   o  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets [RFC8468] from the
      Src host and correctly received by the Dst host during one
      contiguous sub-interval, dt in length, during the interval [T,
      T+I],

   o  C(T,dt,PM) the IP-Layer Capacity, corresponds to the value of n0
      measured in any sub-interval beginning at dtn, divided by the
      length of sub-interval, dt.

   o  PM represents other performance metrics [see section 5.4 below];
      their measurement results SHALL be collected during measurement of
      IP-Layer Capacity and associated with the corresponding dtn for
      further evaluation and reporting.

   o  all sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.
   o  The bit rate of the physical interface of the measurement devices
      MUST be higher than the smallest of the links on the path whose
      C(T,I,PM) is to be measured (the bottleneck link).

   Measurements according to these definitions SHALL use the UDP
   transport layer.  Standard-formed packets are specified in Section 5
   of [RFC8468].  The measurement SHOULD use a randomized Source port or
   equivalent technique, and SHOULD send responses from the Source
   address matching the test packet destination address.

   Some compression effects on measurement are discussed in Section 6 of
   [RFC8468].

5.4.  Related Round-Trip Delay and One-way Loss Definitions

   RTD[dtn,dtn+1] is defined as a Sample of the [RFC2681] Round-trip
   Delay between the Src host and the Dst host over the interval [T,T+I]
   (that contains equal non-overlapping intervals of dt).  The
   "reasonable period of time" in [RFC2681] is the parameter Tmax in
   this memo.  The statistics used to summarize RTD[dtn,dtn+1] MAY
   include the minimum, maximum, median, and mean, and the range =
   (maximum - minimum) is referred to below in Section 8.1 for load
   adjustment purposes.

   OWL[dtn,dtn+1] is defined as a Sample of the [RFC7680] One-way Loss
   between the Src host and the Dst host over the interval [T,T+I] (that
   contains equal non-overlapping intervals of dt).  The statistics used
   to summarize OWL[dtn,dtn+1] MAY include the lost packet count and the
   lost packet ratio.

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

5.5.  Discussion

   See the corresponding section for Maximum IP-Layer Capacity.

5.6.  Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single Megabit
   resolution, in units of Megabits per second (Mbps) (which is
   1,000,000 bits per second, to avoid any confusion).

   The related One-way Loss metric and Round Trip Delay measurements for
   the same Singleton SHALL be reported, also with meaningful resolution
   for the values measured.

   Individual Capacity measurements MAY be reported in a manner
   consistent with the Maximum IP-Layer Capacity, see Section 9.

6.  Maximum IP-Layer Capacity Metric Definitions (Statistic)

   This section sets requirements for the following components to
   support the Maximum IP-Layer Capacity Metric.

6.1.  Formal Name

   Type-P-One-way-Max-IP-Capacity, or informally called Maximum IP-Layer
   Capacity.

   Note that Type-P depends on the chosen method.

6.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters or definitions are needed.

6.3.  Metric Definitions

   This section defines the REQUIRED aspects of the Maximum IP-Layer
   Capacity metric (unless otherwise indicated) for measurements between
   specified Source and Destination hosts:

   Define the Maximum IP-Layer Capacity, Maximum_C(T,I,PM), to be the
   maximum number of IP-Layer bits n0[dtn,dtn+1] divided by dt that can
   be transmitted in packets from the Src host and correctly received by
   the Dst host, over all dt length intervals in [T, T+I], and meeting
   the PM criteria.  Equivalently, it is the Maximum of a Sample of size
   m of C(T,dt,PM) collected during the interval [T, T+I] and meeting
   the PM criteria.
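   To make the statistic concrete, the following C sketch is provided
   as an informative illustration only; the names are hypothetical, and
   the single PM criterion of zero one-way loss is just an example.  It
   selects the largest singleton among the m sub-intervals that meets
   the criterion:

      #include <stddef.h>
      #include <stdint.h>

      struct subinterval {
          uint64_t n0_bits;   /* IP-Layer header+payload bits received
                                 in one sub-interval of length dt      */
          uint64_t lost_pkts; /* one-way loss count for the same dt    */
      };

      /* Returns Maximum_C(T,I,PM) in bits per second, or 0.0 if no
       * sub-interval met the PM criterion (zero loss in this sketch). */
      double maximum_capacity(const struct subinterval *s, size_t m,
                              double dt_sec)
      {
          double max_c = 0.0;
          for (size_t n = 0; n < m; n++) {
              if (s[n].lost_pkts != 0)    /* PM criterion not met */
                  continue;
              /* C(T,dt,PM) singleton for this sub-interval */
              double c = (double)s[n].n0_bits / dt_sec;
              if (c > max_c)
                  max_c = c;
          }
          return max_c;
      }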
   The number of sub-intervals with duration dt MUST be set to a natural
   number m, so that T+I = T + m*dt with dtn+1 - dtn = dt for 1 <= n <=
   m.

   Parameter PM represents the other performance metrics (see
   Section 6.4 below) and their measurement results for the Maximum IP-
   Layer Capacity.  At least one target performance threshold (PM
   criterion) MUST be defined.  If more than one metric and target
   performance threshold are defined, then the sub-interval with maximum
   number of bits transmitted MUST meet all the target performance
   thresholds.  Users SHALL specify the parameter Tmax as required by
   each metric's reference definition.

   Mathematically, this definition can be represented as:

                            max   ( n0[dtn,dtn+1] )
                          [T,T+I]
      Maximum_C(T,I,PM) = -------------------------
                                     dt
      where:
      T                                                  T+I
      ___________________________________________________
      |    |    |    |    |    |    |    |    |    |    |
      dtn=1    2    3    4    5    6    7    8    9   10   n+1
                                                            n=m

                   Equation for Maximum Capacity

   and:

   o  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets from the Src host
      and correctly received by the Dst host during one contiguous sub-
      interval, dt in length, during the interval [T, T+I],

   o  Maximum_C(T,I,PM) the Maximum IP-Layer Capacity, corresponds to
      the maximum value of n0 measured in any sub-interval beginning at
      dtn, divided by the constant length of all sub-intervals, dt.

   o  PM represents the other performance metrics (see Section 5.4) and
      their measurement results for the Maximum IP-Layer Capacity.  At
      least one target performance threshold (PM criterion) MUST be
      defined.

   o  all sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   o  The bit rate of the physical interface of the measurement systems
      MUST be higher than the smallest of the links on the path whose
      Maximum_C(T,I,PM) is to be measured (the bottleneck link).

   In this definition, the m sub-intervals can be viewed as trials when
   the Src host varies the transmitted packet rate, searching for the
   maximum n0 that meets the PM criteria measured at the Dst host in a
   test of duration, I.  When the transmitted packet rate is held
   constant at the Src host, the m sub-intervals may also be viewed as
   trials to evaluate the stability of n0 and metric(s) in the PM list
   over all dt-length intervals in I.

   Measurements according to these definitions SHALL use the UDP
   transport layer.

6.4.  Related Round-Trip Delay and One-way Loss Definitions

   RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4.  Here,
   the test intervals are increased to match the capacity Samples,
   RTD[T,I] and OWL[T,I].

   The interval dtn,dtn+1 where Maximum_C[T,I,PM] occurs is the
   reporting sub-interval within RTD[T,I] and OWL[T,I].

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

6.5.  Discussion

   If traffic conditioning (e.g., shaping, policing) applies along a
   path for which Maximum_C(T,I,PM) is to be determined, different
   values for dt SHOULD be picked and measurements be executed during
   multiple intervals [T, T+I].  Each duration dt SHOULD be chosen so
   that it is an integer multiple of increasing values k times
   serialization delay of a path MTU at the physical interface speed
   where traffic conditioning is expected.
This should avoid taking 574 configured burst tolerance singletons as a valid Maximum_C(T,I,PM) 575 result. 577 A Maximum_C(T,I,PM) without any indication of bottleneck congestion, 578 be that an increasing latency, packet loss or ECN marks during a 579 measurement interval I, is likely to underestimate Maximum_C(T,I,PM). 581 6.6. Reporting the Metric 583 The IP-Layer Capacity SHOULD be reported with at least single Megabit 584 resolution, in units of Megabits per second (Mbps) (which is 585 1,000,000 bits per second to avoid any confusion). 587 The related One-way Loss metric and Round Trip Delay measurements for 588 the same Singleton SHALL be reported, also with meaningful resolution 589 for the values measured. 591 When there are demonstrated and repeatable Capacity modes in the 592 Sample, then the Maximum IP-Layer Capacity SHALL be reported for each 593 mode, along with the relative time from the beginning of the stream 594 that the mode was observed to be present. Bimodal Maximum IP-Layer 595 Capacities have been observed with some services, sometimes called a 596 "turbo mode" intending to deliver short transfers more quickly, or 597 reduce the initial buffering time for some video streams. Note that 598 modes lasting less than dt duration will not be detected. 600 Some transmission technologies have multiple methods of operation 601 that may be activated when channel conditions degrade or improve, and 602 these transmission methods may determine the Maximum IP-Layer 603 Capacity. Examples include line-of-sight microwave modulator 604 constellations, or cellular modem technologies where the changes may 605 be initiated by a user moving from one coverage area to another. 606 Operation in the different transmission methods may be observed over 607 time, but the modes of Maximum IP-Layer Capacity will not be 608 activated deterministically as with the "turbo mode" described in the 609 paragraph above. 611 7. IP-Layer Sender Bit Rate Singleton Metric Definitions 613 This section sets requirements for the following components to 614 support the IP-Layer Sender Bitrate Metric. This metric helps to 615 check that the sender actually generated the desired rates during a 616 test, and measurement takes place at the Src host to network path 617 interface (or as close as practical within the Src host). It is not 618 a metric for path performance. 620 7.1. Formal Name 622 Type-P-IP-Sender-Bit-Rate, or informally called IP-Layer Sender 623 Bitrate. 625 Note that Type-P depends on the chosen method. 627 7.2. Parameters 629 This section lists the REQUIRED input factors to specify the metric, 630 beyond those listed in Section 4. 632 o S, the duration of the measurement interval at the Source 634 o st, the nominal duration of N sub-intervals in S (default st = 635 0.05 seconds) 637 o stn, the beginning boundary of a specific sub-interval, n, one of 638 N sub-intervals in S 640 S SHALL be longer than I, primarily to account for on-demand 641 activation of the path, or any preamble to testing required, and the 642 delay of the path. 644 st SHOULD be much smaller than the sub-interval dt and on the same 645 order as FT, otherwise the rate measurement will include many rate 646 adjustments and include more time smoothing, thus missing the Maximum 647 IP-Layer Capacity. The st parameter does not have relevance when the 648 Source is transmitting at a fixed rate throughout S. 650 7.3. 
Metric Definition

   This section defines the REQUIRED aspects of the IP-Layer Sender
   Bitrate metric (unless otherwise indicated) for measurements at the
   specified Source on packets addressed for the intended Destination
   host and matching the required Type-P:

   Define the IP-Layer Sender Bit Rate, B(S,st), to be the number of IP-
   Layer bits (including header and data fields) that are transmitted
   from the Source with address pair Src and Dst during one contiguous
   sub-interval, st, during the test interval S (where S SHALL be longer
   than I), and where the fixed-size packet count during that single
   sub-interval st also provides the number of IP-Layer bits in any
   interval, [stn,stn+1].

   Measurements according to these definitions SHALL use the UDP
   transport layer.  Any feedback from the Dst host to the Src host
   received by the Src host during an interval [stn,stn+1] SHOULD NOT
   result in an adaptation of the Src host traffic conditioning during
   this interval (rate adjustment occurs on st interval boundaries).

7.4.  Discussion

   Both the Sender and Receiver (or Source and Destination) bit rates
   SHOULD be assessed as part of an IP-Layer Capacity measurement.
   Otherwise, an unexpected sending rate limitation could produce an
   erroneous Maximum IP-Layer Capacity measurement.

7.5.  Reporting the Metric

   The IP-Layer Sender Bit Rate SHALL be reported with meaningful
   resolution, in units of Megabits per second (which is 1,000,000 bits
   per second, to avoid any confusion).

   Individual IP-Layer Sender Bit Rate measurements are discussed
   further in Section 9.

8.  Method of Measurement

   The architecture of the method REQUIRES two cooperating hosts
   operating in the roles of Src (test packet sender) and Dst
   (receiver), with a measured path and return path between them.

   The duration of a test, parameter I, MUST be constrained in a
   production network, since this is an active test method and it will
   likely cause congestion on the Src to Dst host path during a test.

8.1.  Load Rate Adjustment Algorithm

   The algorithm described in this section MUST NOT be used as a general
   Congestion Control Algorithm (CCA).  As stated in the Scope
   Section 2, the load rate adjustment algorithm's goal is to help
   determine the Maximum IP-Layer Capacity in the context of an
   infrequent, diagnostic, short term measurement.  There is a tradeoff
   between test duration (also the test data volume) and algorithm
   aggressiveness (speed of ramp-up and down to the Maximum IP-Layer
   Capacity).  The parameter values chosen below strike a well-tested
   balance among these factors.

   A table SHALL be pre-built (by the test initiator) defining all the
   offered load rates that will be supported (R1 through Rn, in
   ascending order, corresponding to indexed rows in the table).  It is
   RECOMMENDED that rates begin with 0.5 Mbps at index zero, use 1 Mbps
   at index one, and then continue in 1 Mbps increments to 1 Gbps.
   Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps
   increments be used.  Above 10 Gbps, increments of 1 Gbps are
   RECOMMENDED.  A higher initial IP-Layer Sender Bitrate might be
   configured when the test operator is certain that the Maximum IP-
   Layer Capacity is well above the initial IP-Layer Sender Bitrate and
   factors such as test duration and total test traffic play an
   important role.
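   The table construction can be sketched in C as follows.  This is an
   informative illustration only; the names and array sizing are
   hypothetical, and a real implementation would also store per-row
   sending parameters:

      /* Illustrative sketch: pre-building a rate table with the
       * RECOMMENDED increments for rates up to 10 Gbps.               */
      #define RATE_TABLE_MAX 1100   /* enough rows for the loops below */

      static double rate_mbps[RATE_TABLE_MAX];
      static int    rate_rows;

      static void build_rate_table(void)
      {
          int row = 0;
          rate_mbps[row++] = 0.5;                  /* index zero       */
          for (int r = 1; r <= 1000; r++)          /* 1 Mbps steps     */
              rate_mbps[row++] = (double)r;        /*   up to 1 Gbps   */
          for (int r = 1100; r <= 10000; r += 100) /* 100 Mbps steps   */
              rate_mbps[row++] = (double)r;        /*   up to 10 Gbps  */
          rate_rows = row;
          /* In practice each row would also carry the datagram size
           * (ss), burst count (cc), and burst interval (tt) chosen to
           * produce that row's offered load rate.                     */
      }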
722 Each rate is defined as datagrams of size ss, sent as a burst of 723 count cc, each time interval tt (default for tt is 1ms, a likely 724 system tick-interval). While it is advantageous to use datagrams of 725 as large a size as possible, it may be prudent to use a slightly 726 smaller maximum that allows for secondary protocol headers and/or 727 tunneling without resulting in IP-Layer fragmentation. Selection of 728 a new rate is indicated by a calculation on the current row, Rx. For 729 example: 731 "Rx+1": the sender uses the next higher rate in the table. 733 "Rx-10": the sender uses the rate 10 rows lower in the table. 735 At the beginning of a test, the sender begins sending at rate R1 and 736 the receiver starts a feedback timer of duration FT (while awaiting 737 inbound datagrams). As datagrams are received they are checked for 738 sequence number anomalies (loss, out-of-order, duplication, etc.) and 739 the delay range is measured (one-way or round-trip). This 740 information is accumulated until the feedback timer FT expires and a 741 status feedback message is sent from the receiver back to the sender, 742 to communicate this information. The accumulated statistics are then 743 reset by the receiver for the next feedback interval. As feedback 744 messages are received back at the sender, they are evaluated to 745 determine how to adjust the current offered load rate (Rx). 747 If the feedback indicates that no sequence number anomalies were 748 detected AND the delay range was below the lower threshold, the 749 offered load rate is increased. If congestion has not been confirmed 750 up to this point (see below for the method to declare congestion), 751 the offered load rate is increased by more than one rate (e.g., 752 Rx+10). This allows the offered load to quickly reach a near-maximum 753 rate. Conversely, if congestion has been previously confirmed, the 754 offered load rate is only increased by one (Rx+1). However, if a 755 rate threshold between high and very high sending rates (such as 1 756 Gbps) is exceeded, the offered load rate is only increased by one 757 (Rx+1) above the rate threshold in any congestion state. 759 If the feedback indicates that sequence number anomalies were 760 detected OR the delay range was above the upper threshold, the 761 offered load rate is decreased. The RECOMMENDED threshold values are 762 0 for sequence number gaps and 30 ms for lower and 90 ms for upper 763 delay thresholds, respectively. Also, if congestion is now confirmed 764 for the first time by the current feedback message being processed, 765 then the offered load rate is decreased by more than one rate (e.g., 766 Rx-30). This one-time reduction is intended to compensate for the 767 fast initial ramp-up. In all other cases, the offered load rate is 768 only decreased by one (Rx-1). 770 If the feedback indicates that there were no sequence number 771 anomalies AND the delay range was above the lower threshold, but 772 below the upper threshold, the offered load rate is not changed. 773 This allows time for recent changes in the offered load rate to 774 stabilize, and the feedback to represent current conditions more 775 accurately. 777 Lastly, the method for inferring congestion is that there were 778 sequence number anomalies AND/OR the delay range was above the upper 779 threshold for two consecutive feedback intervals. The algorithm 780 described above is also illustrated in ITU-T Rec. 
Y.1540, 2020 version [Y.1540], in Annex B, and implemented in the
   Appendix on Load Rate Adjustment Pseudo Code in this memo.

   The load rate adjustment algorithm MUST include timers that stop the
   test when received packet streams cease unexpectedly.  The timeout
   thresholds are provided in the table below, along with values for all
   other parameters and variables described in this section.  The
   operation of non-obvious parameters is described below:

   load packet timeout Operation:  The load packet timeout SHALL be
      reset to the configured value each time a load packet is received.
      If the timeout expires, the receiver SHALL be closed and no
      further feedback sent.

   feedback message timeout Operation:  The feedback message timeout
      SHALL be reset to the configured value each time a feedback
      message is received.  If the timeout expires, the sender SHALL be
      closed and no further load packets sent.

   +-------------+-------------+---------------+-----------------------+
   | Parameter   | Default     | Tested Range  | Expected Safe Range   |
   |             |             | or values     | (not entirely tested, |
   |             |             |               | other values NOT      |
   |             |             |               | RECOMMENDED)          |
   +-------------+-------------+---------------+-----------------------+
   | FT,         | 50ms        | 20ms, 50ms,   | 20ms <= FT <= 250ms   |
   | feedback    |             | 100ms         | Larger values may     |
   | time        |             |               | slow the rate         |
   | interval    |             |               | increase and fail to  |
   |             |             |               | find the max          |
   +-------------+-------------+---------------+-----------------------+
   | Feedback    | L*FT, L=20  | L=100 with    | 0.5sec <= L*FT <=     |
   | message     | (1sec with  | FT=50ms       | 30sec Upper limit for |
   | timeout     | FT=50ms)    | (5sec)        | very unreliable test  |
   | (stop test) |             |               | paths only            |
   +-------------+-------------+---------------+-----------------------+
   | load packet | 1sec        | 5sec          | 0.250sec - 30sec      |
   | timeout     |             |               | Upper limit for very  |
   | (stop test) |             |               | unreliable test paths |
   |             |             |               | only                  |
   +-------------+-------------+---------------+-----------------------+
   | table index | 0.5Mbps     | 0.5Mbps       | when testing <=10Gbps |
   | 0           |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | table index | 1Mbps       | 1Mbps         | when testing <=10Gbps |
   | 1           |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | table index | 1Mbps       | 1Mbps<=rate<= | same as tested        |
   | (step) size |             | 1Gbps         |                       |
   +-------------+-------------+---------------+-----------------------+
   | table index | 100Mbps     | 1Gbps<=rate<= | same as tested        |
   | (step)      |             | 10Gbps        |                       |
   | size,       |             |               |                       |
   | rate>1Gbps  |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | table index | 1Gbps       | untested      | >10Gbps               |
   | (step)      |             |               |                       |
   | size,       |             |               |                       |
   | rate>10Gbps |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | ss, UDP     | none        | <=1222        | Recommend max at      |
   | payload     |             |               | largest value that    |
   | size, bytes |             |               | avoids fragmentation; |
   |             |             |               | use of too-small      |
   |             |             |               | payload size might    |
   |             |             |               | result in unexpected  |
   |             |             |               | sender limitations.   |
   +-------------+-------------+---------------+-----------------------+
   | cc, burst   | none        | 1<=cc<=100    | same as tested.  Vary |
   | count       |             |               | cc as needed to       |
   |             |             |               | create the desired    |
   |             |             |               | maximum sending rate. |
   |             |             |               | Sender buffer size    |
   |             |             |               | may limit cc in       |
   |             |             |               | implementation.       |
   +-------------+-------------+---------------+-----------------------+
   | tt, burst   | 100microsec | 100microsec,  | available range of    |
   | interval    |             | 1msec         | "tick" values (HZ     |
   |             |             |               | param)                |
   +-------------+-------------+---------------+-----------------------+
   | low delay   | 30ms        | 5ms, 30ms     | same as tested        |
   | range       |             |               |                       |
   | threshold   |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | high delay  | 90ms        | 10ms, 90ms    | same as tested        |
   | range       |             |               |                       |
   | threshold   |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | sequence    | 0           | 0, 100        | same as tested        |
   | error       |             |               |                       |
   | threshold   |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | consecutive | 2           | 2             | Use values >1 to      |
   | errored     |             |               | avoid misinterpreting |
   | status      |             |               | transient loss        |
   | report      |             |               |                       |
   | threshold   |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | Fast mode   | 10          | 10            | 2 <= steps <= 30      |
   | increase,   |             |               |                       |
   | in table    |             |               |                       |
   | index steps |             |               |                       |
   +-------------+-------------+---------------+-----------------------+
   | Fast mode   | 3 * Fast    | 3 * Fast mode | same as tested        |
   | decrease,   | mode        | increase      |                       |
   | in table    | increase    |               |                       |
   | index steps |             |               |                       |
   +-------------+-------------+---------------+-----------------------+

             Parameters for Load Rate Adjustment Algorithm

   As a consequence of default parameterization, the number of table
   steps in total for rates <10Gbps is 2000 (excluding index 0).

   A related sender backoff response to network conditions occurs when
   one or more status feedback messages fail to arrive at the sender.

   If no status feedback messages arrive at the sender for an interval
   greater than the Lost Status Backoff timeout:

      UDRT + (2+w)*FT = Lost Status Backoff timeout

      where:
      UDRT = upper delay range threshold (default 90ms)
      FT   = feedback time interval (default 50ms)
      w    = number of repeated timeouts (w=0 initially, w++ on each
             timeout, and reset to 0 when a message is received)

   beginning when the last message (of any type) was successfully
   received at the sender:

   Then the offered load SHALL be decreased, following the same process
   as when the feedback indicates presence of one or more sequence
   number anomalies OR the delay range was above the upper threshold (as
   described above), with the same load rate adjustment algorithm
   variables in their current state.  This means that rate reduction and
   congestion confirmation can result from a three-way OR that includes
   lost status feedback messages, sequence errors, or delay variation.

   The RECOMMENDED initial value for w is 0, taking Round Trip Time
   (RTT) less than FT into account.  A test with RTT longer than FT is a
   valid reason to increase the initial value of w appropriately.
   Variable w SHALL be incremented by 1 whenever the Lost Status Backoff
   timeout is exceeded.  So with FT = 50ms and UDRT = 90ms, a status
   feedback message loss would be declared at 190ms following a
   successful message, again at 50ms after that (240ms total), and so
   on.

   Also, if congestion is now confirmed for the first time by a Lost
   Status Backoff timeout, then the offered load rate is decreased by
   more than one rate (e.g., Rx-30).  This one-time reduction is
   intended to compensate for the fast initial ramp-up.  In all other
   cases, the offered load rate is only decreased by one (Rx-1).
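   For illustration only, the sender-side watchdog implied by this
   timeout might look roughly as follows in C.  The names are
   hypothetical, apply_rate_decrease() stands in for the rate reduction
   logic described above, and w is reset elsewhere when any message is
   received:

      #define UDRT_MS 90       /* upper delay range threshold (default) */
      #define FT_MS   50       /* feedback time interval (default)      */

      static int w = 0;        /* repeated-timeout count; reset to 0
                                  when any message is received          */

      void apply_rate_decrease(void);  /* hypothetical helper           */

      /* Called periodically by the sender's event loop; last_msg_ms is
       * the time the last message of any type was received.            */
      void check_lost_status_backoff(long now_ms, long last_msg_ms)
      {
          long timeout_ms = UDRT_MS + (2 + w) * FT_MS; /* 190 ms at w=0 */

          if (now_ms - last_msg_ms >= timeout_ms) {
              w++;             /* next declaration comes 50 ms later
                                  (240 ms after the last message)       */
              /* Same response as feedback reporting sequence anomalies
               * or a delay range above the upper threshold: a one-time
               * larger decrease if this first confirms congestion,
               * otherwise Rx-1.                                        */
              apply_rate_decrease();
          }
      }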
   Appendix B discusses compliance with the applicable mandatory
   requirements of [RFC8085], consistent with the goals of the IP-Layer
   Capacity Metric and Method, including the load rate adjustment
   algorithm described in this section.

8.2.  Measurement Qualification or Verification

   It is of course necessary to calibrate the equipment performing the
   IP-Layer Capacity measurement, to ensure that the expected capacity
   can be measured accurately, and that equipment choices (processing
   speed, interface bandwidth, etc.) are suitably matched to the
   measurement range.

   When assessing a Maximum rate as the metric specifies, artificially
   high (optimistic) values might be measured until some buffer on the
   path is filled.  Other causes include bursts of back-to-back packets
   with idle intervals delivered by a path, while the measurement
   interval (dt) is small and aligned with the bursts.  The artificial
   values might result in an unsustainable Maximum Capacity being
   observed while the method of measurement is searching for the
   Maximum, which must be avoided.  This situation is different from the
   bimodal service rates (discussed under Reporting), which are
   characterized by a multi-second duration (much longer than the
   measured RTT) and repeatable behavior.

   There are many ways that the Method of Measurement could handle this
   false-max issue.  The default value for measurement of singletons (dt
   = 1 second) has proven to be of practical value during tests of this
   method, allows the bimodal service rates to be characterized, and has
   an obvious alignment with the reporting units (Mbps).

   Another approach comes from Section 24 of [RFC2544] and its
   discussion of Trial duration, where relatively short trials conducted
   as part of the search are followed by longer trials to make the final
   determination.  In the production network, measurements of Singletons
   and Samples (the terms for trials and tests of Lab Benchmarking) must
   be limited in duration because they may be service-affecting.  But
   there is sufficient value in repeating a Sample with a fixed sending
   rate determined by the previous search for the Maximum IP-Layer
   Capacity, to qualify the result in terms of the other performance
   metrics measured at the same time.

   A qualification measurement for the search result is a subsequent
   measurement, sending at a fixed 99.x% of the Maximum IP-Layer
   Capacity for I, or for an indefinite period.  The same Maximum
   Capacity Metric is applied, and the Qualification for the result is a
   Sample without packet loss or a growing minimum delay trend in
   subsequent singletons (or each dt of the measurement interval, I).
   Samples exhibiting losses or increasing queue occupation require a
   repeated search and/or a test at a reduced fixed sender rate for
   qualification (a sketch of this qualification phase appears below).

   Here, as with any Active Capacity test, the test duration must be
   kept short.  Ten-second tests for each direction of transmission are
   common today.  The default measurement interval specified here is I =
   10 seconds.  The combination of a fast and congestion-aware search
   method and user-network coordination makes a unique contribution to
   production testing.
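   The qualification (verification) phase described above can be
   sketched as follows.  This is informative only; the names are
   hypothetical, and run_fixed_rate_test() stands in for a fixed-rate
   test of duration I that returns loss and delay-trend observations:

      #include <stdbool.h>

      struct fixed_rate_stats {
          unsigned long lost_packets;       /* one-way loss in the Sample */
          bool          min_delay_trend_up; /* growing minimum-delay trend */
      };

      /* Hypothetical helper: send at a fixed rate for interval I and
       * collect per-dt statistics; defined elsewhere.                    */
      struct fixed_rate_stats run_fixed_rate_test(double rate_mbps);

      /* Returns true when the search result qualifies: no loss and no
       * growing minimum-delay trend at ~99% of the measured maximum.     */
      bool qualify_maximum(double maximum_c_mbps)
      {
          double fixed_rate = 0.99 * maximum_c_mbps;  /* "99.x%" re-test */
          struct fixed_rate_stats s = run_fixed_rate_test(fixed_rate);

          /* On failure, repeat the search and/or re-test at a reduced
           * fixed sender rate, per Section 8.2.                          */
          return (s.lost_packets == 0) && !s.min_delay_trend_up;
      }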
   The Maximum IP-Layer Capacity metric and method for assessing
   performance is very different from the classic [RFC2544] Throughput
   metric and methods: it uses near-real-time load adjustments that are
   sensitive to loss and delay, similar to other congestion control
   algorithms used on the Internet every day, along with limited
   duration.  On the other hand, [RFC2544] Throughput measurements can
   produce sustained overload conditions for extended periods of time.
   Individual trials in a test governed by a binary search can last 60
   seconds for each step, and the final confirmation trial may be even
   longer.  This is very different from "normal" traffic levels, but
   overload conditions are not a concern in the isolated test
   environment.  The concerns raised in [RFC6815] were that [RFC2544]
   methods would be let loose on production networks, and instead the
   authors challenged the standards community to develop metrics and
   methods like those described in this memo.

8.3.  Measurement Considerations

   In general, the widespread measurements that this memo encourages
   will encounter widespread behaviors.  The bimodal IP Capacity
   behaviors already discussed in Section 6.6 are good examples.

   In general, it is RECOMMENDED to locate test endpoints as close to
   the intended measured link(s) as practical (this is not always
   possible for reasons of scale; there is a limit on the number of test
   endpoints coming from many perspectives, management and measurement
   traffic for example).  The testing operator MUST set a value for the
   MaxHops parameter, based on the expected path length.  This parameter
   can keep measurement traffic from straying too far beyond the
   intended path.

   The path measured may be stateful based on many factors, and the
   Parameter "Time of day" when a test starts may not be enough
   information.  Repeatable testing may require the time from the
   beginning of a measured flow, and how the flow is constructed
   including how much traffic has already been sent on that flow when a
   state-change is observed, because the state-change may be based on
   time or bytes sent or both.  Both load packets and status feedback
   messages MUST contain sequence numbers, which helps with measurements
   based on those packets.

   Many different types of traffic shapers and on-demand communications
   access technologies may be encountered, as anticipated in [RFC7312],
   and play a key role in measurement results.  Methods MUST be prepared
   to provide a short preamble transmission to activate on-demand
   communications access, and to discard the preamble from subsequent
   test results.

   Conditions which might be encountered during measurement, where
   packet losses may occur independently of the measurement sending
   rate, include the following:

   1.  Congestion of an interconnection or backbone interface may appear
       as packet losses distributed over time in the test stream, due to
       much higher rate interfaces in the backbone.

   2.  Packet loss due to use of Random Early Detection (RED) or other
       active queue management may or may not affect the measurement
       flow if competing background traffic (other flows) are
       simultaneously present.

   3.  There may be only small delay variation independent of sending
       rate under these conditions, too.

   4.  Persistent competing traffic on measurement paths that include
       shared transmission media may cause random packet losses in the
       test stream.
   It is possible to mitigate these conditions using the flexibility of
   the load rate adjustment algorithm described in Section 8.1 above
   (tuning specific parameters).

   If the measurement flow burst duration happens to be on the order of
   or smaller than the burst size of a shaper or a policer in the path,
   then the line rate might be measured rather than the bandwidth limit
   imposed by the shaper or policer.  If this condition is suspected,
   alternate configurations SHOULD be used.

   In general, results depend on the sending stream characteristics; the
   measurement community has known this for a long time, and needs to
   keep it front of mind.  Although the default is a single flow (F=1)
   for testing, use of multiple flows may be advantageous for the
   following reasons:

   1.  the test hosts may be able to create higher load than with a
       single flow, or parallel test hosts may be used to generate one
       flow each.

   2.  there may be link aggregation present (flow-based load balancing)
       and multiple flows are needed to occupy each member of the
       aggregate.

   3.  Internet access policies may limit the IP-Layer Capacity
       depending on the Type-P of packets, possibly reserving capacity
       for various stream types.

   Each flow would be controlled using its own implementation of the
   load rate adjustment (search) algorithm.

   It is obviously counter-productive to run more than one independent
   test (regardless of the number of flows in the test stream)
   attempting to measure the *maximum* capacity between the same Source
   and Destination.  The number of concurrent, independent tests between
   the same Source and Destination SHALL be limited to one.

   As testing continues, implementers should expect some evolution in
   the methods.  The ITU-T has published a Supplement (60) to the
   Y-series of Recommendations, "Interpreting ITU-T Y.1540 Maximum IP-
   Layer Capacity measurements", [Y.Sup60], which is the result of
   continued testing with the metric, and those results have improved
   the method described here.

8.4.  Running Code

   RFC Editor: This section is for the benefit of the Document
   Shepherd's form, and will be deleted prior to publication.

   Much of the development of the method and comparisons with existing
   methods conducted at IETF Hackathons and elsewhere have been based on
   the example udpst Linux measurement tool (which is a working
   reference for further development) [udpst].  The current project:

   o  is a utility that can function as a client or server daemon.

   o  requires a successful client-initiated setup handshake between
      cooperating hosts and allows firewalls to control inbound
      unsolicited UDP traffic, which either goes to a control port
      [expected and w/ authentication] or to ephemeral ports that are
      only created as needed.  Firewalls protecting each host can both
      continue to do their job normally.  This aspect is similar to many
      other test utilities available.

   o  is written in C, and built with gcc (release 9.3) and its standard
      run-time libraries.

   o  allows configuration of most of the parameters described in
      Sections 4 and 7.

   o  supports IPv4 and IPv6 address families.

   o  supports IP-Layer packet marking.

9.  Reporting Formats
   The singleton IP-Layer Capacity results SHOULD be accompanied by the
   context under which they were measured.

   o  timestamp (especially the time when the maximum was observed in
      dtn)

   o  Source and Destination (by IP or other meaningful ID)

   o  other inner parameters of the test case (Section 4)

   o  outer parameters, such as "test conducted in motion" or other
      factors belonging to the context of the measurement

   o  result validity (indicating cases where the process was somehow
      interrupted or the attempt failed)

   o  a field where unusual circumstances could be documented, and
      another one for "ignore/mask out" purposes in further processing

   The Maximum IP-Layer Capacity results SHOULD be reported in the
   format of a table with a row for each of the test Phases and Number
   of Flows.  There SHOULD be columns for the phases with number of
   flows, and for the resultant Maximum IP-Layer Capacity results for
   the aggregate and each flow tested.

   As mentioned in Section 6.6, bi-modal (or multi-modal) maxima SHALL
   be reported for each mode separately.

   +-------------+-------------------------+----------+----------------+
   | Phase, #    | Maximum IP-Layer        | Loss     | RTT min, max,  |
   | Flows       | Capacity, Mbps          | Ratio    | msec           |
   +-------------+-------------------------+----------+----------------+
   | Search,1    | 967.31                  | 0.0002   | 30, 58         |
   +-------------+-------------------------+----------+----------------+
   | Verify,1    | 966.00                  | 0.0000   | 30, 38         |
   +-------------+-------------------------+----------+----------------+

                   Maximum IP-layer Capacity Results

   Static and configuration parameters:

   The sub-interval time, dt, MUST accompany a report of Maximum IP-
   Layer Capacity results, along with the remaining Parameters from
   Section 4, General Parameters.

   The PM list metrics corresponding to the sub-interval where the
   Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer
   Capacity results, for each test phase.

   The IP-Layer Sender Bit Rate results SHOULD be reported in the format
   of a table with a row for each of the test phases, sub-intervals (st)
   and number of flows.  There SHOULD be columns for the phases with
   number of flows, and for the resultant IP-Layer Sender Bit Rate
   results for the aggregate and each flow tested.

   +--------------------------+-------------+----------------------+
   | Phase, Flow or Aggregate | st, sec     | Sender Bitrate, Mbps |
   +--------------------------+-------------+----------------------+
   | Search,1                 | 0.00 - 0.05 | 345                  |
   +--------------------------+-------------+----------------------+
   | Search,2                 | 0.00 - 0.05 | 289                  |
   +--------------------------+-------------+----------------------+
   | Search,Agg               | 0.00 - 0.05 | 634                  |
   +--------------------------+-------------+----------------------+

                    IP-layer Sender Bit Rate Results

   Static and configuration parameters:

   The sub-interval time, st, MUST accompany a report of Sender IP-Layer
   Bit Rate results.

   Also, the values of the remaining Parameters from Section 4, General
   Parameters, MUST be reported.

9.1.  Configuration and Reporting Data Formats
1210 9.1. Configuration and Reporting Data Formats 1212 As a part of the multi-Standards Development Organization (SDO) 1213 harmonization of this metric and method of measurement, one of the 1214 areas where the Broadband Forum (BBF) contributed its expertise was 1215 in the definition of an information model and data model for 1216 configuration and reporting. These models are consistent with the 1217 metric parameters and default values specified as lists in this memo. 1218 [TR-471] provides the Information model that was used to prepare a 1219 full data model in related BBF work. The BBF has also carefully 1220 considered topics within its purview, such as placement of 1221 measurement systems within the Internet access architecture. For 1222 example, timestamp resolution requirements that influence the choice 1223 of the test protocol are provided in Table 2 of [TR-471]. 1225 10. Security Considerations 1227 Active metrics and measurements have a long history of security 1228 considerations. The security considerations that apply to any active 1229 measurement of live paths are relevant here. See [RFC4656] and 1230 [RFC5357]. 1232 When considering privacy of those involved in measurement or those 1233 whose traffic is measured, the sensitive information available to 1234 potential observers is greatly reduced when using active techniques 1235 within this scope of work. Passive observations of user 1236 traffic for measurement purposes raise many privacy issues. We refer 1237 the reader to the privacy considerations described in the Large-Scale 1238 Measurement of Broadband Performance (LMAP) Framework [RFC7594], 1239 which covers active and passive techniques. 1241 There are some new considerations for Capacity measurement as 1242 described in this memo. 1244 1. Cooperating Source and Destination hosts and agreements to test 1245 the path between the hosts are REQUIRED. Hosts perform in either 1246 the Src or Dst roles. 1248 2. It is REQUIRED to have a user client-initiated setup handshake 1249 between cooperating hosts that allows firewalls to control 1250 inbound unsolicited UDP traffic which either goes to a control 1251 port [expected and w/authentication] or to ephemeral ports that 1252 are only created as needed. Firewalls protecting each host can 1253 both continue to do their job normally. 1255 3. Client-server authentication and integrity protection for 1256 feedback messages conveying measurements are RECOMMENDED. 1258 4. Hosts MUST limit the number of simultaneous tests to avoid 1259 resource exhaustion and inaccurate results. 1261 5. Senders MUST be rate-limited. This can be accomplished using a 1262 pre-built table defining all the offered load rates that will be 1263 supported (Section 8.1). The recommended load-control search 1264 algorithm results in "ramp-up" from the lowest rate in the table. 1266 6. Service subscribers with limited data volumes who conduct 1267 extensive capacity testing might experience the effects of 1268 Service Provider controls on their service. Testing with the 1269 Service Provider's measurement hosts SHOULD be limited in 1270 frequency and/or overall volume of test traffic (for example, the 1271 range of duration values, I, SHOULD be limited). 1273 The exact specification of these features is left for future 1274 protocol development. 1276 11. IANA Considerations 1278 This memo makes no requests of IANA. 1280 12.
Acknowledgments 1282 Thanks to Joachim Fabini, Matt Mathis, J. Ignacio Alvarez-Hamelin, 1283 Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray 1284 Kucherawy, and Benjamin Kaduk for their extensive comments on the 1285 memo and related topics. In a second round of reviews, we 1286 acknowledge Magnus Westerlund, Lars Eggert, and Zahed Sarker. 1288 13. Appendix A - Load Rate Adjustment Pseudo Code 1290 The following is a pseudo-code implementation of the algorithm 1291 described in Section 8.1. 1293 Rx = 0 # The current sending rate (equivalent to a row of the table) 1294 seqErr = 0 # Measured count of any of Loss or Reordering impairments 1295 delay = 0 # Measured Range of Round Trip Delay, RTD, ms 1296 lowThresh = 30 # Low threshold on the Range of RTD, ms 1297 upperThresh = 90 # Upper threshold on the Range of RTD, ms 1298 hSpeedThresh = 1 Gbps # Threshold for transition between sending rate step 1299 sizes (such as 1 Mbps and 100 Mbps) 1300 slowAdjCount = 0 # Measured Number of consecutive status reports 1301 indicating loss and/or delay variation above upperThresh 1302 slowAdjThresh = 2 # Threshold on slowAdjCount used to infer congestion. 1303 Use values >1 to avoid misinterpreting transient loss 1304 highSpeedDelta = 10 # The number of rows to move in a single adjustment 1305 when initially increasing offered load (to ramp-up quickly) 1306 maxLoadRates = 2000 # Maximum table index (rows) 1308 if ( seqErr == 0 && delay < lowThresh ) { 1309 if ( Rx < hSpeedThresh && slowAdjCount < slowAdjThresh ) { 1310 Rx += highSpeedDelta; 1311 slowAdjCount = 0; 1312 } else { 1313 if ( Rx < maxLoadRates - 1 ) 1314 Rx++; 1315 } 1316 } else if ( seqErr > 0 || delay > upperThresh ) { 1317 slowAdjCount++; 1318 if ( Rx < hSpeedThresh && slowAdjCount == slowAdjThresh ) { 1319 if ( Rx > highSpeedDelta * 3 ) 1320 Rx -= highSpeedDelta * 3; 1321 else 1322 Rx = 0; 1323 } else { 1324 if ( Rx > 0 ) 1325 Rx--; 1326 } 1327 } 1329 14. Appendix B - RFC 8085 UDP Guidelines Check 1331 The BCP on UDP usage guidelines [RFC8085] focuses primarily on 1332 congestion control in section 3.1. The Guidelines appear in 1333 mandatory (MUST) and recommendation (SHOULD) categories. 1335 14.1. Assessment of Mandatory Requirements 1337 The mandatory requirements in Section 3 of [RFC8085] include: 1339 Internet paths can have widely varying characteristics, ... 1340 Consequently, applications that may be used on the Internet MUST 1341 NOT make assumptions about specific path characteristics. They 1342 MUST instead use mechanisms that let them operate safely under 1343 very different path conditions. Typically, this requires 1344 conservatively probing the current conditions of the Internet path 1345 they communicate over to establish a transmission behavior that it 1346 can sustain and that is reasonably fair to other traffic sharing 1347 the path. 1349 The purpose of the load rate adjustment algorithm in Section 8.1 is 1350 to probe the network and enable Maximum IP-Layer Capacity 1351 measurements with as few assumptions about the measured path as 1352 possible, and within the range of application described in Section 2. 1353 The degree of probing conservatism is in tension with the need to 1354 minimize both the traffic dedicated to testing (especially with 1355 Gigabit rate measurements) and the duration of the test (which is one 1356 contributing factor to the overall algorithm fairness).
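To make the probing behavior concrete, the following is a compilable C sketch of the Appendix A adjustment logic. The branch structure and thresholds follow Appendix A; the pre-built rate table, the row index standing in for hSpeedThresh, and the adjust_rate() entry point are assumptions added only so the sketch is self-contained. The table also illustrates the sender rate limiting called for in item 5 of Section 10.

   /* Sketch of the Appendix A load rate adjustment; rate table contents
      and the function boundary are illustrative assumptions. */
   #include <stdint.h>

   #define MAX_LOAD_RATES    2000   /* maximum table index (rows) */
   #define LOW_THRESH_MS       30   /* low threshold on Range of RTD, ms */
   #define UPPER_THRESH_MS     90   /* upper threshold on Range of RTD, ms */
   #define SLOW_ADJ_THRESH      2   /* reports needed to infer congestion */
   #define HIGH_SPEED_DELTA    10   /* rows to jump during fast ramp-up */
   #define HSPEED_THRESH_ROW 1000   /* assumed row whose rate is ~1 Gbps
                                       (stands in for hSpeedThresh) */

   /* Pre-built table of offered load rates, lowest first; populated at
      startup from the configured rate plan (not shown). */
   static uint32_t rate_kbps[MAX_LOAD_RATES];

   static int Rx = 0;            /* current row (sending rate) */
   static int slowAdjCount = 0;  /* consecutive congested status reports */

   /* Called once per status feedback report: seqErr counts loss or
      reordering impairments, delay is the Range of RTD in ms. */
   void adjust_rate(int seqErr, int delay)
   {
       if (seqErr == 0 && delay < LOW_THRESH_MS) {
           /* No impairment: ramp up quickly below the high-speed
              threshold, otherwise step one row at a time. */
           if (Rx < HSPEED_THRESH_ROW && slowAdjCount < SLOW_ADJ_THRESH) {
               Rx += HIGH_SPEED_DELTA;
               slowAdjCount = 0;
           } else if (Rx < MAX_LOAD_RATES - 1) {
               Rx++;
           }
       } else if (seqErr > 0 || delay > UPPER_THRESH_MS) {
           /* Congestion indication: back off sharply if still in the
              fast ramp-up region, otherwise step down one row. */
           slowAdjCount++;
           if (Rx < HSPEED_THRESH_ROW && slowAdjCount == SLOW_ADJ_THRESH) {
               Rx = (Rx > HIGH_SPEED_DELTA * 3) ? Rx - HIGH_SPEED_DELTA * 3 : 0;
           } else if (Rx > 0) {
               Rx--;
           }
       }
       /* A status report with no loss and a delay range between the two
          thresholds leaves Rx unchanged, holding the current load. */
   }

   uint32_t current_offered_load_kbps(void) { return rate_kbps[Rx]; }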
1358 The text of Section 3 of [RFC8085] goes on to recommend alternatives 1359 to UDP to meet the mandatory requirements, but none are suitable for 1360 the scope and purpose of the metrics and methods in this memo. In 1361 fact, ad hoc TCP-based methods fail to achieve the measurement 1362 accuracy repeatedly proven in comparison measurements with the 1363 running code [LS-SG12-A] [LS-SG12-B] [Y.Sup60]. Also, the UDP aspect 1364 of these methods is present primarily to support modern Internet 1365 transmission where a transport protocol is required [copycat]; the 1366 metric is based on the IP-Layer, and UDP allows simple correlation to 1367 the IP-Layer. 1369 Section 3.1.1 of [RFC8085] discusses protocol timer guidelines: 1371 Latency samples MUST NOT be derived from ambiguous transactions. 1372 The canonical example is in a protocol that retransmits data, but 1373 subsequently cannot determine which copy is being acknowledged. 1375 Both load packets and status feedback messages MUST contain sequence 1376 numbers, which supports unambiguous measurements based on those packets; 1377 no retransmissions are needed. 1379 When a latency estimate is used to arm a timer that provides loss 1380 detection -- with or without retransmission -- expiry of the timer 1381 MUST be interpreted as an indication of congestion in the network, 1382 causing the sending rate to be adapted to a safe conservative 1383 rate... 1385 The method described in this memo uses timers for sending rate 1386 backoff when status feedback messages are lost (Lost Status Backoff 1387 timeout), and for stopping a test when connectivity is lost for a 1388 longer interval (Feedback message or load packet timeouts). 1390 There is no specific benefit foreseen in using Explicit Congestion 1391 Notification (ECN) in this memo. 1393 Section 3.2 of [RFC8085] discusses message size guidelines: 1395 To determine an appropriate UDP payload size, applications MUST 1396 subtract the size of the IP header (which includes any IPv4 1397 optional headers or IPv6 extension headers) as well as the length 1398 of the UDP header (8 bytes) from the PMTU size. 1400 The method uses a sending rate table with a maximum UDP payload size 1401 that anticipates significant header overhead and avoids 1402 fragmentation. 1404 Section 3.3 of [RFC8085] provides reliability guidelines: 1406 Applications that do require reliable message delivery MUST 1407 implement an appropriate mechanism themselves. 1409 The IP-Layer Capacity Metric and Method do not require reliable 1410 delivery. 1412 Applications that require ordered delivery MUST reestablish 1413 datagram ordering themselves. 1415 The IP-Layer Capacity Metric and Method do not need to reestablish 1416 packet order; it is preferred to measure packet reordering if it 1417 occurs [RFC4737]. 1419 14.2. Assessment of Recommendations 1421 The load rate adjustment algorithm's goal is to determine the Maximum 1422 IP-Layer Capacity in the context of an infrequent, diagnostic, short- 1423 term measurement. This goal is a global exception to many [RFC8085] 1424 SHOULD-level requirements, most of which are intended for long-lived 1425 flows that must coexist with other traffic in a more-or-less fair way. 1426 However, the algorithm (as specified in Section 8.1 and Appendix A 1427 above) reacts to indications of congestion in clearly defined ways. 1429 A specific recommendation is provided as an example.
Section 3.1.5 1430 of [RFC8085] on implications of RTT and Loss Measurements on 1431 Congestion Control says: 1433 A congestion control designed for UDP SHOULD respond as quickly as 1434 possible when it experiences congestion, and it SHOULD take into 1435 account both the loss rate and the response time when choosing a 1436 new rate. 1438 The load rate adjustment algorithm responds to loss and RTT 1439 measurements with a clear and concise rate reduction when warranted, 1440 and the response makes use of direct measurements (more exact than 1441 can be inferred from TCP ACKs). 1443 Section 3.1.5 of [RFC8085] goes on to specify: 1445 The implemented congestion control scheme SHOULD result in 1446 bandwidth (capacity) use that is comparable to that of TCP within 1447 an order of magnitude, so that it does not starve other flows 1448 sharing a common bottleneck. 1450 This is a requirement for coexistent streams, and not for diagnostic 1451 and infrequent measurements using short durations. The rate 1452 oscillations during short tests allow other packets to pass, and 1453 don't starve other flows. 1455 Ironically, ad hoc TCP-based measurements of "Internet Speed" are 1456 also designed to work around this SHOULD-level requirement, by 1457 launching many flows (9, for example) to increase the outstanding 1458 data dedicated to testing. 1460 The load rate adjustment algorithm cannot become a TCP-like 1461 congestion control, or it will have the same weaknesses of TCP when 1462 trying to make a Maximum IP-Layer Capacity measurement, and will not 1463 achieve the goal. The results of the referenced testing [LS-SG12-A] 1464 [LS-SG12-B] [Y.Sup60] supported this statement hundreds of times, 1465 with comparisons to multi-connection TCP-based measurements. 1467 A brief review of some other SHOULD-level requirements follows (Yes 1468 or Not applicable = NA) : 1470 +--+---------------------------------------------------------+---------+ 1471 |Y?| RFC 8085 Recommendation | Section | 1472 +--+---------------------------------------------------------+---------+ 1473 Yes| MUST tolerate a wide range of Internet path conditions | 3 | 1474 NA | SHOULD use a full-featured transport (e.g., TCP) | | 1475 | | | 1476 Yes| SHOULD control rate of transmission | 3.1 | 1477 NA | SHOULD perform congestion control over all traffic | | 1478 | | | 1479 | for bulk transfers, | 3.1.2 | 1480 NA | SHOULD consider implementing TFRC | | 1481 NA | else, SHOULD in other ways use bandwidth similar to TCP | | 1482 | | | 1483 | for non-bulk transfers, | 3.1.3 | 1484 NA | SHOULD measure RTT and transmit max. 1 datagram/RTT | 3.1.1 | 1485 NA | else, SHOULD send at most 1 datagram every 3 seconds | | 1486 NA | SHOULD back-off retransmission timers following loss | | 1487 | | | 1488 Yes| SHOULD provide mechanisms to regulate the bursts of | 3.1.6 | 1489 | transmission | | 1490 | | | 1491 NA | MAY implement ECN; a specific set of application | 3.1.7 | 1492 | mechanisms are REQUIRED if ECN is used. 
| | 1493 | | | 1494 Yes| for DiffServ, SHOULD NOT rely on implementation of PHBs | 3.1.8 | 1495 | | | 1496 Yes| for QoS-enabled paths, MAY choose not to use CC | 3.1.9 | 1497 | | | 1498 Yes| SHOULD NOT rely solely on QoS for their capacity | 3.1.10 | 1499 | non-CC controlled flows SHOULD implement a transport | | 1500 | circuit breaker | | 1501 | MAY implement a circuit breaker for other applications | | 1502 | | | 1503 | for tunnels carrying IP traffic, | 3.1.11 | 1504 NA | SHOULD NOT perform congestion control | | 1505 NA | MUST correctly process the IP ECN field | | 1506 | | | 1507 | for non-IP tunnels or rate not determined by traffic, | | 1508 NA | SHOULD perform CC or use circuit breaker | 3.1.11 | 1509 NA | SHOULD restrict types of traffic transported by the | | 1510 | tunnel | | 1511 | | | 1512 Yes| SHOULD NOT send datagrams that exceed the PMTU, i.e., | 3.2 | 1513 Yes| SHOULD discover PMTU or send datagrams < minimum PMTU; | | 1514 NA | Specific application mechanisms are REQUIRED if PLPMTUD | | 1515 | is used. | | 1516 | | | 1517 Yes| SHOULD handle datagram loss, duplication, reordering | 3.3 | 1518 NA | SHOULD be robust to delivery delays up to 2 minutes | | 1519 | | | 1520 Yes| SHOULD enable IPv4 UDP checksum | 3.4 | 1521 Yes| SHOULD enable IPv6 UDP checksum; Specific application | 3.4.1 | 1522 | mechanisms are REQUIRED if a zero IPv6 UDP checksum is | | 1523 | used. | | 1524 | | | 1525 NA | SHOULD provide protection from off-path attacks | 5.1 | 1526 | else, MAY use UDP-Lite with suitable checksum coverage | 3.4.2 | 1527 | | | 1528 NA | SHOULD NOT always send middlebox keep-alive messages | 3.5 | 1529 NA | MAY use keep-alives when needed (min. interval 15 sec) | | 1530 | | | 1532 Yes| Applications specified for use in limited use (or | 3.6 | 1533 | controlled environments) SHOULD identify equivalent | | 1534 | mechanisms and describe their use case. | | 1535 | | | 1536 NA | Bulk-multicast apps SHOULD implement congestion control | 4.1.1 | 1537 | | | 1538 NA | Low volume multicast apps SHOULD implement congestion | 4.1.2 | 1539 | control | | 1540 | | | 1541 NA | Multicast apps SHOULD use a safe PMTU | 4.2 | 1542 | | | 1543 Yes| SHOULD avoid using multiple ports | 5.1.2 | 1544 Yes| MUST check received IP source address | | 1545 | | | 1546 NA | SHOULD validate payload in ICMP messages | 5.2 | 1547 | | | 1548 Yes| SHOULD use a randomized source port or equivalent | 6 | 1549 | technique, and, for client/server applications, SHOULD | | 1550 | send responses from source address matching request | | 1551 | 5.1 | | 1552 NA | SHOULD use standard IETF security protocols when needed | 6 | 1553 +---------------------------------------------------------+---------+ 1555 15. References 1557 15.1. Normative References 1559 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1560 Requirement Levels", BCP 14, RFC 2119, 1561 DOI 10.17487/RFC2119, March 1997, 1562 . 1564 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1565 "Framework for IP Performance Metrics", RFC 2330, 1566 DOI 10.17487/RFC2330, May 1998, 1567 . 1569 [RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip 1570 Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681, 1571 September 1999, . 1573 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. 1574 Zekauskas, "A One-way Active Measurement Protocol 1575 (OWAMP)", RFC 4656, DOI 10.17487/RFC4656, September 2006, 1576 . 1578 [RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, 1579 S., and J. 
Perser, "Packet Reordering Metrics", RFC 4737, 1580 DOI 10.17487/RFC4737, November 2006, 1581 . 1583 [RFC5357] Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J. 1584 Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)", 1585 RFC 5357, DOI 10.17487/RFC5357, October 2008, 1586 . 1588 [RFC6438] Carpenter, B. and S. Amante, "Using the IPv6 Flow Label 1589 for Equal Cost Multipath Routing and Link Aggregation in 1590 Tunnels", RFC 6438, DOI 10.17487/RFC6438, November 2011, 1591 . 1593 [RFC7497] Morton, A., "Rate Measurement Test Protocol Problem 1594 Statement and Requirements", RFC 7497, 1595 DOI 10.17487/RFC7497, April 2015, 1596 . 1598 [RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, 1599 Ed., "A One-Way Loss Metric for IP Performance Metrics 1600 (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January 1601 2016, . 1603 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1604 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1605 May 2017, . 1607 [RFC8468] Morton, A., Fabini, J., Elkins, N., Ackermann, M., and V. 1608 Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for 1609 the IP Performance Metrics (IPPM) Framework", RFC 8468, 1610 DOI 10.17487/RFC8468, November 2018, 1611 . 1613 15.2. Informative References 1615 [copycat] Edleine, K., Kuhlewind, K., Trammell, B., and B. Donnet, 1616 "copycat: Testing Differential Treatment of New Transport 1617 Protocols in the Wild (ANRW '17)", July 2017, 1618 . 1620 [LS-SG12-A] 1621 12, I. S., "LS - Harmonization of IP Capacity and Latency 1622 Parameters: Revision of Draft Rec. Y.1540 on IP packet 1623 transfer performance parameters and New Annex A with Lab 1624 Evaluation Plan", May 2019, 1625 . 1627 [LS-SG12-B] 1628 12, I. S., "LS on harmonization of IP Capacity and Latency 1629 Parameters: Consent of Draft Rec. Y.1540 on IP packet 1630 transfer performance parameters and New Annex A with Lab & 1631 Field Evaluation Plans", March 2019, 1632 . 1634 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 1635 Network Interconnect Devices", RFC 2544, 1636 DOI 10.17487/RFC2544, March 1999, 1637 . 1639 [RFC3148] Mathis, M. and M. Allman, "A Framework for Defining 1640 Empirical Bulk Transfer Capacity Metrics", RFC 3148, 1641 DOI 10.17487/RFC3148, July 2001, 1642 . 1644 [RFC5136] Chimento, P. and J. Ishac, "Defining Network Capacity", 1645 RFC 5136, DOI 10.17487/RFC5136, February 2008, 1646 . 1648 [RFC6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton, 1649 "Applicability Statement for RFC 2544: Use on Production 1650 Networks Considered Harmful", RFC 6815, 1651 DOI 10.17487/RFC6815, November 2012, 1652 . 1654 [RFC7312] Fabini, J. and A. Morton, "Advanced Stream and Sampling 1655 Framework for IP Performance Metrics (IPPM)", RFC 7312, 1656 DOI 10.17487/RFC7312, August 2014, 1657 . 1659 [RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 1660 Aitken, P., and A. Akhter, "A Framework for Large-Scale 1661 Measurement of Broadband Performance (LMAP)", RFC 7594, 1662 DOI 10.17487/RFC7594, September 2015, 1663 . 1665 [RFC7799] Morton, A., "Active and Passive Metrics and Methods (with 1666 Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799, 1667 May 2016, . 1669 [RFC8085] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage 1670 Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085, 1671 March 2017, . 1673 [RFC8337] Mathis, M. and A. Morton, "Model-Based Metrics for Bulk 1674 Transport Capacity", RFC 8337, DOI 10.17487/RFC8337, March 1675 2018, . 
1677 [TR-471] Morton, A., "Broadband Forum TR-471: IP Layer Capacity 1678 Metrics and Measurement", July 2020, 1679 . 1682 [udpst] udpst Project Collaborators, "UDP Speed Test Open 1683 Broadband project", December 2020, 1684 . 1686 [Y.1540] Y.1540, I. R., "Internet protocol data communication 1687 service - IP packet transfer and availability performance 1688 parameters", December 2019, 1689 . 1691 [Y.Sup60] Morton, A., "Recommendation Y.Sup60, (09/20) Interpreting 1692 ITU-T Y.1540 maximum IP-layer capacity measurements, and 1693 Errata", September 2020, 1694 . 1696 Authors' Addresses 1698 Al Morton 1699 AT&T Labs 1700 200 Laurel Avenue South 1701 Middletown, NJ 07748 1702 USA 1704 Phone: +1 732 420 1571 1705 Fax: +1 732 368 1192 1706 Email: acm@research.att.com 1708 Ruediger Geib 1709 Deutsche Telekom 1710 Heinrich Hertz Str. 3-7 1711 Darmstadt 64295 1712 Germany 1714 Phone: +49 6151 5812747 1715 Email: Ruediger.Geib@telekom.de 1716 Len Ciavattone 1717 AT&T Labs 1718 200 Laurel Avenue South 1719 Middletown, NJ 07748 1720 USA 1722 Email: lencia@att.com