Network Working Group                                          A. Morton
Internet-Draft                                                  AT&T Labs
Intended status: Standards Track                                  R. Geib
Expires: December 11, 2021                               Deutsche Telekom
                                                            L. Ciavattone
                                                                AT&T Labs
                                                             June 9, 2021

             Metrics and Methods for One-way IP Capacity
              draft-ietf-ippm-capacity-metric-method-12

Abstract

   This memo revisits the problem of Network Capacity metrics first
   examined in RFC 5136.  The memo specifies a more practical Maximum
   IP-Layer Capacity metric definition catering for measurement
   purposes, and outlines the corresponding methods of measurement.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 11, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.
All rights reserved. 42 This document is subject to BCP 78 and the IETF Trust's Legal 43 Provisions Relating to IETF Documents 44 (https://trustee.ietf.org/license-info) in effect on the date of 45 publication of this document. Please review these documents 46 carefully, as they describe your rights and restrictions with respect 47 to this document. Code Components extracted from this document must 48 include Simplified BSD License text as described in Section 4.e of 49 the Trust Legal Provisions and are provided without warranty as 50 described in the Simplified BSD License. 52 Table of Contents 54 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 55 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4 56 2. Scope, Goals, and Applicability . . . . . . . . . . . . . . . 4 57 3. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . 5 58 4. General Parameters and Definitions . . . . . . . . . . . . . 6 59 5. IP-Layer Capacity Singleton Metric Definitions . . . . . . . 8 60 5.1. Formal Name . . . . . . . . . . . . . . . . . . . . . . . 8 61 5.2. Parameters . . . . . . . . . . . . . . . . . . . . . . . 8 62 5.3. Metric Definitions . . . . . . . . . . . . . . . . . . . 8 63 5.4. Related Round-Trip Delay and One-way Loss Definitions . . 9 64 5.5. Discussion . . . . . . . . . . . . . . . . . . . . . . . 10 65 5.6. Reporting the Metric . . . . . . . . . . . . . . . . . . 10 66 6. Maximum IP-Layer Capacity Metric Definitions (Statistic) . . 10 67 6.1. Formal Name . . . . . . . . . . . . . . . . . . . . . . . 10 68 6.2. Parameters . . . . . . . . . . . . . . . . . . . . . . . 11 69 6.3. Metric Definitions . . . . . . . . . . . . . . . . . . . 11 70 6.4. Related Round-Trip Delay and One-way Loss Definitions . . 13 71 6.5. Discussion . . . . . . . . . . . . . . . . . . . . . . . 13 72 6.6. Reporting the Metric . . . . . . . . . . . . . . . . . . 13 73 7. IP-Layer Sender Bit Rate Singleton Metric Definitions . . . . 14 74 7.1. Formal Name . . . . . . . . . . . . . . . . . . . . . . . 14 75 7.2. Parameters . . . . . . . . . . . . . . . . . . . . . . . 14 76 7.3. Metric Definition . . . . . . . . . . . . . . . . . . . . 15 77 7.4. Discussion . . . . . . . . . . . . . . . . . . . . . . . 15 78 7.5. Reporting the Metric . . . . . . . . . . . . . . . . . . 15 79 8. Method of Measurement . . . . . . . . . . . . . . . . . . . . 15 80 8.1. Load Rate Adjustment Algorithm . . . . . . . . . . . . . 16 81 8.2. Measurement Qualification or Verification . . . . . . . . 21 82 8.3. Measurement Considerations . . . . . . . . . . . . . . . 22 83 8.4. Running Code . . . . . . . . . . . . . . . . . . . . . . 24 84 9. Reporting Formats . . . . . . . . . . . . . . . . . . . . . . 25 85 9.1. Configuration and Reporting Data Formats . . . . . . . . 27 86 10. Security Considerations . . . . . . . . . . . . . . . . . . . 27 87 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28 88 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 28 89 13. Appendix A - Load Rate Adjustment Pseudo Code . . . . . . . . 28 90 14. Appendix B - RFC 8085 UDP Guidelines Check . . . . . . . . . 29 91 14.1. Assessment of Mandatory Requirements . . . . . . . . . . 29 92 14.2. Assessment of Recommendations . . . . . . . . . . . . . 31 93 15. References . . . . . . . . . . . . . . . . . . . . . . . . . 34 94 15.1. Normative References . . . . . . . . . . . . . . . . . . 34 95 15.2. Informative References . . . . . . . . . . . . . . . . . 35 96 Authors' Addresses . . . . . . . . . . . . . . 
. . . . . . . . . 37 98 1. Introduction 100 The IETF's efforts to define Network and Bulk Transport Capacity have 101 been chartered and progressed for over twenty years. Over that time, 102 the performance community has seen development of Informative 103 definitions in [RFC3148] for Framework for Bulk Transport Capacity 104 (BTC), RFC 5136 for Network Capacity and Maximum IP-Layer Capacity, 105 and the Experimental metric definitions and methods in [RFC8337], 106 Model-Based Metrics for BTC. 108 This memo revisits the problem of Network Capacity metrics examined 109 first in [RFC3148] and later in [RFC5136]. Maximum IP-Layer Capacity 110 and [RFC3148] Bulk Transfer Capacity (goodput) are different metrics. 111 Maximum IP-Layer Capacity is like the theoretical goal for goodput. 112 There are many metrics in [RFC5136], such as Available Capacity. 113 Measurements depend on the network path under test and the use case. 114 Here, the main use case is to assess the maximum capacity of one or 115 more networks where the subscriber receives specific performance 116 assurances, sometimes referred to as the Internet access, or where a 117 limit of the technology used on a path is being tested. For example, 118 when a user subscribes to a 1 Gbps service, then the user, the 119 service provider, and possibly other parties want to assure that 120 performance level is delivered. When a test confirms the subscribed 121 performance level, then a tester can seek the location of a 122 bottleneck elsewhere. 124 This memo recognizes the importance of a definition of a Maximum IP- 125 Layer Capacity Metric at a time when Internet subscription speeds 126 have increased dramatically; a definition that is both practical and 127 effective for the performance community's needs, including Internet 128 users. The metric definition is intended to use Active Methods of 129 Measurement [RFC7799], and a method of measurement is included. 131 The most direct active measurement of IP-Layer Capacity would use IP 132 packets, but in practice a transport header is needed to traverse 133 address and port translators. UDP offers the most direct assessment 134 possibility, and in the [copycat] measurement study to investigate 135 whether UDP is viable as a general Internet transport protocol, the 136 authors found that a high percentage of paths tested support UDP 137 transport. A number of liaisons have been exchanged on this topic 138 [LS-SG12-A] [LS-SG12-B], discussing the laboratory and field tests 139 that support the UDP-based approach to IP-Layer Capacity measurement. 141 This memo also recognizes the many updates to the IP Performance 142 Metrics Framework [RFC2330] published over twenty years, and makes 143 use of [RFC7312] for Advanced Stream and Sampling Framework, and 144 [RFC8468] with IPv4, IPv6, and IPv4-IPv6 Coexistence Updates. 146 Appendix A describes the load rate adjustment algorithm in pseudo- 147 code. Appendix B discusses the algorithm's compliance with 148 [RFC8085]. 150 1.1. Requirements Language 152 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 153 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 154 "OPTIONAL" in this document are to be interpreted as described in BCP 155 14[RFC2119] [RFC8174] when, and only when, they appear in all 156 capitals, as shown here. 158 2. 
Scope, Goals, and Applicability 160 The scope of this memo is to define Active Measurement metrics and 161 corresponding methods to unambiguously determine Maximum IP-Layer 162 Capacity and useful secondary metrics. 164 Another goal is to harmonize the specified metric and method across 165 the industry, and this memo is the vehicle that captures IETF 166 consensus, possibly resulting in changes to the specifications of 167 other Standards Development Organizations (SDO) (through each SDO's 168 normal contribution process, or through liaison exchange). 170 Secondary goals are to add considerations for test procedures, and to 171 provide interpretation of the Maximum IP-Layer Capacity results (to 172 identify cases where more testing is warranted, possibly with 173 alternate configurations). Fostering the development of protocol 174 support for this metric and method of measurement is also a goal of 175 this memo (all active testing protocols currently defined by the IPPM 176 WG are UDP-based, meeting a key requirement of these methods). The 177 supporting protocol development to measure this metric according to 178 the specified method is a key future contribution to Internet 179 measurement. 181 The load rate adjustment algorithm's scope is limited to helping 182 determine the Maximum IP-Layer Capacity in the context of an 183 infrequent, diagnostic, short term measurement. It is RECOMMENDED to 184 discontinue non-measurement traffic that shares a subscriber's 185 dedicated resources while testing: measurements may not be accurate 186 and throughput of competing elastic traffic may be greatly reduced. 188 The primary application of the metric and method of measurement 189 described here is the same as in Section 2 of [RFC7497] where: 191 o The access portion of the network is the focus of this problem 192 statement. The user typically subscribes to a service with 193 bidirectional Internet access partly described by rates in bits 194 per second. 196 In addition, the use of the load rate adjustment algorithm described 197 in section 8.1 has the following additional applicability 198 limitations: 200 - MUST only be used in the application of diagnostic and operations 201 measurements as described in this memo 203 - MUST only be used in circumstances consistent with Section 10, 204 Security Considerations 206 - If a network operator is certain of the IP-layer capacity to be 207 validated, then testing MAY start with a fixed rate test at the IP- 208 layer capacity and avoid activating the load adjustment algorithm. 209 However, the stimulus for a diagnostic test (such as a subscriber 210 request) strongly implies that there is no certainty and the load 211 adjustment algorithm is RECOMMENDED. 213 Further, the metric and method of measurement are intended for use 214 where specific exact path information is unknown within a range of 215 possible values: 217 - the subscriber's exact Maximum IP-Layer Capacity is unknown (which 218 is sometimes the case; service rates can be increased due to upgrades 219 without a subscriber's request, or to provide a surplus to compensate 220 for possible underestimates of TCP-based testing). 222 - the size of the bottleneck buffer is unknown. 224 Finally, the measurement system's load rate adjustment algorithm 225 SHALL NOT be provided with the exact capacity value to be validated a 226 priori. This restriction fosters a fair result, and removes an 227 opportunity for bad actors to operate with knowledge of the "right 228 answer". 230 3. 
Motivation 232 As with any problem that has been worked for many years in various 233 SDOs without any special attempts at coordination, various solutions 234 for metrics and methods have emerged. 236 There are five factors that have changed (or begun to change) in the 237 2013-2019 time frame, and the presence of any one of them on the path 238 requires features in the measurement design to account for the 239 changes: 241 1. Internet access is no longer the bottleneck for many users (but 242 subscribers expect network providers to honor contracted 243 performance). 245 2. Both transfer rate and latency are important to user's 246 satisfaction. 248 3. UDP's growing role in Transport, in areas where TCP once 249 dominated. 251 4. Content and applications are moving physically closer to users. 253 5. There is less emphasis on ISP gateway measurements, possibly due 254 to less traffic crossing ISP gateways in the future. 256 4. General Parameters and Definitions 258 This section lists the REQUIRED input factors to specify a Sender or 259 Receiver metric. 261 o Src, one of the addresses of a host (such as a globally routable 262 IP address). 264 o Dst, one of the addresses of a host (such as a globally routable 265 IP address). 267 o MaxHops, the limit on the number of Hops a specific packet may 268 visit as it traverses from the host at Src to the host at Dst 269 (implemented in the TTL or Hop Limit). 271 o T0, the time at the start of measurement interval, when packets 272 are first transmitted from the Source. 274 o I, the nominal duration of a measurement interval at the 275 destination (default 10 sec) 277 o dt, the nominal duration of m equal sub-intervals in I at the 278 destination (default 1 sec) 280 o dtn, the beginning boundary of a specific sub-interval, n, one of 281 m sub-intervals in I 283 o FT, the feedback time interval between status feedback messages 284 communicating measurement results, sent from the receiver to 285 control the sender. The results are evaluated throughout the test 286 to determine how to adjust the current offered load rate at the 287 sender (default 50ms) 289 o Tmax, a maximum waiting time for test packets to arrive at the 290 destination, set sufficiently long to disambiguate packets with 291 long delays from packets that are discarded (lost), such that the 292 distribution of one-way delay is not truncated. 294 o F, the number of different flows synthesized by the method 295 (default 1 flow) 297 o flow, the stream of packets with the same n-tuple of designated 298 header fields that (when held constant) result in identical 299 treatment in a multi-path decision (such as the decision taken in 300 load balancing). Note: The IPv6 flow label SHOULD be included in 301 the flow definition when routers have complied with [RFC6438] 302 guidelines. 304 o Type-P, the complete description of the test packets for which 305 this assessment applies (including the flow-defining fields). 306 Note that the UDP transport layer is one requirement for test 307 packets specified below. Type-P is a parallel concept to 308 "population of interest" defined in clause 6.1.1 of[Y.1540]. 310 o Payload Content, this IPPM Framework-conforming metric and method 311 includes packet payload content as an aspect of the Type-P 312 parameter, which can help to improve measurement determinism. 
If 313 there is payload compression in the path and tests intend to 314 characterize a possible advantage due to compression, then payload 315 content SHOULD be supplied by a pseudo-random sequence generator, 316 by using part of a compressed file, or by other means. See 317 Section 3.1.2 of [RFC7312]. 319 o PM, a list of fundamental metrics, such as loss, delay, and 320 reordering, and corresponding target performance threshold. At 321 least one fundamental metric and target performance threshold MUST 322 be supplied (such as One-way IP Packet Loss [RFC7680] equal to 323 zero). 325 A non-Parameter which is required for several metrics is defined 326 below: 328 o T, the host time of the *first* test packet's *arrival* as 329 measured at the destination Measurement Point, or MP(Dst). There 330 may be other packets sent between Source and Destination hosts 331 that are excluded, so this is the time of arrival of the first 332 packet used for measurement of the metric. 334 Note that time stamp format and resolution, sequence numbers, etc. 335 will be established by the chosen test protocol standard or 336 implementation. 338 5. IP-Layer Capacity Singleton Metric Definitions 340 This section sets requirements for the singleton metric that supports 341 the Maximum IP-Layer Capacity Metric definition in Section 6. 343 5.1. Formal Name 345 Type-P-One-way-IP-Capacity, or informally called IP-Layer Capacity. 347 Note that Type-P depends on the chosen method. 349 5.2. Parameters 351 This section lists the REQUIRED input factors to specify the metric, 352 beyond those listed in Section 4. 354 No additional Parameters are needed. 356 5.3. Metric Definitions 358 This section defines the REQUIRED aspects of the measurable IP-Layer 359 Capacity metric (unless otherwise indicated) for measurements between 360 specified Source and Destination hosts: 362 Define the IP-Layer Capacity, C(T,dt,PM), to be the number of IP- 363 Layer bits (including header and data fields) in packets that can be 364 transmitted from the Src host and correctly received by the Dst host 365 during one contiguous sub-interval, dt in length. The IP-Layer 366 Capacity depends on the Src and Dst hosts, the host addresses, and 367 the path between the hosts. 369 The number of these IP-Layer bits is designated n0[dtn,dtn+1] for a 370 specific dt. 372 When the packet size is known and of fixed size, the packet count 373 during a single sub-interval dt multiplied by the total bits in IP 374 header and data fields is equal to n0[dtn,dtn+1]. 376 Anticipating a Sample of Singletons, the number of sub-intervals with 377 duration dt MUST be set to a natural number m, so that T+I = T + m*dt 378 with dtn+1 - dtn = dt for 1 <= n <= m. 380 Parameter PM represents other performance metrics [see section 5.4 381 below]; their measurement results SHALL be collected during 382 measurement of IP-Layer Capacity and associated with the 383 corresponding dtn for further evaluation and reporting. Users SHALL 384 specify the parameter Tmax as required by each metric's reference 385 definition. 

   Mathematically, this definition is represented as (for each n):

                      ( n0[dtn,dtn+1] )
      C(T,dt,PM) = -------------------------
                             dt

                Equation for IP-Layer Capacity

   and:

   o  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets [RFC8468] from the
      Src host and correctly received by the Dst host during one
      contiguous sub-interval, dt in length, during the interval [T,
      T+I],

   o  C(T,dt,PM), the IP-Layer Capacity, corresponds to the value of n0
      measured in any sub-interval beginning at dtn, divided by the
      length of sub-interval, dt.

   o  PM represents other performance metrics [see section 5.4 below];
      their measurement results SHALL be collected during measurement
      of IP-Layer Capacity and associated with the corresponding dtn
      for further evaluation and reporting.

   o  all sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   o  The bit rate of the physical interface of the measurement devices
      MUST be higher than the smallest of the links on the path whose
      C(T,I,PM) is to be measured (the bottleneck link).

   Measurements according to these definitions SHALL use the UDP
   transport layer.  Standard-formed packets are specified in Section 5
   of [RFC8468].  The measurement SHOULD use a randomized Source port
   or equivalent technique, and SHOULD send responses from the Source
   address matching the test packet destination address.

   Some compression effects on measurement are discussed in Section 6
   of [RFC8468].

5.4.  Related Round-Trip Delay and One-way Loss Definitions

   RTD[dtn,dtn+1] is defined as a Sample of the [RFC2681] Round-trip
   Delay between the Src host and the Dst host over the interval
   [T,T+I] (that contains equal non-overlapping intervals of dt).  The
   "reasonable period of time" in [RFC2681] is the parameter Tmax in
   this memo.  The statistics used to summarize RTD[dtn,dtn+1] MAY
   include the minimum, maximum, median, and mean, and the range =
   (maximum - minimum) is referred to below in Section 8.1 for load
   adjustment purposes.

   OWL[dtn,dtn+1] is defined as a Sample of the [RFC7680] One-way Loss
   between the Src host and the Dst host over the interval [T,T+I]
   (that contains equal non-overlapping intervals of dt).  The
   statistics used to summarize OWL[dtn,dtn+1] MAY include the lost
   packet count and the lost packet ratio.

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

5.5.  Discussion

   See the corresponding section for Maximum IP-Layer Capacity.

5.6.  Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single
   Megabit resolution, in units of Megabits per second (Mbps) (which is
   1,000,000 bits per second to avoid any confusion).

   The related One-way Loss metric and Round Trip Delay measurements
   for the same Singleton SHALL be reported, also with meaningful
   resolution for the values measured.

   Individual Capacity measurements MAY be reported in a manner
   consistent with the Maximum IP-Layer Capacity, see Section 9.
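
   As a non-normative illustration of the singleton definition and the
   reporting units above, the following sketch computes C(T,dt,PM) for
   one sub-interval from a count of correctly received fixed-size
   packets; the structure and function names are hypothetical and are
   not part of this specification:

      #include <stdint.h>

      /* Hypothetical per-sub-interval accumulator at MP(Dst). */
      struct subinterval {
          uint64_t rx_packets;         /* packets correctly received in dt */
          uint32_t ip_bits_per_packet; /* IP header + payload bits (fixed size) */
          double   dt_sec;             /* sub-interval duration, default 1 sec */
      };

      /* n0[dtn,dtn+1]: total IP-Layer bits received in the sub-interval. */
      static uint64_t n0_bits(const struct subinterval *s)
      {
          return s->rx_packets * (uint64_t)s->ip_bits_per_packet;
      }

      /* C(T,dt,PM) for this sub-interval, reported in Mbps
         (1 Mbps = 1,000,000 bits per second). */
      static double capacity_mbps(const struct subinterval *s)
      {
          return ((double)n0_bits(s) / s->dt_sec) / 1e6;
      }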

6.  Maximum IP-Layer Capacity Metric Definitions (Statistic)

   This section sets requirements for the following components to
   support the Maximum IP-Layer Capacity Metric.

6.1.  Formal Name

   Type-P-One-way-Max-IP-Capacity, or informally called Maximum IP-
   Layer Capacity.

   Note that Type-P depends on the chosen method.

6.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters or definitions are needed.

6.3.  Metric Definitions

   This section defines the REQUIRED aspects of the Maximum IP-Layer
   Capacity metric (unless otherwise indicated) for measurements
   between specified Source and Destination hosts:

   Define the Maximum IP-Layer Capacity, Maximum_C(T,I,PM), to be the
   maximum number of IP-Layer bits n0[dtn,dtn+1] divided by dt that can
   be transmitted in packets from the Src host and correctly received
   by the Dst host, over all dt length intervals in [T, T+I], and
   meeting the PM criteria.  Equivalently the Maximum of a Sample of
   size m of C(T,I,PM) collected during the interval [T, T+I] and
   meeting the PM criteria.

   The number of sub-intervals with duration dt MUST be set to a
   natural number m, so that T+I = T + m*dt with dtn+1 - dtn = dt for
   1 <= n <= m.

   Parameter PM represents the other performance metrics (see
   Section 6.4 below) and their measurement results for the Maximum IP-
   Layer Capacity.  At least one target performance threshold (PM
   criterion) MUST be defined.  If more than one metric and target
   performance threshold are defined, then the sub-interval with
   maximum number of bits transmitted MUST meet all the target
   performance thresholds.  Users SHALL specify the parameter Tmax as
   required by each metric's reference definition.

   Mathematically, this definition can be represented as:

                             max   ( n0[dtn,dtn+1] )
                           [T,T+I]
      Maximum_C(T,I,PM) = -------------------------
                                      dt
      where:
      T                                       T+I
      _________________________________________
      |   |   |   |   |   |   |   |   |   |   |
      dtn=1   2   3   4   5   6   7   8   9   10  n+1
                                                  n=m

                 Equation for Maximum Capacity

   and:

   o  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets from the Src host
      and correctly received by the Dst host during one contiguous sub-
      interval, dt in length, during the interval [T, T+I],

   o  Maximum_C(T,I,PM) the Maximum IP-Layer Capacity, corresponds to
      the maximum value of n0 measured in any sub-interval beginning at
      dtn, divided by the constant length of all sub-intervals, dt.

   o  PM represents the other performance metrics (see Section 5.4) and
      their measurement results for the Maximum IP-Layer Capacity.  At
      least one target performance threshold (PM criterion) MUST be
      defined.

   o  all sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   o  The bit rate of the physical interface of the measurement systems
      MUST be higher than the smallest of the links on the path whose
      Maximum_C(T,I,PM) is to be measured (the bottleneck link).

   In this definition, the m sub-intervals can be viewed as trials when
   the Src host varies the transmitted packet rate, searching for the
   maximum n0 that meets the PM criteria measured at the Dst host in a
   test of duration, I.  When the transmitted packet rate is held
   constant at the Src host, the m sub-intervals may also be viewed as
   trials to evaluate the stability of n0 and metric(s) in the PM list
   over all dt-length intervals in I.

   Measurements according to these definitions SHALL use the UDP
   transport layer.

6.4.
Related Round-Trip Delay and One-way Loss Definitions 564 RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4. Here, 565 the test intervals are increased to match the capacity Samples, 566 RTD[T,I] and OWL[T,I]. 568 The interval dtn,dtn+1 where Maximum_C[T,I,PM] occurs is the 569 reporting sub-interval within RTD[T,I] and OWL[T,I]. 571 Other metrics MAY be measured: one-way reordering, duplication, and 572 delay variation. 574 6.5. Discussion 576 If traffic conditioning (e.g., shaping, policing) applies along a 577 path for which Maximum_C(T,I,PM) is to be determined, different 578 values for dt SHOULD be picked and measurements be executed during 579 multiple intervals [T, T+I]. Each duration dt SHOULD be chosen so 580 that it is an integer multiple of increasing values k times 581 serialization delay of a path MTU at the physical interface speed 582 where traffic conditioning is expected. This should avoid taking 583 configured burst tolerance singletons as a valid Maximum_C(T,I,PM) 584 result. 586 A Maximum_C(T,I,PM) without any indication of bottleneck congestion, 587 be that an increasing latency, packet loss or ECN marks during a 588 measurement interval I, is likely to underestimate Maximum_C(T,I,PM). 590 6.6. Reporting the Metric 592 The IP-Layer Capacity SHOULD be reported with at least single Megabit 593 resolution, in units of Megabits per second (Mbps) (which is 594 1,000,000 bits per second to avoid any confusion). 596 The related One-way Loss metric and Round Trip Delay measurements for 597 the same Singleton SHALL be reported, also with meaningful resolution 598 for the values measured. 600 When there are demonstrated and repeatable Capacity modes in the 601 Sample, then the Maximum IP-Layer Capacity SHALL be reported for each 602 mode, along with the relative time from the beginning of the stream 603 that the mode was observed to be present. Bimodal Maximum IP-Layer 604 Capacities have been observed with some services, sometimes called a 605 "turbo mode" intending to deliver short transfers more quickly, or 606 reduce the initial buffering time for some video streams. Note that 607 modes lasting less than dt duration will not be detected. 609 Some transmission technologies have multiple methods of operation 610 that may be activated when channel conditions degrade or improve, and 611 these transmission methods may determine the Maximum IP-Layer 612 Capacity. Examples include line-of-sight microwave modulator 613 constellations, or cellular modem technologies where the changes may 614 be initiated by a user moving from one coverage area to another. 615 Operation in the different transmission methods may be observed over 616 time, but the modes of Maximum IP-Layer Capacity will not be 617 activated deterministically as with the "turbo mode" described in the 618 paragraph above. 620 7. IP-Layer Sender Bit Rate Singleton Metric Definitions 622 This section sets requirements for the following components to 623 support the IP-Layer Sender Bitrate Metric. This metric helps to 624 check that the sender actually generated the desired rates during a 625 test, and measurement takes place at the Src host to network path 626 interface (or as close as practical within the Src host). It is not 627 a metric for path performance. 629 7.1. Formal Name 631 Type-P-IP-Sender-Bit-Rate, or informally called IP-Layer Sender 632 Bitrate. 634 Note that Type-P depends on the chosen method. 636 7.2. 
Parameters 638 This section lists the REQUIRED input factors to specify the metric, 639 beyond those listed in Section 4. 641 o S, the duration of the measurement interval at the Source 643 o st, the nominal duration of N sub-intervals in S (default st = 644 0.05 seconds) 646 o stn, the beginning boundary of a specific sub-interval, n, one of 647 N sub-intervals in S 649 S SHALL be longer than I, primarily to account for on-demand 650 activation of the path, or any preamble to testing required, and the 651 delay of the path. 653 st SHOULD be much smaller than the sub-interval dt and on the same 654 order as FT, otherwise the rate measurement will include many rate 655 adjustments and include more time smoothing, thus missing the Maximum 656 IP-Layer Capacity. The st parameter does not have relevance when the 657 Source is transmitting at a fixed rate throughout S. 659 7.3. Metric Definition 661 This section defines the REQUIRED aspects of the IP-Layer Sender 662 Bitrate metric (unless otherwise indicated) for measurements at the 663 specified Source on packets addressed for the intended Destination 664 host and matching the required Type-P: 666 Define the IP-Layer Sender Bit Rate, B(S,st), to be the number of IP- 667 Layer bits (including header and data fields) that are transmitted 668 from the Source with address pair Src and Dst during one contiguous 669 sub-interval, st, during the test interval S (where S SHALL be longer 670 than I), and where the fixed-size packet count during that single 671 sub-interval st also provides the number of IP-Layer bits in any 672 interval, [stn,stn+1]. 674 Measurements according to these definitions SHALL use the UDP 675 transport layer. Any feedback from Dst host to Src host received by 676 Src host during an interval [stn,stn+1] SHOULD NOT result in an 677 adaptation of the Src host traffic conditioning during this interval 678 (rate adjustment occurs on st interval boundaries). 680 7.4. Discussion 682 Both the Sender and Receiver or (Source and Destination) bit rates 683 SHOULD be assessed as part of an IP-Layer Capacity measurement. 684 Otherwise, an unexpected sending rate limitation could produce an 685 erroneous Maximum IP-Layer Capacity measurement. 687 7.5. Reporting the Metric 689 The IP-Layer Sender Bit Rate SHALL be reported with meaningful 690 resolution, in units of Megabits per second (which is 1,000,000 bits 691 per second to avoid any confusion). 693 Individual IP-Layer Sender Bit Rate measurements are discussed 694 further in Section 9. 696 8. Method of Measurement 698 The architecture of the method REQUIRES two cooperating hosts 699 operating in the roles of Src (test packet sender) and Dst 700 (receiver), with a measured path and return path between them. 702 The duration of a test, parameter I, MUST be constrained in a 703 production network, since this is an active test method and it will 704 likely cause congestion on the Src to Dst host path during a test. 706 8.1. Load Rate Adjustment Algorithm 708 The algorithm described in this section MUST NOT be used as a general 709 Congestion Control Algorithm (CCA). As stated in the Scope 710 Section 2, the load rate adjustment algorithm's goal is to help 711 determine the Maximum IP-Layer Capacity in the context of an 712 infrequent, diagnostic, short term measurement. There is a tradeoff 713 between test duration (also the test data volume) and algorithm 714 aggressiveness (speed of ramp-up and down to the Maximum IP-Layer 715 Capacity). 
The parameter values chosen below strike a well-tested 716 balance among these factors. 718 A table SHALL be pre-built (by the test initiator) defining all the 719 offered load rates that will be supported (R1 through Rn, in 720 ascending order, corresponding to indexed rows in the table). It is 721 RECOMMENDED that rates begin with 0.5 Mbps at index zero, use 1 Mbps 722 at index one, and then continue in 1 Mbps increments to 1 Gbps. 723 Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps 724 increments be used. Above 10 Gbps, increments of 1 Gbps are 725 RECOMMENDED. A higher initial IP-Layer Sender Bitrate might be 726 configured when the test operator is certain that the Maximum IP- 727 Layer Capacity is well-above the initial IP-Layer Sender Bitrate and 728 factors such as test duration and total test traffic play an 729 important role. The sending rate table SHOULD backet the maximum 730 capacity where it will make measurements, including constrained rates 731 less than 500kbps if applicable. 733 Each rate is defined as datagrams of size ss, sent as a burst of 734 count cc, each time interval tt (default for tt is 1ms, a likely 735 system tick-interval). While it is advantageous to use datagrams of 736 as large a size as possible, it may be prudent to use a slightly 737 smaller maximum that allows for secondary protocol headers and/or 738 tunneling without resulting in IP-Layer fragmentation. Selection of 739 a new rate is indicated by a calculation on the current row, Rx. For 740 example: 742 "Rx+1": the sender uses the next higher rate in the table. 744 "Rx-10": the sender uses the rate 10 rows lower in the table. 746 At the beginning of a test, the sender begins sending at rate R1 and 747 the receiver starts a feedback timer of duration FT (while awaiting 748 inbound datagrams). As datagrams are received they are checked for 749 sequence number anomalies (loss, out-of-order, duplication, etc.) and 750 the delay range is measured (one-way or round-trip). This 751 information is accumulated until the feedback timer FT expires and a 752 status feedback message is sent from the receiver back to the sender, 753 to communicate this information. The accumulated statistics are then 754 reset by the receiver for the next feedback interval. As feedback 755 messages are received back at the sender, they are evaluated to 756 determine how to adjust the current offered load rate (Rx). 758 If the feedback indicates that no sequence number anomalies were 759 detected AND the delay range was below the lower threshold, the 760 offered load rate is increased. If congestion has not been confirmed 761 up to this point (see below for the method to declare congestion), 762 the offered load rate is increased by more than one rate (e.g., 763 Rx+10). This allows the offered load to quickly reach a near-maximum 764 rate. Conversely, if congestion has been previously confirmed, the 765 offered load rate is only increased by one (Rx+1). However, if a 766 rate threshold between high and very high sending rates (such as 1 767 Gbps) is exceeded, the offered load rate is only increased by one 768 (Rx+1) above the rate threshold in any congestion state. 770 If the feedback indicates that sequence number anomalies were 771 detected OR the delay range was above the upper threshold, the 772 offered load rate is decreased. The RECOMMENDED threshold values are 773 0 for sequence number gaps and 30 ms for lower and 90 ms for upper 774 delay thresholds, respectively. 
   Also, if congestion is now confirmed for the first time by the
   current feedback message being processed, then the offered load rate
   is decreased by more than one rate (e.g., Rx-30).  This one-time
   reduction is intended to compensate for the fast initial ramp-up.
   In all other cases, the offered load rate is only decreased by one
   (Rx-1).

   If the feedback indicates that there were no sequence number
   anomalies AND the delay range was above the lower threshold, but
   below the upper threshold, the offered load rate is not changed.
   This allows time for recent changes in the offered load rate to
   stabilize, and the feedback to represent current conditions more
   accurately.

   Lastly, the method for inferring congestion is that there were
   sequence number anomalies AND/OR the delay range was above the upper
   threshold for two consecutive feedback intervals.  The algorithm
   described above is also illustrated in ITU-T Rec. Y.1540, 2020
   version [Y.1540], in Annex B, and implemented in the Appendix on
   Load Rate Adjustment Pseudo Code in this memo.

   The load rate adjustment algorithm MUST include timers that stop the
   test when received packet streams cease unexpectedly.  The timeout
   thresholds are provided in the table below, along with values for
   all other parameters and variables described in this section.  The
   operation of non-obvious parameters appears below:

   load packet timeout Operation:  The load packet timeout SHALL be
      reset to the configured value each time a load packet is
      received.  If the timeout expires, the receiver SHALL be closed
      and no further feedback sent.

   feedback message timeout Operation:  The feedback message timeout
      SHALL be reset to the configured value each time a feedback
      message is received.  If the timeout expires, the sender SHALL be
      closed and no further load packets sent.
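
   As a non-normative illustration only, the sketch below shows one way
   the two stop-test timers described above could be supervised.  The
   clock and shutdown helpers (now_ms, close_receiver, stop_sender) are
   hypothetical and not part of this specification; the timeout values
   correspond to the defaults in the parameter table that follows:

      #include <stdint.h>

      #define LOAD_PKT_TIMEOUT_MS  1000       /* default 1 sec (receiver) */
      #define FEEDBACK_TIMEOUT_MS  (20 * 50)  /* L*FT, L=20, FT=50ms (sender) */

      extern uint64_t now_ms(void);      /* hypothetical monotonic clock */
      extern void close_receiver(void);  /* stop sending feedback messages */
      extern void stop_sender(void);     /* stop sending load packets */

      static uint64_t last_load_pkt_ms;  /* receiver side */
      static uint64_t last_feedback_ms;  /* sender side */

      /* Reset the load packet timeout each time a load packet arrives. */
      void on_load_packet(void) { last_load_pkt_ms = now_ms(); }

      /* Reset the feedback message timeout each time feedback arrives. */
      void on_feedback_message(void) { last_feedback_ms = now_ms(); }

      /* Checked periodically at the receiver and sender, respectively. */
      void check_receiver_timeout(void) {
          if (now_ms() - last_load_pkt_ms > LOAD_PKT_TIMEOUT_MS)
              close_receiver();   /* no further feedback is sent */
      }

      void check_sender_timeout(void) {
          if (now_ms() - last_feedback_ms > FEEDBACK_TIMEOUT_MS)
              stop_sender();      /* no further load packets are sent */
      }
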
811 +-------------+-------------+---------------+-----------------------+ 812 | Parameter | Default | Tested Range | Expected Safe Range | 813 | | | or values | (not entirely tested, | 814 | | | | other | 815 | | | | values NOT | 816 | | | | RECOMMENDED) | 817 +-------------+-------------+---------------+-----------------------+ 818 | FT, | 50ms | 20ms, 50ms, | 20ms <= FT <= 250ms | 819 | feedback | | 100ms | Larger values may | 820 | time | | | slow the rate | 821 | interval | | | increase and fail to | 822 | | | | find the max | 823 +-------------+-------------+---------------+-----------------------+ 824 | Feedback | L*FT, L=20 | L=100 with | 0.5sec <= L*FT <= | 825 | message | (1sec with | FT=50ms | 30sec Upper limit for | 826 | timeout | FT=50ms) | (5sec) | very unreliable | 827 | (stop test) | | | test paths only | 828 +-------------+-------------+---------------+-----------------------+ 829 | load packet | 1sec | 5sec | 0.250sec - 30sec | 830 | timeout | | | Upper limit for very | 831 | (stop test) | | | unreliable test paths | 832 | | | | only | 833 +-------------+-------------+---------------+-----------------------+ 834 | table index | 0.5Mbps | 0.5Mbps | when testing <=10Gbps | 835 | 0 | | | | 836 +-------------+-------------+---------------+-----------------------+ 837 | table index | 1Mbps | 1Mbps | when testing <=10Gbps | 838 | 1 | | | | 839 +-------------+-------------+---------------+-----------------------+ 840 | table index | 1Mbps | 1Mbps<=rate<= | same as tested | 841 | (step) size | | 1Gbps | | 842 +-------------+-------------+---------------+-----------------------+ 843 | table index | 100Mbps | 1Gbps<=rate<= | same as tested | 844 | (step) | | 10Gbps | | 845 | size, | | | | 846 | rate>1Gbps | | | | 847 +-------------+-------------+---------------+-----------------------+ 848 | table index | 1Gbps | untested | >10Gbps | 849 | (step) | | | | 850 | size, | | | | 851 | rate>10Gbps | | | | 852 +-------------+-------------+---------------+-----------------------+ 853 | ss, UDP | none | <=1222 | Recommend max at | 854 | payload | | | largest value that | 855 | size, bytes | | | avoids fragmentation; | 856 | | | | use of too- | 857 | | | | small payload size | 858 | | | | might result in | 859 | | | | unexpected sender | 860 | | | | limitations. | 861 +-------------+-------------+---------------+-----------------------+ 862 | cc, burst | none | 1<=cc<= 100 | same as tested. Vary | 863 | count | | | cc as needed to | 864 | | | | create the desired | 865 | | | | maximum | 866 | | | | sending rate. Sender | 867 | | | | buffer size may limit | 868 | | | | cc in implementation. 
| 869 +-------------+-------------+---------------+-----------------------+ 870 | tt, burst | 100microsec | 100microsec, | available range of | 871 | interval | | 1msec | "tick" values (HZ | 872 | | | | param) | 873 +-------------+-------------+---------------+-----------------------+ 874 | low delay | 30ms | 5ms, 30ms | same as tested | 875 | range | | | | 876 | threshold | | | | 877 +-------------+-------------+---------------+-----------------------+ 878 | high delay | 90ms | 10ms, 90ms | same as tested | 879 | range | | | | 880 | threshold | | | | 881 +-------------+-------------+---------------+-----------------------+ 882 | sequence | 0 | 0, 100 | same as tested | 883 | error | | | | 884 | threshold | | | | 885 +-------------+-------------+---------------+-----------------------+ 886 | consecutive | 2 | 2 | Use values >1 to | 887 | errored | | | avoid misinterpreting | 888 | status | | | transient loss | 889 | report | | | | 890 | threshold | | | | 891 +-------------+-------------+---------------+-----------------------+ 892 | Fast mode | 10 | 10 | 2 <= steps <= 30 | 893 | increase, | | | | 894 | in table | | | | 895 | index steps | | | | 896 +-------------+-------------+---------------+-----------------------+ 897 | Fast mode | 3 * Fast | 3 * Fast mode | same as tested | 898 | decrease, | mode | increase | | 899 | in table | increase | | | 900 | index steps | | | | 901 +-------------+-------------+---------------+-----------------------+ 903 Parameters for Load Rate Adjustment Algorithm 905 As a consequence of default parameterization, the Number of table 906 steps in total for rates <10Gbps is 2000 (excluding index 0). 908 A related sender backoff response to network conditions occurs when 909 one or more status feedback messages fail to arrive at the sender. 911 If no status feedback messages arrive at the sender for the interval 912 greater than the Lost Status Backoff timeout: 914 UDRT + (2+w)*FT = Lost Status Backoff timeout 916 where: 917 UDRT = upper delay range threshold (default 90ms) 918 FT = feedback time interval (default 50ms) 919 w = number of repeated timeouts (w=0 initially, w++ on each 920 timeout, and reset to 0 when a message is received) 922 beginning when the last message (of any type) was successfully 923 received at the sender: 925 Then the offered load SHALL be decreased, following the same process 926 as when the feedback indicates presence of one or more sequence 927 number anomalies OR the delay range was above the upper threshold (as 928 described above), with the same load rate adjustment algorithm 929 variables in their current state. This means that rate reduction and 930 congestion confirmation can result from a three-way OR that includes 931 lost status feedback messages, sequence errors, or delay variation. 933 The RECOMMENDED initial value for w is 0, taking Round Trip Time 934 (RTT) less than FT into account. A test with RTT longer than FT is a 935 valid reason to increase the initial value of w appropriately. 936 Variable w SHALL be incremented by 1 whenever the Lost Status Backoff 937 timeout is exceeded. So with FT = 50ms and UDRT = 90ms, a status 938 feedback message loss would be declared at 190ms following a 939 successful message, again at 50ms after that (240ms total), and so 940 on. 942 Also, if congestion is now confirmed for the first time by a Lost 943 Status Backoff timeout, then the offered load rate is decreased by 944 more than one rate (e.g., Rx-30). 
This one-time reduction is 945 intended to compensate for the fast initial ramp-up. In all other 946 cases, the offered load rate is only decreased by one (Rx-1). 948 Appendix B discusses compliance with the applicable mandatory 949 requirements of [RFC8085], consistent with the goals of the IP-Layer 950 Capacity Metric and Method, including the load rate adjustment 951 algorithm described in this section. 953 8.2. Measurement Qualification or Verification 955 It is of course necessary to calibrate the equipment performing the 956 IP-Layer Capacity measurement, to ensure that the expected capacity 957 can be measured accurately, and that equipment choices (processing 958 speed, interface bandwidth, etc.) are suitably matched to the 959 measurement range. 961 When assessing a Maximum rate as the metric specifies, artificially 962 high (optimistic) values might be measured until some buffer on the 963 path is filled. Other causes include bursts of back-to-back packets 964 with idle intervals delivered by a path, while the measurement 965 interval (dt) is small and aligned with the bursts. The artificial 966 values might result in an un-sustainable Maximum Capacity observed 967 when the method of measurement is searching for the Maximum, and that 968 would not do. This situation is different from the bi-modal service 969 rates (discussed under Reporting), which are characterized by a 970 multi-second duration (much longer than the measured RTT) and 971 repeatable behavior. 973 There are many ways that the Method of Measurement could handle this 974 false-max issue. The default value for measurement of singletons (dt 975 = 1 second) has proven to be of practical value during tests of this 976 method, allows the bimodal service rates to be characterized, and it 977 has an obvious alignment with the reporting units (Mbps). 979 Another approach comes from Section 24 of [RFC2544] and its 980 discussion of Trial duration, where relatively short trials conducted 981 as part of the search are followed by longer trials to make the final 982 determination. In the production network, measurements of Singletons 983 and Samples (the terms for trials and tests of Lab Benchmarking) must 984 be limited in duration because they may be service-affecting. But 985 there is sufficient value in repeating a Sample with a fixed sending 986 rate determined by the previous search for the Maximum IP-Layer 987 Capacity, to qualify the result in terms of the other performance 988 metrics measured at the same time. 990 A qualification measurement for the search result is a subsequent 991 measurement, sending at a fixed 99.x % of the Maximum IP-Layer 992 Capacity for I, or an indefinite period. The same Maximum Capacity 993 Metric is applied, and the Qualification for the result is a Sample 994 without packet loss or a growing minimum delay trend in subsequent 995 singletons (or each dt of the measurement interval, I). Samples 996 exhibiting losses or increasing queue occupation require a repeated 997 search and/or test at reduced fixed sender rate for qualification. 999 Here, as with any Active Capacity test, the test duration must be 1000 kept short. 10 second tests for each direction of transmission are 1001 common today. The default measurement interval specified here is I = 1002 10 seconds. The combination of a fast and congestion-aware search 1003 method and user-network coordination make a unique contribution to 1004 production testing. 
The Maximum IP Capacity metric and method for 1005 assessing performance is very different from classic [RFC2544] 1006 Throughput metric and methods : it uses near-real-time load 1007 adjustments that are sensitive to loss and delay, similar to other 1008 congestion control algorithms used on the Internet every day, along 1009 with limited duration. On the other hand, [RFC2544] Throughput 1010 measurements can produce sustained overload conditions for extended 1011 periods of time. Individual trials in a test governed by a binary 1012 search can last 60 seconds for each step, and the final confirmation 1013 trial may be even longer. This is very different from "normal" 1014 traffic levels, but overload conditions are not a concern in the 1015 isolated test environment. The concerns raised in [RFC6815] were 1016 that [RFC2544] methods would be let loose on production networks, and 1017 instead the authors challenged the standards community to develop 1018 metrics and methods like those described in this memo. 1020 8.3. Measurement Considerations 1022 In general, the wide-spread measurements that this memo encourages 1023 will encounter wide-spread behaviors. The bimodal IP Capacity 1024 behaviors already discussed in Section 6.6 are good examples. 1026 In general, it is RECOMMENDED to locate test endpoints as close to 1027 the intended measured link(s) as practical (this is not always 1028 possible for reasons of scale; there is a limit on number of test 1029 endpoints coming from many perspectives, management and measurement 1030 traffic for example). The testing operator MUST set a value for the 1031 MaxHops parameter, based on the expected path length. This parameter 1032 can keep measurement traffic from straying too far beyond the 1033 intended path. 1035 The path measured may be stateful based on many factors, and the 1036 Parameter "Time of day" when a test starts may not be enough 1037 information. Repeatable testing may require the time from the 1038 beginning of a measured flow, and how the flow is constructed 1039 including how much traffic has already been sent on that flow when a 1040 state-change is observed, because the state-change may be based on 1041 time or bytes sent or both. Both load packets and status feedback 1042 messages MUST contain sequence numbers, which helps with measurements 1043 based on those packets. 1045 Many different types of traffic shapers and on-demand communications 1046 access technologies may be encountered, as anticipated in [RFC7312], 1047 and play a key role in measurement results. Methods MUST be prepared 1048 to provide a short preamble transmission to activate on-demand 1049 communications access, and to discard the preamble from subsequent 1050 test results. 1052 Conditions which might be encountered during measurement, where 1053 packet losses may occur independently of the measurement sending 1054 rate: 1056 1. Congestion of an interconnection or backbone interface may appear 1057 as packet losses distributed over time in the test stream, due to 1058 much higher rate interfaces in the backbone. 1060 2. Packet loss due to use of Random Early Detection (RED) or other 1061 active queue management may or may not affect the measurement 1062 flow if competing background traffic (other flows) are 1063 simultaneously present. 1065 3. There may be only small delay variation independent of sending 1066 rate under these conditions, too. 1068 4. 
Persistent competing traffic on measurement paths that include 1069 shared transmission media may cause random packet losses in the 1070 test stream. 1072 It is possible to mitigate these conditions using the flexibility of 1073 the load-rate adjusting algorithm described in Section 8.1 above 1074 (tuning specific parameters). 1076 If the measurement flow burst duration happens to be on the order of 1077 or smaller than the burst size of a shaper or a policer in the path, 1078 then the line rate might be measured rather than the bandwidth limit 1079 imposed by the shaper or policer. If this condition is suspected, 1080 alternate configurations SHOULD be used. 1082 In general, results depend on the sending stream characteristics; the 1083 measurement community has known this for a long time, and needs to 1084 keep it front of mind. Although the default is a single flow (F=1) 1085 for testing, use of multiple flows may be advantageous for the 1086 following reasons: 1088 1. the test hosts may be able to create higher load than with a 1089 single flow, or parallel test hosts may be used to generate 1 1090 flow each. 1092 2. there may be link aggregation present (flow-based load balancing) 1093 and multiple flows are needed to occupy each member of the 1094 aggregate. 1096 3. Internet access policies may limit the IP-Layer Capacity 1097 depending on the Type-P of packets, possibly reserving capacity 1098 for various stream types. 1100 Each flow would be controlled using its own implementation of the 1101 load rate adjustment (search) algorithm. 1103 It is obviously counter-productive to run more than one independent 1104 and concurrent test (regardless of the number of flows in the test 1105 stream) attempting to measure the *maximum* capacity on a single 1106 path. The number of concurrent, independent tests of a path SHALL be 1107 limited to one. 1109 Tests of a v4-v6 transition mechanism might well be the intended 1110 subject of a capacity test. As long as the IPv4 and IPv6 packets 1111 sent/received are both standard-formed, this should be allowed (and 1112 the change in header size easily accounted for on a per-packet 1113 basis). 1115 As testing continues, implementers should expect some evolution in 1116 the methods. The ITU-T has published a Supplement (60) to the 1117 Y-series of Recommendations, "Interpreting ITU-T Y.1540 Maximum IP- 1118 Layer Capacity measurements", [Y.Sup60], which is the result of 1119 continued testing with the metric, and those results have improved 1120 the method described here. 1122 8.4. Running Code 1124 RFC Editor: This section is for the benefit of the Document 1125 Shepherd's form, and will be deleted prior to publication. 1127 Much of the development of the method and comparisons with existing 1128 methods conducted at IETF Hackathons and elsewhere have been based on 1129 the example udpst Linux measurement tool (which is a working 1130 reference for further development) [udpst]. The current project: 1132 o is a utility that can function as a client or server daemon 1133 o requires a successful client-initiated setup handshake between 1134 cooperating hosts and allows firewalls to control inbound 1135 unsolicited UDP which either go to a control port [expected and w/ 1136 authentication] or to ephemeral ports that are only created as 1137 needed. Firewalls protecting each host can both continue to do 1138 their job normally. This aspect is similar to many other test 1139 utilities available. 
1141 o is written in C, and built with gcc (release 9.3) and its standard 1142 run-time libraries 1144 o allows configuration of most of the parameters described in 1145 Sections 4 and 7. 1147 o supports IPv4 and IPv6 address families. 1149 o supports IP-Layer packet marking. 1151 9. Reporting Formats 1153 The singleton IP-Layer Capacity results SHOULD be accompanied by the 1154 context under which they were measured. 1156 o timestamp (especially the time when the maximum was observed in 1157 dtn) 1159 o Source and Destination (by IP or other meaningful ID) 1161 o other inner parameters of the test case (Section 4) 1163 o outer parameters, such as "test conducted in motion" or other 1164 factors belonging to the context of the measurement 1166 o result validity (indicating cases where the process was somehow 1167 interrupted or the attempt failed) 1169 o a field where unusual circumstances could be documented, and 1170 another one for "ignore/mask out" purposes in further processing 1172 The Maximum IP-Layer Capacity results SHOULD be reported in the 1173 format of a table with a row for each of the test Phases and Number 1174 of Flows. There SHOULD be columns for the phases with number of 1175 flows, and for the resultant Maximum IP-Layer Capacity results for 1176 the aggregate and each flow tested. 1178 As mentioned in Section 6.6, bi-modal (or multi-modal) maxima SHALL 1179 be reported for each mode separately. 1181 +-------------+-------------------------+----------+----------------+ 1182 | Phase, # | Maximum IP-Layer | Loss | RTT min, max, | 1183 | Flows | Capacity, Mbps | Ratio | msec | 1184 +-------------+-------------------------+----------+----------------+ 1185 | Search,1 | 967.31 | 0.0002 | 30, 58 | 1186 +-------------+-------------------------+----------+----------------+ 1187 | Verify,1 | 966.00 | 0.0000 | 30, 38 | 1188 +-------------+-------------------------+----------+----------------+ 1190 Maximum IP-layer Capacity Results 1192 Static and configuration parameters: 1194 The sub-interval time, dt, MUST accompany a report of Maximum IP- 1195 Layer Capacity results, and the remaining Parameters from Section 4, 1196 General Parameters. 1198 The PM list metrics corresponding to the sub-interval where the 1199 Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer 1200 Capacity results, for each test phase. 1202 The IP-Layer Sender Bit rate results SHOULD be reported in the format 1203 of a table with a row for each of the test phases, sub-intervals (st) 1204 and number of flows. There SHOULD be columns for the phases with 1205 number of flows, and for the resultant IP-Layer Sender Bit rate 1206 results for the aggregate and each flow tested. 1208 +--------------------------+-------------+----------------------+ 1209 | Phase, Flow or Aggregate | st, sec | Sender Bitrate, Mbps | 1210 +--------------------------+-------------+----------------------+ 1211 | Search,1 | 0.00 - 0.05 | 345 | 1212 +--------------------------+-------------+----------------------+ 1213 | Search,2 | 0.00 - 0.05 | 289 | 1214 +--------------------------+-------------+----------------------+ 1215 | Search,Agg | 0.00 - 0.05 | 634 | 1216 +--------------------------+-------------+----------------------+ 1218 IP-layer Sender Bit Rate Results 1220 Static and configuration parameters: 1222 The subinterval time, st, MUST accompany a report of Sender IP-Layer 1223 Bit Rate results. 1225 Also, the values of the remaining Parameters from Section 4, General 1226 Parameters, MUST be reported. 1228 9.1. 
1228 9.1. Configuration and Reporting Data Formats 1230 As a part of the multi-Standards Development Organization (SDO) 1231 harmonization of this metric and method of measurement, one of the 1232 areas where the Broadband Forum (BBF) contributed its expertise was 1233 in the definition of an information model and data model for 1234 configuration and reporting. These models are consistent with the 1235 metric parameters and default values specified as lists in this memo. 1236 [TR-471] provides the information model that was used to prepare a 1237 full data model in related BBF work. The BBF has also carefully 1238 considered topics within its purview, such as placement of 1239 measurement systems within the Internet access architecture. For 1240 example, timestamp resolution requirements that influence the choice 1241 of the test protocol are provided in Table 2 of [TR-471]. 1243 10. Security Considerations 1245 Active metrics and measurements have a long history of security 1246 considerations. The security considerations that apply to any active 1247 measurement of live paths are relevant here. See [RFC4656] and 1248 [RFC5357]. 1250 When considering the privacy of those involved in measurement or those 1251 whose traffic is measured, the sensitive information available to 1252 potential observers is greatly reduced when using the active techniques 1253 within the scope of this work. Passive observations of user 1254 traffic for measurement purposes raise many privacy issues. We refer 1255 the reader to the privacy considerations described in the Large-Scale 1256 Measurement of Broadband Performance (LMAP) framework [RFC7594], 1257 which covers both active and passive techniques. 1259 There are some new considerations for Capacity measurement as 1260 described in this memo. 1262 1. Cooperating Source and Destination hosts and agreements to test 1263 the path between the hosts are REQUIRED. Hosts perform in either 1264 the Src or Dst roles. 1266 2. It is REQUIRED to have a user client-initiated setup handshake 1267 between cooperating hosts that allows firewalls to control 1268 inbound unsolicited UDP traffic, which either goes to a control 1269 port [expected and w/authentication] or to ephemeral ports that 1270 are only created as needed. Firewalls protecting each host can 1271 both continue to do their job normally. 1273 3. Client-server authentication and integrity protection for 1274 feedback messages conveying measurements are RECOMMENDED. 1276 4. Hosts MUST limit the number of simultaneous tests to avoid 1277 resource exhaustion and inaccurate results. 1279 5. Senders MUST be rate-limited. This can be accomplished using a 1280 pre-built table defining all the offered load rates that will be 1281 supported (Section 8.1); a non-normative sketch of this approach appears 1282 at the end of this section. The recommended load-control search 1283 algorithm results in "ramp-up" from the lowest rate in the table. 1284 6. Service subscribers with limited data volumes who conduct 1285 extensive capacity testing might experience the effects of 1286 Service Provider controls on their service. Testing with the 1287 Service Provider's measurement hosts SHOULD be limited in 1288 frequency and/or overall volume of test traffic (for example, the 1289 range of duration values, I, SHOULD be limited). 1291 The exact specification of these features is left for future 1292 protocol development.
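The following is a minimal, non-normative C sketch of the pre-built sending rate table mentioned in item 5 above. The row layout, the 1 Mbps granularity, and all names are assumptions made for illustration; an implementation defines its own table covering exactly the offered loads it will support, and because the sender only transmits at the rate given by its current row, the table itself acts as the rate limit.

   #include <stddef.h>

   #define MAX_LOAD_RATES 2000        /* number of rows (see Section 8.1) */

   struct loadRateRow {
       unsigned rateKbps;             /* offered load for this row, kbps */
       unsigned payloadBytes;         /* UDP payload size used for this row */
       unsigned burstPerInterval;     /* datagrams sent per timed interval */
   };

   static struct loadRateRow loadRateTable[MAX_LOAD_RATES];

   /* Build a simple linear table: row i offers (i + 1) Mbps.  Because the
    * sender only indexes this table, it cannot exceed the highest row.
    * payloadBytes and intervalsPerSec are assumed configuration values. */
   static void buildLoadRateTable(unsigned payloadBytes, unsigned intervalsPerSec)
   {
       for (size_t i = 0; i < MAX_LOAD_RATES; i++) {
           unsigned rateKbps = (unsigned)(i + 1) * 1000u;   /* 1 Mbps steps */
           loadRateTable[i].rateKbps = rateKbps;
           loadRateTable[i].payloadBytes = payloadBytes;
           /* datagrams per interval needed to offer rateKbps with this payload */
           loadRateTable[i].burstPerInterval =
               (rateKbps * 1000u / 8u) / (payloadBytes * intervalsPerSec) + 1u;
       }
   }

For example, buildLoadRateTable(1222, 20) would populate rows for 1 Mbps through 2 Gbps using 1222-byte payloads and 20 send intervals per second; both values are assumptions for this sketch.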
1294 11. IANA Considerations 1296 This memo makes no requests of IANA. 1298 12. Acknowledgments 1300 Thanks to Joachim Fabini, Matt Mathis, J. Ignacio Alvarez-Hamelin, 1301 Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray 1302 Kucherawy, and Benjamin Kaduk for their extensive comments on the 1303 memo and related topics. In a second round of reviews, we 1304 acknowledge Magnus Westerlund, Lars Eggert, and Zaheduzzaman Sarker. 1306 13. Appendix A - Load Rate Adjustment Pseudo Code 1308 The following is a pseudo-code implementation of the algorithm 1309 described in Section 8.1. 1311 Rx = 0 # The current sending rate (equivalent to a row of the table) 1312 seqErr = 0 # Measured count of any of Loss or Reordering impairments 1313 delay = 0 # Measured Range of Round Trip Delay, RTD, ms 1314 lowThresh = 30 # Low threshold on the Range of RTD, ms 1315 upperThresh = 90 # Upper threshold on the Range of RTD, ms 1316 hSpeedThresh = 1 Gbps # Threshold for transition between sending rate step 1317 sizes (such as 1 Mbps and 100 Mbps) 1318 slowAdjCount = 0 # Measured Number of consecutive status reports 1319 indicating loss and/or delay variation above upperThresh 1320 slowAdjThresh = 2 # Threshold on slowAdjCount used to infer congestion. 1321 Use values >1 to avoid misinterpreting transient loss 1322 highSpeedDelta = 10 # The number of rows to move in a single adjustment 1323 when initially increasing offered load (to ramp-up quickly) 1324 maxLoadRates = 2000 # Maximum table index (rows) 1326 if ( seqErr == 0 && delay < lowThresh ) { 1327 if ( Rx < hSpeedThresh && slowAdjCount < slowAdjThresh ) { 1328 Rx += highSpeedDelta; 1329 slowAdjCount = 0; 1330 } else { 1331 if ( Rx < maxLoadRates - 1 ) 1332 Rx++; 1333 } 1334 } else if ( seqErr > 0 || delay > upperThresh ) { 1335 slowAdjCount++; 1336 if ( Rx < hSpeedThresh && slowAdjCount == slowAdjThresh ) { 1337 if ( Rx > highSpeedDelta * 3 ) 1338 Rx -= highSpeedDelta * 3; 1339 else 1340 Rx = 0; 1341 } else { 1342 if ( Rx > 0 ) 1343 Rx--; 1344 } 1345 } 1347 14. Appendix B - RFC 8085 UDP Guidelines Check 1349 The BCP on UDP usage guidelines [RFC8085] focuses primarily on 1350 congestion control in its Section 3.1. The guidelines appear in 1351 mandatory (MUST) and recommendation (SHOULD) categories. 1353 14.1. Assessment of Mandatory Requirements 1355 The mandatory requirements in Section 3 of [RFC8085] include: 1357 Internet paths can have widely varying characteristics, ... 1358 Consequently, applications that may be used on the Internet MUST 1359 NOT make assumptions about specific path characteristics. They 1360 MUST instead use mechanisms that let them operate safely under 1361 very different path conditions. Typically, this requires 1362 conservatively probing the current conditions of the Internet path 1363 they communicate over to establish a transmission behavior that it 1364 can sustain and that is reasonably fair to other traffic sharing 1365 the path. 1367 The purpose of the load rate adjustment algorithm in Section 8.1 is 1368 to probe the network and enable Maximum IP-Layer Capacity 1369 measurements with as few assumptions about the measured path as 1370 possible, and within the range of application described in Section 2. 1371 The degree of probing conservatism is in tension with the need to 1372 minimize both the traffic dedicated to testing (especially with 1373 Gigabit rate measurements) and the duration of the test (which is one 1374 contributing factor to the overall algorithm fairness).
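To make the preceding point concrete, the following is a minimal, compilable C rendering of the Appendix A pseudo code. It is an illustration, not the normative algorithm of Section 8.1: in particular, the 1 Gbps transition threshold is represented here as an assumed table row index (HSPEED_THRESH_ROW), since an implementation indexes a pre-built table of sending rates, and the values exercised in main() are invented for the example.

   #include <stdio.h>

   #define LOW_THRESH        30   /* low threshold on range of RTD, ms */
   #define UPPER_THRESH      90   /* upper threshold on range of RTD, ms */
   #define HSPEED_THRESH_ROW 1000 /* assumed row near the 1 Gbps transition */
   #define SLOW_ADJ_THRESH   2    /* consecutive congested reports to infer congestion */
   #define HIGH_SPEED_DELTA  10   /* rows to move while ramping up */
   #define MAX_LOAD_RATES    2000 /* number of rows in the rate table */

   struct searchState {
       int rx;           /* current row of the sending rate table */
       int slowAdjCount; /* consecutive reports indicating loss or high delay */
   };

   /* Apply one status report (seqErr count, range of RTD in ms) to the state. */
   static void adjustLoadRate(struct searchState *s, int seqErr, int delayRange)
   {
       if (seqErr == 0 && delayRange < LOW_THRESH) {
           if (s->rx < HSPEED_THRESH_ROW && s->slowAdjCount < SLOW_ADJ_THRESH) {
               s->rx += HIGH_SPEED_DELTA;   /* fast ramp-up */
               s->slowAdjCount = 0;
           } else if (s->rx < MAX_LOAD_RATES - 1) {
               s->rx++;                     /* slow, one-row increase */
           }
       } else if (seqErr > 0 || delayRange > UPPER_THRESH) {
           s->slowAdjCount++;
           if (s->rx < HSPEED_THRESH_ROW && s->slowAdjCount == SLOW_ADJ_THRESH) {
               /* congestion inferred during fast ramp-up: step well back */
               s->rx = (s->rx > HIGH_SPEED_DELTA * 3) ? s->rx - HIGH_SPEED_DELTA * 3 : 0;
           } else if (s->rx > 0) {
               s->rx--;                     /* slow, one-row decrease */
           }
       }
       /* delayRange between the thresholds: hold the current rate */
   }

   int main(void)
   {
       struct searchState s = { 0, 0 };
       adjustLoadRate(&s, 0, 10);   /* clean report: ramp up by 10 rows */
       adjustLoadRate(&s, 3, 120);  /* first congested report: one-row decrease */
       adjustLoadRate(&s, 2, 95);   /* second consecutive: fast step back */
       printf("current table row: %d\n", s.rx);
       return 0;
   }

The fast ramp-up (HIGH_SPEED_DELTA rows per clean status report) is what keeps the duration of the search short, while the step-back and one-row adjustments provide the reaction to congestion indications discussed above.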
1376 The text of Section 3 of [RFC8085] goes on to recommend alternatives 1377 to UDP to meet the mandatory requirements, but none are suitable for 1378 the scope and purpose of the metrics and methods in this memo. In 1379 fact, ad hoc TCP-based methods fail to achieve the measurement 1380 accuracy repeatedly proven in comparison measurements with the 1381 running code [LS-SG12-A] [LS-SG12-B] [Y.Sup60]. Also, the UDP aspect 1382 of these methods is present primarily to support modern Internet 1383 transmission where a transport protocol is required [copycat]; the 1384 metric is based on the IP-Layer, and UDP allows simple correlation to 1385 the IP-Layer. 1387 Section 3.1.1 of [RFC8085] discusses protocol timer guidelines: 1389 Latency samples MUST NOT be derived from ambiguous transactions. 1390 The canonical example is in a protocol that retransmits data, but 1391 subsequently cannot determine which copy is being acknowledged. 1393 Both load packets and status feedback messages MUST contain sequence 1394 numbers; this supports unambiguous measurements based on those packets, 1395 and no retransmissions are needed. 1397 When a latency estimate is used to arm a timer that provides loss 1398 detection -- with or without retransmission -- expiry of the timer 1399 MUST be interpreted as an indication of congestion in the network, 1400 causing the sending rate to be adapted to a safe conservative 1401 rate... 1403 The method described in this memo uses timers for sending rate 1404 backoff when status feedback messages are lost (Lost Status Backoff 1405 timeout), and for stopping a test when connectivity is lost for a 1406 longer interval (Feedback message or load packet timeouts). 1408 There is no specific benefit foreseen from using Explicit Congestion 1409 Notification (ECN) in this memo. 1411 Section 3.2 of [RFC8085] discusses message size guidelines: 1413 To determine an appropriate UDP payload size, applications MUST 1414 subtract the size of the IP header (which includes any IPv4 1415 optional headers or IPv6 extension headers) as well as the length 1416 of the UDP header (8 bytes) from the PMTU size. 1418 The method uses a sending rate table with a maximum UDP payload size 1419 that anticipates significant header overhead and avoids 1420 fragmentation. 1422 Section 3.3 of [RFC8085] provides reliability guidelines: 1424 Applications that do require reliable message delivery MUST 1425 implement an appropriate mechanism themselves. 1427 The IP-Layer Capacity Metric and Method do not require reliable 1428 delivery. 1430 Applications that require ordered delivery MUST reestablish 1431 datagram ordering themselves. 1433 The IP-Layer Capacity Metric and Method do not need to reestablish 1434 packet order; it is preferred to measure packet reordering if it 1435 occurs [RFC4737]. 1437 14.2. Assessment of Recommendations 1439 The load rate adjustment algorithm's goal is to determine the Maximum 1440 IP-Layer Capacity in the context of an infrequent, diagnostic, short- 1441 term measurement. This goal is a global exception to many of the [RFC8085] 1442 SHOULD-level requirements, most of which are intended for long-lived 1443 flows that must coexist with other traffic in a more-or-less fair way. 1444 However, the algorithm (as specified in Section 8.1 and Appendix A 1445 above) reacts to indications of congestion in clearly defined ways. 1447 A specific recommendation is provided as an example.
Section 3.1.5 1448 of [RFC8085] on implications of RTT and Loss Measurements on 1449 Congestion Control says: 1451 A congestion control designed for UDP SHOULD respond as quickly as 1452 possible when it experiences congestion, and it SHOULD take into 1453 account both the loss rate and the response time when choosing a 1454 new rate. 1456 The load rate adjustment algorithm responds to loss and RTT 1457 measurements with a clear and concise rate reduction when warranted, 1458 and the response makes use of direct measurements (more exact than 1459 can be inferred from TCP ACKs). 1461 Section 3.1.5 of [RFC8085] goes on to specify: 1463 The implemented congestion control scheme SHOULD result in 1464 bandwidth (capacity) use that is comparable to that of TCP within 1465 an order of magnitude, so that it does not starve other flows 1466 sharing a common bottleneck. 1468 This is a requirement for coexistent streams, and not for diagnostic 1469 and infrequent measurements using short durations. The rate 1470 oscillations during short tests allow other packets to pass, and 1471 don't starve other flows. 1473 Ironically, ad hoc TCP-based measurements of "Internet Speed" are 1474 also designed to work around this SHOULD-level requirement, by 1475 launching many flows (9, for example) to increase the outstanding 1476 data dedicated to testing. 1478 The load rate adjustment algorithm cannot become a TCP-like 1479 congestion control, or it will have the same weaknesses of TCP when 1480 trying to make a Maximum IP-Layer Capacity measurement, and will not 1481 achieve the goal. The results of the referenced testing [LS-SG12-A] 1482 [LS-SG12-B] [Y.Sup60] supported this statement hundreds of times, 1483 with comparisons to multi-connection TCP-based measurements. 1485 A brief review of some other SHOULD-level requirements follows (Yes 1486 or Not applicable = NA) : 1488 +--+---------------------------------------------------------+---------+ 1489 |Y?| RFC 8085 Recommendation | Section | 1490 +--+---------------------------------------------------------+---------+ 1491 Yes| MUST tolerate a wide range of Internet path conditions | 3 | 1492 NA | SHOULD use a full-featured transport (e.g., TCP) | | 1493 | | | 1494 Yes| SHOULD control rate of transmission | 3.1 | 1495 NA | SHOULD perform congestion control over all traffic | | 1496 | | | 1497 | for bulk transfers, | 3.1.2 | 1498 NA | SHOULD consider implementing TFRC | | 1499 NA | else, SHOULD in other ways use bandwidth similar to TCP | | 1500 | | | 1501 | for non-bulk transfers, | 3.1.3 | 1502 NA | SHOULD measure RTT and transmit max. 1 datagram/RTT | 3.1.1 | 1503 NA | else, SHOULD send at most 1 datagram every 3 seconds | | 1504 NA | SHOULD back-off retransmission timers following loss | | 1505 | | | 1506 Yes| SHOULD provide mechanisms to regulate the bursts of | 3.1.6 | 1507 | transmission | | 1508 | | | 1509 NA | MAY implement ECN; a specific set of application | 3.1.7 | 1510 | mechanisms are REQUIRED if ECN is used. 
| | 1511 | | | 1512 Yes| for DiffServ, SHOULD NOT rely on implementation of PHBs | 3.1.8 | 1513 | | | 1514 Yes| for QoS-enabled paths, MAY choose not to use CC | 3.1.9 | 1515 | | | 1516 Yes| SHOULD NOT rely solely on QoS for their capacity | 3.1.10 | 1517 | non-CC controlled flows SHOULD implement a transport | | 1518 | circuit breaker | | 1519 | MAY implement a circuit breaker for other applications | | 1520 | | | 1521 | for tunnels carrying IP traffic, | 3.1.11 | 1522 NA | SHOULD NOT perform congestion control | | 1523 NA | MUST correctly process the IP ECN field | | 1524 | | | 1525 | for non-IP tunnels or rate not determined by traffic, | | 1526 NA | SHOULD perform CC or use circuit breaker | 3.1.11 | 1527 NA | SHOULD restrict types of traffic transported by the | | 1528 | tunnel | | 1529 | | | 1530 Yes| SHOULD NOT send datagrams that exceed the PMTU, i.e., | 3.2 | 1531 Yes| SHOULD discover PMTU or send datagrams < minimum PMTU; | | 1532 NA | Specific application mechanisms are REQUIRED if PLPMTUD | | 1533 | is used. | | 1534 | | | 1535 Yes| SHOULD handle datagram loss, duplication, reordering | 3.3 | 1536 NA | SHOULD be robust to delivery delays up to 2 minutes | | 1537 | | | 1538 Yes| SHOULD enable IPv4 UDP checksum | 3.4 | 1539 Yes| SHOULD enable IPv6 UDP checksum; Specific application | 3.4.1 | 1540 | mechanisms are REQUIRED if a zero IPv6 UDP checksum is | | 1541 | used. | | 1542 | | | 1543 NA | SHOULD provide protection from off-path attacks | 5.1 | 1544 | else, MAY use UDP-Lite with suitable checksum coverage | 3.4.2 | 1545 | | | 1546 NA | SHOULD NOT always send middlebox keep-alive messages | 3.5 | 1547 NA | MAY use keep-alives when needed (min. interval 15 sec) | | 1548 | | | 1550 Yes| Applications specified for use in limited use (or | 3.6 | 1551 | controlled environments) SHOULD identify equivalent | | 1552 | mechanisms and describe their use case. | | 1553 | | | 1554 NA | Bulk-multicast apps SHOULD implement congestion control | 4.1.1 | 1555 | | | 1556 NA | Low volume multicast apps SHOULD implement congestion | 4.1.2 | 1557 | control | | 1558 | | | 1559 NA | Multicast apps SHOULD use a safe PMTU | 4.2 | 1560 | | | 1561 Yes| SHOULD avoid using multiple ports | 5.1.2 | 1562 Yes| MUST check received IP source address | | 1563 | | | 1564 NA | SHOULD validate payload in ICMP messages | 5.2 | 1565 | | | 1566 Yes| SHOULD use a randomized source port or equivalent | 6 | 1567 | technique, and, for client/server applications, SHOULD | | 1568 | send responses from source address matching request | | 1569 | 5.1 | | 1570 NA | SHOULD use standard IETF security protocols when needed | 6 | 1571 +---------------------------------------------------------+---------+ 1573 15. References 1575 15.1. Normative References 1577 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1578 Requirement Levels", BCP 14, RFC 2119, 1579 DOI 10.17487/RFC2119, March 1997, 1580 . 1582 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1583 "Framework for IP Performance Metrics", RFC 2330, 1584 DOI 10.17487/RFC2330, May 1998, 1585 . 1587 [RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip 1588 Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681, 1589 September 1999, . 1591 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. 1592 Zekauskas, "A One-way Active Measurement Protocol 1593 (OWAMP)", RFC 4656, DOI 10.17487/RFC4656, September 2006, 1594 . 1596 [RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, 1597 S., and J. 
Perser, "Packet Reordering Metrics", RFC 4737, 1598 DOI 10.17487/RFC4737, November 2006, 1599 . 1601 [RFC5357] Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J. 1602 Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)", 1603 RFC 5357, DOI 10.17487/RFC5357, October 2008, 1604 . 1606 [RFC6438] Carpenter, B. and S. Amante, "Using the IPv6 Flow Label 1607 for Equal Cost Multipath Routing and Link Aggregation in 1608 Tunnels", RFC 6438, DOI 10.17487/RFC6438, November 2011, 1609 . 1611 [RFC7497] Morton, A., "Rate Measurement Test Protocol Problem 1612 Statement and Requirements", RFC 7497, 1613 DOI 10.17487/RFC7497, April 2015, 1614 . 1616 [RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, 1617 Ed., "A One-Way Loss Metric for IP Performance Metrics 1618 (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January 1619 2016, . 1621 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1622 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1623 May 2017, . 1625 [RFC8468] Morton, A., Fabini, J., Elkins, N., Ackermann, M., and V. 1626 Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for 1627 the IP Performance Metrics (IPPM) Framework", RFC 8468, 1628 DOI 10.17487/RFC8468, November 2018, 1629 . 1631 15.2. Informative References 1633 [copycat] Edleine, K., Kuhlewind, K., Trammell, B., and B. Donnet, 1634 "copycat: Testing Differential Treatment of New Transport 1635 Protocols in the Wild (ANRW '17)", July 2017, 1636 . 1638 [LS-SG12-A] 1639 12, I. S., "LS - Harmonization of IP Capacity and Latency 1640 Parameters: Revision of Draft Rec. Y.1540 on IP packet 1641 transfer performance parameters and New Annex A with Lab 1642 Evaluation Plan", May 2019, 1643 . 1645 [LS-SG12-B] 1646 12, I. S., "LS on harmonization of IP Capacity and Latency 1647 Parameters: Consent of Draft Rec. Y.1540 on IP packet 1648 transfer performance parameters and New Annex A with Lab & 1649 Field Evaluation Plans", March 2019, 1650 . 1652 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 1653 Network Interconnect Devices", RFC 2544, 1654 DOI 10.17487/RFC2544, March 1999, 1655 . 1657 [RFC3148] Mathis, M. and M. Allman, "A Framework for Defining 1658 Empirical Bulk Transfer Capacity Metrics", RFC 3148, 1659 DOI 10.17487/RFC3148, July 2001, 1660 . 1662 [RFC5136] Chimento, P. and J. Ishac, "Defining Network Capacity", 1663 RFC 5136, DOI 10.17487/RFC5136, February 2008, 1664 . 1666 [RFC6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton, 1667 "Applicability Statement for RFC 2544: Use on Production 1668 Networks Considered Harmful", RFC 6815, 1669 DOI 10.17487/RFC6815, November 2012, 1670 . 1672 [RFC7312] Fabini, J. and A. Morton, "Advanced Stream and Sampling 1673 Framework for IP Performance Metrics (IPPM)", RFC 7312, 1674 DOI 10.17487/RFC7312, August 2014, 1675 . 1677 [RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 1678 Aitken, P., and A. Akhter, "A Framework for Large-Scale 1679 Measurement of Broadband Performance (LMAP)", RFC 7594, 1680 DOI 10.17487/RFC7594, September 2015, 1681 . 1683 [RFC7799] Morton, A., "Active and Passive Metrics and Methods (with 1684 Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799, 1685 May 2016, . 1687 [RFC8085] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage 1688 Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085, 1689 March 2017, . 1691 [RFC8337] Mathis, M. and A. Morton, "Model-Based Metrics for Bulk 1692 Transport Capacity", RFC 8337, DOI 10.17487/RFC8337, March 1693 2018, . 
1695 [TR-471] Morton, A., "Broadband Forum TR-471: IP Layer Capacity 1696 Metrics and Measurement", July 2020, 1697 . 1700 [udpst] udpst Project Collaborators, "UDP Speed Test Open 1701 Broadband project", December 2020, 1702 . 1704 [Y.1540] Y.1540, I. R., "Internet protocol data communication 1705 service - IP packet transfer and availability performance 1706 parameters", December 2019, 1707 . 1709 [Y.Sup60] Morton, A., "Recommendation Y.Sup60, (09/20) Interpreting 1710 ITU-T Y.1540 maximum IP-layer capacity measurements, and 1711 Errata", September 2020, 1712 . 1714 Authors' Addresses 1716 Al Morton 1717 AT&T Labs 1718 200 Laurel Avenue South 1719 Middletown, NJ 07748 1720 USA 1722 Phone: +1 732 420 1571 1723 Fax: +1 732 368 1192 1724 Email: acm@research.att.com 1726 Ruediger Geib 1727 Deutsche Telekom 1728 Heinrich Hertz Str. 3-7 1729 Darmstadt 64295 1730 Germany 1732 Phone: +49 6151 5812747 1733 Email: Ruediger.Geib@telekom.de 1734 Len Ciavattone 1735 AT&T Labs 1736 200 Laurel Avenue South 1737 Middletown, NJ 07748 1738 USA 1740 Email: lencia@att.com