<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="std" docName="draft-ietf-ippm-capacity-metric-method-08"
     ipr="trust200902" updates="">
  <front>
    <title abbrev="IP Capacity Metrics/Methods">Metrics and Methods for
    One-way IP Capacity</title>

    <author fullname="Al Morton" initials="A." surname="Morton">
      <organization>AT&amp;T Labs</organization>

      <address>
        <postal>
          <street>200 Laurel Avenue South</street>

          <city>Middletown</city>

          <region>NJ</region>

          <code>07748</code>

          <country>USA</country>
        </postal>

        <phone>+1 732 420 1571</phone>

        <facsimile>+1 732 368 1192</facsimile>

        <email>acm@research.att.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Ruediger Geib" initials="R." surname="Geib">
      <organization>Deutsche Telekom</organization>

      <address>
        <postal>
          <street>Heinrich Hertz Str. 3-7</street>

          <city>Darmstadt</city>

          <region/>

          <code>64295</code>

          <country>Germany</country>
        </postal>

        <phone>+49 6151 5812747</phone>

        <facsimile/>

        <email>Ruediger.Geib@telekom.de</email>

        <uri/>
      </address>
    </author>

    <author fullname="Len Ciavattone" initials="L." surname="Ciavattone">
      <organization>AT&amp;T Labs</organization>

      <address>
        <postal>
          <street>200 Laurel Avenue South</street>

          <city>Middletown</city>

          <region>NJ</region>

          <code>07748</code>

          <country>USA</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>lencia@att.com</email>

        <uri/>
      </address>
    </author>

    <date day="29" month="March" year="2021"/>

    <abstract>
      <t>This memo revisits the problem of Network Capacity metrics first
      examined in RFC 5136. The memo specifies a more practical Maximum
      IP-layer Capacity metric definition catering for measurement purposes,
      and outlines the corresponding methods of measurement.</t>
    </abstract>
  </front>

  <middle>
    <section title="Introduction">
      <t>The IETF's efforts to define Network and Bulk Transport Capacity have
      been chartered and progressed for over twenty years. Over that time, the
      performance community has seen development of Informative definitions in
      <xref target="RFC3148"/> for Framework for Bulk Transport Capacity
      (BTC), <xref target="RFC5136"/> for Network Capacity and Maximum
      IP-layer Capacity, and
      the Experimental metric definitions and methods in <xref
      target="RFC8337"/>, Model-Based Metrics for BTC.</t>

      <t>This memo revisits the problem of Network Capacity metrics examined
      first in <xref target="RFC3148"/> and later in <xref target="RFC5136"/>.
      Maximum IP-Layer Capacity and <xref target="RFC3148"/> Bulk Transfer
      Capacity (goodput) are different metrics. Maximum IP-layer Capacity is
      like the theoretical goal for goodput. There are many metrics in <xref
      target="RFC5136"/>, such as Available Capacity. Measurements depend on
      the network path under test and the use case. Here, the main use case is
      to assess the maximum capacity of the access network, with specific
      performance criteria used in the measurement.</t>

      <t>This memo recognizes the importance of a definition of a Maximum
      IP-layer Capacity Metric at a time when access speeds have increased
      dramatically; a definition that is both practical and effective for the
      performance community's needs, including Internet users. The metric
      definition is intended to use Active Methods of Measurement <xref
      target="RFC7799"/>, and a method of measurement is included.</t>

      <t>The most direct active measurement of IP-layer Capacity would use IP
      packets, but in practice a transport header is needed to traverse
      address and port translators. UDP offers the most direct assessment
      possibility, and in the <xref target="copycat"/> measurement study to
      investigate whether UDP is viable as a general Internet transport
      protocol, the authors found that a high percentage of paths tested
      support UDP transport. A number of liaisons have been exchanged on this
      topic <xref target="LS-SG12-A"/> <xref target="LS-SG12-B"/>, discussing
      the laboratory and field tests that support the UDP-based approach to
      IP-layer Capacity measurement.</t>

      <t>This memo also recognizes the many updates to the IP Performance
      Metrics Framework <xref target="RFC2330"/> published over twenty years,
      and makes use of <xref target="RFC7312"/> for Advanced Stream and
      Sampling Framework, and <xref target="RFC8468"/> with IPv4, IPv6, and
      IPv4-IPv6 Coexistence Updates.</t>

      <t>Appendix A describes the load rate adjustment algorithm in
      pseudo-code. Appendix B discusses the algorithm's compliance with <xref
      target="RFC8085"/>.</t>

      <section title="Requirements Language">
        <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
        "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
        "OPTIONAL" in this document are to be interpreted as described in BCP
        14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only
        when, they appear in all capitals, as shown here.</t>
      </section>
    </section>

    <section title="Scope, Goals, and Applicability">
      <t>The scope of this memo is to define a metric and corresponding method
      to unambiguously perform Active measurements of Maximum IP-Layer
      Capacity, along with related metrics and methods.</t>

      <t>Another goal is to harmonize the specified metric and method across
      the industry, and this memo is the vehicle that captures IETF consensus,
      possibly resulting in changes to the specifications of other Standards
      Development Organizations (SDO) (through each SDO's normal contribution
      process, or through liaison exchange).</t>

      <t>A local goal is to aid efficient test procedures where possible, and
      to recommend reporting with additional interpretation of the results.
      Fostering the development of protocol support for this metric and method
      of measurement is also a goal of this memo (all active testing protocols
      currently defined by the IPPM WG are UDP-based, meeting a key
      requirement of these methods). The supporting protocol development to
      measure this metric according to the specified method is a key future
      contribution to Internet measurement.</t>

      <t>The load rate adjustment algorithm's goal is to determine the Maximum
      IP-Layer Capacity in the context of an infrequent, diagnostic, short
      term measurement. It is RECOMMENDED to discontinue non-measurement
      traffic that shares a subscriber's dedicated resources while
      testing.</t>

      <t>The primary application of the metric and method of measurement
      described here is the same as in Section 2 of <xref target="RFC7497"/>
      where:<list style="symbols">
          <t>The access portion of the network is the focus of this problem
          statement. The user typically subscribes to a service with
          bidirectional access partly described by rates in bits per
          second.</t>
        </list>In addition, the use of the load rate adjustment algorithm
      described in section 8.1 has the following additional applicability
      limitations:</t>

      <t>- MUST only be used in the application of diagnostic and operations
      measurements as described in this memo</t>

      <t>- MUST only be used in circumstances consistent with Section 10,
      Security Considerations</t>

      <t>- If a network operator is certain of the access capacity to be
      validated, then testing MAY start with a fixed rate test at the access
      capacity and avoid activating the load adjustment algorithm. However,
      the stimulus for a diagnostic test (such as a subscriber request)
      strongly implies that there is no certainty and the load adjustment
      algorithm will be needed.</t>

      <t>Further, the metric and method of measurement are intended for use
      where specific exact path information is unknown within a range of
      possible values:</t>

      <t>- the subscriber's exact Maximum IP-Layer Capacity is unknown (which
      is sometimes the case; service rates can be increased due to upgrades
      without a subscriber's request, or to provide a surplus to compensate
      for possible underestimates of TCP-based testing).</t>

      <t>- the size of the access bottleneck buffer is unknown. </t>

      <t>Finally, the measurement system's load rate adjustment algorithm
      SHALL NOT be provided with the exact capacity value to be validated a
      priori. This restriction fosters a fair result, and removes an
      opportunity for bad actors to operate with knowledge of the "right
      answer".</t>
    </section>

    <section title="Motivation">
      <t>As with any problem that has been worked for many years in various
      SDOs without any special attempts at coordination, various solutions for
      metrics and methods have emerged.</t>

      <t>There are five factors that have changed (or begun to change) in the
      2013-2019 time frame, and the presence of any one of them on the path
      requires features in the measurement design to account for the
      changes:</t>

      <t><list style="numbers">
          <t>Internet access is no longer the bottleneck for many users.</t>

          <t>Both transfer rate and latency are important to users'
          satisfaction.</t>

          <t>UDP plays a growing role in transport, in areas where TCP once
          dominated.</t>

          <t>Content and applications are moving physically closer to
          users.</t>

          <t>There is less emphasis on ISP gateway measurements, possibly due
          to less traffic crossing ISP gateways in future.</t>
        </list></t>
    </section>

    <section title="General Parameters and Definitions">
      <t>This section lists the REQUIRED input factors to specify a Sender or
      Receiver metric.<list style="symbols">
          <t>Src, the address of a host (such as the globally routable IP
          address).</t>

          <t>Dst, the address of a host (such as the globally routable IP
          address).</t>

          <t>MaxHops, the limit on the number of Hops a specific packet may
          visit as it traverses from the host at Src to the host at Dst
          (implemented in the TTL or Hop Limit).</t>

          <t>T0, the time at the start of measurement interval, when packets
          are first transmitted from the Source.</t>

          <t>I, the nominal duration of a measurement interval at the
          destination (default 10 sec)</t>

          <t>dt, the nominal duration of m equal sub-intervals in I at the
          destination (default 1 sec)</t>

          <t>dtn, the beginning boundary of a specific sub-interval, n, one of
          m sub-intervals in I</t>

          <t>FT, the feedback time interval between status feedback messages
          communicating measurement results, sent from the receiver to control
          the sender. The results are evaluated throughout the test to
          determine how to adjust the current offered load rate at the sender
          (default 50ms)</t>

          <t>Tmax, a maximum waiting time for test packets to arrive at the
          destination, set sufficiently long to disambiguate packets with long
          delays from packets that are discarded (lost), such that the
          distribution of one-way delay is not truncated.</t>

          <t>F, the number of different flows synthesized by the method
          (default 1 flow)</t>

          <t>flow, the stream of packets with the same n-tuple of designated
          header fields that (when held constant) result in identical
          treatment in a multi-path decision (such as the decision taken in
          load balancing). Note: The IPv6 flow label MAY be included in the
          flow definition when routers have complied with <xref
          target="RFC6438"/> guidelines.</t>

          <t>Type-P, the complete description of the test packets for which
          this assessment applies (including the flow-defining fields). Note
          that the UDP transport layer is one requirement for test packets
          specified below. Type-P is a parallel concept to "population of
          interest" defined in clause 6.1.1 of <xref target="Y.1540"/>.</t>

          <t>PM, a list of fundamental metrics, such as loss, delay, and
          reordering, and the corresponding target performance thresholds. At least
          one fundamental metric and target performance threshold MUST be
          supplied (such as One-way IP Packet Loss <xref target="RFC7680"/>
          equal to zero).</t>
        </list>A non-Parameter which is required for several metrics is
      defined below:</t>

      <t><list style="symbols">
          <t>T, the host time of the *first* test packet's *arrival* as
          measured at the destination Measurement Point, or MP(Dst). There may
          be other packets sent between source and destination hosts that are
          excluded, so this is the time of arrival of the first packet used
          for measurement of the metric.</t>
        </list>Note that time stamp format and resolution, sequence numbers,
      etc. will be established by the chosen test protocol standard or
      implementation.</t>
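
      <t>As an illustration only, an implementation might group the
      Parameters above into a single configuration structure. The C sketch
      below uses hypothetical field names (it is not part of any specified
      test protocol), and only the defaults stated in this section are
      filled in.</t>

      <t><figure title="Illustrative Grouping of the General Parameters">
          <artwork><![CDATA[
#include <stdint.h>

/* Illustrative grouping of the Section 4 Parameters; the field
 * names are hypothetical and only the defaults stated in this
 * memo are filled in.                                          */
struct capacity_test_params {
    char     src[64];   /* Src address (e.g., routable IP)      */
    char     dst[64];   /* Dst address                          */
    uint8_t  max_hops;  /* MaxHops, carried in TTL / Hop Limit  */
    double   t0;        /* T0, start of measurement interval    */
    double   i_dur;     /* I, nominal duration (default 10 s)   */
    double   dt;        /* dt, sub-interval length (default 1 s)*/
    double   ft;        /* FT, feedback interval (default 50 ms)*/
    double   tmax;      /* Tmax, max wait for test packets      */
    uint32_t flows;     /* F, number of flows (default 1)       */
};

static const struct capacity_test_params param_defaults = {
    .i_dur = 10.0, .dt = 1.0, .ft = 0.050, .flows = 1
};
]]></artwork>
        </figure></t>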
    </section>

    <section title="IP-Layer Capacity Singleton Metric Definitions">
      <t>This section sets requirements for the singleton metric that supports
      the Maximum IP-layer Capacity Metric definition in Section 6.</t>

      <section title="Formal Name">
        <t>Type-P-One-way-IP-Capacity, or informally called IP-layer
        Capacity.</t>

        <t>Note that Type-P depends on the chosen method.</t>
      </section>

      <section title="Parameters">
        <t>This section lists the REQUIRED input factors to specify the
        metric, beyond those listed in Section 4.</t>

        <t>No additional Parameters are needed.</t>
      </section>

      <section title="Metric Definitions">
        <t>This section defines the REQUIRED aspects of the measurable
        IP-layer Capacity metric (unless otherwise indicated) for measurements
        between specified Source and Destination hosts:</t>

        <t>Define the IP-layer capacity, C(T,dt,PM), to be the number of
        IP-layer bits (including header and data fields) in packets that can
        be transmitted from the Src host and correctly received by the Dst
        host during one contiguous sub-interval, dt in length. The IP-layer
        capacity depends on the Src and Dst hosts, the host addresses, and the
        path between the hosts.</t>

        <t>The number of these IP-layer bits is designated n0[dtn,dtn+1] for a
        specific dt.</t>

        <t>When the packet size is known and of fixed size, the packet count
        during a single sub-interval dt multiplied by the total bits in IP
        header and data fields is equal to n0[dtn,dtn+1].</t>

        <t>Anticipating a Sample of Singletons, the number of sub-intervals
        with duration dt MUST be set to a natural number m, so that T+I = T +
        m*dt with dtn+1 - dtn = dt for 1 &lt;= n &lt;= m.</t>

        <t>Parameter PM represents other performance metrics (see Section 5.4
        below); their measurement results SHALL be collected during
        measurement of IP-layer Capacity and associated with the corresponding
        dtn for further evaluation and reporting. Users SHALL specify the
        parameter Tmax as required by each metric's reference definition.</t>

        <t>Mathematically, this definition is represented as (for each n):</t>

        <t><figure title="Equation for IP-Layer Capacity">
            <artwork align="center"><![CDATA[
                        ( n0[dtn,dtn+1] )
        C(T,dt,PM) = -------------------------
                               dt

]]></artwork>
          </figure>and:<list style="symbols">
            <t>n0 is the total number of IP-layer header and payload bits that
            can be transmitted in standard-formed packets <xref
            target="RFC8468"/> from the Src host and correctly received by the
            Dst host during one contiguous sub-interval, dt in length, during
            the interval [T, T+I],</t>

            <t>C(T,dt,PM) the IP-Layer Capacity, corresponds to the value of
            n0 measured in any sub-interval beginning at dtn, divided by the
            length of sub-interval, dt.</t>

            <t>PM represents other performance metrics (see Section 5.4
            below); their measurement results SHALL be collected during
            measurement of IP-layer Capacity and associated with the
            corresponding dtn for further evaluation and reporting.</t>

            <t>all sub-intervals MUST be of equal duration. Choosing dt as
            non-overlapping consecutive time intervals allows for a simple
            implementation.</t>

            <t>The bit rate of the physical interface of the measurement
            devices must be higher than the smallest of the links on the path
            whose C(T,dt,PM) is to be measured (the bottleneck link).</t>
          </list></t>
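
        <t>As an illustration of the equation above for the common case of
        fixed-size packets, the short C sketch below computes n0 as the
        received packet count multiplied by the IP-layer bits per packet,
        then divides by dt. The packet count and payload size are assumed
        example values, not defaults of this memo.</t>

        <t><figure title="Illustrative Computation of C(T,dt,PM)">
            <artwork><![CDATA[
#include <stdio.h>

/* Illustrative computation of C(T,dt,PM) for fixed-size packets. */
int main(void)
{
    /* Example figures (assumptions, not defaults of this memo): */
    const double dt = 1.0;              /* sub-interval, seconds  */
    const long pkts_received = 85000;   /* packets received in dt */
    const long ip_bits_per_pkt = (20 + 8 + 1222) * 8;
                    /* IPv4 + UDP headers + 1222-byte UDP payload */

    double n0 = (double)pkts_received * ip_bits_per_pkt;
    double c_bps = n0 / dt;             /* C(T,dt,PM) in bits/s   */

    printf("C(T,dt,PM) = %.2f Mbps\n", c_bps / 1e6);
    return 0;
}
]]></artwork>
          </figure></t>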

        <t>Measurements according to these definitions SHALL use the UDP
        transport layer. Standard-formed packets are specified in Section 5 of
        <xref target="RFC8468"/>. The measurement SHOULD use a randomized
        source port or equivalent technique, and SHOULD send responses from
        the source address matching the test packet destination address. </t>

        <t>Some compression effects on measurement are discussed in Section 6
        of <xref target="RFC8468"/>.</t>
      </section>

      <section title="Related Round-Trip Delay and One-way Loss Definitions">
        <t>RTD[dtn,dtn+1] is defined as a sample of the <xref
        target="RFC2681"/> Round-trip Delay between the Src host and the Dst
        host over the interval [T,T+I] (that contains equal non-overlapping
        intervals of dt). The "reasonable period of time" in <xref
        target="RFC2681"/> is the parameter Tmax in this memo. The statistics
        used to summarize RTD[dtn,dtn+1] MAY include the minimum, maximum,
        median, and mean, and the range = (maximum - minimum) is referred to
        below in Section 8.1 for load adjustment purposes.</t>

        <t>OWL[dtn,dtn+1] is defined as a sample of the <xref
        target="RFC7680"/> One-way Loss between the Src host and the Dst host
        over the interval [T,T+I] (that contains equal non-overlapping
        intervals of dt). The statistics used to summarize OWL[dtn,dtn+1] MAY
        include the lost packet count and the lost packet ratio.</t>

        <t>Other metrics MAY be measured: one-way reordering, duplication, and
        delay variation.</t>
      </section>

      <section title="Discussion">
        <t>See the corresponding section for Maximum IP-Layer Capacity.</t>
      </section>

      <section title="Reporting the Metric">
        <t>The IP-Layer Capacity SHOULD be reported with at least single
        Megabit resolution, in units of Megabits per second (Mbps), (which is
        1,000,000 bits per second to avoid any confusion).</t>

        <t>The Related Round Trip Delay and/or Loss metric measurements for
        the same Singleton SHALL be reported, also with meaningful resolution
        for the values measured.</t>

        <t>Individual Capacity measurements MAY be reported in a manner
        consistent with the Maximum IP-Layer Capacity, see Section 9.</t>
      </section>
    </section>

    <section title="Maximum IP-Layer Capacity Metric Definitions (Statistic)">
      <t>This section sets requirements for the following components to
      support the Maximum IP-layer Capacity Metric.</t>

      <section title="Formal Name">
        <t>Type-P-One-way-Max-IP-Capacity, or informally called Maximum
        IP-layer Capacity.</t>

        <t>Note that Type-P depends on the chosen method.</t>
      </section>

      <section title="Parameters">
        <t>This section lists the REQUIRED input factors to specify the
        metric, beyond those listed in Section 4.</t>

        <t>No additional Parameters or definitions are needed.</t>
      </section>

      <section title="Metric Definitions">
        <t>This section defines the REQUIRED aspects of the Maximum IP-layer
        Capacity metric (unless otherwise indicated) for measurements between
        specified Source and Destination hosts:</t>

        <t>Define the Maximum IP-layer capacity, Maximum_C(T,I,PM), to be the
        maximum number of IP-layer bits n0[dtn,dtn+1] divided by dt that can
        be transmitted in packets from the Src host and correctly received by
        the Dst host, over all dt length intervals in [T, T+I], and meeting
        the PM criteria. Equivalently, it is the Maximum of a Sample of size
        m of C(T,dt,PM) collected during the interval [T, T+I] and meeting
        the PM criteria.</t>

        <t>The number of sub-intervals with duration dt MUST be set to a
        natural number m, so that T+I = T + m*dt with dtn+1 - dtn = dt for 1
        &lt;= n &lt;= m.</t>

        <t>Parameter PM represents the other performance metrics (see Section
        6.4 below) and their measurement results for the maximum IP-layer
        capacity. At least one target performance threshold (PM criterion)
        MUST be defined. If more than one metric and target performance
        threshold are defined, then the sub-interval with maximum number of
        bits transmitted MUST meet all the target performance thresholds.
        Users SHALL specify the parameter Tmax as required by each metric's
        reference definition.</t>

        <t>Mathematically, this definition can be represented as:</t>

        <t><figure title="Equation for Maximum Capacity">
            <artwork align="center"><![CDATA[
                        max  ( n0[dtn,dtn+1] )
                       [T,T+I]
  Maximum_C(T,I,PM) = -------------------------
                                 dt
 where:
    T                                      T+I
    _________________________________________
    |   |   |   |   |   |   |   |   |   |   |
dtn=1   2   3   4   5   6   7   8   9  10  n+1
                                       n=m

]]></artwork>
          </figure>and:<list style="symbols">
            <t>n0 is the total number of IP-layer header and payload bits that
            can be transmitted in standard-formed packets from the Src host
            and correctly received by the Dst host during one contiguous
            sub-interval, dt in length, during the interval [T, T+I],</t>

            <t>Maximum_C(T,I,PM) the Maximum IP-Layer Capacity, corresponds to
            the maximum value of n0 measured in any sub-interval beginning at
            dtn, divided by the constant length of all sub-intervals, dt.</t>

            <t>PM represents the other performance metrics (see Section 6.4)
            and their measurement results for the maximum IP-layer capacity.
            At least one target performance threshold (PM criterion) MUST be
            defined.</t>

            <t>all sub-intervals MUST be of equal duration. Choosing dt as
            non-overlapping consecutive time intervals allows for a simple
            implementation.</t>

            <t>The bit rate of the physical interface of the measurement
            systems must be higher than the smallest of the links on the
            path whose Maximum_C(T,I,PM) is to be measured (the bottleneck
            link).</t>
          </list></t>

        <t>In this definition, the m sub-intervals can be viewed as trials
        when the Src host varies the transmitted packet rate, searching for
        the maximum n0 that meets the PM criteria measured at the Dst host in
        a test of duration, I. When the transmitted packet rate is held
        constant at the Src host, the m sub-intervals may also be viewed as
        trials to evaluate the stability of n0 and metric(s) in the PM list
        over all dt-length intervals in I.</t>
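
        <t>As an illustration only (assuming zero packet loss is the single
        PM criterion), the C sketch below takes per-sub-interval bit counts
        n0[dtn,dtn+1] and loss counts for the m sub-intervals, and returns
        the maximum C over the sub-intervals that meet the criterion. All
        numeric values are assumed examples.</t>

        <t><figure title="Illustrative Computation of Maximum_C(T,I,PM)">
            <artwork><![CDATA[
#include <stdio.h>

#define M 10    /* number of sub-intervals, m (I = 10 s, dt = 1 s) */

/* Illustrative Maximum_C(T,I,PM): maximum n0/dt over the
 * sub-intervals that meet the PM criteria (here, one criterion:
 * zero packet loss).                                             */
double maximum_c(const double n0[M], const long lost[M], double dt)
{
    double max_bps = 0.0;
    for (int n = 0; n < M; n++) {
        if (lost[n] != 0)        /* PM criterion not met: skip   */
            continue;
        double c = n0[n] / dt;   /* C for this sub-interval      */
        if (c > max_bps)
            max_bps = c;
    }
    return max_bps;
}

int main(void)
{
    /* Example per-sub-interval bit counts and losses (assumed). */
    double n0[M] = {6.0e8, 8.0e8, 9.2e8, 9.6e8, 9.7e8,
                    9.7e8, 9.65e8, 9.6e8, 9.7e8, 9.68e8};
    long lost[M] = {0, 0, 0, 0, 0, 2, 0, 0, 0, 0};

    printf("Maximum_C = %.2f Mbps\n", maximum_c(n0, lost, 1.0) / 1e6);
    return 0;
}
]]></artwork>
          </figure></t>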

        <t>Measurements according to these definitions SHALL use the UDP
        transport layer.</t>
      </section>

      <section title="Related Round-Trip Delay and One-way Loss Definitions">
        <t>RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4. Here,
        the test intervals are increased to match the capacity samples,
        RTD[T,I] and OWL[T,I].</t>

        <t>The interval dtn,dtn+1 where Maximum_C[T,I,PM] occurs is the
        reporting sub-interval within RTD[T,I] and OWL[T,I].</t>

        <t>Other metrics MAY be measured: one-way reordering, duplication, and
        delay variation.</t>
      </section>

      <section title="Discussion">
        <t>If traffic conditioning (e.g., shaping, policing) applies along a
        path for which Maximum_C(T,I,PM) is to be determined, different values
        for dt SHOULD be picked and measurements be executed during multiple
        intervals [T, T+I]. Each duration dt SHOULD be chosen so that it is an
        integer multiple of increasing values k times serialization delay of a
        path MTU at the physical interface speed where traffic conditioning is
        expected. This should avoid taking configured burst tolerance
        singletons as a valid Maximum_C(T,I,PM) result.</t>
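
        <t>A hypothetical worked example of this guidance: with a 1500-byte
        path MTU on a 1 Gbps interface where traffic conditioning is
        expected, the serialization delay is 1500 * 8 / 10^9 = 12
        microseconds, and dt would be chosen as k times that delay for
        increasing values of k. The C fragment below only illustrates the
        arithmetic; the MTU and interface speed are assumptions for the
        example.</t>

        <t><figure title="Illustrative Choice of dt from Serialization Delay">
            <artwork><![CDATA[
#include <stdio.h>

/* Illustrative choice of dt as k times the path-MTU serialization
 * delay at the interface speed where traffic conditioning is
 * expected. The MTU and interface speed are assumed examples.    */
int main(void)
{
    const double mtu_bits = 1500.0 * 8.0;         /* path MTU, bits */
    const double line_bps = 1e9;                  /* 1 Gbps         */
    const double ser_delay = mtu_bits / line_bps; /* 12 microsec    */

    for (long k = 1000; k <= 100000; k *= 10)
        printf("k = %6ld -> dt = %.6f s\n", k, (double)k * ser_delay);
    return 0;
}
]]></artwork>
          </figure></t>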

        <t>A Maximum_C(T,I,PM) result obtained without any indication of
        bottleneck congestion (such as increasing latency, packet loss, or
        ECN marks) during the measurement interval I is likely to
        underestimate the actual Maximum_C(T,I,PM).</t>
      </section>

      <section title="Reporting the Metric">
        <t>The IP-Layer Capacity SHOULD be reported with at least single
        Megabit resolution, in units of Megabits per second (Mbps) (which is
        1,000,000 bits per second to avoid any confusion).</t>

        <t>The Related Round Trip Delay and/or Loss metric measurements for
        the same Singleton SHALL be reported, also with meaningful resolution
        for the values measured.</t>

        <t>When there are demonstrated and repeatable Capacity modes in the
        Sample, then the Maximum IP-Layer Capacity SHALL be reported for each
        mode, along with the relative time from the beginning of the stream
        that the mode was observed to be present. Bimodal Maxima have been
        observed with some services, sometimes called a "turbo mode",
        intended to deliver short transfers more quickly or reduce the initial
        buffering time for some video streams. Note that modes lasting less
        than dt duration will not be detected.</t>

        <t>Some transmission technologies have multiple methods of operation
        that may be activated when channel conditions degrade or improve, and
        these transmission methods may determine the Maximum IP-Layer
        Capacity. Examples include line-of-sight microwave modulator
        constellations, or cellular modem technologies where the changes may
        be initiated by a user moving from one coverage area to another.
        Operation in the different transmission methods may be observed over
        time, but the modes of Maximum IP-Layer Capacity will not be activated
        deterministically as with the "turbo mode" described in the paragraph
        above.</t>
      </section>
    </section>

    <section title="IP-Layer Sender Bit Rate Singleton Metric Definitions">
      <t>This section sets requirements for the following components to
      support the IP-layer Sender Bitrate Metric. This metric helps to check
      that the sender actually generated the desired rates during a test; the
      measurement takes place at the interface between the Src host and the
      network path (or as close as practical within the Src host). It is not
      a metric for path
      performance.</t>

      <section title="Formal Name">
        <t>Type-P-IP-Sender-Bit-Rate, or informally called IP-layer Sender
        Bitrate.</t>

        <t>Note that Type-P depends on the chosen method.</t>
      </section>

      <section title="Parameters">
        <t>This section lists the REQUIRED input factors to specify the
        metric, beyond those listed in Section 4.</t>

        <t><list style="symbols">
            <t>S, the duration of the measurement interval at the Source</t>

            <t>st, the nominal duration of N sub-intervals in S (default st =
            0.05 seconds)</t>

            <t>stn, the beginning boundary of a specific sub-interval, n, one
            of N sub-intervals in S</t>
          </list></t>

        <t>S SHALL be longer than I, primarily to account for on-demand
        activation of the path, any required testing preamble, and the delay
        of the path.</t>

        <t>st SHOULD be much smaller than the sub-interval dt and on the same
        order as FT; otherwise, the rate measurement will span many rate
        adjustments and include more time smoothing, thus missing the maximum
        rate. The st parameter is not relevant when the Source is
        transmitting at a fixed rate throughout S.</t>
      </section>

      <section title="Metric Definition">
        <t>This section defines the REQUIRED aspects of the IP-layer Sender
        Bitrate metric (unless otherwise indicated) for measurements at the
        specified Source on packets addressed for the intended Destination
        host and matching the required Type-P:</t>

        <t>Define the IP-layer Sender Bit Rate, B(S,st), to be the number of
        IP-layer bits (including header and data fields) that are transmitted
        from the Source with address pair Src and Dst during one contiguous
        sub-interval, st, during the test interval S (where S SHALL be longer
        than I), and where the fixed-size packet count during that single
        sub-interval st also provides the number of IP-layer bits in any
        interval, [stn,stn+1].</t>

        <t>Measurements according to these definitions SHALL use the UDP
        transport layer. Any feedback from Dst host to Src host received by
        Src host during an interval [stn,stn+1] SHOULD NOT result in an
        adaptation of the Src host traffic conditioning during this interval
        (rate adjustment occurs on st interval boundaries).</t>
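
        <t>As an illustration, the sender bit rate calculation mirrors the
        receiver-side capacity calculation but counts IP-layer bits as they
        leave the Src host in each st sub-interval. The C sketch below uses
        assumed example values (a fixed packet size and a per-st packet
        count), not defaults of this memo.</t>

        <t><figure title="Illustrative Computation of B(S,st)">
            <artwork><![CDATA[
#include <stdio.h>

/* Illustrative B(S,st): IP-layer bits sent in one st sub-interval,
 * divided by st, for fixed-size packets. Values are assumptions.  */
int main(void)
{
    const double st = 0.05;           /* default st, seconds       */
    const long pkts_sent = 2500;      /* packets sent in this st   */
    const long ip_bits_per_pkt = (20 + 8 + 1222) * 8;

    double b_bps = (double)pkts_sent * ip_bits_per_pkt / st;
    printf("B(S,st) = %.2f Mbps\n", b_bps / 1e6);  /* 500 Mbps here */
    return 0;
}
]]></artwork>
          </figure></t>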
      </section>

      <section title="Discussion">
        <t>Both the Sender and Receiver (or source and destination) bit rates
        SHOULD be assessed as part of an IP-layer Capacity measurement.
        Otherwise, an unexpected sending rate limitation could produce an
        erroneous Maximum IP-Layer Capacity measurement.</t>
      </section>

      <section title="Reporting the Metric">
        <t>The IP-Layer Sender Bit Rate SHALL be reported with meaningful
        resolution, in units of Megabits per second (which is 1,000,000 bits
        per second to avoid any confusion).</t>

        <t>Individual IP-Layer Sender Bit Rate measurements are discussed
        further in Section 9.</t>
      </section>
    </section>

    <section title="Method of Measurement">
      <t>The architecture of the method REQUIRES two cooperating hosts
      operating in the roles of Src (test packet sender) and Dst (receiver),
      with a measured path and return path between them.</t>

      <t>The duration of a test, parameter I, MUST be constrained in a
      production network, since this is an active test method and it will
      likely cause congestion on the Src to Dst host path during a test.</t>

      <section title="Load Rate Adjustment Algorithm">
        <t>A table SHALL be pre-built defining all the offered load rates that
        will be supported (R1 through Rn, in ascending order, corresponding to
        indexed rows in the table). It is RECOMMENDED that rates begin with
        0.5 Mbps at index zero, use 1 Mbps at index one, and then continue in
        1 Mbps increments to 1 Gbps. Above 1 Gbps, and up to 10 Gbps, it is
        RECOMMENDED that 100 Mbps increments be used. Above 10 Gbps,
        increments of 1 Gbps are RECOMMENDED. A higher starting rate might be
        configured when the test operator is certain that the Maximum is
        well-above the starting rate and factors such as test duration and
        total test traffic play an important role.</t>

        <t>Each rate is defined as datagrams of size ss, sent as a burst of
        count cc, each time interval tt (default for tt is 1ms, a likely
        system tick-interval). While it is advantageous to use datagrams of as
        large a size as possible, it may be prudent to use a slightly smaller
        maximum that allows for secondary protocol headers and/or tunneling
        without resulting in IP-layer fragmentation. Selection of a new rate
        is indicated by a calculation on the current row, Rx. For example:</t>

        <t>"Rx+1": the sender uses the next higher rate in the table.</t>

        <t>"Rx-10": the sender uses the rate 10 rows lower in the table.</t>
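
        <t>A hypothetical C sketch of building such a table appears below,
        following the RECOMMENDED spacing (0.5 Mbps at index 0, then 1 Mbps
        steps up to 1 Gbps, then 100 Mbps steps up to 10 Gbps). Row selection
        such as Rx+1 or Rx-10 then becomes simple index arithmetic, clamped
        to the table bounds; the names used are illustrative only.</t>

        <t><figure title="Illustrative Sketch of the Offered Load Rate Table">
            <artwork><![CDATA[
#include <stdio.h>

#define MAX_ROWS 1091  /* index 0 plus 1090 steps up to 10 Gbps */

static double rate_mbps[MAX_ROWS]; /* offered load per table row */

static void build_rate_table(void)
{
    int i = 0;
    rate_mbps[i++] = 0.5;             /* index 0                  */
    for (double r = 1.0; r <= 1000.0; r += 1.0)
        rate_mbps[i++] = r;           /* 1 Mbps steps to 1 Gbps   */
    for (double r = 1100.0; r <= 10000.0; r += 100.0)
        rate_mbps[i++] = r;           /* 100 Mbps steps to 10 Gbps */
}

/* Move the current row index by 'steps' (e.g., +1, +10, -30),
 * staying within the table.                                     */
static int adjust_row(int rx, int steps)
{
    int next = rx + steps;
    if (next < 0) next = 0;
    if (next >= MAX_ROWS) next = MAX_ROWS - 1;
    return next;
}

int main(void)
{
    build_rate_table();
    int rx = 1;                  /* start at R1 = 1 Mbps          */
    rx = adjust_row(rx, +10);    /* fast ramp example: Rx+10      */
    printf("row %d -> %.1f Mbps\n", rx, rate_mbps[rx]);
    return 0;
}
]]></artwork>
          </figure></t>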

        <t>At the beginning of a test, the sender begins sending at rate R1
        and the receiver starts a feedback timer of duration FT (while
        awaiting inbound datagrams). As datagrams are received they are
        checked for sequence number anomalies (loss, out-of-order,
        duplication, etc.) and the delay range is measured (one-way or
        round-trip). This information is accumulated until the feedback timer
        FT expires and a status feedback message is sent from the receiver
        back to the sender, to communicate this information. The accumulated
        statistics are then reset by the receiver for the next feedback
        interval. As feedback messages are received back at the sender, they
        are evaluated to determine how to adjust the current offered load rate
        (Rx).</t>
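
        <t>A condensed C sketch of this receiver behavior follows; the
        structure and function names are hypothetical, and the actual status
        feedback message format is defined by the test protocol, not by this
        sketch. Here, printf() stands in for sending the feedback
        message.</t>

        <t><figure title="Illustrative Receiver Accumulation per FT Interval">
            <artwork><![CDATA[
#include <stdio.h>

/* Hypothetical per-FT-interval statistics kept at the receiver. */
struct ft_stats {
    long   seq_anomalies;  /* loss, reordering, duplication, etc. */
    double delay_min;      /* minimum delay this interval (s)     */
    double delay_max;      /* maximum delay this interval (s)     */
    long   bits_received;  /* IP-layer bits this interval         */
};

void reset_stats(struct ft_stats *s)
{
    s->seq_anomalies = 0;
    s->delay_min = 1e9;    /* effectively "no packet seen yet"    */
    s->delay_max = 0.0;
    s->bits_received = 0;
}

/* Called for each load packet received before the FT timer fires. */
void on_load_packet(struct ft_stats *s, int seq_anomaly,
                    double delay, long ip_bits)
{
    if (seq_anomaly) s->seq_anomalies++;
    if (delay < s->delay_min) s->delay_min = delay;
    if (delay > s->delay_max) s->delay_max = delay;
    s->bits_received += ip_bits;
}

/* Called when the FT timer (default 50 ms) expires: report the
 * accumulated statistics to the sender, then reset them.        */
void on_ft_timer(struct ft_stats *s)
{
    double range = s->delay_max - s->delay_min;   /* delay range */
    printf("feedback: anomalies=%ld range=%.3fs bits=%ld\n",
           s->seq_anomalies, range, s->bits_received);
    reset_stats(s);
}
]]></artwork>
          </figure></t>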

        <t>If the feedback indicates that no sequence number anomalies were
        detected AND the delay range was below the lower threshold, the
        offered load rate is increased. If congestion has not been confirmed
        up to this point, the offered load rate is increased by more than one
        rate (e.g., Rx+10). This allows the offered load to quickly reach a
        near-maximum rate. Conversely, if congestion has been previously
        confirmed, the offered load rate is only increased by one (Rx+1).
        However, if a rate threshold between high and very high sending rates
        (such as 1Gbps) is exceeded, the offered load rate is only increased
        by one (Rx+1) above the rate threshold in any congestion state.</t>

        <t>If the feedback indicates that sequence number anomalies were
        detected OR the delay range was above the upper threshold, the offered
        load rate is decreased. The RECOMMENDED values are 0 for sequence
        number gaps and 30-90 ms for lower and upper delay thresholds,
        respectively. Also, if congestion is now confirmed for the first time
        by the current feedback message being processed, then the offered load
        rate is decreased by more than one rate (e.g., Rx-30). This one-time
        reduction is intended to compensate for the fast initial ramp-up. In
        all other cases, the offered load rate is only decreased by one
        (Rx-1).</t>

        <t>If the feedback indicates that there were no sequence number
        anomalies AND the delay range was above the lower threshold, but below
        the upper threshold, the offered load rate is not changed. This allows
        time for recent changes in the offered load rate to stabilize, and the
        feedback to represent current conditions more accurately.</t>

        <t>Lastly, the method for inferring congestion is that there were
        sequence number anomalies AND/OR the delay range was above the upper
        threshold for two consecutive feedback intervals. The algorithm
        described above is also illustrated in ITU-T Rec. Y.1540, 2020
        version <xref target="Y.1540"/>, in Annex B, and implemented in the
        Appendix on Load Rate Adjustment Pseudo Code in this memo.</t>
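
        <t>The decision logic of the preceding paragraphs can be condensed
        into a short C sketch, shown below with the default thresholds and
        step sizes from the table later in this section. The normative
        description is the pseudo-code in Appendix A; the function and
        variable names here are illustrative only, and the caller is assumed
        to clamp the returned row to the table bounds.</t>

        <t><figure title="Illustrative Sketch of the Rate Adjustment Decision">
            <artwork><![CDATA[
/* Illustrative condensation of the per-feedback-message decision;
 * the normative description is the pseudo-code in Appendix A.    */

#define LOW_DELAY_THRESH   0.030   /* 30 ms lower threshold      */
#define HIGH_DELAY_THRESH  0.090   /* 90 ms upper threshold      */
#define SEQ_ERR_THRESH     0       /* sequence error threshold   */
#define FAST_STEP          10      /* fast mode increase (rows)  */
#define FAST_DECREASE      (3 * FAST_STEP)
#define HIGH_RATE_ROW      1000    /* row at ~1 Gbps threshold   */

static int consec_errored = 0;       /* consecutive errored msgs */
static int congestion_confirmed = 0; /* set after 2 consecutive  */

/* Returns the new table row, given the current row rx and the
 * statistics carried by one status feedback message. The caller
 * clamps the result to the table bounds.                        */
int adjust_rate(int rx, long seq_errors, double delay_range)
{
    int errored = (seq_errors > SEQ_ERR_THRESH ||
                   delay_range > HIGH_DELAY_THRESH);

    if (errored) {
        consec_errored++;
        if (consec_errored >= 2 && !congestion_confirmed) {
            congestion_confirmed = 1;   /* congestion confirmed  */
            return rx - FAST_DECREASE;  /* one-time larger cut   */
        }
        return rx - 1;                  /* normal decrease       */
    }
    consec_errored = 0;

    if (delay_range < LOW_DELAY_THRESH) {
        if (!congestion_confirmed && rx < HIGH_RATE_ROW)
            return rx + FAST_STEP;      /* fast ramp upward      */
        return rx + 1;                  /* cautious increase     */
    }
    return rx;      /* between thresholds: hold current rate     */
}
]]></artwork>
          </figure></t>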

        <t>The Load Rate Adjustment Algorithm MUST include timers that stop
        the test when received packet streams cease unexpectedly. The timeout
        thresholds are provided in the table below, along with values for all
        other parameters and variables described in this section. Operation of
        non-obvious parameters is described below:<list style="hanging">
            <t hangText="load packet timeout">Operation: The load packet
            timeout SHALL be reset to the configured value each time a load
            packet is received. If the timeout expires, the receiver SHALL be
            closed and no further feedback sent.</t>

            <t hangText="feedback message timeout">Operation: The feedback
            message timeout SHALL be reset to the configured value each time a
            feedback message is received. If the timeout expires, the sender
            SHALL be closed and no further Load packets sent.</t>
          </list></t>

        <t/>

        <texttable style="all"
                   title="Parameters for Load Rate Adjustment Algorithm">
          <ttcol>Parameter</ttcol>

          <ttcol>Default</ttcol>

          <ttcol>Tested Range or values</ttcol>

          <ttcol width="30">Expected Safe Range (not entirely tested, other
          values NOT RECOMMENDED)</ttcol>

          <c>FT, feedback time interval</c>

          <c>50ms</c>

          <c>20ms, 50ms, 100ms</c>

          <c>20ms &lt;= FT &lt;= 250ms Larger values may slow the rate
          increase and fail to find the max</c>

          <c>Feedback message timeout (stop test)</c>

          <c>L*FT, L=20 (1sec with FT=50ms)</c>

          <c>L=100 with FT=50ms (5sec)</c>

          <c>0.5sec &lt;= L*FT &lt;= 30sec Upper limit for very unreliable
          test paths only</c>

          <c>load packet timeout (stop test)</c>

          <c>1sec</c>

          <c>5sec</c>

          <c>0.250sec - 30sec Upper limit for very unreliable test paths
          only</c>

          <c>table index 0</c>

          <c>0.5Mbps</c>

          <c>0.5Mbps</c>

          <c>when testing &lt;=10Gbps</c>

          <c>table index 1</c>

          <c>1Mbps</c>

          <c>1Mbps</c>

          <c>when testing &lt;=10Gbps</c>

          <c>table index (step) size</c>

          <c>1Mbps</c>

          <c>1Mbps&lt;=rate&lt;= 1Gbps</c>

          <c>same as tested</c>

          <c>table index (step) size, rate&gt;1Gbps</c>

          <c>100Mbps</c>

          <c>1Gbps&lt;=rate&lt;= 10Gbps</c>

          <c>same as tested</c>

          <c>table index (step) size, rate&gt;10Gbps</c>

          <c>1Gbps</c>

          <c>untested</c>

          <c>&gt;10Gbps</c>

          <c>ss, UDP payload size, bytes</c>

          <c>none</c>

          <c>&lt;=1222</c>

          <c>Recommend max at largest value that avoids fragmentation; use of
          too-small payload size might result in unexpected sender
          limitations.</c>

          <c>cc, burst count</c>

          <c>none</c>

          <c>1&lt;=cc&lt;= 100</c>

          <c>same as tested. Vary cc as needed to create the desired maximum
          sending rate. Sender buffer size may limit cc in implementation.</c>

          <c>tt, burst interval</c>

          <c>100microsec</c>

          <c>100microsec, 1msec</c>

          <c>available range of "tick" values (HZ param)</c>

          <c>low delay range threshold</c>

          <c>30ms</c>

          <c>5ms, 30ms</c>

          <c>same as tested</c>

          <c>high delay range threshold</c>

          <c>90ms</c>

          <c>10ms, 90ms</c>

          <c>same as tested</c>

          <c>sequence error threshold</c>

          <c>0</c>

          <c>0, 100</c>

          <c>same as tested</c>

          <c>consecutive errored status report threshold</c>

          <c>2</c>

          <c>2</c>

          <c>Use values &gt;1 to avoid misinterpreting transient loss</c>

          <c>Fast mode increase, in table index steps</c>

          <c>10</c>

          <c>10</c>

          <c>2 &lt;= steps &lt;= 30</c>

          <c>Fast mode decrease, in table index steps</c>

          <c>3 * Fast mode increase</c>

          <c>3 * Fast mode increase</c>

          <c>same as tested</c>
        </texttable>

        <t>As a consequence of default parameterization, the number of table
        steps in total for rates up to 10 Gbps is 1090 (excluding index 0):
        1000 steps of 1 Mbps up to 1 Gbps, plus 90 steps of 100 Mbps from
        1 Gbps to 10 Gbps.</t>

        <t>A related sender backoff response to network conditions occurs when
        one or more status feedback messages fail to arrive at the sender.
        </t>

        <t>If no status feedback messages arrive at the sender for an
        interval greater than the Lost Status Backoff timeout = w*FT,
        beginning when the last message (of any type) was successfully
        received at the sender:</t>

        <t>Then the offered load SHALL be decreased, following the same
        process as when the feedback indicates presence of one or more
        sequence number anomalies OR the delay range was above the upper
        threshold (as described above), with the same Load Rate Adjustment
        algorithm variables in their current state. This means that rate
        reduction and congestion confirmation can result from a three-way OR
        that includes lost status feedback messages, sequence errors, or delay
        variation.</t>

        <t>The RECOMMENDED initial value for w is 3, taking Round Trip Time
        (RTT) less than FT into account. A test with RTT longer than FT is a
        valid reason to increase the initial value of w appropriately.
        Variable w SHALL be incremented by 1 whenever the Lost Status Backoff
        timeout is exceeded. So with FT = 50ms, a status feedback message loss
        would be declared at 150ms following a successful message, again at
        50ms after that (200ms total), and so on. </t>

        <t>Also, if congestion is now confirmed for the first time by a Lost
        Status Backoff timeout, then the offered load rate is decreased by
        more than one rate (e.g., Rx-30). This one-time reduction is intended
        to compensate for the fast initial ramp-up. In all other cases, the
        offered load rate is only decreased by one (Rx-1). </t>
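
        <t>The timing can be illustrated with a short C fragment (variable
        names are hypothetical): with the RECOMMENDED initial w = 3 and FT =
        50ms, loss of status feedback is declared 150ms after the last
        successfully received message; w is then incremented, so continued
        silence leads to further declarations at 200ms, 250ms, and so
        on.</t>

        <t><figure title="Illustrative Lost Status Backoff Schedule">
            <artwork><![CDATA[
#include <stdio.h>

/* Illustrative Lost Status Backoff schedule: the timeout is w*FT,
 * measured from the last successfully received message, and w is
 * incremented each time the timeout is exceeded.                 */
int main(void)
{
    const double ft = 0.050;   /* FT = 50 ms (default)            */
    int w = 3;                 /* RECOMMENDED initial value        */

    for (int i = 0; i < 4; i++) {
        printf("loss declared %.0f ms after the last message\n",
               w * ft * 1000.0);
        w++;                   /* back off further on silence      */
    }
    return 0;
}
]]></artwork>
          </figure></t>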

        <t>Appendix B discusses compliance with the applicable mandatory
        requirements of <xref target="RFC8085"/>, consistent with the goals of
        the IP-Layer Capacity Metric and Method, including the load rate
        adjustment algorithm described in this section.</t>
      </section>

      <section title="Measurement Qualification or Verification">
        <t>It is of course necessary to calibrate the equipment performing the
        IP-layer Capacity measurement, to ensure that the expected capacity
        can be measured accurately, and that equipment choices (processing
        speed, interface bandwidth, etc.) are suitably matched to the
        measurement range.</t>

        <t>When assessing a Maximum rate as the metric specifies, artificially
        high (optimistic) values might be measured until some buffer on the
        path is filled. Other causes include bursts of back-to-back packets
        with idle intervals delivered by a path, while the measurement
        interval (dt) is small and aligned with the bursts. The artificial
        values might result in an unsustainable Maximum Capacity observed
        when the method of measurement is searching for the Maximum, and
        that must be avoided. This situation is different from the bi-modal service
        rates (discussed under Reporting), which are characterized by a
        multi-second duration (much longer than the measured RTT) and
        repeatable behavior.</t>

        <t>There are many ways that the Method of Measurement could handle
        this false-max issue. The default value for measurement of singletons
        (dt = 1 second) has proven to be of practical value during tests of
        this method, allows the bimodal service rates to be characterized, and
        it has an obvious alignment with the reporting units (Mbps).</t>

        <t>Another approach comes from Section 24 of <xref
        target="RFC2544"/> and its discussion of Trial duration, where
        relatively short trials conducted as part of the search are followed
        by longer trials to make the final determination. In the production
        network, measurements of singletons and samples (the terms for trials
        and tests of Lab Benchmarking) must be limited in duration because
        they may be service-affecting. But there is sufficient value in
        repeating a sample with a fixed sending rate determined by the
        previous search for the Max IP-layer Capacity, to qualify the result
        in terms of the other performance metrics measured at the same
        time.</t>

        <t>A qualification measurement for the search result is a subsequent
        measurement, sending at a fixed 99.x % of the Max IP-layer Capacity
        for I, or an indefinite period. The same Max Capacity Metric is
        applied, and the Qualification for the result is a sample without
        packet loss or a growing minimum delay trend in subsequent singletons
        (or each dt of the measurement interval, I). Samples exhibiting losses
        or increasing queue occupation require a repeated search and/or test
        at reduced fixed sender rate for qualification.</t>

        <t>Here, as with any Active Capacity test, the test duration must be
        kept short. 10 second tests for each direction of transmission are
        common today. The default measurement interval specified here is I =
        10 seconds. The combination of a fast and congestion-aware search
        method and user-network coordination make a unique contribution to
        production testing. The Max IP Capacity metric and method for
        assessing performance is very different from classic <xref
        target="RFC2544"/> Throughput metric and methods: it uses
        near-real-time load adjustments that are sensitive to loss and delay,
        similar to other congestion control algorithms used on the Internet
        every day, along with limited duration. On the other hand, <xref
        target="RFC2544"/> Throughput measurements can produce sustained
        overload conditions for extended periods of time. Individual trials in
        a test governed by a binary search can last 60 seconds for each step,
        and the final confirmation trial may be even longer. This is very
        different from "normal" traffic levels, but overload conditions are
        not a concern in the isolated test environment. The concerns raised in
        <xref target="RFC6815"/> were that <xref target="RFC2544"/> methods
        would be let loose on production networks, and instead the authors
        challenged the standards community to develop metrics and methods like
        those described in this memo.</t>
      </section>

      <section title="Measurement Considerations">
        <t>In general, the wide-spread measurements that this memo encourages
        will encounter wide-spread behaviors. The bimodal IP Capacity
        behaviors already discussed in Section 6.6 are good examples.</t>

        <t>In general, it is RECOMMENDED to locate test endpoints as close to
        the intended measured link(s) as practical (this is not always
        possible for reasons of scale; there is a limit on the number of test
        endpoints arising from many considerations, management and measurement
        traffic for example). The testing operator MUST set a value for the
        MaxHops parameter, based on the expected path length. This parameter
        can keep measurement traffic from straying too far beyond the intended
        path.</t>

        <t>The path measured may be stateful based on many factors, and the
        Parameter "Time of day" when a test starts may not be enough
        information. Repeatable testing may require the time from the
        beginning of a measured flow, and how the flow is constructed
        including how much traffic has already been sent on that flow when a
        state-change is observed, because the state-change may be based on
        time or bytes sent or both. Both load packets and status feedback
        messages MUST contain sequence numbers, which helps with measurements
        based on those packets.</t>

        <t>Many different traffic shapers and on-demand access technologies
        may be encountered, as anticipated in <xref target="RFC7312"/>, and
        play a key role in measurement results. Methods MUST be prepared to
        provide a short preamble transmission to activate on-demand access,
        and to discard the preamble from subsequent test results.</t>

        <t>The following conditions might be encountered during measurement,
        where packet losses may occur independently of the measurement
        sending rate:</t>

        <t><list style="numbers">
            <t>Congestion of an interconnection or backbone interface may
            appear as packet losses distributed over time in the test stream,
            due to much higher rate interfaces in the backbone.</t>

            <t>Packet loss due to use of Random Early Detection (RED) or other
            active queue management may or may not affect the measurement flow
            if competing background traffic (other flows) is simultaneously
            present.</t>

            <t>There may be only small delay variation independent of sending
            rate under these conditions, too.</t>

            <t>Persistent competing traffic on measurement paths that include
            shared transmission media may cause random packet losses in the
            test stream.</t>
          </list>It is possible to mitigate these conditions using the
        flexibility of the load rate adjustment algorithm described in Section
        8.1 above (tuning specific parameters).</t>

        <t>If the measurement flow burst duration happens to be on the order
        of or smaller than the burst size of a shaper or a policer in the
        path, then the line rate might be measured rather than the bandwidth
        limit imposed by the shaper or policer. If this condition is
        suspected, alternate configurations SHOULD be used.</t>

        <t>In general, results depend on the sending stream characteristics;
        the measurement community has known this for a long time, and needs to
        keep it front of mind. Although the default is a single flow (F=1) for
        testing, use of multiple flows may be advantageous for the following
        reasons:</t>

        <t><list style="numbers">
            <t>the test hosts may be able to create higher load than with a
            single flow, or parallel test hosts may be used to generate 1 flow
            each.</t>

            <t>there may be link aggregation present (flow-based load
            balancing) and multiple flows are needed to occupy each member of
            the aggregate.</t>

            <t>access policies may limit the IP-Layer Capacity depending on
            the Type-P of packets, possibly reserving capacity for various
            stream types.</t>
          </list>Each flow would be controlled using its own implementation of
        the Load Adjustment (Search) Algorithm.</t>

        <t>As testing continues, implementers should expect some evolution in
        the methods. The ITU-T has published a Supplement (60) to the Y-series
        of Recommendations, "Interpreting ITU-T Y.1540 maximum IP-layer
        capacity measurements", <xref target="Y.Sup60"/>, which is the result
        of continued testing with the metric, and those results have improved
        the method described here.</t>
      </section>

      <section title="Running Code">
        <t>This section is for the benefit of the Document Shepherd's form,
        and will be deleted prior to final review.</t>

        <t>Much of the development of the method and comparisons with existing
        methods conducted at IETF Hackathons and elsewhere have been based on
        the example udpst Linux measurement tool (which is a working reference
        for further development) <xref target="udpst"/>. The current
        project:<list style="symbols">
            <t>is a utility that can function as a client or server daemon</t>

            <t>requires a successful client-initiated setup handshake between
            cooperating hosts and allows firewalls to control inbound
            unsolicited UDP traffic which either goes to a control port [expected and
            w/authentication] or to ephemeral ports that are only created as
            needed. Firewalls protecting each host can both continue to do
            their job normally. This aspect is similar to many other test
            utilities available.</t>

            <t>is written in C, and built with gcc (release 9.3) and its
            standard run-time libraries</t>

            <t>allows configuration of most of the parameters described in
            Sections 4 and 7.</t>

            <t>supports IPv4 and IPv6 address families.</t>

            <t>supports IP-layer packet marking.</t>
          </list></t>

        <t/>
      </section>
    </section>

    <section title="Reporting Formats">
      <t>The singleton IP-Layer Capacity results SHOULD be accompanied by the
      context under which they were measured.<list style="symbols">
          <t>timestamp (especially the time when the maximum was observed in
          dtn)</t>

          <t>source and destination (by IP or other meaningful ID)</t>

          <t>other inner parameters of the test case (Section 4)</t>

          <t>outer parameters, such as "test conducted in motion" or other
          factors belonging to the context of the measurement</t>

          <t>result validity (indicating cases where the process was somehow
          interrupted or the attempt failed)</t>

          <t>a field where unusual circumstances could be documented, and
          another one for "ignore/mask out" purposes in further processing</t>
        </list></t>

      <t>The Maximum IP-Layer Capacity results SHOULD be reported in the
      format of a table with a row for each of the test Phases and Number of
      Flows. There SHOULD be columns for the phases with number of flows, and
      for the resultant Maximum IP-Layer Capacity results for the aggregate
      and each flow tested.</t>

      <t>As mentioned in Section 6.6, bi-modal (or multi-modal) maxima SHALL
      be reported for each mode separately.</t>

      <texttable style="all" title="Maximum IP-layer Capacity Results">
        <ttcol>Phase, # Flows</ttcol>

        <ttcol>Max IP-Layer Capacity, Mbps</ttcol>

        <ttcol>Loss Ratio</ttcol>

        <ttcol>RTT min, max, msec</ttcol>

        <c>Search,1</c>

        <c>967.31</c>

        <c>0.0002</c>

        <c>30, 58</c>

        <c>Verify,1</c>

        <c>966.00</c>

        <c>0.0000</c>

        <c>30, 38</c>
      </texttable>

      <t>Static and configuration parameters:</t>

      <t>The sub-interval time, dt, MUST accompany a report of Maximum
      IP-Layer Capacity results, along with the remaining Parameters from
      Section 4, General Parameters.</t>

      <t>The PM list metrics corresponding to the sub-interval where the
      Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer
      Capacity results, for each test phase.</t>

      <t>The IP-Layer Sender Bit rate results SHOULD be reported in the format
      of a table with a row for each of the test Phases, sub-intervals (st)
      and Number of Flows. There SHOULD be columns for the phases with number
      of flows, and for the resultant IP-Layer Sender Bit rate results for the
      aggregate and each flow tested.</t>

      <texttable style="all" title="IP-layer Sender Bit Rate Results">
        <ttcol>Phase, Flow or Aggregate</ttcol>

        <ttcol>st, sec</ttcol>

        <ttcol>Sender Bit Rate, Mbps</ttcol>

        <c>Search,1</c>

        <c>0.00 - 0.05</c>

        <c>345</c>

        <c>Search,2</c>

        <c>0.00 - 0.05</c>

        <c>289</c>

        <c>Search,Agg</c>

        <c>0.00 - 0.05</c>

        <c>634</c>
      </texttable>

      <t>Static and configuration parameters:</t>

      <t>The sub-interval time, st, MUST accompany a report of Sender IP-Layer
      Bit Rate results.</t>

      <t>Also, the values of the remaining Parameters from Section 4, General
      Parameters, MUST be reported.</t>

      <t/>

      <section title="Configuration and Reporting Data Formats">
        <t>As a part of the multi-Standards Development Organization (SDO)
        harmonization of this metric and method of measurement, one of the
        areas where the Broadband Forum (BBF) contributed its expertise was in
        the definition of an information model and data model for
        configuration and reporting. These models are consistent with the
        metric parameters and default values specified as lists in this memo.
        <xref target="TR-471"/> provides the Information model that was used
        to prepare a full data model in related BBF work. The BBF has also
        carefully considered topics within its purview, such as placement of
        measurement systems within the access architecture. For example,
        timestamp resolution requirements that influence the choice of the
        test protocol are provided in Table 2 of <xref target="TR-471"/>.</t>
      </section>
    </section>

    <section title="Security Considerations">
      <t>Active metrics and measurements have a long history of security
      considerations. The security considerations that apply to any active
      measurement of live paths are relevant here. See <xref
      target="RFC4656"/> and <xref target="RFC5357"/>.</t>

      <t>When considering privacy of those involved in measurement or those
      whose traffic is measured, the sensitive information available to
      potential observers is greatly reduced when using active techniques
      which are within this scope of work. Passive observations of user
      traffic for measurement purposes raise many privacy issues. We refer the
      reader to the privacy considerations described in the Large Scale
      Measurement of Broadband Performance (LMAP) Framework <xref
      target="RFC7594"/>, which covers active and passive techniques.</t>

      <t>There are some new considerations for Capacity measurement as
      described in this memo.</t>

      <t><list style="numbers">
          <t>Cooperating source and destination hosts and agreements to test
          the path between the hosts are REQUIRED. Hosts perform in either the
          Src or Dst roles.</t>

          <t>It is REQUIRED to have a user client-initiated setup handshake
          between cooperating hosts that allows firewalls to control inbound
          unsolicited UDP traffic which either goes to a control port
          [expected and w/authentication] or to ephemeral ports that are only
          created as needed. Firewalls protecting each host can both continue
          to do their job normally.</t>

          <t>Client-server authentication and integrity protection for
          feedback messages conveying measurements is RECOMMENDED.</t>

          <t>Hosts MUST limit the number of simultaneous tests to avoid
          resource exhaustion and inaccurate results.</t>

          <t>Senders MUST be rate-limited. This can be accomplished using a
          pre-built table defining all the offered load rates that will be
          supported (Section 8.1); a sketch of such a table follows this
          list. The recommended load-control search algorithm results in
          "ramp up" from the lowest rate in the table.</t>

          <t>Service subscribers with limited data volumes who conduct
          extensive capacity testing might experience the effects of Service
          Provider controls on their service. Testing with the Service
          Provider's measurement hosts SHOULD be limited in frequency and/or
          overall volume of test traffic (for example, the range of I duration
          values SHOULD be limited).</t>
        </list></t>
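
      <t>The following C sketch illustrates the rate limiting described in
      item 5 of the list above, using a pre-built table. The row values
      shown are placeholders; the actual table contents are constructed as
      described in Section 8.1, and the names used here are
      illustrative.</t>

      <t><figure>
          <artwork><![CDATA[/* Hypothetical pre-built offered-load table; each row defines one
   supported sending rate. The values shown are placeholders only. */
#include <stddef.h>

struct load_rate {
    unsigned payload_bytes; /* UDP payload size per datagram */
    unsigned burst_size;    /* datagrams sent back-to-back per burst */
    unsigned interval_us;   /* microseconds between bursts */
};

static const struct load_rate load_rates[] = {
    { 1222, 1, 1000 }, /* lowest rate; ramp-up starts here */
    { 1222, 2, 1000 },
    { 1222, 3, 1000 },
    /* ... additional rows, up to the maximum supported rate ... */
};

/* The sender only transmits at rates found in the table, so the
   offered load can never exceed the highest row. */
static const struct load_rate *current_rate(int rx)
{
    size_t rows = sizeof(load_rates) / sizeof(load_rates[0]);

    if (rx < 0)
        rx = 0;
    if ((size_t)rx >= rows)
        rx = (int)rows - 1;
    return &load_rates[rx];
}
]]></artwork>
        </figure></t>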

      <t>The exact specification of these features is left for future
      protocol development.</t>
    </section>

    <section anchor="IANA" title="IANA Considerations">
      <t>This memo makes no requests of IANA.</t>
    </section>

    <section title="Acknowledgments">
      <t>Thanks to Joachim Fabini, Matt Mathis, J.Ignacio Alvarez-Hamelin,
      Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray
      Kucherawy, and Benjamin Kaduk for their extensive comments on the memo
      and related topics.</t>
    </section>

    <section title="Appendix A - Load Rate Adjustment Pseudo Code">
      <t>The following is a pseudo-code implementation of the algorithm
      described in Section 8.1.</t>

      <t><figure>
          <artwork><![CDATA[Rx = 0  # The current sending rate (equivalent to a row of the table)
seqErr = 0  # Measured count of any of Loss or Reordering impairments 
delay = 0 # Measured Range of Round Trip Time, RTT, ms
lowThresh = 30 # Low threshold on the Range of RTT, ms
upperThresh = 90 # Upper threshold on the Range of RTT, ms
hSpeedThresh = 1Gbps # Threshold for transition between sending rate
 step sizes (such as 1 Mbps and 100 Mbps)
slowAdjCount = 0 # Measured Number of consecutive status reports
 indicating loss and/or delay variation above upperThresh
slowAdjThresh = 2 # Threshold on slowAdjCount used to infer congestion.
 Use values >1 to avoid misinterpreting transient loss
highSpeedDelta = 10 # The number of rows to move in a single adjustment
 when initially increasing offered load (to ramp-up quickly) 
maxLoadRates = 2000 # Maximum table index (rows)


if ( seqErr == 0 && delay < lowThresh ) {
	if ( Rx < hSpeedThresh && slowAdjCount < slowAdjThresh ) {
			Rx += highSpeedDelta;
			slowAdjCount = 0;
	} else {
			if ( Rx < maxLoadRates - 1 )
					Rx++;
	}
} else if ( seqErr > 0 || delay > upperThresh ) {
	slowAdjCount++;
	if ( Rx < hSpeedThresh && slowAdjCount == slowAdjThresh ) {
			if ( Rx > highSpeedDelta * 3 )
					Rx -= highSpeedDelta * 3;
			else
					Rx = 0;
	} else {
			if ( Rx > 0 )
					Rx--;
	}
}
]]></artwork>
        </figure></t>
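
      <t>For readers who prefer a compilable form, the following is one
      possible C rendering of the pseudo-code above; it is not normative.
      The function and parameter names are illustrative: the adjustment
      operates on a table-row index, and h_speed_row (the row corresponding
      to hSpeedThresh) and the per-report measurement inputs are supplied
      by the caller.</t>

      <t><figure>
          <artwork><![CDATA[/* One possible C rendering of the pseudo-code above. Constants
   mirror the initial values given there; measurement inputs are
   taken per status report interval. */
#define LOW_THRESH_MS    30  /* low threshold on Range of RTT, ms */
#define UPPER_THRESH_MS  90  /* upper threshold on Range of RTT, ms */
#define SLOW_ADJ_THRESH   2  /* reports needed to infer congestion */
#define HIGH_SPEED_DELTA 10  /* rows moved per fast adjustment */
#define MAX_LOAD_RATES 2000  /* number of rows in the rate table */

void adjust_rate(int *rx, int *slow_adj_count, int h_speed_row,
                 int seq_err, int delay_range_ms)
{
    if (seq_err == 0 && delay_range_ms < LOW_THRESH_MS) {
        if (*rx < h_speed_row && *slow_adj_count < SLOW_ADJ_THRESH) {
            *rx += HIGH_SPEED_DELTA;
            *slow_adj_count = 0;
        } else if (*rx < MAX_LOAD_RATES - 1) {
            (*rx)++;
        }
    } else if (seq_err > 0 || delay_range_ms > UPPER_THRESH_MS) {
        (*slow_adj_count)++;
        if (*rx < h_speed_row && *slow_adj_count == SLOW_ADJ_THRESH) {
            if (*rx > HIGH_SPEED_DELTA * 3)
                *rx -= HIGH_SPEED_DELTA * 3;
            else
                *rx = 0;
        } else if (*rx > 0) {
            (*rx)--;
        }
    }
}
]]></artwork>
        </figure></t>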

      <t/>
    </section>

    <section title="Appendix B - RFC 8085 UDP Guidelines Check">
      <t>The BCP on UDP usage guidelines <xref target="RFC8085"/> focuses
      primarily on congestion control in section 3.1. The Guidelines appear in
      mandatory (MUST) and recommendation (SHOULD) categories. </t>

      <section title="Assessment of Mandatory Requirements">
        <t>The mandatory requirements in Section 3 of <xref target="RFC8085"/>
        include:<list style="hanging">
            <t>Internet paths can have widely varying characteristics, ...
            Consequently, applications that may be used on the Internet MUST
            NOT make assumptions about specific path characteristics. They
            MUST instead use mechanisms that let them operate safely under
            very different path conditions. Typically, this requires
            conservatively probing the current conditions of the Internet path
            they communicate over to establish a transmission behavior that it
            can sustain and that is reasonably fair to other traffic sharing
            the path. </t>
          </list></t>

        <t>The purpose of the load rate adjustment algorithm in Section 8.1 is
        to probe the network and enable Maximum IP-Layer Capacity measurements
        with as few assumptions about the measured path as possible, and
        within the range of application described in Section 2. The degree of
        probing conservatism is in tension with the need to minimize both the
        traffic dedicated to testing (especially with Gigabit rate
        measurements) and the duration of the test (which is one contributing
        factor to the overall algorithm fairness).</t>

        <t>The text of Section 3 of <xref target="RFC8085"/> goes on to
        recommend alternatives to UDP to meet the mandatory requirements, but
        none are suitable for the scope and purpose of the metrics and
        methods in this memo. In fact, ad hoc TCP-based methods fail to
        achieve the measurement accuracy repeatedly proven in comparison
        measurements with the running code <xref target="LS-SG12-A"/> <xref
        target="LS-SG12-B"/> <xref target="Y.Sup60"/>. Also, the UDP aspect of
        these methods is present primarily to support modern Internet
        transmission where a transport protocol is required <xref
        target="copycat"/>; the metric is based on the IP-layer and UDP allows
        simple correlation to the IP-layer.</t>

        <t>Section 3.1.1 of <xref target="RFC8085"/> discusses protocol timer
        guidelines:</t>

        <t><list style="hanging">
            <t>Latency samples MUST NOT be derived from ambiguous
            transactions. The canonical example is in a protocol that
            retransmits data, but subsequently cannot determine which copy is
            being acknowledged.</t>
          </list>Both load packets and status feedback messages MUST contain
        sequence numbers, which help with measurements based on those
        packets, and no retransmissions are needed.<list style="hanging">
            <t> When a latency estimate is used to arm a timer that provides
            loss detection -- with or without retransmission -- expiry of the
            timer MUST be interpreted as an indication of congestion in the
            network, causing the sending rate to be adapted to a safe
            conservative rate...</t>
          </list></t>

        <t>The method described in this memo uses timers for sending rate
        backoff when status feedback messages are lost (Lost Status Backoff
        timeout), and for stopping a test when connectivity is lost for a
        longer interval (Feedback message or load packet timeouts).</t>
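
        <t>A minimal sketch of such timeout handling follows. The constant
        values, the names, and the size of the backoff are illustrative
        assumptions, not values specified by this memo.</t>

        <t><figure>
            <artwork><![CDATA[/* Illustrative timeout handling; constants and the backoff amount
   are assumptions, not values specified by this memo. */
#include <stdbool.h>
#include <time.h>

#define LOST_STATUS_BACKOFF_SEC  1  /* status feedback deemed lost */
#define CONNECTIVITY_TIMEOUT_SEC 5  /* stop test: prolonged silence */

/* Called periodically by the sender; last_status is the arrival time
   of the most recent status feedback message. Returns true when the
   test should be stopped. Simplified: a real implementation would
   back off once per timer expiry, not on every call. */
bool check_timeouts(time_t now, time_t last_status, int *rx)
{
    if (now - last_status >= CONNECTIVITY_TIMEOUT_SEC)
        return true;        /* connectivity lost: stop the test */
    if (now - last_status >= LOST_STATUS_BACKOFF_SEC && *rx > 0)
        (*rx)--;            /* lost status: back off sending rate */
    return false;
}
]]></artwork>
          </figure></t>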

        <t>There is no specific benefit foreseen by using Explicit Congestion
        Notification (ECN) in this memo.</t>

        <t>Section 3.2 of <xref target="RFC8085"/> discusses message size
        guidelines:<list style="hanging">
            <t> To determine an appropriate UDP payload size, applications
            MUST subtract the size of the IP header (which includes any IPv4
            optional headers or IPv6 extension headers) as well as the length
            of the UDP header (8 bytes) from the PMTU size.</t>
          </list></t>

        <t>The method uses a sending rate table with a maximum UDP payload
        size that anticipates significant header overhead and avoids
        fragmentation.</t>
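
        <t>For example, a UDP payload size that respects this guideline can
        be derived from the PMTU as in the following sketch (IPv4 options
        or IPv6 extension headers, if present, would reduce the payload
        further):</t>

        <t><figure>
            <artwork><![CDATA[/* Maximum UDP payload per the RFC 8085 guideline: subtract the IP
   header and the 8-byte UDP header from the PMTU. IPv4 options or
   IPv6 extension headers, if present, must also be subtracted. */
#define UDP_HEADER_LEN   8
#define IPV4_HEADER_LEN 20  /* without options */
#define IPV6_HEADER_LEN 40  /* without extensions */

static unsigned max_udp_payload(unsigned pmtu, int ipv6)
{
    unsigned ip_hdr = ipv6 ? IPV6_HEADER_LEN : IPV4_HEADER_LEN;

    return pmtu - ip_hdr - UDP_HEADER_LEN;
}

/* e.g., a 1500-byte PMTU allows 1472 bytes over IPv4, 1452 over IPv6 */
]]></artwork>
          </figure></t>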

        <t>Section 3.3 of <xref target="RFC8085"/> provides reliability
        guidelines:<list style="hanging">
            <t>Applications that do require reliable message delivery MUST
            implement an appropriate mechanism themselves.</t>
          </list></t>

        <t>The IP-Layer Capacity Metric and Method do not require reliable
        delivery.<list style="hanging">
            <t>Applications that require ordered delivery MUST reestablish
            datagram ordering themselves.</t>
          </list></t>

        <t>The IP-Layer Capacity Metric and Method do not need to reestablish
        packet order; instead, it is preferred to measure packet reordering
        if it occurs <xref target="RFC4737"/>.</t>
      </section>

      <section title="Assessment of Recommendations">
        <t>The load rate adjustment algorithm's goal is to determine the
        Maximum IP-Layer Capacity in the context of an infrequent, diagnostic,
        short-term measurement. This goal is a global exception to many <xref
        target="RFC8085"/> SHOULD-level requirements, most of which are
        intended for long-lived flows that must coexist with other traffic in
        a more-or-less fair way. However, the algorithm (as specified in
        Section 8.1 and Appendix A above) reacts to indications of congestion
        in clearly defined ways.</t>

        <t>A specific example follows. Section 3.1.5 of
        <xref target="RFC8085"/> on implications of RTT and Loss Measurements
        on Congestion Control says:<list style="hanging">
            <t> A congestion control designed for UDP SHOULD respond as
            quickly as possible when it experiences congestion, and it SHOULD
            take into account both the loss rate and the response time when
            choosing a new rate. </t>
          </list></t>

        <t>The load rate adjustment algorithm responds to loss and RTT
        measurements with a clear and concise rate reduction when warranted,
        and the response makes use of direct measurements (more exact than can
        be inferred from TCP ACKs).</t>

        <t>Section 3.1.5 of <xref target="RFC8085"/> goes on to specify:<list
            style="hanging">
            <t>The implemented congestion control scheme SHOULD result in
            bandwidth (capacity) use that is comparable to that of TCP within
            an order of magnitude, so that it does not starve other flows
            sharing a common bottleneck.</t>
          </list></t>

        <t>This is a requirement for coexistent streams, not for diagnostic
        and infrequent measurements using short durations. The rate
        oscillations during short tests allow other packets to pass and do
        not starve other flows.</t>

        <t>Ironically, ad hoc TCP-based measurements of "Internet Speed" are
        also designed to work around this SHOULD-level requirement, by
        launching many flows (9, for example) to increase the outstanding data
        dedicated to testing.</t>

        <t>The load rate adjustment algorithm cannot become a TCP-like
        congestion control, or it will have the same weaknesses that TCP has
        when trying to make a Maximum IP-Layer Capacity measurement, and will
        not achieve the goal. The results of the referenced testing <xref
        target="LS-SG12-A"/> <xref target="LS-SG12-B"/> <xref
        target="Y.Sup60"/> supported this statement hundreds of times, with
        comparisons to multi-connection TCP-based measurements.</t>

        <t>A brief review of some of the other SHOULD-level requirements
        follows (Yes or Not applicable = NA):<figure>
            <artwork><![CDATA[+--+---------------------------------------------------------+---------+
|Y?| RFC 8085 Recommendation                                 | Section |
+--+---------------------------------------------------------+---------+
Yes| MUST tolerate a wide range of Internet path conditions  | 3       |
NA | SHOULD use a full-featured transport (e.g., TCP)        |         |
   |                                                         |         |
Yes| SHOULD control rate of transmission                     | 3.1     |
NA | SHOULD perform congestion control over all traffic      |         |
   |                                                         |         |
   | for bulk transfers,                                     | 3.1.2   |
NA | SHOULD consider implementing TFRC                       |         |
NA | else, SHOULD in other ways use bandwidth similar to TCP |         |
   |                                                         |         |
   | for non-bulk transfers,                                 | 3.1.3   |
NA | SHOULD measure RTT and transmit max. 1 datagram/RTT     | 3.1.1   |
NA | else, SHOULD send at most 1 datagram every 3 seconds    |         |
NA | SHOULD back-off retransmission timers following loss    |         |
   |                                                         |         |
Yes| SHOULD provide mechanisms to regulate the bursts of     | 3.1.6   |
   | transmission                                            |         |
   |                                                         |         |
NA | MAY implement ECN; a specific set of application        | 3.1.7   |
   | mechanisms are REQUIRED if ECN is used.                 |         |
   |                                                         |         |
Yes| for DiffServ, SHOULD NOT rely on implementation of PHBs | 3.1.8   |
   |                                                         |         |
Yes| for QoS-enabled paths, MAY choose not to use CC         | 3.1.9   |
   |                                                         |         |
Yes| SHOULD NOT rely solely on QoS for their capacity        | 3.1.10  |
   | non-CC controlled flows SHOULD implement a transport    |         |
   | circuit breaker                                         |         |
   | MAY implement a circuit breaker for other applications  |         |
   |                                                         |         |
   | for tunnels carrying IP traffic,                        | 3.1.11  |
NA | SHOULD NOT perform congestion control                   |         |
NA | MUST correctly process the IP ECN field                 |         |
   |                                                         |         |
   | for non-IP tunnels or rate not determined by traffic,   |         |
NA | SHOULD perform CC or use circuit breaker                | 3.1.11  |
NA | SHOULD restrict types of traffic transported by the     |         |
   | tunnel                                                  |         |
   |                                                         |         |
Yes| SHOULD NOT send datagrams that exceed the PMTU, i.e.,   | 3.2     |
Yes| SHOULD discover PMTU or send datagrams < minimum PMTU;  |         |
NA | Specific application mechanisms are REQUIRED if PLPMTUD |         |
   | is used.                                                |         |
   |                                                         |         |
Yes| SHOULD handle datagram loss, duplication, reordering    | 3.3     |
NA | SHOULD be robust to delivery delays up to 2 minutes     |         |
   |                                                         |         |
Yes| SHOULD enable IPv4 UDP checksum                         | 3.4     |
Yes| SHOULD enable IPv6 UDP checksum; Specific application   | 3.4.1   |
   | mechanisms are REQUIRED if a zero IPv6 UDP checksum is  |         |
   | used.                                                   |         |
   |                                                         |         |
NA | SHOULD provide protection from off-path attacks         | 5.1     |
   | else, MAY use UDP-Lite with suitable checksum coverage  | 3.4.2   |
   |                                                         |         |
NA | SHOULD NOT always send middlebox keep-alive messages    | 3.5     |
NA | MAY use keep-alives when needed (min. interval 15 sec)  |         |
   |                                                         |         |
Yes| Applications specified for use in limited use (or       | 3.6     |
   | controlled environments) SHOULD identify equivalent     |         |
   | mechanisms and describe their use case.                 |         |
   |                                                         |         |
NA | Bulk-multicast apps SHOULD implement congestion control | 4.1.1   |
   |                                                         |         |
NA | Low volume multicast apps SHOULD implement congestion   | 4.1.2   |
   | control                                                 |         |
   |                                                         |         |
NA | Multicast apps SHOULD use a safe PMTU                   | 4.2     |
   |                                                         |         |
Yes| SHOULD avoid using multiple ports                       | 5.1.2   |
Yes| MUST check received IP source address                   |         |
   |                                                         |         |
NA | SHOULD validate payload in ICMP messages                | 5.2     |
   |                                                         |         |
Yes| SHOULD use a randomized source port or equivalent       | 6       |
   | technique, and, for client/server applications, SHOULD  |         |
   | send responses from source address matching request     |         |
   | 5.1                                                     |         |
NA | SHOULD use standard IETF security protocols when needed | 6       |
   +---------------------------------------------------------+---------+]]></artwork>
          </figure></t>

      </section>
    </section>
  </middle>

  <back>
    <references title="Normative References">
      <?rfc include='reference.RFC.2330'?>

      <?rfc include="reference.RFC.2119"?>

      <?rfc include='reference.RFC.7680'?>

      <?rfc include='reference.RFC.8468'?>

      <?rfc include='reference.RFC.8174'?>

      <?rfc include='reference.RFC.6438'?>

      <?rfc include='reference.RFC.4737'?>

      <?rfc include='reference.RFC.4656'?>

      <?rfc include='reference.RFC.2681'?>

      <?rfc include='reference.RFC.5357'?>

      <?rfc include='reference.RFC.7497'?>

    </references>

    <references title="Informative References">
      <?rfc include='reference.RFC.2544'?>

      <?rfc include='reference.RFC.3148'?>

      <?rfc include='reference.RFC.5136'?>

      <?rfc include='reference.RFC.6815'?>

      <?rfc include='reference.RFC.7312'?>

      <?rfc include='reference.RFC.7594'?>

      <?rfc include='reference.RFC.7799'?>

      <?rfc include='reference.RFC.8085'?>

      <?rfc include='reference.RFC.8337'?>

      <reference anchor="copycat"
                 target="https://irtf.org/anrw/2017/anrw17-final5.pdf">
        <front>
          <title>copycat: Testing Differential Treatment of New Transport
          Protocols in the Wild (ANRW '17)</title>

          <author fullname="Korian Edeline" initials="K." surname="Edleine">
            <organization/>
          </author>

          <author fullname="Mirja Kuhlewind" initials="K." surname="Kuhlewind">
            <organization/>
          </author>

          <author fullname="Brian Trammell" initials="B." surname="Trammell">
            <organization/>
          </author>

          <author fullname="Benoit Donnet" initials="B." surname="Donnet">
            <organization/>
          </author>

          <date day="15" month="July" year="2017"/>
        </front>
      </reference>

      <reference anchor="Y.Sup60"
                 target="https://www.itu.int/rec/T-REC-Y.Sup60/en">
        <front>
          <title>Recommendation Y.Sup60, (09/20) Interpreting ITU-T Y.1540
          maximum IP-layer capacity measurements, and Errata</title>

          <author fullname="Al Morton" initials="A., Rapporteur"
                  surname="Morton">
            <organization>AT&amp;T</organization>
          </author>

          <date day="11" month="September" year="2020"/>
        </front>
      </reference>

      <reference anchor="TR-471"
                 target="https://www.broadband-forum.org/technical/download/TR-471.pdf">
        <front>
          <title>Broadband Forum TR-471: IP Layer Capacity Metrics and
          Measurement</title>

          <author fullname="Al Morton" initials="A." surname="Morton">
            <organization>AT&amp;T Labs</organization>
          </author>

          <date day="10" month="July" year="2020"/>
        </front>
      </reference>

      <reference anchor="Y.1540"
                 target="https://www.itu.int/rec/T-REC-Y.1540-201912-I/en">
        <front>
          <title>Internet protocol data communication service - IP packet
          transfer and availability performance parameters</title>

          <author fullname="ITU-T Recommendation Y.1540" surname="">
            <organization>ITU-T</organization>
          </author>

          <date month="December" year="2019"/>
        </front>
      </reference>

      <reference anchor="LS-SG12-B"
                 target="https://datatracker.ietf.org/liaison/1645/">
        <front>
          <title>LS on harmonization of IP Capacity and Latency Parameters:
          Consent of Draft Rec. Y.1540 on IP packet transfer performance
          parameters and New Annex A with Lab &amp; Field Evaluation
          Plans</title>

          <author fullname="ITU-T SG 12" surname="">
            <organization>ITU-T</organization>
          </author>

          <date month="March" year="2019"/>
        </front>
      </reference>

      <reference anchor="LS-SG12-A"
                 target="https://datatracker.ietf.org/liaison/1632/">
        <front>
          <title>LS - Harmonization of IP Capacity and Latency Parameters:
          Revision of Draft Rec. Y.1540 on IP packet transfer performance
          parameters and New Annex A with Lab Evaluation Plan</title>

          <author fullname="ITU-T SG 12" surname="">
            <organization>ITU-T</organization>
          </author>

          <date month="May" year="2019"/>
        </front>
      </reference>

      <reference anchor="udpst"
                 target="https://github.com/BroadbandForum/obudpst">
        <front>
          <title>UDP Speed Test Open Broadband project</title>

          <author fullname="" surname="">
            <organization>udpst Project Collaborators</organization>
          </author>

          <date month="December" year="2020"/>
        </front>
      </reference>

    </references>
  </back>
</rfc>
