Metrics and Methods for
One-way IP Capacity

AT&T Labs, 200 Laurel Avenue South, Middletown, NJ 07748, USA (+1 732 420 1571, +1 732 368 1192, acm@research.att.com)
Deutsche Telekom, Heinrich Hertz Str. 3-7, 64295 Darmstadt, Germany (+49 6151 5812747, Ruediger.Geib@telekom.de)
AT&T Labs, 200 Laurel Avenue South, Middletown, NJ 07748, USA (lencia@att.com)

This memo revisits the problem of Network Capacity metrics first
examined in RFC 5136. The memo specifies a more practical Maximum
IP-layer Capacity metric definition catering for measurement purposes,
and outlines the corresponding methods of measurement.

The IETF's efforts to define Network and Bulk Transport Capacity have been chartered and progressed for over twenty years. Over that time, the performance community has seen development of Informative definitions in RFC 3148 for the Framework for Bulk Transport Capacity (BTC), RFC 5136 for Network Capacity and Maximum IP-layer Capacity, and the Experimental metric definitions and methods in RFC 8337, Model-Based Metrics for BTC.

This memo revisits the problem of Network Capacity metrics examined first in RFC 5136 and later in RFC 8337. Maximum IP-Layer Capacity and Bulk Transfer Capacity (goodput) are different metrics: Maximum IP-layer Capacity is, in effect, the theoretical upper bound on goodput. There are many metrics in RFC 5136, such as Available Capacity. Measurements depend on the network path under test and the use case. Here, the main use case is to assess the maximum capacity of the access network, with specific performance criteria used in the measurement.

This memo recognizes the importance of a definition of a Maximum
IP-layer Capacity Metric at a time when access speeds have increased
dramatically; a definition that is both practical and effective for the
performance community's needs, including Internet users. The metric
definition is intended to use Active Methods of Measurement (RFC 7799), and a method of measurement is included.

The most direct active measurement of IP-layer Capacity would use IP packets, but in practice a transport header is needed to traverse address and port translators. UDP offers the most direct assessment possibility, and in a measurement study conducted to investigate whether UDP is viable as a general Internet transport protocol, the authors found that a high percentage of paths tested support UDP transport. A number of liaisons have been exchanged on this topic, discussing the laboratory and field tests that support the UDP-based approach to IP-layer Capacity measurement.

This memo also recognizes the many updates to the IP Performance Metrics Framework (RFC 2330) published over twenty years, and makes use of RFC 7312 for the Advanced Stream and Sampling Framework and RFC 8468 with IPv4, IPv6, and IPv4-IPv6 Coexistence Updates.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14, when, and only when, they appear in all capitals, as shown here.

The scope of this memo is to define a metric and corresponding method to unambiguously perform Active measurements of Maximum IP-Layer Capacity, along with related metrics and methods.

The main goal is to harmonize the specified metric and method across the industry, and this memo is the vehicle that captures IETF consensus, possibly resulting in changes to the specifications of other Standards Development Organizations (SDOs), through each SDO's normal contribution process or through liaison exchange.

A local goal is to aid efficient test procedures where possible, and to recommend reporting with additional interpretation of the results. Fostering the development of protocol support for this metric and method of measurement is also a goal of this memo (all active testing protocols currently defined by the IPPM WG are UDP-based, meeting a key requirement of these methods). The supporting protocol development to measure this metric according to the specified method is a key future contribution to Internet measurement.

The primary application of the metric and method of measurement
described here is the same as in Section 2 of RFC 7497, where:

   The access portion of the network is the focus of this problem
   statement.  The user typically subscribes to a service with
   bidirectional access partly described by rates in bits per
   second.

In addition, the use of the load adjustment algorithm described in Section 8.1 has the following additional applicability limitations:

- MUST only be used in the application of diagnostic and operations measurements as described in this memo
- MUST only be used in circumstances consistent with Section 10, Security Considerations

As with any problem that has been worked on for many years in various SDOs without any special attempts at coordination, various solutions for metrics and methods have emerged.

There are five factors that have changed (or begun to change) in the 2013-2019 time frame, and the presence of any one of them on the path requires features in the measurement design to account for the changes:

- Internet access is no longer the bottleneck for many users.
- Both transfer rate and latency are important to users' satisfaction.
- UDP has a growing role in transport, in areas where TCP once dominated.
- Content and applications are moving physically closer to users.
- There is less emphasis on ISP gateway measurements, possibly due to less traffic crossing ISP gateways in the future.

This section lists the REQUIRED input factors to specify a Sender or
Receiver metric.

- Src, the address of a host (such as the globally routable IP address).
- Dst, the address of a host (such as the globally routable IP address).
- MaxHops, the limit on the number of Hops a specific packet may visit as it traverses from the host at Src to the host at Dst (implemented in the TTL or Hop Limit).
- T0, the time at the start of the measurement interval, when packets are first transmitted from the Source.
- I, the nominal duration of a measurement interval at the destination (default 10 sec).
- dt, the nominal duration of m equal sub-intervals in I at the destination (default 1 sec).
- dtn, the beginning boundary of a specific sub-interval, n, one of m sub-intervals in I.
- FT, the feedback time interval between status feedback messages communicating measurement results, sent from the receiver to control the sender. The results are evaluated to determine how to adjust the current offered load rate at the sender (default 50 ms).
- Tmax, a maximum waiting time for test packets to arrive at the destination, set sufficiently long to disambiguate packets with long delays from packets that are discarded (lost), such that the distribution of one-way delay is not truncated.
- F, the number of different flows synthesized by the method (default 1 flow).
- flow, the stream of packets with the same n-tuple of designated header fields that (when held constant) result in identical treatment in a multi-path decision (such as the decision taken in load balancing). Note: the IPv6 flow label MAY be included in the flow definition when routers have complied with guidelines.
- Type-P, the complete description of the test packets for which this assessment applies (including the flow-defining fields). Note that the UDP transport layer is one requirement for test packets specified below. Type-P is a parallel concept to the "population of interest" defined in clause 6.1.1 of ITU-T Rec. Y.1540.
- PM, a list of fundamental metrics, such as loss, delay, and reordering, and a corresponding target performance threshold for each. At least one fundamental metric and target performance threshold MUST be supplied (such as One-way IP Packet Loss equal to zero).

A non-Parameter which is required for several metrics is defined below:

- T, the host time of the *first* test packet's *arrival* as measured at the destination Measurement Point, or MP(Dst). There may be other packets sent between source and destination hosts that are excluded, so this is the time of arrival of the first packet used for measurement of the metric.

Note that time stamp format and resolution, sequence numbers, etc. will be established by the chosen test protocol standard or implementation.

This section sets requirements for the singleton metric that supports the Maximum IP-layer Capacity Metric definition in Section 6.

Type-P-One-way-IP-Capacity, informally called IP-layer Capacity. Note that Type-P depends on the chosen method.

This section lists the REQUIRED input factors to specify the metric, beyond those listed in Section 4. No additional Parameters are needed.

This section defines the REQUIRED aspects of the measurable
IP-layer Capacity metric (unless otherwise indicated) for measurements
between specified Source and Destination hosts:

Define the IP-layer capacity, C(T,dt,PM), to be the number of IP-layer bits (including header and data fields) in packets that can be transmitted from the Src host and correctly received by the Dst host during one contiguous sub-interval, dt in length. The IP-layer capacity depends on the Src and Dst hosts, the host addresses, and the path between the hosts.

The number of these IP-layer bits is designated n0[dtn,dtn+1] for a specific dt.

When the packet size is known and of fixed size, the packet count during a single sub-interval dt multiplied by the total bits in the IP header and data fields is equal to n0[dtn,dtn+1].

Anticipating a Sample of Singletons, the number of sub-intervals with duration dt MUST be set to a natural number m, so that T+I = T + m*dt, with dtn+1 - dtn = dt for 1 <= n <= m.

Parameter PM represents other performance metrics (see Section 5.4 below); their measurement results SHALL be collected during measurement of IP-layer Capacity and associated with the corresponding dtn for further evaluation and reporting. Users SHALL specify the parameter Tmax as required by each metric's reference definition.

Mathematically, this definition is represented as (for each n):

   C(T,dt,PM) = n0[dtn,dtn+1] / dt

and:

- n0 is the total number of IP-layer header and payload bits that can be transmitted in standard-formed packets from the Src host and correctly received by the Dst host during one contiguous sub-interval, dt in length, during the interval [T, T+I],
- C(T,dt,PM), the IP-layer Capacity, corresponds to the value of n0 measured in any sub-interval beginning at dtn, divided by the length of the sub-interval, dt,
- PM represents other performance metrics (see Section 5.4 below); their measurement results SHALL be collected during measurement of IP-layer Capacity and associated with the corresponding dtn for further evaluation and reporting,
- all sub-intervals MUST be of equal duration. Choosing dt as non-overlapping consecutive time intervals allows for a simple implementation,
- the bit rate of the physical interface of the measurement devices must be higher than the smallest of the links on the path whose C(T,I,PM) is to be measured (the bottleneck link).

Measurements according to these definitions SHALL use the UDP transport layer. Standard-formed packets are specified in Section 5 of RFC 8468. Some effects of compression on measurement are also discussed in Section 6 of RFC 8468.

RTD[dtn,dtn+1] is defined as a sample of the Round-trip Delay between the Src host and the Dst
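As a non-normative illustration of the singleton definition, the following sketch computes n0 and C(T,dt,PM) from a list of receiver arrival records; the record format and the fixed packet size are assumptions of this example, not part of the metric.

```python
# Sketch (not from the memo): computing the singleton inputs from
# receiver records. Each record is (arrival_time_sec, ip_layer_bits)
# for a correctly received, standard-formed packet within [T, T+I].

def n0(arrivals, dtn, dtn1):
    """Total IP-layer bits received in the sub-interval [dtn, dtn1)."""
    return sum(bits for t, bits in arrivals if dtn <= t < dtn1)

def capacity(arrivals, dtn, dtn1):
    """C for one sub-interval: bits received divided by dt, in bits/sec."""
    return n0(arrivals, dtn, dtn1) / (dtn1 - dtn)

# Example: 1250 fixed-size packets of 12000 bits (1500 bytes) arriving
# evenly within a 1-second sub-interval
arrivals = [(0.0008 * k, 12000) for k in range(1250)]
print(capacity(arrivals, 0.0, 1.0) / 1e6, "Mbps")  # 15.0 Mbps
```

With fixed-size packets, this reduces to the packet count times the per-packet bits, as noted above.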
host over the interval [T, T+I] (which contains equal non-overlapping intervals of dt). The "reasonable period of time" in the RTD reference definition is the parameter Tmax in this memo. The statistics used to summarize RTD[dtn,dtn+1] MAY include the minimum, maximum, median, and mean, and the range = (maximum - minimum) is referred to below in Section 8.1 for load adjustment purposes.

OWL[dtn,dtn+1] is defined as a sample of the One-way Loss between the Src host and the Dst host over the interval [T, T+I] (which contains equal non-overlapping intervals of dt). The statistics used to summarize OWL[dtn,dtn+1] MAY include the lost packet count and the lost packet ratio.

Other metrics MAY be measured: one-way reordering, duplication, and delay variation.

See the corresponding section for Maximum IP-Layer Capacity.

The IP-Layer Capacity SHOULD be reported with at least single-Megabit resolution, in units of Megabits per second (Mbps), which is 1,000,000 bits per second, to avoid any confusion.

The related Round-Trip Delay and/or Loss metric measurements for the same Singleton SHALL be reported, also with meaningful resolution for the values measured.

Individual Capacity measurements MAY be reported in a manner consistent with the Maximum IP-Layer Capacity; see Section 9.

This section sets requirements for the following components to
support the Maximum IP-layer Capacity Metric.

Type-P-One-way-Max-IP-Capacity, informally called Maximum IP-layer Capacity. Note that Type-P depends on the chosen method.

This section lists the REQUIRED input factors to specify the metric, beyond those listed in Section 4. No additional Parameters or definitions are needed.

This section defines the REQUIRED aspects of the Maximum IP-layer
Capacity metric (unless otherwise indicated) for measurements between
specified Source and Destination hosts:

Define the Maximum IP-layer capacity, Maximum_C(T,I,PM), to be the maximum number of IP-layer bits n0[dtn,dtn+1] divided by dt that can be transmitted in packets from the Src host and correctly received by the Dst host, over all dt-length intervals in [T, T+I], while meeting the PM criteria. Equivalently, it is the maximum of a Sample of size m of C(T,I,PM) collected during the interval [T, T+I] and meeting the PM criteria.

The number of sub-intervals with duration dt MUST be set to a natural number m, so that T+I = T + m*dt, with dtn+1 - dtn = dt for 1 <= n <= m.

Parameter PM represents the other performance metrics (see Section 6.4 below) and their measurement results for the maximum IP-layer capacity. At least one target performance threshold (PM criterion) MUST be defined. If more than one metric and target performance threshold are defined, then the sub-interval with the maximum number of bits transmitted MUST meet all the target performance thresholds. Users SHALL specify the parameter Tmax as required by each metric's reference definition.

Mathematically, this definition can be represented as:

   Maximum_C(T,I,PM) = max { n0[dtn,dtn+1] } / dt, over all n, 1 <= n <= m

and:

- n0 is the total number of IP-layer header and payload bits that can be transmitted in standard-formed packets from the Src host and correctly received by the Dst host during one contiguous sub-interval, dt in length, during the interval [T, T+I],
- Maximum_C(T,I,PM), the Maximum IP-Layer Capacity, corresponds to the maximum value of n0 measured in any sub-interval beginning at dtn, divided by the constant length of all sub-intervals, dt,
- PM represents the other performance metrics (see Section 5.4) and their measurement results for the maximum IP-layer capacity. At least one target performance threshold (PM criterion) MUST be defined,
- all sub-intervals MUST be of equal duration. Choosing dt as non-overlapping consecutive time intervals allows for a simple implementation,
- the bit rate of the physical interface of the measurement systems must be higher than the smallest of the links on the path whose Maximum_C(T,I,PM) is to be measured (the bottleneck link).

In this definition, the m sub-intervals can be viewed as trials when the Src host varies the transmitted packet rate, searching for the maximum n0 that meets the PM criteria measured at the Dst host in a test of duration I. When the transmitted packet rate is held constant at the Src host, the m sub-intervals may also be viewed as trials to evaluate the stability of n0 and the metric(s) in the PM list over all dt-length intervals in I.

Measurements according to these definitions SHALL use the UDP
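A non-normative sketch of this definition follows: the maximum singleton over the m sub-intervals, restricted to sub-intervals meeting the PM criteria. The data shape and the single PM criterion (zero loss) are assumptions of this example.

```python
# Sketch (assumed data shape): Maximum_C(T, I, PM) as the maximum
# singleton over m sub-intervals whose PM results meet every target
# threshold. Here the single PM criterion is zero one-way loss.

def maximum_capacity(singletons):
    """singletons: one dict per sub-interval with keys
    'bits' (n0), 'dt' (sub-interval length), 'lost' (loss count)."""
    qualifying = [s['bits'] / s['dt'] for s in singletons if s['lost'] == 0]
    return max(qualifying) if qualifying else None  # None: no sub-interval met PM

sample = [
    {'bits': 9_000_000, 'dt': 1.0, 'lost': 0},
    {'bits': 9_600_000, 'dt': 1.0, 'lost': 0},
    {'bits': 9_900_000, 'dt': 1.0, 'lost': 3},  # excluded: fails the PM criterion
]
print(maximum_capacity(sample))  # 9600000.0
```

Note how the sub-interval with the most bits is excluded because it fails the PM criterion, as the definition requires.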
transport layer.

RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4. Here, the test intervals are increased to match the capacity samples: RTD[T,I] and OWL[T,I].

The interval [dtn,dtn+1] where Maximum_C(T,I,PM) occurs is the reporting sub-interval within RTD[T,I] and OWL[T,I].

Other metrics MAY be measured: one-way reordering, duplication, and delay variation.

If traffic conditioning (e.g., shaping, policing) applies along a path for which Maximum_C(T,I,PM) is to be determined, different values for dt SHOULD be picked and measurements executed during multiple intervals [T, T+I]. Each duration dt SHOULD be chosen as an integer multiple k, for increasing values of k, of the serialization delay of a path MTU at the physical interface speed where traffic conditioning is expected. This avoids taking configured-burst-tolerance singletons as a valid Maximum_C(T,I,PM) result.

A Maximum_C(T,I,PM) without any indication of bottleneck
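As a worked example of the dt guidance above (the MTU and interface speed are illustrative assumptions, not recommendations):

```python
# Worked example of the dt selection guidance: the serialization delay
# of one path-MTU packet at the physical interface speed, and candidate
# dt values as increasing integer multiples k of that delay.

MTU_BYTES = 1500            # assumed path MTU
IFACE_BPS = 1_000_000_000   # assumed 1 Gbps physical interface

serialization_delay = MTU_BYTES * 8 / IFACE_BPS  # 12 microseconds
candidate_dts = [k * serialization_delay for k in (1000, 10_000, 100_000)]
print(serialization_delay)
print(candidate_dts)
```

With these assumed values, candidate dt durations of roughly 12 ms, 120 ms, and 1.2 s would be measured over multiple intervals [T, T+I] to expose burst-tolerance effects.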
congestion, be that increasing latency, packet loss, or ECN marks during a measurement interval I, is likely to underestimate Maximum_C(T,I,PM).

The IP-Layer Capacity SHOULD be reported with at least single-Megabit resolution, in units of Megabits per second (Mbps), which is 1,000,000 bits per second, to avoid any confusion.

The related Round-Trip Delay and/or Loss metric measurements for the same Singleton SHALL be reported, also with meaningful resolution for the values measured.

When there are demonstrated and repeatable Capacity modes in the Sample, the Maximum IP-Layer Capacity SHALL be reported for each mode, along with the relative time from the beginning of the stream at which the mode was observed. Bimodal Maxima have been observed with some services, sometimes called a "turbo mode", intended to deliver short transfers more quickly or to reduce the initial buffering time for some video streams. Note that modes lasting less than dt duration will not be detected.

Some transmission technologies have multiple methods of operation
that may be activated when channel conditions degrade or improve, and these transmission methods may determine the Maximum IP-Layer Capacity. Examples include line-of-sight microwave modulator constellations, or cellular modem technologies where the changes may be initiated by a user moving from one coverage area to another. Operation in the different transmission methods may be observed over time, but these modes of Maximum IP-Layer Capacity will not be activated deterministically, as with the "turbo mode" described in the paragraph above.

This section sets requirements for the following components to support the IP-layer Sender Bitrate Metric. This metric helps to check that the sender actually generated the desired rates during a test; measurement takes place at the Src host's interface to the network path (or as close as practical within the Src host). It is not a metric for path performance.

Type-P-IP-Sender-Bit-Rate, informally called IP-layer Sender Bitrate. Note that Type-P depends on the chosen method.

This section lists the REQUIRED input factors to specify the metric, beyond those listed in Section 4.

- S, the duration of the measurement interval at the Source.
- st, the nominal duration of N sub-intervals in S (default st = 0.05 seconds).
- stn, the beginning boundary of a specific sub-interval, n, one of N sub-intervals in S.

S SHALL be longer than I, primarily to account for on-demand activation of the path, any preamble to testing required, and the delay of the path.

st SHOULD be much smaller than the sub-interval dt and on the same order as FT; otherwise, the rate measurement will include many rate adjustments and more time smoothing, thus missing the maximum rate. The st parameter is not relevant when the Source is transmitting at a fixed rate throughout S.

This section defines the REQUIRED aspects of the IP-layer Sender
Bitrate metric (unless otherwise indicated) for measurements at the specified Source on packets addressed for the intended Destination host and matching the required Type-P:

Define the IP-layer Sender Bit Rate, B(S,st), to be the number of IP-layer bits (including header and data fields) that are transmitted from the Source with address pair Src and Dst during one contiguous sub-interval, st, during the test interval S (where S SHALL be longer than I), and where the fixed-size packet count during that single sub-interval st also provides the number of IP-layer bits in any interval [stn,stn+1].

Measurements according to these definitions SHALL use the UDP transport layer. Any feedback from the Dst host received by the Src host during an interval [stn,stn+1] SHOULD NOT result in an adaptation of the Src host traffic conditioning during this interval (rate adjustment occurs on st interval boundaries).

Both the Sender and Receiver (or Source and Destination) bit rates SHOULD be assessed as part of an IP-layer Capacity measurement. Otherwise, an unexpected sending rate limitation could produce an erroneous Maximum IP-Layer Capacity measurement.

The IP-Layer Sender Bit Rate SHALL be reported with meaningful resolution, in units of Megabits per second, which is 1,000,000 bits per second, to avoid any confusion.

Individual IP-Layer Sender Bit Rate measurements are discussed further in Section 9.

The architecture of the method REQUIRES two cooperating hosts
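The sender-side computation can be sketched as follows (non-normative; the timestamp list and fixed packet size are assumptions of this example):

```python
# Sketch (assumed record format): IP-layer Sender Bit Rate B(S, st)
# from send timestamps taken at the Src host interface, with
# fixed-size packets of `bits_per_packet` IP-layer bits.

def sender_bit_rate(send_times, bits_per_packet, stn, stn1):
    """Bits transmitted in [stn, stn1) divided by st, in bits/second."""
    count = sum(1 for t in send_times if stn <= t < stn1)
    return count * bits_per_packet / (stn1 - stn)

# Example: one 12000-bit datagram every 100 microseconds, measured over
# a single st = 50 ms sub-interval (the default st)
send_times = [k * 0.0001 for k in range(500)]
rate = sender_bit_rate(send_times, 12000, 0.0, 0.05)
print(rate / 1e6, "Mbps")  # 120.0 Mbps
```

Comparing this per-st rate against the receiver-side singletons is the check on sending-rate limitations described above.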
operating in the roles of Src (test packet sender) and Dst (receiver), with a measured path and return path between them.

The duration of a test, parameter I, MUST be constrained in a production network, since this is an active test method and it will likely cause congestion on the Src-to-Dst host path during a test.

A table SHALL be pre-built defining all the offered load rates that will be supported (R1 through Rn, in ascending order, corresponding to indexed rows in the table). It is RECOMMENDED that rates begin with 0.5 Mbps at index zero, use 1 Mbps at index one, and then continue in 1 Mbps increments to 1 Gbps. Above 1 Gbps and up to 10 Gbps, 100 Mbps increments are RECOMMENDED. Above 10 Gbps, increments of 1 Gbps are RECOMMENDED. Each rate is defined as datagrams of size ss, sent as a burst of count cc, each time interval tt (the default for tt is 1 ms, a likely system tick interval). While it is advantageous to use datagrams of as large a size as possible, it may be prudent to use a slightly smaller maximum that allows for secondary protocol headers and/or tunneling without resulting in IP-layer fragmentation. Selection of a new rate is indicated by a calculation on the current row, Rx. For example:

- "Rx+1": the sender uses the next higher rate in the table.
- "Rx-10": the sender uses the rate 10 rows lower in the table.

At the beginning of a test, the sender begins sending at rate R1
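A table with the RECOMMENDED spacing can be sketched as follows (non-normative; the per-row burst parameters ss, cc, and tt are omitted here, and how cc is derived from the rate and tick interval is left as an implementation choice):

```python
# Sketch of a pre-built offered-load table following the RECOMMENDED
# spacing: 0.5 Mbps at index 0, 1 Mbps at index 1, 1 Mbps steps up to
# 1 Gbps, then 100 Mbps steps up to 10 Gbps.

def build_rate_table_mbps():
    table = [0.5]                           # index 0
    table += list(range(1, 1001))           # 1 Mbps steps: 1 Mbps .. 1 Gbps
    table += list(range(1100, 10001, 100))  # 100 Mbps steps: up to 10 Gbps
    return table

table = build_rate_table_mbps()
print(len(table), table[0], table[1], table[1000], table[-1])
# 1091 0.5 1 1000 10000
```

Rate selection ("Rx+1", "Rx-10", and so on) is then simple index arithmetic on this table, clamped to its bounds.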
and the receiver starts a feedback timer of duration FT (while awaiting inbound datagrams). As datagrams are received, they are checked for sequence number anomalies (loss, out-of-order arrival, duplication, etc.) and the delay range is measured (one-way or round-trip). This information is accumulated until the feedback timer FT expires, when a status feedback message is sent from the receiver back to the sender to communicate it. The accumulated statistics are then reset by the receiver for the next feedback interval. As feedback messages arrive at the sender, they are evaluated to determine how to adjust the current offered load rate (Rx).

If the feedback indicates that no sequence number anomalies were detected AND the delay range was below the lower threshold, the offered load rate is increased. If congestion has not been confirmed up to this point, the offered load rate is increased by more than one rate (e.g., Rx+10). This allows the offered load to quickly reach a near-maximum rate. Conversely, if congestion has been previously confirmed, the offered load rate is only increased by one (Rx+1). However, if a rate threshold between high and very high sending rates (such as 1 Gbps) is exceeded, the offered load rate is only increased by one (Rx+1) above the rate threshold, in any congestion state.

If the feedback indicates that sequence number anomalies were detected OR the delay range was above the upper threshold, the offered load rate is decreased. The RECOMMENDED values are 0 for sequence number gaps, and 30 ms and 90 ms for the lower and upper delay thresholds, respectively. Also, if congestion is now confirmed for the first time by the current feedback message being processed, then the offered load rate is decreased by more than one rate (e.g., Rx-30). This one-time reduction is intended to compensate for the fast initial ramp-up. In all other cases, the offered load rate is only decreased by one (Rx-1).

If the feedback indicates that there were no sequence number anomalies AND the delay range was above the lower threshold but below the upper threshold, the offered load rate is not changed. This allows time for recent changes in the offered load rate to stabilize, and for the feedback to represent current conditions more accurately.

Lastly, congestion is inferred when there were sequence number anomalies AND/OR the delay range was above the upper threshold for two consecutive feedback intervals. The algorithm described above is also illustrated in Annex B of ITU-T Rec. Y.1540 (2020 version), and implemented in the Appendix on Load Rate Adjustment Pseudocode in this memo.

The Load Rate Adjustment Algorithm MUST include timers that stop
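The per-feedback-message decision described above can be sketched as follows. This is a non-normative condensation (the normative pseudocode is in the memo's appendix); the constants use defaults from the parameter table in this section, and HIGH_RATE_INDEX, the index of the high/very-high rate threshold, is an assumption of this sketch.

```python
# Sketch of the feedback-driven load adjustment. Congestion-confirmation
# tracking (two consecutive errored feedback intervals) is assumed to be
# done by the caller and passed in as flags.

LOW_DELAY_MS, HIGH_DELAY_MS = 30, 90   # lower/upper delay range thresholds
FAST_UP, CONFIRM_DOWN = 10, 30         # fast-mode increase / one-time decrease
HIGH_RATE_INDEX = 1000                 # assumed index of the 1 Gbps threshold

def next_index(idx, seq_errors, delay_range_ms,
               congestion_confirmed, congestion_just_confirmed):
    """Return the next offered-load table index for one feedback message."""
    if seq_errors == 0 and delay_range_ms < LOW_DELAY_MS:
        if congestion_confirmed or idx >= HIGH_RATE_INDEX:
            return idx + 1              # cautious single-step increase
        return idx + FAST_UP            # fast ramp-up before congestion seen
    if seq_errors > 0 or delay_range_ms > HIGH_DELAY_MS:
        if congestion_just_confirmed:
            return idx - CONFIRM_DOWN   # one-time compensation for fast ramp
        return idx - 1
    return idx                          # between thresholds: hold the rate

print(next_index(50, 0, 10, False, False))  # 60 (fast increase)
print(next_index(60, 2, 95, False, True))   # 30 (first congestion confirmation)
print(next_index(30, 0, 50, True, False))   # 30 (hold)
```

Index arithmetic would additionally be clamped to the table bounds in a real implementation.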
the test when received packet streams cease unexpectedly. The timeout thresholds are provided in the list below (default; tested range or values; expected safe range, not entirely tested, with other values NOT RECOMMENDED), along with values for all other parameters and variables described in this section.

- FT, feedback time interval: default 50 ms; tested 20 ms, 100 ms; safe range 5 ms <= FT <= 250 ms (larger values may slow the rate increase and fail to find the max).
- Feedback message timeout (stop test): default L*FT, L=10 (500 ms); tested L=100 with FT=50 ms (5 sec); safe range 0.5 sec <= L*FT <= 30 sec (upper limit for very unreliable test paths only).
- Load packet timeout (stop test): default 1 sec; tested 5 sec; safe range 0.250 sec - 30 sec (upper limit for very unreliable test paths only).
- Table index 0: default 0.5 Mbps; tested 0.5 Mbps; when testing <= 10 Gbps.
- Table index 1: default 1 Mbps; tested 1 Mbps; when testing <= 10 Gbps.
- Table index (step) size: default 1 Mbps; tested 1 Mbps - 1 Gbps; safe range same as tested.
- Table index (step) size, rate > 1 Gbps: default 100 Mbps; tested 1 Gbps - 10 Gbps; safe range same as tested.
- Table index (step) size, rate > 10 Gbps: default 1 Gbps; untested; > 10 Gbps.
- ss, UDP payload size, bytes: no default; tested <= 1222; recommend the largest value that avoids fragmentation.
- cc, burst count: no default; tested 1 - 100; safe range same as tested.
- tt, burst interval: default 100 microsec; tested 100 microsec, 1 msec; safe range is the available range of "tick" values (HZ param).
- Low delay range threshold: default 30 ms; tested 5 ms, 30 ms; safe range same as tested.
- High delay range threshold: default 90 ms; tested 10 ms, 90 ms; safe range same as tested.
- Sequence error threshold: default 0; tested 0, 100; safe range same as tested.
- Consecutive errored status report threshold: default 2; tested 2; use values > 1 to avoid misinterpreting transient loss.
- Fast mode increase, in table index steps: default 10; tested 10; safe range 2 <= steps <= 30.
- Fast mode decrease, in table index steps: default 30; tested 3 * Fast mode increase; safe range same as tested.
- Number of table steps in total, < 10 Gbps: default 2000; tested 2000; safe range same as tested.

It is of course necessary to calibrate the equipment performing the
IP-layer Capacity measurement, to ensure that the expected capacity can be measured accurately and that equipment choices (processing speed, interface bandwidth, etc.) are suitably matched to the measurement range.

When assessing a Maximum rate as the metric specifies, artificially high (optimistic) values might be measured until some buffer on the path is filled. Other causes include bursts of back-to-back packets delivered by a path with idle intervals between them, while the measurement interval (dt) is small and aligned with the bursts. These artificial values could result in an unsustainable Maximum Capacity being observed while the method of measurement is searching for the Maximum, which must be avoided. This situation is different from the bimodal service rates (discussed under Reporting), which are characterized by a multi-second duration (much longer than the measured RTT) and repeatable behavior.

There are many ways that the Method of Measurement could handle this false-max issue. The default value for measurement of singletons (dt = 1 second) has proven to be of practical value during tests of this method; it allows the bimodal service rates to be characterized, and it has an obvious alignment with the reporting units (Mbps).

Another approach comes from Section 24 of RFC 2544 and its discussion of Trial duration, where
relatively short trials conducted as part of the search are followed by longer trials to make the final determination. In a production network, measurements of singletons and samples (the terms for trials and tests in Lab Benchmarking) must be limited in duration because they may be service-affecting. But there is sufficient value in repeating a sample with a fixed sending rate determined by the previous search for the Max IP-layer Capacity, to qualify the result in terms of the other performance metrics measured at the same time.

A qualification measurement for the search result is a subsequent measurement, sending at a fixed 99.x % of the Max IP-layer Capacity for I, or for an indefinite period. The same Max Capacity Metric is applied, and the qualification for the result is a sample without packet loss or a growing minimum-delay trend in subsequent singletons (or in each dt of the measurement interval, I). Samples exhibiting losses or increasing queue occupation require a repeated search and/or a test at a reduced fixed sender rate for qualification.

Here, as with any Active Capacity test, the test duration must be
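The qualification check above can be sketched as follows. This is a non-normative reading of the text: the data shape is assumed, and "growing minimum delay trend" is interpreted here simply as strictly increasing per-sub-interval delay minima.

```python
# Sketch (assumed data shape) of qualifying a fixed-rate re-measurement:
# the sample qualifies if no sub-interval shows packet loss and the
# per-sub-interval minimum delay shows no growing trend.

def qualifies(singletons):
    """singletons: one dict per dt with 'lost' count and 'min_delay_ms'."""
    if any(s['lost'] > 0 for s in singletons):
        return False
    min_delays = [s['min_delay_ms'] for s in singletons]
    # Strictly increasing minima suggest a standing queue building
    # at the bottleneck (increasing queue occupation).
    rising = all(b > a for a, b in zip(min_delays, min_delays[1:]))
    return not rising

good = [{'lost': 0, 'min_delay_ms': d} for d in (12, 11, 12, 11, 12)]
queueing = [{'lost': 0, 'min_delay_ms': d} for d in (12, 15, 19, 24, 30)]
print(qualifies(good), qualifies(queueing))  # True False
```

A failing sample would trigger a repeated search, or a repeated qualification at a further-reduced fixed sender rate, as described above.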
kept short. 10-second tests for each direction of transmission are common today, and the default measurement interval specified here is I = 10 seconds. The combination of a fast, congestion-aware search method and user-network coordination makes a unique contribution to production testing. The Max IP-layer Capacity metric and method for assessing performance are very different from the classic Throughput metric and methods of RFC 2544: they use near-real-time load adjustments that are sensitive to loss and delay, similar to other congestion control algorithms used on the Internet every day, along with a limited test duration. Classic Throughput measurements, on the other hand, can produce sustained overload conditions for extended periods of time. Individual trials in a test governed by a binary search can last 60 seconds for each step, and the final confirmation trial may be even longer. This is very different from "normal" traffic levels, but overload conditions are not a concern in an isolated test environment. The concerns raised in RFC 6815 were that such methods would be let loose on production networks; instead, the authors challenged the standards community to develop metrics and methods like those described in this memo.

In general, the widespread measurements that this memo encourages will encounter widespread behaviors. The bimodal IP Capacity behaviors already discussed in Section 6.6 are good examples.

In general, it is RECOMMENDED to locate test endpoints as close to
the intended measured link(s) as practical (this is not always possible for reasons of scale; there is a limit on the number of test endpoints, from many perspectives, management and measurement traffic among them). The testing operator MUST set a value for the MaxHops parameter, based on the expected path length. This parameter can keep measurement traffic from straying too far beyond the intended path.

The path measured may be stateful based on many factors, and the Parameter "Time of day" when a test starts may not be enough information. Repeatable testing may require the time from the beginning of a measured flow, and how the flow is constructed, including how much traffic has already been sent on that flow when a state change is observed, because the state change may be based on time, on bytes sent, or both.

Many different traffic shapers and on-demand access technologies may be encountered, as anticipated in RFC 7312, and play a key role in measurement results. Methods MUST be prepared to provide a short preamble transmission to activate on-demand access, and to discard the preamble from subsequent test results.

Conditions which might be encountered during measurement, where
Conditions which might be encountered during measurement, where packet losses may occur independently from the measurement sending rate:

- Congestion of an interconnection or backbone interface may appear as packet losses distributed over time in the test stream, due to much higher rate interfaces in the backbone.

- Packet loss due to use of Random Early Detection (RED) or other active queue management may or may not affect the measurement flow if competing background traffic (other flows) is simultaneously present. There may be only small delay variation independent of sending rate under these conditions, too.

- Persistent competing traffic on measurement paths that include shared transmission media may cause random packet losses in the test stream.

It is possible to mitigate these conditions using the
flexibility of the load-rate adjusting algorithm described in Section 8.1 above (tuning specific parameters).

If the measurement flow burst duration happens to be on the order of, or smaller than, the burst size of a shaper or a policer in the path, then the line rate might be measured rather than the bandwidth limit imposed by the shaper or policer. If this condition is suspected, alternate configurations SHOULD be used.

In general, results depend on the sending stream characteristics;
the measurement community has known this for a long time and needs to keep it front of mind. Although the default is a single flow (F=1) for testing, use of multiple flows may be advantageous for the following reasons:

- the test hosts may be able to create a higher load than with a single flow, or parallel test hosts may be used to generate one flow each.

- there may be link aggregation present (flow-based load balancing), and multiple flows are needed to occupy each member of the aggregate.

- access policies may limit the IP-Layer Capacity depending on the Type-P of packets, possibly reserving capacity for various stream types.

Each flow would be controlled using its own implementation of the Load Adjustment (Search) Algorithm.

As testing continues, implementers should expect some evolution in
the methods. The ITU-T has published a Supplement (60) to the Y-series of Recommendations, "Interpreting ITU-T Y.1540 maximum IP-layer capacity measurements", , which is the result of continued testing with the metric; those results have improved the method described here.

This section is for the benefit of the Document Shepherd's form,
and will be deleted prior to final review.

Much of the development of the method, and comparisons with existing methods, conducted at IETF Hackathons and elsewhere have been based on the example udpst Linux measurement tool (which is a working reference for further development) . The current project:

- is a utility that can function as a client or a server daemon

- requires a successful client-initiated setup handshake between cooperating hosts and allows firewalls to control inbound unsolicited UDP traffic, which either goes to a control port [expected and w/authentication] or to ephemeral ports that are only created as needed. Firewalls protecting each host can both continue to do their job normally. This aspect is similar to many other test utilities available.

- is written in C and built with gcc (release 9.3) and its standard run-time libraries

- allows configuration of most of the parameters described in Sections 4 and 7.

- supports IPv4 and IPv6 address families.

- supports IP-layer packet marking.

The singleton IP-Layer Capacity results SHOULD be accompanied by the
context under which they were measured:

- timestamp (especially the time when the maximum was observed in dtn)

- source and destination (by IP address or other meaningful ID)

- other inner parameters of the test case (Section 4)

- outer parameters, such as "test conducted in motion" or other factors belonging to the context of the measurement

- result validity (indicating cases where the process was somehow interrupted or the attempt failed)

- a field where unusual circumstances could be documented, and another one for "ignore/mask out" purposes in further processing

The Maximum IP-Layer Capacity results SHOULD be reported in the
format of a table with a row for each of the test Phases and Number of Flows. There SHOULD be columns for the phases with number of flows, and for the resultant Maximum IP-Layer Capacity results for the aggregate and each flow tested.

As mentioned in Section 6.6, bimodal (or multi-modal) maxima SHALL be reported for each mode separately.

Phase, # Flows   Max IP-Layer Capacity, Mbps   Loss Ratio   RTT min, max, msec
Search, 1        967.31                        0.0002       30, 58
Verify, 1        966.00                        0.0000       30, 38

Static and configuration parameters:

The sub-interval time, dt, MUST accompany
IP-Layer Capacity results, and the remaining Parameters from Section 4, General Parameters.

The PM list metrics corresponding to the sub-interval where the Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer Capacity results, for each test phase.
The IP-Layer Sender Bit Rate results SHOULD be reported in the format of a table with a row for each of the test Phases, sub-intervals (st), and Number of Flows. There SHOULD be columns for the phases with number of flows, and for the resultant IP-Layer Sender Bit Rate results for the aggregate and each flow tested.

Phase, Flow or Aggregate   st, sec       Sender Bit Rate, Mbps
Search, 1                  0.00 - 0.05   345
Search, 2                  0.00 - 0.05   289
Search, Agg                0.00 - 0.05   634

Static and configuration parameters:

The sub-interval time, st, MUST accompany a report of Sender IP-Layer Bit Rate results.
Also, the values of the remaining Parameters from Section 4, General Parameters, MUST be reported.

As a part of the multi-Standards Development Organization (SDO)
harmonization of this metric and method of measurement, one of the areas where the Broadband Forum (BBF) contributed its expertise was in the definition of an information model and a data model for configuration and reporting. These models are consistent with the metric parameters and default values specified as lists in this memo. provides the Information model that was used to prepare a full data model in related BBF work. The BBF has also carefully considered topics within its purview, such as the placement of measurement systems within the access architecture. For example, timestamp resolution requirements that influence the choice of the test protocol are provided in Table 2 of .

Active metrics and measurements have a long history
considerations. The security considerations that apply to any active
measurement of live paths are relevant here. See and .When considering privacy of those involved in measurement or those
whose traffic is measured, the sensitive information available to
potential observers is greatly reduced when using active techniques
which are within this scope of work. Passive observations of user
traffic for measurement purposes raise many privacy issues. We refer the
reader to the privacy considerations described in the Large Scale
Measurement of Broadband Performance (LMAP) Framework , which covers active and passive techniques.There are some new considerations for Capacity measurement as
described in this memo.

- Cooperating source and destination hosts and agreements to test the path between the hosts are REQUIRED. Hosts perform in either the Src or Dst role.

- It is REQUIRED to have a user client-initiated setup handshake between cooperating hosts that allows firewalls to control inbound unsolicited UDP traffic, which either goes to a control port [expected and w/authentication] or to ephemeral ports that are only created as needed. Firewalls protecting each host can both continue to do their job normally.

- Client-server authentication and integrity protection for feedback messages conveying measurements are RECOMMENDED.

- Hosts MUST limit the number of simultaneous tests to avoid resource exhaustion and inaccurate results.

- Senders MUST be rate-limited. This can be accomplished using a pre-built table defining all the offered load rates that will be supported (Section 8.1). The recommended load-control search algorithm results in "ramp up" from the lowest rate in the table.
- Service subscribers with limited data volumes who conduct extensive capacity testing might experience the effects of Service Provider controls on their service. Testing with the Service Provider's measurement hosts SHOULD be limited in frequency and/or overall volume of test traffic (for example, the range of I duration values SHOULD be limited).
The exact specification of these features is left for future protocol development.

This memo makes no requests of IANA.

Thanks to Joachim Fabini, Matt Mathis, J. Ignacio Alvarez-Hamelin,
Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray Kucherawy, and Benjamin Kaduk for their extensive comments on the memo and related topics.

The following is a pseudo-code implementation of the algorithm described in Section 8.1.

"copycat: Testing Differential Treatment of New Transport Protocols in the Wild", ANRW '17

Recommendation Y.Sup60 (09/20), "Interpreting ITU-T Y.1540 maximum IP-layer capacity measurements", AT&T

Broadband Forum TR-471, "IP Layer Capacity Metrics and Measurement", AT&T Labs

"Internet protocol data communication service - IP packet transfer and availability performance parameters", ITU-T

"LS on harmonization of IP Capacity and Latency Parameters: Consent of Draft Rec. Y.1540 on IP packet transfer performance parameters and New Annex A with Lab & Field Evaluation Plans", ITU-T

"LS - Harmonization of IP Capacity and Latency Parameters: Revision of Draft Rec. Y.1540 on IP packet transfer performance parameters and New Annex A with Lab Evaluation Plan", ITU-T

"UDP Speed Test Open Broadband project", udpst Project Collaborators