Test Plan and Results for
Advancing RFC 2680 on the Standards Track

Authors:

L. Ciavattone, AT&T Labs, 200 Laurel Avenue South, Middletown, NJ 07748, USA, +1 732 420 1239, lencia@att.com

R. Geib, Deutsche Telekom, Heinrich Hertz Str. 3-7, 64295 Darmstadt, Germany, +49 6151 58 12747, Ruediger.Geib@telekom.de

A. Morton, AT&T Labs, 200 Laurel Avenue South, Middletown, NJ 07748, USA, +1 732 420 1571 (fax +1 732 368 1192), acmorton@att.com, http://home.comcast.net/~acmacm/

M. Wieser, Technical University Darmstadt, Darmstadt, Germany, matthias_michael.wieser@stud.tu-darmstadt.de

This memo proposes to advance a performance metric RFC along the standards track, specifically RFC 2680 on One-way Loss Metrics.
Observing that the metric definitions themselves should be the primary
focus rather than the implementations of metrics, this memo describes
the test procedures to evaluate specific metric requirement clauses to
determine if the requirement has been interpreted and implemented as
intended. Two completely independent implementations have been tested
against the key specifications of RFC 2680.

In this version, the results are presented in the R-tool output form.
Beautification is future work.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

The IETF (IP Performance Metrics working group, IPPM) has considered how to advance their metrics along the standards track since 2001.

A renewed work effort sought to investigate ways in which the
measurement variability could be reduced and thereby simplify the
problem of comparison for equivalence.

There is consensus
that the metric definitions should be the primary focus of evaluation
rather than the implementations of metrics, and equivalent results are
deemed to be evidence that the metric specifications are clear and
unambiguous. This is the metric specification equivalent of protocol
interoperability. The advancement process either produces confidence
that the metric definitions and supporting material are clearly worded
and unambiguous, OR, identifies ways in which the metric definitions
should be revised to achieve clarity.

The process should also permit identification of options that were
not implemented, so that they can be removed from the advancing
specification (this is an aspect more typical of protocol advancement
along the standards track).

This memo's purpose is to implement the current approach for RFC 2680.

In particular, this memo documents consensus on the extent of
tolerable errors when assessing equivalence in the results. In
discussions, the IPPM working group agreed that the test plan and procedures
should include the threshold for determining equivalence, and this
information should be available in advance of cross-implementation
comparisons. This memo includes procedures for same-implementation
comparisons to help set the equivalence threshold.

Another aspect of the metric RFC advancement process is the
requirement to document the work and results. The procedures are expanded to include sample implementation and interoperability reports. This memo follows the template for the report that accompanies the protocol action request submitted to the Area Director, including a description of the test set-up, the procedures, the results for each implementation, and conclusions.

This plan is intended to cover all critical requirements and sections of RFC 2680.

Note that there are only five instances of the requirement term "MUST" in RFC 2680 outside of the boilerplate and references.

Material may be added as it is "discovered" (apparently, not all requirements use requirements language).

The process described in Section 3.5 of the advancement testing procedures takes as a first principle
that the metric definitions, embodied in the text of the RFCs, are the
objects that require evaluation and possible revision in order to
advance to the next step on the standards track.

IF two implementations do not measure an equivalent singleton or sample, or produce an equivalent statistic,

AND sources of measurement error do not adequately explain the lack of agreement,

THEN the details of each implementation should be audited along with the exact definition text, to determine if there is a lack of clarity that has caused the implementations to vary in a way that affects the correspondence of the results.

IF there was a lack of clarity or multiple legitimate interpretations of the definition text,

THEN the text should be modified and the resulting memo proposed for consensus and advancement along the standards track.

Finally, all the findings MUST be documented in a report that can
support advancement on the standards track, similar to the sample reports described earlier. The list of measurement devices used
in testing satisfies the implementation requirement, while the test
results provide information on the quality of each specification in the
metric RFC (the surrogate for feature interoperability).

One metric implementation used was NetProbe version 5.8.5 (an
earlier version is used in the WIPM system and deployed world-wide).
NetProbe uses UDP packets of variable size, and can produce test streams
with Periodic or Poisson sample distributions.

The other metric implementation used was Perfas+ version 3.1,
developed by Deutsche Telekom. Perfas+ uses UDP unicast packets of
variable size (but also supports TCP and multicast). Test streams with
periodic, Poisson, or uniform sample distributions may be used.

Figure 1 shows a view of the test path as each implementation's test
flows pass through the Internet and the L2TPv3 tunnel IDs (1 and 2),
based on a similar figure in the earlier test plan and results memo.

The testing employs the Layer 2 Tunnel Protocol, version 3 (L2TPv3)
tunnel between test sites on the
Internet. The tunnel IP and L2TPv3 headers are intended to conceal the
test equipment addresses and ports from hash functions that would tend
to spread different test streams across parallel network resources, with
likely variation in performance as a result.

At each end of the tunnel, one pair of VLANs encapsulated in the tunnel is looped back so that test traffic is returned to each test
site. Thus, test streams traverse the L2TP tunnel twice, but appear to
be one-way tests from the test equipment point of view.

The network emulator is a host running Fedora 14 Linux
[http://fedoraproject.org/] with IP forwarding enabled and the "netem"
network emulator as part of the Fedora kernel 2.6.35.11
[http://www.linuxfoundation.org/collaborate/workgroups/networking/netem]
loaded and operating. Connectivity across the netem/Fedora host was
accomplished by bridging Ethernet VLAN interfaces together with "brctl"
commands (e.g., eth1.100 <-> eth2.100). The netem emulator was
activated on one interface (eth1) and only operates on test streams
traveling in one direction. In some tests, independent netem instances
operated separately on each VLAN.

The links between the netem emulator host and router and switch were
found to be 100baseTx-HD (100 Mbps, half duplex) as reported by "mii-tool" when the testing was complete. Use of half duplex was not
intended, but probably added a small amount of delay variation that
could have been avoided in full-duplex mode.

Each individual test was run with common packet rates (1 pps, 10 pps),
Poisson/Periodic distributions, and IP packet sizes of 64, 340, and 500
bytes.

For these tests, a stream of at least 300 packets was sent from
Source to Destination in each implementation. Periodic streams (as per RFC 3432) with 1-second spacing were used, except as noted.
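The following minimal R sketch is illustrative only and is not part of the original test procedures: it shows how send schedules of the two stream types described above could be constructed, using the 300-packet and 1 packet per second values from the text; the variable names are hypothetical.

   ## Illustrative R sketch: send schedules for the two stream types.
   ## Packet count and rates are taken from the text above.
   n <- 300                                         # packets per stream
   periodic_send <- seq(0, by = 1, length.out = n)  # 1-second spacing
   set.seed(1)                                      # repeatable example only
   poisson_send <- cumsum(rexp(n, rate = 1))        # exponential inter-send
                                                    # times, lambda = 1/s
   head(periodic_send)
   head(poisson_send)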
As required in Section 2.8.1 of RFC 2680, the packet Type-P must be reported. The packet Type-P for this test was IP-UDP with Best Effort DSCP. These headers were encapsulated according to the L2TPv3 specifications, and thus may not influence the treatment received as the packets traversed the Internet.

With the L2TPv3 tunnel in use, the metric name for the testing
configured here (with respect to the IP header exposed to Internet
processing) is:

   Type-IP-protocol-115-One-way-Packet-Loss-<StreamType>-Stream

With (Section 3.2 of RFC 2680) Metric Parameters:

+ Src, the IP address of a host (12.3.167.16 or 193.159.144.8)

+ Dst, the IP address of a host (193.159.144.8 or 12.3.167.16)

+ T0, a time

+ Tf, a time

+ lambda, a rate in reciprocal seconds

+ Thresh, a maximum waiting time in seconds (see Section 2.8.2 of RFC 2680)

and (Section 3.8 of RFC 2680) Metric Units: a sequence of pairs; the elements of each pair are:

+ T, a time, and

+ L, either a zero or a one

The values of T in the sequence are monotonic increasing. Note that T
would be a valid parameter to the *singleton* Type-P-One-way-Packet-Loss, and that L would be a valid value of Type-P-One-way-Packet-Loss (see Section 2 of RFC 2680).
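As an illustration only, a sample with these Metric Units can be held as a simple table of <T, L> pairs; the hypothetical R sketch below also checks the two properties stated above, namely that T is monotonic increasing and that L is either a zero or a one.

   ## Illustrative R sketch: a loss sample as <T, L> pairs.
   ## T = a time (seconds), L = 1 if lost, 0 if received.
   ## The values are made up for illustration.
   sample_TL <- data.frame(
     T = c(0.0, 1.0, 2.0, 3.0, 4.0),
     L = c(0,   0,   1,   0,   1)
   )
   stopifnot(all(diff(sample_TL$T) > 0))    # T monotonic increasing
   stopifnot(all(sample_TL$L %in% c(0, 1))) # L is a zero or a one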
Also, Section 2.8.4 of RFC 2680 recommends that the path SHOULD be reported. In this test set-up, most of the path details will be concealed from the implementations by the L2TPv3 tunnels; thus, a more informative path traceroute can be conducted by the routers at each location.

When NetProbe is used in production, a traceroute is conducted in
parallel at the outset of measurements.

Perfas+ does not support traceroute.

It was only possible to conduct the traceroute for the measured path
on one of the tunnel-head routers (the normal trace facilities of the
measurement systems are confounded by the L2TPv3 tunnel
encapsulation).

An implementation is required to report calibration results on clock
synchronization, as specified in Section 2.8.3 of RFC 2680 (also required in Section 3.7 of RFC 2680 for sample metrics).

Also, it is recommended to report the probability that a packet
successfully arriving at the destination network interface is
incorrectly designated as lost due to resource exhaustion (see Section 2.8.3 of RFC 2680).

For NetProbe and Perfas clock synchronization test results, refer
to Section 4 of the earlier test plan and results memo.

Since both measurement implementations have resource limitations,
it is theoretically possible that these limits could be exceeded and a
packet that arrived at the destination successfully might be discarded
in error.

In previous test efforts, NetProbe produced 6
multicast streams with an aggregate bit rate over 53 Mbit/s, in order
to characterize the 1-way capacity of a NISTNet-based emulator.
Neither the emulator nor the pair of NetProbe implementations used in
this testing dropped any packets in these streams.

The maximum load used here between any 2 NetProbe implementations was 11.5 Mbit/s, divided equally among 3 unicast test streams. We
conclude that steady resource usage does not contribute error
(additional loss) to the measurements.

In this section, we provide the numerical limits on comparisons
between implementations, in order to declare that the results are
equivalent and, therefore, that the tested specification is clear.

A key point is that the allowable errors, corrections, and confidence
levels only need to be sufficient to detect misinterpretation of the tested specification that results in diverging implementations.

Also, the allowable error must be sufficient to compensate for
measured path differences. It was simply not possible to measure fully
identical paths in the VLAN-loopback test configuration used, and this
practical compromise must be taken into account.

For Anderson-Darling K-sample (ADK) comparisons, the required confidence factor for the cross-implementation comparisons SHALL be the smallest of:

+ a 0.95 confidence factor at 1 packet resolution, or

+ the smallest confidence factor (in combination with resolution) of the two same-implementation comparisons for the same test conditions (if the number of streams is sufficient to allow such comparisons).

For Anderson-Darling Goodness-of-Fit (ADGoF) comparisons, the required level of significance
for the same-implementation Goodness-of-Fit (GoF) SHALL be 0.05 or 5%, as specified in Section 11.4 of RFC 2330. This is equivalent to a 95% confidence factor.
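The sketch below is an illustration of how the ADK criterion above can be applied with the adk R package cited in the references; the per-stream loss counts are hypothetical placeholders, and the exact analysis scripts used for the results in the following section are not reproduced here.

   ## Illustrative R sketch: ADK comparison of per-stream loss counts
   ## from two implementations. Loss-count vectors are hypothetical.
   library(adk)
   netprobe_losses <- c(31, 28, 35, 30, 29, 33)  # implementation 1
   perfas_losses   <- c(30, 34, 27, 32, 31, 28)  # implementation 2
   result <- adk.test(netprobe_losses, perfas_losses)
   print(result)
   ## Equivalence (at 1 packet resolution) is declared when the samples
   ## are not distinguished at the 5% significance level, i.e., the
   ## 0.95 confidence factor stated above.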
This section describes some results from production network (cross-Internet) tests with measurement devices implementing IPPM metrics and a network emulator to create relevant conditions, to determine whether the metric definitions were interpreted consistently by implementors.

The procedures are similar to those contained in Appendix A.1 of the earlier test plan and results memo for One-way Delay.

This test determines if implementations produce results that appear
to come from a common packet loss distribution, as an overall
evaluation of Section 3 of RFC 2680, "A
Definition for Samples of One-way Packet Loss". Same-implementation
comparison results help to set the threshold of equivalence that will
be applied to cross-implementation comparisons.

This test is intended to evaluate measurements in Sections 2, 3, and 4 of RFC 2680.

By testing the extent to which the counts of one-way packet loss on different test streams of two
implementations appear to be from the same loss process, we reduce
comparison steps because comparing the resulting summary statistics
(as defined in Section 4 of RFC 2680) would
require a redundant set of equivalence evaluations. We can easily
check whether the single statistic in Section 4 of RFC 2680 was implemented, and report on that fact.

The procedure is as follows:

1. Configure an L2TPv3 path between test sites, and each pair of measurement devices to operate tests in their designated pair of VLANs.

2. Measure a sample of one-way packet loss singletons with 2 or more implementations, using identical options and network emulator settings (if used).

3. Measure a sample of one-way packet loss singletons with *four or more* instances of the *same* implementations, using identical options, noting that connectivity differences SHOULD be the same as for the cross-implementation testing.

4. If fewer than ten test streams are available, skip to step 7.

5. Apply the ADK comparison procedures (see Appendix C of the earlier test plan and results memo) and determine the resolution and confidence factor for distribution equivalence of each same-implementation comparison and each cross-implementation comparison.

6. Take the coarsest resolution and confidence factor for distribution equivalence from the same-implementation pairs, or the limit defined in Section 5 above, as a limit on the equivalence threshold for these experimental conditions.

7. Compare the cross-implementation ADK performance with the equivalence threshold determined in step 6 to determine if equivalence can be declared.

The common parameters used for tests in this section are given below with each comparison.

The cross-implementation comparison uses a simple ADK analysis
, where all
NetProbe loss counts are compared with all Perfas loss results.

In the result analysis of this section:

+ All comparisons used 1 packet resolution.

+ No Correction Factors were applied.

+ The 0.95 confidence factor (1.960 for cross-implementation comparison) was used.

Tests described in this section used:

+ IP header + payload = 340 octets

+ Periodic sampling at 1 packet per second

+ Test duration = 1200 seconds (during April 7, 2011, EDT)

The netem emulator was set for 100 ms constant delay, with 10%
loss ratio. In this experiment, the netem emulator was configured to
operate independently on each VLAN and thus the emulator itself is a
potential source of error when comparing streams that traverse the
test path in different directions.

The cross-implementation comparisons pass the ADK criterion.

Tests described in this section used:

+ IP header + payload = 64 octets

+ Periodic sampling at 1 packet per second

+ Test duration = 300 seconds (during March 24, 2011, EDT)

The netem emulator was set for 0 ms constant delay, with 10%
loss ratio.

The cross-implementation comparisons pass the ADK criterion.

Tests described in this section used:

+ IP header + payload = 64 octets

+ Poisson sampling at lambda = 1 packet per second

+ Test duration = 20 minutes (during April 27, 2011, EDT)

The netem configuration was 0 ms delay and 10% loss, but
there were two passes through an emulator for each stream, and loss
emulation was present for 18 minutes of the 20-minute test.

The cross-implementation comparisons barely pass the ADK
criterion at 95% = 1.960 when adjusting for ties.

We conclude that the two implementations are capable of producing
equivalent one-way packet loss measurements based on their
interpretation of RFC 2680.

This test determines if implementations use the same configured
maximum waiting time delay from one measurement to another under
different delay conditions, and correctly declare packets arriving in
excess of the waiting time threshold as lost.

See Section 2.8.2 of RFC 2680.

The procedure is as follows:

1. Configure an L2TPv3 path between test sites, and each pair of measurement devices to operate tests in their designated pair of VLANs.

2. Configure the network emulator to add 1.0 sec one-way constant delay in one direction of transmission.

3. Measure (average) one-way delay with 2 or more implementations, using identical waiting time thresholds (Thresh) for loss, set at 3 seconds.

4. Configure the network emulator to add 3 sec one-way constant delay in one direction of transmission, equivalent to 2 seconds of additional one-way delay (or change the path delay while the test is in progress, when there are sufficient packets at the first delay setting).

5. Repeat/continue measurements.

6. Observe that the increase measured in step 5 caused all packets with 2 sec additional delay to be declared lost, and that all packets that arrive successfully in step 3 are assigned a valid one-way delay.

The common parameters used for tests in this section are:

+ IP header + payload = 64 octets

+ Poisson sampling at lambda = 1 packet per second

+ Test duration = 900 seconds total (March 21)

The netem emulator was set to add constant delays as
specified in the procedure above.

In NetProbe, the Loss Threshold is implemented uniformly over all
packets as a post-processing routine. With the Loss Threshold set at
3 seconds, all packets with one-way delay >3 seconds are marked
"Lost" and included in the Lost Packet list with their transmission
time (as required in Section 3.3 of RFC 2680).
This resulted in 342 packets designated as lost in one of the test
streams (with average delay = 3.091 sec).

Perfas uses a fixed Loss Threshold, which was not adjustable
during this study. The Loss Threshold is approximately one minute,
and emulation of a delay of this size was not attempted. However, it
is possible to implement any delay threshold desired with a
post-processing routine and subsequent analysis. Using this method,
195 packets would be declared lost (with average delay = 3.091
sec).

Both implementations assume that any constant delay value desired can be used as the Loss Threshold, since all delays are stored as a pair <Time, Delay>, as required in RFC 2680. This is a simple way to enforce the constant loss threshold envisioned in RFC 2680 (see the specific section reference above). We take the position that the assumption of post-processing is compliant, and that the text of the RFC should be revised slightly to include this point.
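The post-processing approach described above can be illustrated with the short R sketch below; the delay values are hypothetical, and only the 3-second Thresh comes from the procedure in this section. This is a sketch of the general approach, not either implementation's actual code.

   ## Illustrative R sketch: apply a constant Loss Threshold (Thresh)
   ## in post-processing to stored <Time, Delay> pairs. A Delay of NA
   ## means the packet never arrived. Values are hypothetical.
   thresh <- 3.0                     # seconds, from the procedure above
   pkts <- data.frame(
     Time  = 0:5,                    # transmission times
     Delay = c(1.01, 1.02, 3.05, NA, 3.20, 1.00)
   )
   pkts$Lost <- is.na(pkts$Delay) | pkts$Delay > thresh
   ## Lost packets are reported with their transmission times:
   pkts$Time[pkts$Lost]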
Section 3.6 of RFC 2680 indicates, using an uncapitalized "must", that implementations need to ensure that reordered packets are handled correctly. In essence, this is an implied requirement because the correct packet must be identified as lost if it fails to arrive before its delay threshold under all circumstances, and reordering is always a possibility on IP network paths. See RFC 4737 for the definition of reordering used in IETF standard-compliant measurements.

Using the procedure of Section 6.1, the netem emulator was set to
introduce significant delay (2000 ms) and delay variation (1000 ms), plus 10% loss; the delay variation was sufficient to produce packet reordering because each packet's emulated delay is independent of the others.

The tests described in this section used:

+ IP header + payload = 64 octets

+ Periodic sampling = 1 packet per second

+ Test duration = 600 seconds (during May 2, 2011, EDT)

The test results indicate that extensive reordering was present.
Both implementations capture the extensive delay variation between
adjacent packets. In NetProbe, packet arrival order is preserved in
the raw measurement files, so an examination of arrival packet
sequence numbers also indicates reordering.

Despite extensive continuous packet reordering present in the transmission path, the distributions of loss counts from the two implementations pass the ADK criterion at 95% = 1.960.
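For illustration only, the sequence-number examination mentioned above can be sketched as follows; a packet is flagged here when its sequence number is smaller than the largest sequence number already received, which is an informal check rather than the full RFC 4737 metric. The arrival order shown is hypothetical.

   ## Illustrative R sketch: flag reordered arrivals from destination
   ## sequence numbers. The arrival order below is hypothetical.
   arrived   <- c(1, 2, 4, 3, 5, 7, 6, 8)
   reordered <- arrived < cummax(c(0, head(arrived, -1)))
   data.frame(arrived, reordered)
   sum(reordered)   # count of reordered arrivals (2 in this example)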
Section 3.7 of RFC 2680 indicates that implementations need to ensure that their sending process is reasonably close to a classic Poisson distribution when used. Much more detail on sample distribution generation and Goodness-of-Fit testing is specified in Section 11.4 of RFC 2330 and the Appendix of the earlier test plan and results memo.

In this section, each implementation's Poisson distribution is
compared with an idealized version of the distribution available in the base functionality of the R-tool for Statistical Analysis, and the comparison is performed using the Anderson-Darling Goodness-of-Fit test package (ADGofTest). The Goodness-of-Fit criterion requires a test statistic value AD <= 2.492 for 5% significance. The Appendix of the earlier test plan and results memo also notes that there may be difficulty satisfying the ADGofTest when the sample includes many packets (when 8192 were used, the test always failed, but smaller sets of the stream passed).
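To make the criterion concrete, the R sketch below shows the kind of check involved, using the ADGofTest package cited in the references: the inter-send times of a candidate Poisson stream are tested against the exponential distribution with rate = lambda = 1, and the AD statistic is compared with the 2.492 value quoted above. The stream here is synthetic; the streams actually evaluated were recorded from the implementations as described below.

   ## Illustrative R sketch: ADGoF check of a Poisson send schedule.
   ## Inter-send times of a lambda = 1/s Poisson stream should be
   ## exponential with rate 1. The sample below is synthetic.
   library(ADGofTest)
   set.seed(2)
   send_times  <- cumsum(rexp(1000, rate = 1))
   inter_times <- diff(send_times)
   gof <- ad.test(inter_times, pexp, rate = 1)
   print(gof)       # reports the AD statistic and p-value
   ## Criterion used in this memo: AD <= 2.492 (5% significance).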
Both implementations were configured to produce Poisson distributions with lambda = 1 packet per second.

Section 11.4 of RFC 2330 suggests three possible measurement points to evaluate the Poisson distribution. The NetProbe analysis uses "user-level timestamps made just before
or after the system call for transmitting the packet".

The statistical summary for the two NetProbe streams shows that both means are near the specified lambda = 1.

The results of the ADGoF tests show that the 100- and 1000-packet sets from the two different streams (s1 and s2) all passed the AD <= 2.492 criterion.

Section 11.4 of RFC 2330 suggests three
possible measurement points to evaluate the Poisson distribution.
The Perfas analysis uses "wire times for the packets as recorded
using a packet filter". However, due to limited access at the Perfas
side of the test setup, the captures were made after the Perfas
streams traversed the production network, adding a small amount of
unwanted delay variation to the wire times (and possibly error due
to packet loss).

The statistical summary for the two Perfas streams shows that both means are near the specified lambda = 1.

The results of the ADGoF tests show that the 193-, 100-, and 93-packet sets from the two different streams (p1 and p2) all passed the AD <= 2.492 criterion.

Both NetProbe and Perfas implementations produce adequate Poisson
distributions according to the Anderson-Darling Goodness-of-Fit test at the 5% significance level (alpha = 0.05, or 95% confidence level).

We check which statistics were implemented, and report on those
facts, noting that Section 4 of RFC 2680 does not specify the calculations exactly, and gives only some illustrative examples.

We note that implementations refer to this metric as a loss ratio, and this is an area for likely revision of the text to make it more consistent with widespread usage.
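For completeness, the calculation commonly reported is illustrated below: the loss-average or loss-ratio statistic is computed directly from the L values of the sample, matching the widespread usage noted above. The sample values are hypothetical, and this is an illustration rather than a calculation mandated word-for-word by RFC 2680.

   ## Illustrative R sketch: loss average / loss ratio from a sample.
   ## L = 1 for lost packets, 0 otherwise. Values are hypothetical.
   L <- c(0, 0, 1, 0, 0, 0, 1, 0, 0, 0)
   loss_average       <- mean(L)                  # 0.2 for this sample
   loss_ratio_percent <- 100 * sum(L) / length(L) # 20% for this sample
   loss_average
   loss_ratio_percent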
The security considerations that apply to any active measurement of live networks are relevant here as well; see the security considerations of the relevant IPPM RFCs.

This memo makes no requests of IANA, and the authors hope that IANA personnel will be able to use their valuable time in other worthwhile pursuits.

The authors thank Lars Eggert for his continued encouragement to
advance the IPPM metrics during his tenure as AD Advisor.

Nicole Kowalski supplied the needed CPE router for the NetProbe side
of the test set-up, and graciously managed her testing in spite of
issues caused by dual-use of the router. Thanks, Nicole!

The "NetProbe Team" also acknowledges many useful discussions on
statistical interpretation with Ganga Maguluri.

References:

+ "K-sample Anderson-Darling Tests of Fit, for Continuous and Discrete Cases", Boeing Computer Services / Simon Fraser University.

+ "R: A language and environment for statistical computing", R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, http://www.R-project.org/.

+ "adk: Anderson-Darling K-Sample Test and Combinations of Such Tests", R package version 1.0.

+ "ADGofTest: Anderson-Darling Goodness-of-Fit Test", R package version 0.3.