IETF IP Performance Metrics WG (ippm), Tuesday, July 15, 2003, beginning
at 09:00.
The meeting was moderated by the working group chairs, Matt Zekauskas and
Merike Kaeo. Al Morton and Henk Uijterwaal took notes, which were
edited into these minutes by the chairs.
1. Agenda Bashing
2. Packet Reordering
3. Reordering Density
4. Reporting MIB
5. Status, Milestones & Futures
2. Packet Reordering
Al Morton led off the meat of the meeting with a report on the
reordering metric progress. Many editorial changes were made from -02 to
-03 to clarify text, and there are more in progress. A
reordering-free-run metric was added based on Jon Bennett's comments at the
last meeting. This metric characterizes how often reordering happens
(along with the previous gap metric and the general frequency metric).
Comment from Jon: he wants to answer the question, how often do you get
packets out of order? This is important for some applications that need
in-order packets. Applications that run well when packets are in order (or
increasing order) want a feel for how much "order" there is. Matt asked why
the gap metric is not good enough. Jon replied that it doesn't show how
often packets arrive in order.
Basically, only the beginning of a reordering event is defined. You can't
know when the event ends, in order to keep the metric orthogonal to loss.
The places where a sequence number increases but skips over one or more
expected numbers are called 'reordering discontinuities'. The Reordering
Gap is the difference between discontinuities, and the reordering free run
says how many packets arrive in order. To some extent, the gap shows the
"early packets", while the reordering free run shows the "late packets".
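The definitions above can be sketched in a few lines of Python. This is a
rough illustration only -- the draft's actual definitions are more careful
about duplicates and loss -- and the function name and the simple "next
expected sequence number" bookkeeping are assumptions made for the example:

```python
def reordering_sketch(arrivals):
    """Classify each arriving sequence number as in-order or reordered,
    find reordering discontinuities (in-order arrivals that skip ahead),
    and derive gaps between discontinuities and in-order free runs."""
    next_exp = None
    in_order = []           # one flag per arrival
    discontinuities = []    # arrival indices where the sequence skips ahead
    for i, s in enumerate(arrivals):
        if next_exp is None or s >= next_exp:
            in_order.append(True)
            if next_exp is not None and s > next_exp:
                discontinuities.append(i)   # skipped one or more numbers
            next_exp = s + 1
        else:
            in_order.append(False)          # arrived after a higher number
    # gap: distance (in arrivals) between successive discontinuities
    gaps = [b - a for a, b in zip(discontinuities, discontinuities[1:])]
    # free runs: lengths of maximal runs of in-order arrivals
    runs, run = [], 0
    for ok in in_order:
        if ok:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return in_order, discontinuities, gaps, runs
```

For the arrival order 1, 2, 4, 3, 5, 7, 6, 8 this reports discontinuities
at the arrivals of 4 and 7, a gap of 3 between them, and free runs of
3, 2, and 1 in-order packets.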
Al mentioned two pending updates. First, separating reordering by packet
sequence numbers from other views (time, or byte counts such as TCP
sequence numbers). Second, clarifying the base "nonreversing sequence"
metric so that it reports "true" for packets that are out of sequence by
its definition and "false" otherwise (currently, the text is not as clear).
One open issue that was mentioned: dealing with fragmentation. The
current draft says that you only consider reassembled IP datagrams (and
that's all it says). Jerry Perser (Spirent) would like all
reordering to be noted -- reassembly could hide the case where a
datagram was fragmented, the fragments reordered, and then
reassembled. Jerry will supply some text for the next draft revision.
3. Reordering Density
Next, Jerry McCollom from Agilent gave a presentation on reordering
density, a separate characterization metric developed at Colorado State
University in Fort Collins. The authors were not present, but
Jerry has been working with them, and presented slides they produced (see
slides). The metric has a model of a buffer, and looks at how many
packets are stored in the buffer to compute the metric. If the buffer
overflows, those packets are considered lost. If duplicates arrive,
they are ignored. The audience (and mailing list members) have
questioned this mixing of loss and reordering -- there will be a new draft
after the meeting that should address (some of?) the concerns.
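The buffer model as described can be approximated roughly as follows. The
actual reordering-density definition in the draft differs in detail; the
function name, the assumption that the first expected sequence number is
known, and the default threshold are all illustrative choices, not part of
the metric:

```python
from collections import Counter

def density_sketch(arrivals, first_seq, threshold=4):
    """Rough sketch of the buffer model: early packets wait in a bounded
    buffer; if the buffer would exceed `threshold`, the missing packet is
    declared lost; duplicates are ignored.  Returns a histogram mapping
    displacement (arrivals a packet spent buffered; slot 0 = in order)
    to a count, plus the number of packets declared lost."""
    next_exp = first_seq
    buf = {}                  # seq number -> arrival index when buffered
    hist = Counter()
    seen = set()
    lost = 0
    for i, s in enumerate(arrivals):
        if s in seen or s < next_exp:
            continue          # duplicate (or already delivered/declared lost)
        seen.add(s)
        buf[s] = i
        while next_exp in buf:              # deliver everything now in order
            hist[i - buf.pop(next_exp)] += 1
            next_exp += 1
        while len(buf) > threshold:         # overflow: declare a loss
            lost += 1
            next_exp += 1
            while next_exp in buf:
                hist[i - buf.pop(next_exp)] += 1
                next_exp += 1
    return hist, lost
```

For the arrival order 1, 2, 4, 3, 5 this yields four packets in slot 0,
one packet with displacement 1 (packet 4 waited one arrival for packet 3),
and no losses.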
This led to a good general reordering discussion, including adding usage
notes to each metric (where it works well, when it fails) and having each
metric indicate when it is out-of-scope (if such a condition exists).
Greg Ryan made a number of points, including that it is "squishy" as to
what exactly is measured; the drafts should explain what each metric
measures compared to a complete characterization of reordering (perhaps
edit distance), and give some examples. It is very important to specify
when the "domain of validity" is exceeded... for example, the
underlying assumption in both drafts is that reordering does not happen
very often (otherwise "frequency of reordering" and "reordering free gaps"
don't make much sense). What happens if the data is "completely messed up"
-- say the packets arrive in truly random order? Greg is interested in
providing some text to Al about edit distance. He will comment to the
mailing list.
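For reference, one common way to make "edit distance" concrete for a
permuted stream is the minimum number of packets that must be moved to
restore order, which equals the stream length minus its longest increasing
subsequence. The meeting did not pin down a definition, so this particular
formalization (and the function name) is an assumption for illustration:

```python
from bisect import bisect_left

def moves_to_restore_order(arrivals):
    """Minimum number of packets that must be pulled out and reinserted
    to sort the stream: len(arrivals) minus the length of its longest
    increasing subsequence (computed with the patience-sorting trick)."""
    tails = []      # tails[k] = smallest tail of an increasing subseq of length k+1
    for s in arrivals:
        pos = bisect_left(tails, s)
        if pos == len(tails):
            tails.append(s)
        else:
            tails[pos] = s
    return len(arrivals) - len(tails)
```

An in-order stream scores 0; the arrival order 1, 2, 4, 3, 5 scores 1; a
fully reversed stream of n packets scores n - 1, matching the intuition
that such data is "completely messed up".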
Jon Bennett probed how reordering density interacts with packet loss.
Jerry said that the metric measures density, not loss. The metric assumes a
threshold (the buffer size) beyond which a packet is declared lost, and
leaves it at that. The authors believe that a characterization of loss
needs to be added: how often are packets lost?
Jon asked what happens if the threshold is wrong? In the end, it depends on
the application's data stream. So Jon wondered if the metric tried to
represent a prototypical application... and then time might become more
relevant than distance, especially for real-time applications. Jerry said
that without much thought it seems possible to compute a
sensible threshold that would cover a large number of
applications. Jon noted that the metric keeps state -- the packets in the
buffer. Thus it seems like there might be a number of metrics for
different applications. Either a metric tells you something
intuitive, or it's specific to an application. It seems like this
metric is too complicated to be intuitive, but not complicated enough to
match an application.
Jon also noted that with "tool devices", having to keep state
(particularly on a high-speed test) is impossible, or places a high load on
the device. You have to accept that there may be some
deficiency... but being loss impervious might be more practical than
trying to emulate an application.
Al thanked Jerry for coming to represent the metric. It's difficult to
maintain orthogonality between loss and reordering, but that's what the
current reordering draft tries to do. In the -00 draft of reordering
density, there is an example where the metric counts "early packets" as out
of order. The current draft calls "late packets" out of order, in part to
distinguish reordering from loss (the only way to distinguish them is to
have the late packet actually arrive).
Jerry M. mentioned that in the second example, some of the packets within
the buffer are themselves reordered, and the metric doesn't catch this.
Before releasing the packets, one could look at what is in the buffer so
that the metric reflects this accurately.
Al thought that sounded reasonable... he noted that one of the problems in
the industry right now is that multiple vendors looking at the same
stream would call different packets out of order; we need one
definition that everyone agrees to.
Jerry Perser wanted to pursue how easily such a metric could be
interpreted, say by a technical support person. Look at the first graph on
slide 9. What does it tell you? It looks like two packets got pushed out.
Al noted that two packets are out of order, but the chart has three bars.
Jerry M. noted that the 0 slot represents in-order packets.
Jerry P. was trying to understand the normalization... how many times was
packet 3 displaced? That's not what we're measuring. Merike noted it was
how many times a packet was in a particular buffer slot.
Jerry M. said that if packet 3 is what caused us to start buffering in the
first place... one tweak could be to release packet 3 as soon as it is
seen, recording a displacement of 3 for that packet.
Jerry P. was trying to think like a support person on the phone: given that
chart, can you figure out what happened? 55% of the time everything is in
order, and 45% of packets experienced some reordering. Jerry P. was
wondering whether that shows reordering or buffering.
4. Reporting MIB
Next, Emile Stephan reported on Reporting MIB progress. Major
changes include using VACM for security support instead of
rolling their own, and tweaking all the tables so that VACM access makes
sense (mainly replacing pointers with data in some cases). Emile also added
'burst' and 'multiburst' packet types -- MattZ asked where this came from,
and didn't get a clear answer. He will take it to the mailing list.
There were no comments from the floor, other than Andy Bierman, who noted
there were still a number of problems (probably due to the major
reorganization to satisfy VACM). The chairs asked whether anyone had read
the draft; no responses in this audience other than Andy. The chairs then
followed up asking who would use it; no responses in this audience. This is
different from the response in earlier meetings, where there was general
interest.
5. Status, Milestones & Futures
Finally, Matt once again went over the milestones. The OWAMP
requirements document was cycling between the author and the IESG for
clarification of the security section. OWAMP itself was being updated
based on a sample implementation. More information on the
implementation is available at
http://owamp.internet2.edu/ . The MIB work has been progressing. The
metrics registry MIB document has gone through last call and will be
submitted to the IESG.
Matt noted areas where there was interest, but it had waned. (In
particular: Cap, a BTC implementation; parameter sensitivity; ITU vs IPPM
metrics; path bottleneck definitions; and the applicability
statement.) On the applicability statement: Merike and Henk
Uijterwaal have repeatedly prodded vendors, and they indicate interest and
promise text, but the text never materializes. They want to shelve it as an
ippm item for now. Perhaps interest might be generated in doing it in
'ispmon', should it become a WG from a BOF? MattZ noted that a probe will be
sent to the list, and if no interest is generated, the chairs would drop the
items from the charter milestones.
Al noted that the advancing metrics draft was really needed. He wanted to
know if the -00 draft was still available even though it had expired?
Matt said that he had one, and that it was also available from some of the