Current Meeting Report
2.4.1 Benchmarking Methodology (bmwg)
NOTE: This charter is a snapshot of the 54th IETF Meeting in Yokohama, Japan. It may now be out-of-date.
Last Modified: 05/23/2002
Kevin Dubray <email@example.com>
Operations and Management Area Director(s):
Randy Bush <firstname.lastname@example.org>
Bert Wijnen <email@example.com>
Operations and Management Area Advisor:
Randy Bush <firstname.lastname@example.org>
General Discussion: email@example.com
To Subscribe: firstname.lastname@example.org
In Body: subscribe your_email_address
Description of Working Group:
The major goal of the Benchmarking Methodology Working Group is to make
a series of recommendations concerning the measurement of the
performance characteristics of various internetworking technologies;
further, these recommendations may focus on the systems or services
that are built from these technologies.
Each recommendation will describe the class of equipment, system, or
service being addressed; discuss the performance characteristics that
are pertinent to that class; clearly identify a set of metrics that aid
in the description of those characteristics; specify the methodologies
required to collect said metrics; and lastly, present the requirements
for the common, unambiguous reporting of benchmarking results.
Because the demands of a class may vary from deployment to deployment,
a specific non-goal of the Working Group is to define acceptance
criteria or performance requirements.
An ongoing task is to provide a forum for discussion regarding the
advancement of measurements designed to provide insight on the
operation of internetworking technologies.
Goals and Milestones:
| Done   | Expand the current Ethernet switch benchmarking methodology draft to define the metrics and methodologies particular to the general class of connectionless, LAN switches. |
| Done   | Edit the LAN switch draft to reflect the input from BMWG. Issue a new version of the document for comment. If appropriate, ascertain consensus on whether to recommend the draft for consideration as an RFC. |
| Done   | Take controversial components of the multicast draft to the mailing list for discussion. Incorporate changes to the draft and reissue appropriately. |
| Done   | Submit workplan for initiating work on Benchmarking Methodology for LAN Switching Devices. |
| Done   | Submit workplan for continuing work on the Terminology for Cell/Call Benchmarking draft. |
| Done   | Submit initial draft of Benchmarking Methodology for LAN Switching Devices. |
| Done   | Submit Terminology for IP Multicast Benchmarking draft for AD review. |
| Done   | Submit Benchmarking Terminology for Firewall Performance for AD review. |
| Done   | Progress ATM benchmarking terminology draft to AD review. |
| Done   | Submit Benchmarking Methodology for LAN Switching Devices draft for AD review. |
| Done   | Submit first draft of Firewall Benchmarking Methodology. |
| Done   | First draft of Terminology for FIB related Router Performance Benchmarking. |
| Done   | First draft of Router Benchmarking Framework. |
| Done   | Methodology for ATM Benchmarking for AD review. |
| Done   | Progress Frame Relay benchmarking terminology draft to AD review. |
| Done   | Terminology for ATM ABR Benchmarking for AD review. |
| MAR 01 | Router Benchmarking Framework to AD review. |
| JUL 01 | Terminology for FIB related Router Performance Benchmarking to AD review. |
| NOV 01 | Methodology for IP Multicast Benchmarking to AD review. |
| NOV 01 | Firewall Benchmarking Methodology to AD review. |
| NOV 01 | Net Traffic Control Benchmarking Terminology to AD review. |
| NOV 01 | Resource Reservation Benchmarking Terminology to AD review. |
| NOV 01 | EGP Convergence Benchmarking Terminology to AD review. |
| DEC 01 | First draft of Methodology for FIB related Router Performance Benchmarking. |
| FEB 02 | First draft of Net Traffic Control Benchmarking Methodology. |
| FEB 02 | Resource Reservation Benchmarking Methodology to AD review. |
| FEB 02 | Basic BGP Convergence Benchmarking Methodology to AD review. |
| JUN 02 | Methodology for FIB related Router Performance Benchmarking to AD review. |
| NOV 02 | Net Traffic Control Benchmarking Methodology to AD review. |
Request For Comments:
| RFC1242 | I | Benchmarking Terminology for Network Interconnection Devices |
| RFC1944 | I | Benchmarking Methodology for Network Interconnect Devices |
| RFC2285 | I | Benchmarking Terminology for LAN Switching Devices |
| RFC2432 | I | Terminology for IP Multicast Benchmarking |
| RFC2544 | I | Benchmarking Methodology for Network Interconnect Devices |
| RFC2647 | I | Benchmarking Terminology for Firewall Performance |
| RFC2761 | I | Terminology for ATM Benchmarking |
| RFC2889 | I | Benchmarking Methodology for LAN Switching Devices |
| RFC3116 | I | Methodology for ATM Benchmarking |
| RFC3133 | I | Terminology for Frame Relay Benchmarking |
| RFC3134 | I | Terminology for ATM ABR Benchmarking |
| RFC3222 | I | Terminology for Forwarding Information Base (FIB) based Router Performance |
Benchmarking Methodology WG (bmwg)
MONDAY, July 15 at 1930-2200
CHAIR: Kevin Dubray email@example.com
The meeting was chaired by Marianne Lepp, firstname.lastname@example.org, who presented Kevin Dubray's apologies.
(Note taker: Elwyn Davies)
The agenda was examined and no changes were made.
0. Overall Status
Two drafts were in last call:
- Firewall draft
Authors replied to some comments and last call ended 10 July 2002. Discussion will continue on the list.
- BGP control plane convergence Terminology draft
No comments received so far.
A number of other drafts were on the table for consideration
- Diffserv terminology and methodology
Jerry Perser presented an update, discussed below.
- The OSPF drafts
There was no action at this meeting.
- FIB Methodology
Comment was solicited, none received. This draft is on its way to last call.
- Resource reservation
- Multicast methodology draft
Although this draft has expired, and has been around for a long time, the authors would like more comments on it. In particular, they would like the WG to consider whether the assumptions made when it was originally written (1996) are still valid. This will be taken to the mailing list.
1. Benchmarking Network-layer Traffic Control Mechanisms - status
update. (Jerry Perser)
Jerry Perser presented the changes since the draft was last discussed, with particular attention to the revised definitions of delay vectors, jitter vectors, and congestion detection.
The largest change was in the Sequence Tracking section, where packets can now be classified as 'In Sequence', 'Out of Order', or 'Duplicate'; the option for 'Out of Sequence' is no longer needed.
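As a rough illustration of the three-way classification (a sketch only; the draft's formal definitions govern, and the data structure here is an assumption), the split can be made by tracking the next expected sequence number and the set of sequence numbers already seen:

```python
def classify_packets(seq_numbers):
    """Classify received packets as 'in-sequence', 'out-of-order', or
    'duplicate' from their sequence numbers (illustrative sketch only)."""
    seen = set()
    expected = 0  # next sequence number expected on the egress stream
    results = []
    for seq in seq_numbers:
        if seq in seen:
            results.append('duplicate')
        elif seq == expected:
            results.append('in-sequence')
            expected = seq + 1
        else:
            # arrived either early or late relative to the expected number
            results.append('out-of-order')
            expected = max(expected, seq + 1)
        seen.add(seq)
    return results
```

For example, a received stream numbered 0, 2, 1, 1 would classify as in-sequence, out-of-order, out-of-order, duplicate.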
There was some discussion of the definition of congestion measurement. This involves monitoring the egress stream over a period of time to determine the proportion of input packets that appear. If this period is set too short, congestion can be concealed within the DUT. This had been extensively discussed by the authors.
Jerry Perser pointed out that the WG had agreed that a time period of 30 seconds for congestion measurements was suitable to determine how the DUT was behaving, while not being excessive when multiple test runs were required. A member of the audience thought a longer time period might be needed, but after some discussion it was eventually agreed that the definition was appropriate and that the time period (not actually in this draft) was appropriate. Jerry noted that when testing devices he normally runs a small number of tests at both 10 s and 60 s and checks for differences, to determine whether the device is concealing congestion over short periods.
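The measurement and the short-vs-long run comparison described above can be sketched as follows (function names and the tolerance value are assumptions for illustration, not part of the draft):

```python
def forwarding_ratio(offered_count, forwarded_count):
    """Fraction of packets offered to the DUT during the measurement
    window that appear on the egress stream (sketch only)."""
    if offered_count == 0:
        raise ValueError("no packets offered during the window")
    return forwarded_count / offered_count

def concealment_check(short_run_ratio, long_run_ratio, tolerance=0.01):
    """Compare forwarding ratios from a short run (e.g. 10 s) and a long
    run (e.g. 60 s); a markedly lower ratio on the long run suggests the
    DUT was concealing congestion (e.g. by buffering) over short windows."""
    return (short_run_ratio - long_run_ratio) > tolerance
```

For instance, if a 10 s run forwards 99% of offered packets but a 60 s run forwards only 93%, the short window was likely hiding congestion inside DUT buffers.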
The meeting agreed that the draft was ready for WG last call.
2. Rationale and Goals for a proposed, new BMWG work item:
"SONET/SDH APS Performance Benchmarking." (Takumi Kimura)
TK presented some slides describing how the automatic protection switching (APS) of a SONET/SDH transport network employed to interconnect two routers could interact with an IP packet flow it was carrying between the routers when a failure occurred in the ring. The background is that there are various means of using transport nodes such as SONET/SDH rings and RPR, and the recovery timing in the IP layer is different from that in lower layers (e.g. buffering effects in IP routers and hysteresis on interface up/down switching).
Hence there is a need to compare implementations.
Goals for work would be:
- Focus on SDH performance
- Consider IP layer aspects
- Consider defined topologies (pt to pt, Ring, Mesh)
- Consider different protection types (1+1, 1:n, etc.)
The proposed work would benchmark a simple arrangement of two routers and two optical add/drop multiplexers with a two leg ring (effectively main and backup paths) interconnecting the OADMs. The intention was to demonstrate the qualitative (eg packet dropping or misordering) and quantitative (timing) effects when the APS was stimulated by breaking one of the fibre connections between the OADMs.
The document would specify testing terminology and methodology, concentrating on the IP layer.
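As a hedged sketch of the quantitative side of such a benchmark (not the proposal's defined methodology), the APS switchover time is commonly estimated from the number of packets lost during failover at a constant offered rate, in the style of RFC 2544 loss-based timing:

```python
def estimated_outage_seconds(lost_packets, offered_rate_pps):
    """Estimate protection-switchover time from packets lost during the
    failover, assuming a constant offered rate in packets per second
    (a sketch under stated assumptions, not a defined methodology)."""
    if offered_rate_pps <= 0:
        raise ValueError("offered rate must be positive")
    return lost_packets / offered_rate_pps
```

For example, 5,000 packets lost at an offered rate of 100,000 pps corresponds to an estimated outage of about 50 ms.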
JP commented that this was a useful piece of work provided that the effects were measured at the IP layer in the routers connected to the ring.
JP did not expect that effects other than packet loss would be seen with the simple test setup but they could occur in principle.
Marianne Lepp pointed out that timers are an added consideration, since the outage will be reflected both in the SONET/SDH layer and in the Layer 3 hysteresis timer.
It was agreed that the work was potentially useful and TK was asked to refine his proposal, possibly incorporating other types of optical connection, such as RPR, and present at the next WG meeting.
3. Disposition of Benchmarking Terminology for Routers Supporting
Resource Reservation. <draft-ietf-bmwg-benchres-term-01.txt>
Last spring a WG last call was issued on the above (expired) draft. No commentary, pro or con, was received from this or other related WGs on the I-D. The question posed was: does this WG really desire this work to die a quiet death? Attendees were asked to retrieve an archived copy of the I-D, read it, and bring a recommendation to Yokohama (or the list). A cached copy of the document can be found here:
Jerry Perser expressed enthusiasm that this should move forward.
There were no other comments. There was no hard opinion on the future disposition of the document, and this will have to be considered on the list.
The meeting closed at approximately 19.50.
New work item proposal: Protection Performance Benchmarking