2.4.3 Benchmarking Methodology (bmwg)

NOTE: This charter is a snapshot of the 51st IETF Meeting in London, England. It may now be out-of-date. Last Modified: 31-Jul-01


Chair(s):

Kevin Dubray <kdubray@juniper.net>

Operations and Management Area Director(s):

Randy Bush <randy@psg.com>
Bert Wijnen <bwijnen@lucent.com>

Operations and Management Area Advisor:

Randy Bush <randy@psg.com>

Mailing Lists:

General Discussion: bmwg@ietf.org
To Subscribe: bmwg-request@ietf.org
In Body: subscribe your_email_address
Archive: ftp://ftp.ietf.org/ietf-mail-archive/bmwg/

Description of Working Group:

The major goal of the Benchmarking Methodology Working Group is to make a series of recommendations concerning the measurement of the performance characteristics of various internetworking technologies; further, these recommendations may focus on the systems or services that are built from these technologies.

Each recommendation will describe the class of equipment, system, or service being addressed; discuss the performance characteristics that are pertinent to that class; clearly identify a set of metrics that aid in the description of those characteristics; specify the methodologies required to collect said metrics; and lastly, present the requirements for the common, unambiguous reporting of benchmarking results.

Because the demands of a class may vary from deployment to deployment, a specific non-goal of the Working Group is to define acceptance criteria or performance requirements.

An ongoing task is to provide a forum for discussion regarding the advancement of measurements designed to provide insight into the operation of internetworking technologies.

Goals and Milestones:



Expand the current Ethernet switch benchmarking methodology draft to define the metrics and methodologies particular to the general class of connectionless, LAN switches.



Edit the LAN switch draft to reflect the input from BMWG. Issue a new version of document for comment. If appropriate, ascertain consensus on whether to recommend the draft for consideration as an RFC.



Take controversial components of multicast draft to mailing list for discussion. Incorporate changes to draft and reissue appropriately.



Submit workplan for continuing work on the Terminology for Cell/Call Benchmarking draft.



Submit workplan for initiating work on Benchmarking Methodology for LAN Switching Devices.



Submit initial draft of Benchmarking Methodology for LAN Switches.



Submit Terminology for IP Multicast Benchmarking draft for AD Review.



Submit Benchmarking Terminology for Firewall Performance for AD review



Progress ATM benchmarking terminology draft to AD review.



Submit Benchmarking Methodology for LAN Switching Devices draft for AD review.



Submit first draft of Firewall Benchmarking Methodology.



First Draft of Terminology for FIB related Router Performance Benchmarking.



First Draft of Router Benchmarking Framework



Methodology for ATM Benchmarking for AD review.



Progress Frame Relay benchmarking terminology draft to AD review.



Terminology for ATM ABR Benchmarking for AD review.

Mar 01


Router Benchmarking Framework to AD review.

Jul 01


Terminology for FIB related Router Performance Benchmarking to AD review.

Nov 01


Methodology for IP Multicast Benchmarking to AD Review.

Nov 01


Firewall Benchmarking Methodology to AD Review

Nov 01


Net Traffic Control Benchmarking Terminology to AD Review

Nov 01


Resource Reservation Benchmarking Terminology to AD Review

Nov 01


EGP Convergence Benchmarking Terminology to AD Review

Dec 01


First Draft of Methodology for FIB related Router Performance Benchmarking.

Feb 02


First draft Net Traffic Control Benchmarking Methodology.

Feb 02


Resource Reservation Benchmarking Methodology to AD Review

Feb 02


Basic BGP Convergence Benchmarking Methodology to AD Review.

Jun 02


Methodology for FIB related Router Performance Benchmarking to AD review.

Nov 02


Net Traffic Control Benchmarking Methodology to AD Review.

Request For Comments:

Benchmarking Terminology for Network Interconnection Devices



Benchmarking Terminology for LAN Switching Devices



Terminology for IP Multicast Benchmarking



Benchmarking Methodology for Network Interconnect Devices



Benchmarking Terminology for Firewall Performance



Terminology for ATM Benchmarking



Benchmarking Methodology for LAN Switching Devices



Methodology for ATM Benchmarking



Terminology for Frame Relay Benchmarking



Terminology for ATM ABR Benchmarking

Current Meeting Report

Benchmarking Methodology WG Minutes

WG Chair: Kevin Dubray

Minutes reported by Kevin Dubray.

The BMWG met at the 51st IETF in London, England on Thursday, August 9, 2001.

The proposed agenda:

1. Administration

2. Router Resource Reservation Benchmarking I-Ds

3. EGP Convergence Benchmarking

4. Benchmarking Network layer Traffic Control Mechanisms

5. Individually submitted I-D on "Terminology for Router Protocol Testing"

The agenda was modified by moving item 3 to the end of the session to make use of any remaining time.

1. Administration.

Activity over the last period was summarized as:

Three BMWG-sourced I-Ds were published as RFCs: RFC 3116, RFC 3133, and RFC 3134. One BMWG I-D (draft-ietf-bmwg-fib-term-02.txt) is under AD/IESG review. Seven other BMWG I-Ds were also revised during this period. Of these, draft-ietf-bmwg-mcastm-07.txt and draft-ietf-bmwg-firewall-02 are close to WG Last Call according to the editors. Four of these drafts represent newly admitted BMWG objectives: Resource Reservation Benchmarking and EGP Convergence.

2. Router Resource Reservation Benchmarking I-Ds
Gabor Feher of Budapest University and Istvan Cselenyi of Telia Research were on hand to give a presentation providing an overview of the I-Ds addressing the terminology and methodology of benchmarking routers that support resource reservation. The slide presentation can be found in the Proceedings. The presentation addressed related terminology, test setup, primary load components, and some illustrative measurements of RSVP and Boomerang on a Linux platform. One question was asked regarding the nature of the offered traffic load presented to the DUT. Another query followed about the observed packet loss. Gabor indicated the tests were executed with 100% offered load for the tested network medium; characterizing packet loss was outside the scope of the tests.
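For context, "100% offered load" in BMWG-style methodologies is typically expressed relative to the theoretical maximum frame rate of the medium. A minimal sketch of that calculation for Ethernet follows; the link speed and frame size are illustrative, not taken from the presentation:

```python
def max_frame_rate(link_bps, frame_bytes):
    """Theoretical maximum Ethernet frame rate (frames/sec) for a given
    frame size, accounting for the 8-byte preamble and the 12-byte
    inter-frame gap that occupy the wire alongside each frame."""
    overhead_bytes = 8 + 12  # preamble + inter-frame gap
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return link_bps / bits_per_frame

# 64-byte frames on 100 Mb/s Ethernet: ~148,809 frames/sec
print(int(max_frame_rate(100_000_000, 64)))  # -> 148809
```

An offered load of 100% then simply means generating frames at this rate for the duration of the trial.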

It was asked whether any characterizations were made with respect to the forwarding of premium vs. best effort traffic. Gabor responded that this, too, was outside the boundaries of the test.

Gabor indicated that both the terminology and methodology documents were reasonably mature; he asked for comments to be sent to the list so that the work might be prepared for Last Call.

More information can be found at: http://boomerang.ttt.bme.hu/

3. Benchmarking Network layer Traffic Control Mechanisms
Scott Poretsky from Avici gave an update on the Terminology draft regarding Layer 3 traffic control benchmarks. He reiterated the applicability of the work, summarized the changes from the previous version, outlined current issues (e.g., completing the metric set and better tying supporting terms to metrics), and highlighted preliminary metrics. The metrics were presented with the caveat that they may be subject to change. Details of the presentation can be found in the corresponding slides in the Proceedings.

There was some discussion regarding the notion of tail drops being one type of drop versus the only type of drop. Scott and Padma Krishnaswamy agreed to conclude the discussion off-line.
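For readers unfamiliar with the term, "tail drop" discards arriving packets only when a queue is already full, as opposed to schemes that drop earlier or probabilistically. A minimal illustrative sketch (not from any BMWG draft):

```python
from collections import deque

class TailDropQueue:
    """Minimal tail-drop FIFO: arriving packets are discarded only when
    the queue is full, i.e., dropped at the 'tail' of the queue."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.drops += 1  # tail drop: queue full, packet discarded
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = TailDropQueue(capacity=2)
for p in range(4):
    q.enqueue(p)
print(q.drops)  # -> 2 (third and fourth packets were tail-dropped)
```

The discussion point was whether benchmarks should treat this as the only drop behavior or merely one of several.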

4. "Terminology for Router Protocol Testing"
Nick Ambrose from IXIA gave a presentation titled "Routing Protocol Test Methodology." In the presentation, Nick discussed the target audience, key test facets, a proposed approach to producing testing documents, and the perceived benefits. Details of the presentation can be found in the corresponding slides of the Proceedings.

Nick reinforced the notion that the effort wasn't to stop existing work, but that the existing paradigm led to narrowly focused efforts that might have overlap and produce inconsistencies. The offered approach would better coordinate activity.

Commentary reflected concerns about the ability to determine the bounds of the generic areas comprising "Router Protocol Testing," given the breadth and flux of features in the domain. An observation was offered that the genesis of this proposal appeared to be the current FIB terminology effort and the EGP effort. The follow-on question asked whether the related documents had been read and whether any egregious inconsistencies or overlapping areas had been found. Nick answered no and stated that the issue wasn't about documents but, rather, about approaches. It was suggested that a possible area where this approach might be tried is IGP convergence: there is enough similarity between OSPF and IS-IS that a "generic" terminology document might work with individual, protocol-specific methodology documents. Nick indicated that it could be a possibility.

5. EGP Convergence Benchmarking
Howard Berkowitz of Nortel spent some time presenting the overall goals and approaches of the Single Router BGP Convergence effort. The presentation introduced terminology, topological parameters, and test heuristics. The slides of the presentation can be found in the Proceedings.

Next, Elwyn Davies from Nortel reported on an experimental implementation and presented information regarding the test setup, requirements, initial findings, and some "lessons learned." Elwyn's findings included test trials from a PC-based Zebra router as well as a commercial router.

Similarly, Sue Hares of NextHop gave a second implementation report. (Yes, both presentations can be found in the Proceedings.)

Timing and synchronization of auditable events proved to be a significant consideration. Moreover, the introduction of "real-life" stimuli to make the test more meaningful was a common theme. Sue requested input in this regard.

It was stated several times that this effort seeks to produce test implementations and then reflect the implementation experience back into the actual specifications. The current work already reflects some of this notion.

A statement was made that interaction of EGP and IGP events would make the test scenarios more useful. Howard indicated that while that might be a goal for later work, the scope of this effort was restricted to a single DUT running BGP. A question was asked whether there was any intention to determine the variance of results yielded by the two experimental implementations on a common DUT. Sue indicated that might be tough from a coordination point of view, but not impossible.
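As an illustration of the kind of reduction such a test implementation performs, a timestamped event log from a trial can be collapsed to a single convergence figure. The event names below are hypothetical, not taken from the BMWG drafts:

```python
def convergence_time(events):
    """Given (timestamp, event) pairs from a test run, return the elapsed
    time from route injection to the last observed routing-table update.
    The 'inject' and 'table_update' event names are illustrative."""
    start = min(t for t, e in events if e == "inject")
    end = max(t for t, e in events if e == "table_update")
    return end - start

log = [(0.0, "inject"), (0.8, "table_update"),
       (2.5, "table_update"), (3.1, "keepalive")]
print(convergence_time(log))  # -> 2.5
```

The timing and synchronization concerns raised in the session apply directly here: the quality of such a figure depends on how accurately and consistently the events are timestamped across observation points.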


Slides:

Routing Protocol Test Methodology
Terminology for Network Layer Traffic Control Mechanisms
Benchmarking Terminology and Methodology for Routers Supporting Resource Reservation
Single Router BGP Convergence
Experimental Results of BGP Convergence Measurements
BGP router convergence timings