Benchmarking Methodology WG (bmwg)

MONDAY, November 14, 2011

0900-1130  Morning Session I

Room 101A    OPS     bmwg   

CHAIR(s): Al Morton acmorton@att.com

INTRODUCTION

BMWG met at IETF-82 with 20 people attending and at least 5 more participating remotely.  Al Morton chaired the meeting and prepared these minutes, based on detailed notes from Chris Inacio, the official note-taker (using the new Etherpad tool). Chris and Mike Hamilton monitored Jabber, and Mike added some notes in Etherpad when Chris was speaking. The meeting was broadcast via one-way audio on the IETF audio stream.  The chair requested a 2-hour session and used 2:15 of the allotted 2:30.

This report is divided into two parts: an executive summary with action items, and detailed minutes of the meeting.

SUMMARY

Brief status: the IGP convergence drafts are in AUTH48 and making progress. All points have been reviewed by the authors, and we are waiting for publication.

IP Flow Export Benchmarking has been updated following a productive third WGLC. The chair has identified 5 open issues for list discussion, and there will be a 4th WGLC when these are resolved (target: end of month).

There was also a productive discussion of issues on the new methodology draft, "Restoration and BGP Convergence of Contemporary Routers". BGP data plane convergence will be covered in a single draft, with the key requirement that if forwarding can be repaired without any control plane action, that feature MUST be disabled for this benchmarking. The main next step is to get operator feedback on the metrics proposed in these drafts; a representative from RIPE NCC was present, and we now need feedback from NANOG and APNIC as well.

Mike Hamilton reported that the Content-Aware (CA) authors revised the methodology draft to address the first round of comments from the chair and others. A "boatload" of comments from Tom Alexander remains open on both the terms and methodology drafts. The chair has reviewed these comments and suggested several ways forward to resolution. There is a plan to generate pseudo-random and malformed traffic using algorithms that can be standardized.

The existing CA work might be augmented by a new proposal on Security Effectiveness Benchmarking, presented by new attendee Kenneth Green. The point is to measure how well a device culls malicious traffic from desired traffic, rather than forwarding performance in the presence of malicious traffic, which is what the current CA method measures. Thus, the Security Effectiveness and CA work are complementary.

Al made arrangements to meet off-line with the Security Effectiveness and Content-Aware authors to provide comments and coordinate the work.

Al presented the IMIX Genome project, and made a point to resolve Ilya Varlashkin's comments off-line.

Al briefly presented the RFC 2544 Applicability Statement, which resolves comments from David Newman, Curtis Villamizar, and Bill Cerveny; the WG may be done with this draft now, so the next step is to test that with a WGLC. A WGLC is also expected on the protection methodology draft. On the other hand, the LDP convergence work needs to get going (it has been on the charter for more than a year now), or it risks being dropped.

There was a very brief description of work on benchmarking time synchronization devices from David Moran, which will be discussed in more detail in the TICTOC WG on Thursday.

ACTION ITEMS:

 

·        WGLC on Protection Methodology Draft

·        WGLC on IP Flow Export Methodology Draft

·        WGLC on RFC2544 Applicability Statement Draft

·        Obtain Operator feedback on methods and metrics in BGP drafts

·        Address comments on the list, or with updates to drafts (Everybody).

 

DETAILED NOTES

0. Agenda Bashing

1. WG Status and Milestones

Approved:
Benchmarking Link-State IGP Data Plane Route Convergence
State:   nearing the end of AUTH48

ci: Ron Bonica is looking for the authors of the data plane convergence drafts so that the 48-hour deadline is met.

'Testing Eyeball Happiness' Approved as an Informational RFC

Drafts not presented at this meeting: (need reviews as noted)

ci: Al: hoped that the LDP documents would be resurrected by now, but did want IGP to be further along before LDP goes forward.
ci: Rajiv via jabber: will get revived soon.

Draft Preparation Discussion Summary

ci: Ilya: use version 4.9 of the XML Mind tools for compatibility reasons.  Version 5 is not compatible with the XML2RFC plugin.


RFCs on the Standards Track
   Brief update on IPPM progress and implications for BMWG

ci: Al: Is there any interest in pursuing a move to the Standards Track here in BMWG?
ci: Kenneth Green: How do you differentiate variances in the implementations from variances in the DUT?
ci: Al: The implementations shouldn't produce different results if they implement the (well-specified) standard appropriately.

ci: Ron Bonica: raised a question about readiness for the Standards Track.

ci: Mike Hamilton: experience note: has seen 4 implementations of RFC 2544 side-by-side that produce significantly different outputs.

ci: Mike Hamilton: Do we want to be beholden to a 3rd party's implementation for anything on the Standards Track?
ci: Al: In that case, the standard isn't tight enough (excluding the case where the 3rd party is incapable of implementing the spec).


-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

2. IP Flow Information Accounting and Export Benchmarking Methodology
   Presenter: Al for Jan Novak
The chair has identified 5 open issues for list discussion, and there will be a 4th WGLC when these are resolved (target: end of month).


3. Basic BGP Convergence Benchmarking Methodology status
   Presenter: Ilya Varlashkin and Dean Lee

ci: 3 people have read Ilya's draft.

ci: Dean Lee: A lot of experience doing these two methodologies; one looking at only the data plane, one attempting to look at the control plane.  Very good to have both methodologies.

ci: Al emphasizing Ilya's point that data plane convergence may not necessarily indicate control plane convergence.

ci: Ilya wants actual feedback; any volunteers?
ci: Al: in the current text of the charter we are required to get operator feedback.
ci: Al: We should be presenting this at the operators working groups.

ci: Dean: He's been getting constant requests from carriers/operators on how to do these types of measurements.  We should be able to ask those carriers for feedback.
ci: Al: recommends doing it!

ci: Ilya: can present this to the RIPE community.
ci: Al: need volunteers to present at NANOG and APNIC.  This is what Ron envisioned within the WG charter language.

ci: Ilya requests any feedback now:
ci: Al: Would like to get more configuration definition in the RFC
ci: Ilya already had this in his list within his presentation
ci: Al: IGP & BGP interactions to consider in this measurement.
ci: Al: always want a default config so that a comparison between work at Lab A and Lab B is possible.

ci: Craig White: Worried about potential vendor tweaks in order to accelerate performance with respect to this testing methodology / config.  Would really like the independent lab measurement capability to exist (so Labs A, B, and C can compare results).
ci: Al: Concurs, although the saving grace may be allowing operator configurations in the mix.
ci: Ilya: The goals of the test design, including the number of devices within it, are there to help mitigate the problem of a vendor designing to do well on the test / to avoid test bias.


4. Benchmarking Methodology for Content-Aware Network Devices
   Presenter: Mike Hamilton

Mike: Would like to use an algorithm in the specification based on an open source system.  Would like information on the IETF policy.
Al: Would depend on the license of the algorithm; would have to investigate.

Kenneth Green: Worried about implementation vs. specification, basing on open source.
Mike: would just like to reference / use their algorithm, not their implementation.
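As context only (not text from the draft, and not Mike's open-source system): the reproducibility argument is that labs need to agree on an algorithm and its inputs, not on a particular implementation. A minimal Python sketch of that idea, with all names and the one-byte "malformation" assumed purely for illustration:

    import random

    def pseudo_random_payload(seed: int, length: int) -> bytes:
        """Reproducible pseudo-random payload from a published seed.
        Illustration only; any well-specified PRNG works, as long as the
        algorithm itself (not an implementation) is what gets cited."""
        rng = random.Random(seed)                  # deterministic PRNG
        return bytes(rng.randrange(256) for _ in range(length))

    def malformed_copy(payload: bytes) -> bytes:
        """Corrupt one byte so the DUT's parsers see malformed content."""
        corrupted = bytearray(payload)
        if corrupted:
            corrupted[0] ^= 0xFF                   # flip the first byte
        return bytes(corrupted)

    # Two labs that agree on (seed, length) generate identical payloads.
    print(pseudo_random_payload(seed=42, length=16).hex())

Labs that publish only the seed and length can then regenerate identical test traffic without sharing any implementation.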

Al: had many reactions after reading the drafts and Tom Alexander’s comments on the plane ride…
Al: Change wording from TCP throughput to TCP bulk transfer capacity.
Al: terminology updates are needed; Mike agrees, but wants to fix content and then get back to terminology.
Al: client/server metric differences; need tighter definitions of where the measurement points are specified.
Al: May want to reference ITU-T Recommendation Y.1560, which talks about TCP setup times.
Al: Only *one* introduction should exist for the documents.

Impromptu Agenda Bash – Continue on the related topic/draft ahead of Al’s presentations:

x. Security Effectiveness Benchmark
   Presenter: Kenneth Green
 
ci: Ilya: wants to add measurement of how well good traffic is passed while the network is under attack.
ci: Mike: Willing to work with Kenneth on making sure to get this covered in at least one of the documents.

MH: Chris asked for clarification on the types of devices.  It is incredibly difficult to distinguish/define legal vs. illegal traffic.
MH: Kenneth agrees, and had stated earlier that enumerating these categories is difficult.
MH: Chris talked about SCAP or other definitions.

Al: mentioned discussing the terminology topic with Steve Bellovin before the reception; there is no special status for the term “evil” in the IETF.

ci: Ilya: Can add more and more evil traffic and watch the goodput performance decrease.
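To make Ilya's suggestion concrete (an illustration only, not text from any draft): sweep the share of malicious traffic in the offered load and record the goodput of the legitimate traffic at each step. A minimal sketch, where the run_trial harness hook is assumed:

    def goodput_sweep(run_trial, attack_shares=(0.0, 0.1, 0.25, 0.5, 0.75)):
        """Sweep the fraction of offered load that is malicious and record
        the legitimate traffic's goodput at each step.  run_trial(share) is
        a hypothetical harness hook that offers the mixed load to the DUT
        and returns the measured goodput in bit/s."""
        return [(share, run_trial(share)) for share in attack_shares]

    # Stand-in harness that fakes a linearly declining goodput curve:
    print(goodput_sweep(lambda share: 1e9 * (1.0 - share)))

The resulting (attack share, goodput) pairs form the "watch the goodput decrease" curve Ilya describes.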


5. IMIX Genome
   Presenter: Al

ci: Al & Ilya will work the “Compression” issue offline, and considered a name change for “Genome”, but will leave it as-is.
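For background only (the mix below is the commonly cited "simple IMIX" ratio, not something defined at this meeting or taken from the draft): an IMIX is a weighted mix of frame sizes, and the genome idea is to report that mix unambiguously. A minimal sketch of how such a table determines the average frame size:

    # Hypothetical mix for illustration: 7:4:1 across small/medium/large
    # frames.  The actual genome notation and size classes come from the
    # draft, not from this sketch.
    simple_imix = [
        (64,   7),   # (frame size in bytes, weight)
        (594,  4),
        (1518, 1),
    ]

    def average_frame_size(mix):
        """Weighted average frame size implied by an IMIX table."""
        total_weight = sum(weight for _, weight in mix)
        return sum(size * weight for size, weight in mix) / total_weight

    print(f"average frame size: {average_frame_size(simple_imix):.1f} bytes")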


6. RFC 2544 Applicability Statement: 
   Presenter: Al

Note the name change to “Production Networks” from “Real-World Networks”

New Work Proposals:

y. Benchmarking Time Synchronization (David Moran)
ci: Work on measuring differences in time synchronization will be discussed in the TICTOC WG.

z. Software Update Benchmarking Brief Update

Al: Authors expect to have a draft soon.

LAST. AOB