Benchmarking Methodology WG (bmwg) – Interim Virtual Meeting Minutes
FRIDAY, October 30, 2009, 1400 - 1600 GMT
CHAIR(s): Al Morton
For Slides and Agenda, see: http://home.comcast.net/~acmacm/BMWG/Interim-Oct09.html

These minutes are divided into four sections: Attendance List, Summary, Action Items, and Detailed Notes. The BMWG met using WebEx conferencing with 21 people in attendance. This meeting report was prepared by Al Morton, based on detailed minutes provided by Carlos Pignataro, the official note taker. Benoit Claise also communicated his notes (to the chair) on the IP flow export topic.

ATTENDANCE LIST
Al Morton, Chair
Ron Bonica, OPS Area Director, AD Advisor
Dan Romascanu, OPS Area Director
Gunter Van de Velde, Webex Guru
Carlos Pignataro, Official Note Taker
Aamer Akhter
Jay Karthik
Bhavani Parise
Jan Novak
Timmons Player
David Newman
Rajiv Papneja
Vinayak Hegde
Benoit Claise
Kris Michielsen
Saqeb Akhter
Mike Hamilton
Eric Puetz
Rodney Dunn
Samir Vapiwala
Rajiv Asati

SUMMARY
The MPLS Forwarding Benchmark Methodology (an update of RFC 2544) has been approved and will be published as RFC 5695. This work spawned a proposal to update the RFC 2544 Reset Benchmark, as reset behavior is still an important device aspect and today’s devices have many options in this area. A new author team presented their progress to date, including a draft with a detailed outline of the work.

The IGP Dataplane Convergence Benchmarking drafts have been revised, following a very productive WGLC in June/July. Another short WGLC is planned to review the final changes before re-requesting publication.

The IPsec drafts have reached the IESG Review stage, and several comments/DISCUSSes have been entered.

The Sub-IP Protection drafts were revised, but not all comments were addressed. The WG is expecting revised drafts that fully address the comments from the July WG Last Call, so that another WGLC can begin.
The SIP Device Benchmarking drafts were not updated, but revisions are in progress. The authors hope to begin the WGLC process in January 2010, with March 2010 as the new target for AD Review.

The work proposal on Flow Monitoring Device Benchmarks (which was very well supported at IETF-75) has now reached a state of maturity where the WG is discussing the details, rather than the fundamentals (use of white-box benchmarks) of the original proposal. The benchmarks are now clearly defined and externally measurable across vendors. Also, the draft addresses flow-monitoring appliances without packet forwarding capability, in response to a previous request. Several comments were raised in discussion, such as variability in flow export record rate and packet rate, which should be considered for further development. The WG will review the charter changes that would lead to the adoption of this item.

A new work proposal on Data Center Bridging was presented and discussed. This work would “retrofit” two of BMWG’s key RFCs (2544 and 2889) for an environment where per-VLAN flow control is possible, so loss is no longer a reliable indication that the Throughput level has been reached. There was a reasonable level of interest and readership, and the group agreed to pursue a liaison with IEEE 802.1 to inform them of this proposal and establish a working relationship, if necessary. The IETF-IEEE liaison for 802.1 (Eric Gray) had already been contacted by the chair, and he agreed that the liaison was appropriate.

There was another good discussion of the Content-Aware Device Benchmarking proposal, with several suggestions for further development of the benchmarks.

BGP Convergence Time Benchmarking is coming back to life. There is a new co-author and a plan to produce a new methodology and update the terminology RFC. A milestone for this work is already approved.

Inactive work proposals (which have not seen interest or progress in some time) have been deleted from the work summary matrix.
BMWG may hold another interim meeting by electronic conferencing to advance its work program.

ACTION ITEMS
- Prepare a one-paragraph description of the IP Flow Export work, and review it with the WG in preparation for revising the charter (bullet items).
- Prepare a liaison to IEEE 802.1, pointing to the Data Center Bridging draft and the proposal to update RFCs 2544 and 2889 to address the Priority Flow Control capabilities of IEEE 802.1Qbb.
- After the Sub-IP Protection drafts have been revised to address the remaining comments, start another WGLC.
- Start a WGLC on the IGP Dataplane Convergence drafts (ASAP).
- When revising milestones, include the new dates suggested by the SIP benchmarking authors.
- WG participants interested in the SIP Device Benchmarking work should schedule time for a detailed review of the revised drafts in January 2010, during WGLC.
- Anyone willing to help (additional authors?) on chartered items where the drafts have EXPIRED, such as Accelerated Stress Benchmarking, should contact the Chair and the authors.

DETAILED NOTES

-1. 1330 GMT -- warm-up/resolve set-up problems, if any
1330 GMT – Al Morton and Gunter Van de Velde going over directions for using WebEx. Al will be driving as host. In particular, uploading the presentations to WebEx (Share -> Documents and Presentations -> select file, etc.) was found to be useful. The host can pass the “ball” to each presenter (select the participant from the list, press the Make Presenter button), and each presenter can then control the slides for the entire presentation – very useful, and it saves time by reducing the interaction needed to flip slides (especially during questions).
Chat window:
from Aamer Akhter to All Attendees: can somebody send URL for the interim page?
from Carlos Pignataro to All Participants: http://home.comcast.net/~acmacm/BMWG/Interim-Oct09.html
1345 GMT – List of participants: Al will be recording the meeting when it starts (Aamer Akhter brought this up). The recording is now available on the WebEx service site.
Click the link below to play it: https://workgreen.webex.com/workgreen/lsr.php?AT=pb&SP=MC&rID=2881892&rKey=847e89183efd58bd
BMWG Meeting-20091030 1400, Friday, October 30, 2009, 2:00 pm Reykjavik Time, 2 Hours 3 Minutes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
0. 1400 GMT Agenda Bashing
Recording started.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. WG Status and Milestones
Al presenting http://home.comcast.net/~acmacm/BMWG/bmwg_Agenda_Status_Milestones.ppt
Slide 2: Al going over the IPR Policy.
Slide 3: Agenda. Participants that were listed as “call in user” identified themselves (the full list of participants is in the Attendance List above).
Slides 4-6: Details of BMWG activity and current status presented. Not on slides: new RFC number: RFC 5695.
Slide 7: Standard security paragraph.
Slide 8: Current milestones.
Slide 9: Work proposal summary matrix. Some items were dropped for lack of interest (cleaning house). The Reset work is in gray because it is new.
Someone else just joined: Participant “Rodney Dunn”
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2. IP Flow Information Accounting and Export Benchmarking Methodology
Jan Novak, presenting novak-bmwg-ip-flow-monitoring-04.ppt for http://tools.ietf.org/html/draft-novak-bmwg-ipflow-meth-04
[all the presentations are in the ZIP file at http://home.comcast.net/~acmacm/BMWG/Interim-Oct09-bmwg-slides-drafts.zip]
Goal of the draft: what happens to a device's forwarding when flow monitoring is enabled.
Slide 4: Concept.
Slide 6: How to measure flow monitoring, similar to RFC 2544.
Slide 7: Measurement example.
Slide 8: Flow monitoring parameters need to be known and *recorded*.
Slides 9-10: Changes from version 3 (the previous version).
Slide 11: Open issues: use of bidirectional traffic; move Section 9 to an appendix; clarify Section 6.
End of presentation.
Al: Thanks, and thanks to Benoit for co-authoring. Open for questions.
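As an illustration of the RFC 2544-style comparison the draft builds on (forwarding throughput measured with and without flow monitoring enabled), the classic zero-loss binary search can be sketched as follows. This is a minimal sketch: `send_trial` is a hypothetical stand-in for a real tester, and all rates are invented numbers.

```python
# Hedged sketch of an RFC 2544-style throughput search. A real tester would
# offer traffic to the DUT; here send_trial(rate) simply returns the number
# of frames lost at that offered rate (simulated with a fixed threshold).

def throughput(send_trial, max_rate, resolution=1.0):
    """Binary-search the highest rate (frames/s) forwarded with zero loss."""
    lo, hi = 0.0, max_rate
    best = 0.0
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if send_trial(rate) == 0:   # trial reports frames lost
            best, lo = rate, rate   # no loss: try a higher rate
        else:
            hi = rate               # loss seen: back off
    return best

# Simulated DUT: loses frames above 8000 fps with monitoring off,
# above 6500 fps with flow monitoring on (assumed numbers).
baseline = throughput(lambda r: 0 if r <= 8000 else 1, 10000)
with_monitoring = throughput(lambda r: 0 if r <= 6500 else 1, 10000)
# The benchmark of interest is the delta caused by enabling monitoring.
impact = baseline - with_monitoring
```

The point raised later in the discussion is exactly this delta: the comparison with/without flow monitoring must remain valid for the benchmark to answer the industry's question.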
Aamer: Comment: I reviewed the draft and sent unicast comments. The controversial point was the CPU: if you know the internals, for example to compare different features on the same box, it is useful. Comparison across different boxes is not useful.
Al: Section 9 covers this nicely.
Jan: We can add to Section 9 or a new appendix.
Aamer: These are general things, a set of metrics known to be useful (CPU, memory).
Al: Benoit copied the following from the I-D into the WebEx chat. Do you need something else?
from bclaise to All Participants: "A variety of different network architectures exist that are capable of IPFIX support. As such, this document does not attempt to list the various white box variables (CPU load, memory utilization, TCAM utilization etc) that could be gathered as they do not always help in comparison evaluations. A better understanding of the stress points of a particular device can be attained by this deeper information gathering and a tester may choose to gather additional information during the test iterations."
Aamer: Open question to the group: do you want to go further? On a distributed box, CPU on line cards? I am fine with this text, I have seen it, but do we need to go further?
Al: Any new text will need to be very general to apply generally. It is auxiliary information to the primary benchmark that we are trying to standardize. Perhaps the current text is sufficient from that point of view. Aamer, a new paragraph won’t hurt, though.
Aamer: I agree the current text is good; that was part of my original comment.
Benoit: Question for Al: I would like to discuss this comment you made – can you please expand? I want to understand it.
from bclaise to All Participants: General comment on Forwarding Plane Measurement (possibly addressed in later sections of the draft): The test load used for flow monitoring stress will likely be different from that specified in RFC 2544 - we need to take care that the Throughput benchmark will still be meaningful, and that may mean some compromises between traffic that stresses the Flow Monitoring Plane and the Forwarding Plane.
Al: There may be different attributes to emphasize for each plane. Some of the differences are at the end of Section 3, which prompted this comment. I encourage folks to think about this. The forwarding plane would be benchmarked differently than before.
Benoit: I understand this as follows: we want to measure forwarding in the context of flow monitoring, so we need more attributes, *or* are we saying that we cannot compare RFC 2544 measurements with and without flow monitoring?
Al: Definitely not the latter. We have to be able to compare with/without. If the comparison is not valid, then we are not answering the Industry’s main question – what is the effect of flow monitoring on performance?
Aamer: With IPFIX there is a metric for flow rate, over the interval of the test (IIRC); would it be more helpful for some implementations when the rate is smoothed out? (Some implementations are better in how they perform the export, by shaping the packets out and not bursting.)
Jan: I do not see a methodology for how to do this with that granularity in a black-box manner.
Aamer: Can the same stats be requested of the IPFIX packets? E.g., average inter-packet gap.
Jan: Packet or flow stats?
Benoit: It is optional. We want to know, as Aamer mentioned, the peak export rate and *not* the average, because the peak is when you can lose some records. It is interesting to keep the average of flow records per second, but from the collector's point of view (not the router's) it is *important* to have the peak as well.
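Benoit's distinction between peak and average export rate can be made concrete with a small sketch: bucketing flow-record timestamps into sub-second bins exposes a burst that the long-interval average hides. The timestamps and bin width below are made-up assumptions for illustration, not part of the draft's methodology.

```python
# Hypothetical sketch: peak vs. average flow-record export rate.
# The peak matters because bursts are where a collector may drop records.
from collections import Counter

def export_rates(record_timestamps, bin_width=0.1):
    """Return (average_rate, peak_rate) in records/second, where the peak
    is taken over sub-second bins of width bin_width."""
    if not record_timestamps:
        return 0.0, 0.0
    span = max(record_timestamps) - min(record_timestamps)
    average = len(record_timestamps) / span if span > 0 else float(len(record_timestamps))
    bins = Counter(int(t / bin_width) for t in record_timestamps)
    peak = max(bins.values()) / bin_width
    return average, peak

# A bursty exporter: 50 records in the first 100 ms, 50 spread over ~10 s.
bursty = [i * 0.002 for i in range(50)] + [0.2 + i * 0.196 for i in range(50)]
avg, peak = export_rates(bursty)
# The averaged rate hides the burst that the peak rate exposes.
```

This is the black-box view Jan refers to: it only needs captured export packets (e.g., via PCAP, as Aamer notes) and no knowledge of the exporter's internals.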
Al: In the past (in the IGP Convergence work), we have measured packet rates in small time frames by defining a sub-second sampling interval – this is a possible example of a methodology that makes frequent measurements of export packet rate or flow record rate.
Aamer: Al raises a good point - the IPFIX originator has leniency in how many records go in a packet, so one record per packet is suboptimal for the collector. We want to understand if there are non-intelligent exporters which, for example, include a single record per export packet.
Al: Records per packet can be configurable or variable, yes. Is there any discussion of variability of records per packet in the draft?
Jan: There is not, because it is inside the implementation, and users do not have the option to change it.
Al: OK.
Aamer: My point is that there may be implementations that are not too intelligent.
Al: There are two metrics here: packets per second and records per second, regardless of the packaging.
Rodney Dunn: It would be interesting to correlate that with the burstiness.
Jan/Benoit: This should be optional, because otherwise we impose that the collector MUST decode everything in real time.
Aamer: Using PCAP, we can do the packet capture and the analysis later on. Real time is not required.
Al: Due to time constraints, I have to cut off the discussion here, and we should continue on the list. Since we are working on the details and not general questions, this draft is in a fairly mature state; there’s been a lot of progress made in a little more than a year. Closing the discussion here. We need to begin to think about how to add this to the charter. Thanks again Jan, Benoit, and everyone who commented, Aamer in particular.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3. New Work Item Proposal: Benchmarks For Data Center Bridging Devices
David Newman to present; lost the coin toss to Timmons.
Presenting http://tools.ietf.org/html/draft-player-dcb-benchmarking-00 with slides at http://home.comcast.net/~acmacm/BMWG/DCB.ppt
New work proposal, extending RFC 2544 (and RFC 2889) in the context of Data Center Bridging devices.
Slide 3: What is DCB? Data Center Ethernet refers to the same thing: a set of network convergence technologies; this includes Storage over IP or FCoE. The important piece is Priority Flow Control (PFC) from 802.1Qbb. With 802.1Qbb, flow control is per VLAN priority; other than that, it is exactly as we’ve always had. The present effort is around PFC measurement testing.
Slide 4: Why not the current metrics? Throughput: RFC 1242 is fine for Ethernet (increase the rate with zero loss), but FCoE in the DCB context is lossless, so the existing method does not work.
Slide 5: Cannot distinguish between intended load and offered load.
Slide 6: Also issues with latency, because it is measured at the throughput rate, but the throughput rate is undefined here…
Slide 7: Traffic patterns are also problematic in the DCB context, because round-robin will be changed.
Slide 8: DCB gets out of lock-step quickly, and is therefore much tougher on the scheduler.
Slide 9: Solution proposed: a new metric, “Queueput”, and retrofitting existing metrics. More meaningful than throughput because there is no loss.
Slide 10: Use the existing backoff metric from RFC 2544. Measure per classification, not per port.
Slide 11: Same with back-to-back frames.
The draft is -00, early, with a lot of definitions that might be applicable to this context. We encourage the WG to take on this effort and contribute to the I-D. EOF
Al: Thank you. Questions?
Aamer: With respect to the QoS-type thing, didn’t we have a draft about congestion and offered load, etc.? If so, we can go back to that I-D.
Al: David and Timmons reference that work, but the distinction is that this is a different “QoS mechanism”. This is a different mechanism that needs to be assessed, not the Diffserv mechanism.
Timmons: We looked at the traffic control mechanisms work, and it is IP-specific, but this is a non-IP context.
There is no IP header here, so what do we do? Similar, but not transferable.
Ron Bonica: This is not particularly Layer 3. Is there a need to set up a liaison in this case?
Al: Introducing Ron Bonica and Dan Romascanu. There are various things to do with this I-D. First, find interest, and if there is interest, begin a liaison relationship with 802.1. Yesterday I contacted Eric Gray in person, who is identified as the IETF liaison contact for 802.1, and he welcomed the idea of the liaison. We have made positive steps there, and I would like to report that progress.
Ron: Thanks!
David Newman: Quick comment on IP vs. non-IP. The draft is Fibre Channel vs. PFC, but there are other uses of PFC *outside and without* FC. That is an area to expand.
Al: General question: how many people have read the draft? Jay Karthik suggested using the “Raise Hand” option for a show of hands.
Al: Several people have raised the “I have read it” hand. So, is there interest in taking on the work? Some new hands, some hands disappear, but good support. We will take it to the list and investigate the liaison. I am raising my hand as a participant.
Dan: Not raising my hand, because I would like to understand the position of 802.1 on this.
[WebEx Chat:
from Dan Romascanu to All Attendees: i did
from timmons player to All Participants: I've read it :)
]
Al: Will definitely pursue the liaison; will put a draft email on the list before sending it as a liaison. Will look to the ADs for guidance to make this happen; we do have the contact already. Eric Gray and Bernard Aboba will be recipients of this. Other thoughts? No. Thanks for an interesting proposal.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4. Benchmarking Terminology for Protection Performance / Methodology for Benchmarking MPLS Protection Mechanisms
Presenting http://tools.ietf.org/html/draft-ietf-bmwg-protection-meth and http://tools.ietf.org/html/draft-ietf-bmwg-protection-term
Al: This is a short item. Who presents?
Rajiv Papneja: Pass the ball to me, please. Describing the status.
As co-authors we want to take this to a level where it can be forwarded to the IESG. However, there are some comments that are not yet addressed in the current drafts. Thanks to Al for checking that we included all the comments; he provided a really helpful compilation of comments to the authors. We will revise the drafts in the next few weeks.
Al: Thank you. Will do a new last call (on the next set of revised drafts), and if it goes quietly I can report that as a good thing in the PROTO write-up. So we will do one more last call.
Rajiv: Thanks to Kris Michielsen for his comments.
Kris: You are welcome.
Al: We are at the thanking phase already, so we can move on to the next item.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5. Benchmarking Link-State IGP Data Plane Route Convergence
Kris Michielsen presenting http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-term-19 and http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-meth-19
Kris: The first comment I received: the ingress IGP adjacency “SHOULD” be established; it was “MAY” before. Changed for three reasons: I do not expect much impact from the change, it is more realistic, and besides, one would do it in practice anyway.
Kris: [at Slide 4] The next comment I received was about the “discard” packet tester. The natural way to cause a failure is to pull a fiber, but depending on the event you want to present to the device, you may instead deliberately drop packets. Showing an example of what I mean on Slide 5.
Kris: Slide 5: Example of an IGP adjacency failure caused by dropping the link between the “L2 Switch” and the test equipment; in that case, all traffic drops.
[WebEx Chat: from Bhavani Parise to All Participants: hi]
Kris: The last thing I changed: offered load. [Slide 7] – I realized it may not always be feasible or possible to send traffic [...], so I still think it is “RECOMMENDED” if possible, but added “but statistically represent”.
Kris: [Slide 8] I did an experiment to back this up.
In almost 100% of the cases, you pick a ball in the 10%. You can accurately measure convergence time with a statistical representation.
Kris: Thank you.
Al: I think all the comments are addressed, including several points you discovered and the WGLC comments. Question: anything else? [silence]. Hearing none, it seems you have satisfied all comments. I appreciate your efforts; we will have a short WGLC and send the drafts to the IESG. Thanks again.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
6. Benchmarking Methodology for Content-Aware Network Devices
Al: Moving on to the next item. Hoping that Mike Hamilton is Call-in User 7. [these notes are taken for the benefit of WebEx recording correlation]
Mike Hamilton: I was, but I am myself now. Presenting http://tools.ietf.org/html/draft-hamilton-bmwg-ca-bench-meth-02
Mike: I have made some changes to the I-D: less focus on firewalls, and more on next-generation devices.
Mike: [Slide 2] Changes from -01: picked up at least one co-author, Sarah Banks from Cisco.
Mike: Al asked me: what devices does this apply to? A list will never be all-inclusive, so I shy away from that, but I added some device applicability.
Mike: Content changes: nothing major in the methodology. Changed some wording on the last couple of test cases. Handling of malformed packets: I don’t mean to imply the device should have that capability. And some other minor updates. Sarah and I did not get to sync up in time to have her content ready for this meeting, but I will merge her input and contributions.
Mike: [Slide 3] Hoped people on the call took a look and provided comments.
Al: Questions or comments?
David Newman: Micro-comment: the draft says it extends RFCs 2647 and 3511 to make them even more realistic. Realism was not a goal of either of those RFCs; there is no expressed or implied definition of realism. Realism is a very problematic concept in laboratory testing.
This draft says “real traffic is synthetic traffic”, and calls it real because it samples real traffic. Nothing wrong with that, but real for whom? It implies that a sample taken from one network is representative of another network. Scott Bradner has said that one network does not look like another network, and a network does not even look like itself 10 minutes earlier. I am not saying not to take on the work, but we need to look at how to address the concept of realism.
Mike H: Nothing implies replaying. I explicitly don’t, and didn’t mean to imply, capturing real live network traffic. It is not representative of real live traffic. But HTTP is different from plain UDP traffic, and we need to take the next step toward getting closer to what is realistic. I did not mean to imply that taking NetFlow data makes it real.
David Newman: Repeatability is key, and relatively easy to achieve if the offered traffic is just HTTP or some mix, and I am all for L4-L7. Reproducible means we can do the same across testbeds. The larger question is how much is meaningful when “realism” or “realistic” is mentioned in the draft, and I am uncomfortable with the concept of saying it is “real”. It is more useful or meaningful to describe extensions to the test traffic we use, but we need to understand that all of this is synthetic and in the lab, representing lab performance and not a real network.
Al: If we incorporate new types of traffic, we gain some understanding of the device behavior in the presence of that traffic. That is not enough: we also need to understand what we are characterizing *in the presence* of these new inputs. The goalposts for new metrics are about defining the inputs (new types of traffic), and ALSO knowing what aspect of the DUT we are characterizing and what the results mean (in a multi-vendor comparison).
Mike H: I think we are covering these three points, but we can improve. There is precedent within BMWG with early revisions of the firewall methodology.
Those who have been around longer may have insight into why those aspects were removed in the -02 revision (of the firewall docs).
Al: That is good background; my recollections don’t go back that far.
Mike H: That was 2000 and 2001.
David Newman: When I proposed that, there was strong consensus that it was too broad a definition. RFC 3511 went on to characterize a narrower set of devices and protocols. Again, I am all for expanding that list. But as Al said, we need to focus on quality over quantity.
Al: Good input today for Mike, and several things to think about.
Mike H: Appreciate the feedback.
Al: Thanks, David, for the historical context and comments.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7. RESET benchmarks: Update to RFC 2544
Al: We have reached the slides-only part of the meeting; next are the Reset slides. Presentation and preliminary draft at http://home.comcast.net/~acmacm/BMWG/draft-asati-bmwg--reset-00.txt
Rajiv Asati: I will present [garbled]
Al: I could not understand what you were saying.
Rajiv: [Speaking again, and now the voice is coming through much better] I am Rajiv and I will present.
Al: Rajiv is a multicast address today; that was Rajiv Asati. [Rajiv starts to present]
Rajiv: The extensive review of MPLS forwarding benchmarking (RFC-5695-to-be) led to discussion and agreement that RFC 2544's testing of “reset” is not sufficient for today’s devices.
Rajiv: [Slide 3] Example: there is no elaboration of what a software reset is and how it is to be executed. And as we know, it can be a process reset, a kernel reset, etc. This granularity was not possible when RFC 2544 was written, but we need it now. That granularity would greatly help an apples-to-apples comparison: if we do not know how the reset is done, we cannot compare.
Rajiv: Thank you to Ron [Bonica] for being so insightful and pushing to try to solve it.
Rajiv: How to solve it? Two buckets for resets: HW and SW. And in each bucket, define specific procedures.
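The measurement underlying an RFC 2544-style reset benchmark can be sketched with the frame-loss method: under a constant offered load, the forwarding outage is estimated as frames lost divided by the offered rate. This is a minimal sketch under that assumption; the function name and all rates and loss counts below are invented for illustration.

```python
# Hedged sketch of a frame-loss-based reset recovery-time estimate.
# Assumes the tester offers a constant load for the whole trial and the
# only loss is the outage caused by the reset event.

def recovery_time(frames_sent, frames_received, offered_rate_fps, duration_s):
    """Estimate the forwarding outage (seconds) caused by a reset event."""
    expected = offered_rate_fps * duration_s
    if frames_sent < expected * 0.999:
        raise ValueError("tester under-delivered the intended load")
    frames_lost = frames_sent - frames_received
    return frames_lost / offered_rate_fps

# Example: 100 kfps offered for 60 s; a "process reset" loses 250,000
# frames, a "kernel reset" loses 1,200,000 (hypothetical results).
process_reset = recovery_time(6_000_000, 5_750_000, 100_000, 60)
kernel_reset = recovery_time(6_000_000, 4_800_000, 100_000, 60)
```

The two example results differ by almost an order of magnitude, which illustrates Rajiv's point: without recording which kind of reset (process, kernel, hardware) was executed, the numbers cannot be compared apples-to-apples.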
Rajiv: We missed the commit date for -00, so we will submit as soon as I-D upload re-opens. We request the expertise of the WG for feedback and discussion.
Al: Thank you for the presentation. Part of the plan is to alert the WG that this work is starting and ongoing. I hope you post “the same” -00 as I have posted (on the Interim meeting page) when the window opens. Feel free to also update with a -01 ASAP. I continue to invite folks with expertise on reset testing to join the authors or to review; all these roles are available. Thank you to all the authors for moving forward with this work. Questions?
Rajiv Papneja: I have a comment: this is very, very applicable and relevant to the BGP convergence work.
Al: Good. Maybe we get some cross-fertilization.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
8. Basic BGP Convergence Benchmarking Methodology status
Slides at http://home.comcast.net/~acmacm/BMWG/BGPConv.ppt
Al: Who presents?
Rajiv P: Welcome a new author: Bhavani Parise joins Rajiv and Jay.
Bhavani: Thanks for the timeslot.
Bhavani: [Slide 2] Background: we have RFC 4098 from 2005.
Bhavani: [Slide 3] Proposal: focus on IPv4. We have not decided whether to include MP-BGP (RFC 2858).
Slide 4: Scenarios we are focusing on. Also classes of BGP speakers.
Slide 6: Description of a number of factors that affect convergence. These include flap dampening, timers, and authentication.
Slide 7: A number of failure trigger events: Soft Reset (GR, Route Refresh as in RFC 2918) or Hard Reset (see Section 5.1 of the methodology for benchmarking MPLS protection mechanisms). For vendors that support RFC 4724, this would be a good benchmarking comparison.
Slide 8: Plan: publish an initial version. There is room for experienced co-authors.
Al: Thank you. Comments? Questions? [silence]. We have a milestone; we expect a draft. We are interested to see what you can produce.
Rajiv P: Question: do we submit as a WG item directly, or as an individual draft?
Al: The latter.
Rajiv P: OK.
Al: The proposal is an item on the charter already, which makes it easier for you to complete the work. This work has spanned so many years that it’s not surprising things changed so much.
Rodney Dunn: On the list of trigger points: is this multihop or only single hop?
Bhavani: Initially single hop; then we will look at multihop, because there are more triggers to be considered.
Rajiv: For multihop, we would also need to consider IGP convergence.
Rodney: Exactly, and also BFD convergence.
Rajiv: Ack. Control plane convergence is good, but SPs also need the data plane to be reliable. We cannot completely abstract the control plane from the data plane.
Rodney: Great point; critical for hardware forwarding platforms. Will ping you offline.
Rajiv P: Awesome, we need more perspectives.
Rodney: And control plane vs. data plane is crucial to understand.
Al: Thanks to all; let’s move on.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
9. SIP Benchmarking Draft status
Al: These drafts have expired, but the authors are still active. Scott put together some slides; I will let folks read them. Alignment of these metrics with the SIP Metrics draft is an important point. We received a revised schedule from the authors; they will prepare revised versions for WGLC in January 2010. I ask everybody interested in this topic to schedule time for this review on their calendars. The authors propose milestones: to the IESG by March 2010. Questions?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
10. AOB [Any Other Business]
Al: Back to the agenda. ADs, do you have anything to say?
Ron B: No, just that I am very pleased with how this virtual interim went.
Al: Definitely. Many thanks to Gunter for setting up the WebEx, and Carlos for note taking.
Ron B: Really pleased; will say at the larger IETF that this should be more common, and not the exception. Same meeting, but a lot cheaper.
Al: It’s possible to have informal chats about projects during face-to-face meetings.
I don’t think anyone can argue against the value of these side discussions.
Dan Romascanu: We [bmwg] benchmark protocols that other areas create, so there is value in visiting those areas and cross-coordinating.
Al: Excellent point; this is a wrap. I won’t read my personal action items; I will send them to the mailing list. This was very productive, and it would have been catastrophic to our attempts at “continued progress” not to have this meeting.