Performance Measurement at Other Layers (PMOL) Working Group Minutes
=====================================================================

Reported by Alan Clark, based on notes from Aamer Akhter as official note taker, with additional notes from Matt Zekauskas and Ralf Wolter.

The Performance Measurement at Other Layers working group met once at the 70th IETF meeting (Vancouver, 2-7 December 2007). This was the first meeting of the PMOL WG, chaired by Alan Clark and Al Morton. Subjects under discussion included a review of the group's charter, the Metrics Framework draft, and SIP Performance Metrics. 29 people participated in the session, with another 3 or so participating remotely.

Introduction and Review of Charter

The working group chairs introduced the meeting and reviewed the agenda. No changes were made to the agenda. The charter of the group was outlined: this is a short-lived working group with two deliverables, a Metrics Framework draft due for completion in September 2008 and a SIP Performance Metrics draft due for completion in June 2008. The working group is due to close or be re-chartered in November 2008.

Metrics Framework
draft-morton-perf-metrics-framework-01.txt

Alan Clark introduced the Metrics Framework draft. He explained that Al Morton had agreed to be removed as an author due to an IETF restriction. The draft provides guidelines on the development of metrics, including the need to view a metric from the user/audience perspective and to measure the observable behavior of an application. Metric definitions need to be tested to ensure repeatability of measurements, and relationships between reporting models and metric definitions should be made explicit rather than implicit. Composite metrics, which are built from other metrics, need to be assessed based on the accuracy of the underlying metrics. The draft also provides a brief outline of the new proposal process, which is seen as collaborative with other working groups.

David Oran asked for clarification on the use of the term "index", and Alan responded that this was not intended to be an ordinal number but a way to represent a range of behavior (e.g. "R Factor"). Daryl Malas commented that it would be useful to have some examples of how a metric definition would be structured. Raul (Camacho?) asked if specific metrics would be defined in the draft; Alan responded that the draft would only define the methodology for specifying metrics and ensuring they are "well defined". Alan M (?) asked what the model for contributing to the draft was; Alan responded that contributions to the draft would be welcomed.

Anna Charny asked for clarification on the term "intermediate model". Alan explained that this was not a metric but a way of summarizing characteristics of the "system" that would be useful in deriving one or more metrics related to application performance (see the sketch following this discussion). An example of this is the 4-state Markov model described in RFC 3611, which captures information related to packet loss distribution. Aamer Akhter asked how this differed from a composite metric; Alan responded that an intermediate model was not in itself a metric. Al Morton said that some of these terms (index and intermediate model) had been proposed in IPPM, but that PMOL may be a better place for their definition. Benoit Claise expressed the view that it could be difficult to talk about composite metrics in a general sense, as the definition of composition may be metric-type dependent. Dan Romascanu reminded the group that the draft is intended to be a BCP.
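As an illustration of the "intermediate model" idea raised above, the following is a minimal Python sketch of a simplified two-state loss model estimated from a packet trace. It is not the four-state Markov model actually specified in RFC 3611, only a reduced variant used to convey the concept, and the function and field names are invented for the example.

  # Minimal sketch (illustration only): a simplified two-state loss model
  # estimated from a packet trace. RFC 3611's VoIP metrics use a richer
  # four-state burst/gap Markov model; this reduced version only conveys
  # the idea of an "intermediate model" that summarizes system behaviour
  # (here, how bursty packet loss is) without itself being a reportable
  # metric. All names below are invented for the example.

  def estimate_loss_model(trace):
      """trace: sequence of booleans, True = packet lost, False = received."""
      if len(trace) < 2:
          raise ValueError("need at least two packets to estimate transitions")

      lost = sum(trace)
      received = len(trace) - lost

      # Count state transitions between consecutive packets.
      recv_then_lost = sum(1 for prev, cur in zip(trace, trace[1:]) if not prev and cur)
      lost_then_lost = sum(1 for prev, cur in zip(trace, trace[1:]) if prev and cur)

      return {
          "loss_rate": lost / len(trace),
          "p_enter_loss": recv_then_lost / received if received else 0.0,
          # If this stays well above loss_rate, losses cluster into bursts.
          "p_stay_in_loss": lost_then_lost / lost if lost else 0.0,
      }

  # Two traces with the same overall loss rate but different burstiness.
  bursty = [False] * 18 + [True] * 4 + [False] * 18
  spread = ([False] * 9 + [True]) * 4
  print(estimate_loss_model(bursty))
  print(estimate_loss_model(spread))

Running the sketch on the two example traces, which have the same overall loss rate, shows how such a summary distinguishes bursty loss from evenly spread loss; metrics could then be derived from it, while the model itself is not a metric.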
There was some additional discussion about the degree to which the draft could achieve the goal of providing a good basis for metric definition, and for defining the "quality" of metrics, without actually specifying particular metrics. Al Morton asked how many had read the draft, and 10 or more raised their hands. When asked if this draft was ready to become a WG item, there was general acknowledgement that the draft is a good start but is not mature at this stage and needs further work. Some specific areas mentioned in the final comments from Anna and Dan are listed below:

1. The framework draft should answer the questions of a novice user; some hidden assumptions were revealed in discussion today, but there are probably more in the draft.
2. The sections that are currently blank/placeholders should have at least strawman text.
3. Adding a few examples of well-formatted metrics in an Appendix would be useful.

Contributions for the empty sections and examples are sought, and there should be another iteration of this draft prior to the next meeting.

SIP Performance Metrics
draft-malas-performance-metrics-08.txt

Daryl Malas introduced the SIP performance metrics draft, initially providing some background. The draft was presented to the SIPPING WG at the 66th IETF and has received many comments from the SIPPING list. There was consensus in SIPPING that the work was of interest, but the WG did not feel that it was within their charter. The draft had also been reviewed by BMWG and had received some good feedback related to metric definition. As the scope of the draft included use in live networks, it was felt that this would be a good candidate for the PMOL WG.

Anna Charny asked for clarification on what was being measured, and Daryl explained that the draft related particularly to the performance of SIP protocol transactions and not to the media sessions being established by SIP. Yutaka Kikuchi asked why this particular draft was being accepted for review by PMOL when the charter stated that no new work was being accepted. Al Morton explained that the charter specifically identified the SIP performance metrics draft as a work item, and that no new work items could be accepted by the group unless it was rechartered after completion of the first two deliverables. Ravi Raviraj suggested that the definition of session establishment rate should include some text to say whether this was with or without media connected. Daryl responded that the draft deliberately did not include this, as it avoids any mention of media. Ravi then asked if media measurement would be within the scope of the charter. Alan Clark responded that the group could only focus on the two drafts, the framework and SIP performance metrics drafts; as application performance monitoring is a broad topic, there is a risk that the floodgates would be opened.

Anna Charny asked if this meant that the only metrics that could be brought into the group at this time were SIP performance metrics. Alan responded that this was the case, but that the group was also prototyping the methodology to be used for future metrics. Dan Romascanu added that this was being used as a test case, and that as work progressed it would be used to see what approach should be taken in the future - a directorate, a WG, etc. Paul Aitken asked if the focus of the WG was the definition of specific metrics or the definition of how to specify metrics.
Alan Clark responded that currently it was the specification of specific SIP performance metrics and the general methodology, though this could extend to other items if the group was rechartered in the future. Paul asked whether, if he was defining metrics in another group, he should come to PMOL; Alan responded that other WGs could look to the work of PMOL for guidelines.

Daryl continued with the presentation of the draft. Paul Aitken asked how the draft was considered to be "end to end". Daryl stated that this was defined in the draft, and that the definition was from one user agent to another user agent, potentially through multiple hops. Scott Poretsky asked if there was a diagram of where measurements were made. Daryl responded that each metric had a place and time of validity. Scott went on to ask how we know that the endpoints have the capability to do the measurements, and commented that the next step would be to define MIBs to report the data (also remarking that the "criminals" - in this case the SIP UAs - are the ones providing the data). Daryl stated that the draft does mention MIBs, but that there may be many reporting devices.

Anna Charny asked if PMOL was expected to be expert in SIP. Daryl replied that the draft would be reviewed by SIPPING as well, and hence PMOL would be evaluating the quality of the metrics defined in the draft. Dan Romascanu added that the last call would include both the PMOL and SIPPING working groups. Paul Aitken commented that there was an ordering issue: the SIP draft was supposed to conform to the Framework draft but was due for completion first. The WG chairs responded that the SIP draft would be used during the development of the Framework and that this was still a workable approach. A key point is that the SIP draft should follow the guidelines and process as much as possible, so that the WG can say it was prepared according to the process as "currently defined" when it reaches WG consensus.

Al Morton asked if there was consensus in the group to accept the draft as a WG item. Ten or more people had read the draft and raised their hands in support. There were no objections or alternate proposals. However, there were again some questions about how this draft related to the framework/process draft, so there seemed to be some indecision within the group. The question of adopting this draft as a WG draft will be raised again on the list.

The meeting closed.