Network Working Group                                           A. Clark
Internet-Draft                                     Telchemy Incorporated
Intended status: BCP                                           B. Claise
Expires: August 1, 2011                              Cisco Systems, Inc.
                                                        January 28, 2011

      Guidelines for Considering New Performance Metric Development
                  draft-ietf-pmol-metrics-framework-08

Abstract

   This document describes a framework and a process for developing
   Performance Metrics of protocols and applications transported over
   IETF-specified protocols.  These metrics can be used to characterize
   traffic on live networks and services.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 1, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
     1.1.  Background and Motivation
     1.2.  Organization of this document
   2.  Terminology
     2.1.  Performance Metrics Entity
     2.2.  Quality of Service
     2.3.  Quality of Experience
     2.4.  Performance Metric
   3.  Purpose and Scope
   4.  Relationship between QoS, QoE and Application-specific
       Performance Metrics
   5.  Performance Metrics Development
     5.1.  Identifying and Categorizing the Audience
     5.2.  Definitions of a Performance Metric
     5.3.  Computed Metrics
       5.3.1.  Composed Metrics
       5.3.2.  Index
     5.4.  Performance Metric Specification
       5.4.1.  Outline
       5.4.2.  Normative parts of Performance Metric definition
       5.4.3.  Informative parts of Performance Metric definition
       5.4.4.  Performance Metric Definition Template
       5.4.5.  Example: Burst Packet Loss Frequency
     5.5.  Dependencies
       5.5.1.  Timing accuracy
       5.5.2.  Dependencies of Performance Metric definitions on
               related events or metrics
       5.5.3.  Relationship between Performance Metric and lower layer
               Performance Metrics
       5.5.4.  Middlebox presence
     5.6.  Organization of Results
     5.7.  Parameters, the variables of a Performance Metric
   6.  Performance Metric Development Process
     6.1.  New Proposals for Metrics
     6.2.  Reviewing Metrics
     6.3.  Proposal Approval
     6.4.  Performance Metrics Entity Interaction with other WGs
     6.5.  Standards Track Performance Metrics
     6.6.  Recommendations
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. References
     10.1. Normative References
     10.2. Informative References
   Authors' Addresses

1.  Introduction

   Many networking technologies, applications, and services are
   distributed in nature, and their performance may be impacted by IP
   impairments, server capacity, congestion, and other factors.  It is
   important to measure the performance of applications and services to
   ensure that quality objectives are being met and to support problem
   diagnosis.  Standardized metrics help to ensure that performance
   measurement is implemented consistently and facilitate
   interpretation and comparison.

   There are at least three phases in the development of performance
   standards:

   1.  Definition of a Performance Metric and its units of measure

   2.  Specification of a method of measurement

   3.  Specification of the reporting format

   During the development of metrics, it is often useful to define
   performance objectives and expected value ranges.  However, these
   are not defined as part of the metric specification.
   The intended audience for this document includes, but is not limited
   to, IETF participants who write Performance Metrics documents,
   reviewers of such documents, and members of the Performance Metrics
   Entity.

1.1.  Background and Motivation

   Although the IETF has two active Working Groups (WGs) dedicated to
   the development of Performance Metrics, each has strict limitations
   in its charter:

   o  The Benchmarking Methodology WG (BMWG) has addressed a range of
      networking technologies and protocols over its long history (such
      as IEEE 802.3, ATM, Frame Relay, and routing protocols), but its
      charter strictly limits its performance characterizations to the
      laboratory environment.

   o  The IP Performance Metrics (IPPM) WG has developed a set of
      standard metrics that can be applied to the quality, performance,
      and reliability of Internet data delivery services.  The IPPM
      metrics are applicable to live IP networks, but the WG is
      specifically prohibited from developing metrics that characterize
      traffic at upper layers, such as a VoIP stream.

   A Birds of a Feather (BOF) session held at IETF 69 introduced the
   IETF community to the possibility of a generalized activity to
   define standardized Performance Metrics.  The existence of a growing
   list of Internet-Drafts on Performance Metrics (with community
   interest in development, but in un-chartered areas) illustrates the
   need for additional performance work.  The majority of people
   present at the BOF supported the proposition that the IETF should be
   working in these areas, and no one objected to any of the proposals.

   Previous IETF work related to the reporting of application
   Performance Metrics includes the "Real-time Application Quality-of-
   Service Monitoring (RAQMON) Framework" RFC 4710 [RFC4710], which
   extends the remote network monitoring (RMON) family of
   specifications to allow real-time quality-of-service (QoS)
   monitoring of various applications that run on devices such as IP
   phones, pagers, Instant Messaging clients, mobile phones, and
   various other handheld computing devices.  Furthermore, the "RTP
   Control Protocol Extended Reports (RTCP XR)" RFC 3611 [RFC3611] and
   the "SIP RTCP Summary Report Protocol" [RFC6035] are protocols that
   support the real-time reporting of Voice over IP and other
   applications running on devices such as IP phones and mobile
   handsets.  The IETF is also actively involved in the development of
   reliable transport protocols, such as TCP [RFC0793] and SCTP
   [RFC4960], whose operation affects the relationship between IP
   performance and application performance.

   Thus, there is a gap in the currently chartered coverage of IETF
   WGs: the development of Performance Metrics for protocols above and
   below the IP layer that can be used to characterize performance on
   live networks.

   This document refers to the implementation of a Performance Metrics
   Entity, whose goal is to advise on and support Performance Metric
   development in the IETF.  A recommendation about the Performance
   Metrics Entity is made in Section 6.6.

   Like the "Guidelines for Considering Operations and Management of
   New Protocols and Protocol Extensions" RFC 5706 [RFC5706], which is
   the reference document for the IETF Operations Directorate, this
   document should be consulted as part of the review of new
   Performance Metrics.

1.2.  Organization of this document

   This document is divided into two major sections beyond the "Purpose
   and Scope" section.
   The first is a definition and description of a Performance Metric
   and its key aspects.  The second defines a process for developing
   these metrics that is applicable to the IETF environment.

2.  Terminology

2.1.  Performance Metrics Entity

   The Performance Metrics Entity is a directorate that coordinates
   Performance Metric development in the IETF.  The Performance Metrics
   Entity should be composed of experts in the performance community,
   potentially selected from the IPPM, BMWG, and PMOL WGs.

2.2.  Quality of Service

   Quality of Service (QoS) is defined in a similar way to the ITU-T
   definition of QoS in E.800 [E.800], i.e.:

   "Totality of characteristics of a telecommunications service that
   bear on its ability to satisfy stated and implied needs of the user
   of the service."

2.3.  Quality of Experience

   Quality of Experience (QoE) is defined in a similar way to the ITU-T
   "QoS experienced/perceived by customer/user (QoE)" in E.800 [E.800],
   i.e.:

   "a statement expressing the level of quality that customers/users
   believe they have experienced."

   NOTE 1 - The level of QoS experienced and/or perceived by the
   customer/user may be expressed by an opinion rating.

   NOTE 2 - QoE has two main components: quantitative and qualitative.
   The quantitative component can be influenced by the complete end-to-
   end system effects (including user devices and network
   infrastructure).

   NOTE 3 - The qualitative component can be influenced by user
   expectations, ambient conditions, psychological factors, application
   context, etc.

   NOTE 4 - QoE may also be considered as QoS delivered, received, and
   interpreted by a user with the pertinent qualitative factors
   influencing his/her perception of the service.

2.4.  Performance Metric

   A Performance Metric is a quantitative measure of performance,
   specific to an IETF-specified protocol or specific to an application
   transported over an IETF-specified protocol.  Examples of
   Performance Metrics are the FTP response time for a complete file
   download, the DNS response time to resolve an IP address, a database
   logging time, etc.
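   As an informal illustration of the kind of application-level
   Performance Metric named above, the following C sketch measures the
   DNS response time as the wall-clock time taken to resolve a name.
   It is illustrative only, not a metric definition: the use of
   getaddrinfo() and CLOCK_MONOTONIC on a POSIX system is an assumption
   of the sketch, and a complete definition would follow the template
   of Section 5.4.

   /*
    * Illustrative sketch only (not a metric definition): measure the
    * DNS response time of Section 2.4 as the wall-clock time taken by
    * getaddrinfo() to resolve a name.  Assumes a POSIX environment;
    * the resolver API and the clock used are choices of this sketch.
    */
   #include <stdio.h>
   #include <string.h>
   #include <time.h>
   #include <sys/types.h>
   #include <sys/socket.h>
   #include <netdb.h>

   int main(int argc, char **argv)
   {
       const char *name = (argc > 1) ? argv[1] : "example.com";
       struct addrinfo hints, *res = NULL;
       struct timespec t0, t1;
       int rc;

       memset(&hints, 0, sizeof(hints));
       hints.ai_family = AF_UNSPEC;
       hints.ai_socktype = SOCK_STREAM;

       /* A monotonic clock keeps the measured interval unaffected by
        * time-of-day clock adjustments (see also Section 5.5.1). */
       clock_gettime(CLOCK_MONOTONIC, &t0);
       rc = getaddrinfo(name, NULL, &hints, &res);
       clock_gettime(CLOCK_MONOTONIC, &t1);

       if (rc != 0) {
           fprintf(stderr, "resolution failed: %s\n", gai_strerror(rc));
           return 1;
       }
       freeaddrinfo(res);

       /* Units of measurement (Section 5.4.2 (iv)): milliseconds. */
       printf("DNS response time for %s: %.3f ms\n", name,
              (t1.tv_sec - t0.tv_sec) * 1e3 +
              (t1.tv_nsec - t0.tv_nsec) / 1e6);
       return 0;
   }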
3.  Purpose and Scope

   The purpose of this document is to define a framework and a process
   for developing Performance Metrics for protocols above and below the
   IP layer (such as IP-based applications that operate over reliable
   or datagram transport protocols) that can be used to characterize
   traffic on live networks and services.  As such, this document does
   not define any Performance Metrics.

   The scope of this document covers guidelines for considering new
   Performance Metric development.  However, this document is not
   intended to supersede existing working methods within WGs that have
   existing chartered work in this area.  This process is not intended
   to govern Performance Metric development in existing IETF WGs that
   are focused on metrics development, such as IPPM and BMWG.  However,
   this guidelines document may be useful in those activities and MAY
   be applied where appropriate.  A typical example is the development
   of Performance Metrics to be exported with the IPFIX protocol
   RFC 5101 [RFC5101], with specific IPFIX information elements
   RFC 5102 [RFC5102], which would benefit from the framework in this
   document.

   The framework in this document applies to Performance Metrics
   derived from both active and passive measurements.

4.  Relationship between QoS, QoE and Application-specific Performance
    Metrics

   Network QoS deals with network and network protocol performance,
   while QoE deals with the assessment of a user's experience in the
   context of a task or a service.  Application-specific Performance
   Metrics therefore provide the opportunity to quantify performance at
   the layers between IP and the user.  For example, network QoS
   metrics (packet loss, delay, and delay variation [RFC5481]) can be
   used to estimate application-specific Performance Metrics (de-jitter
   buffer size and RTP-layer packet loss), which can then be combined
   with other known aspects of a VoIP application (such as codec type)
   to estimate a Mean Opinion Score (MOS) [P.800].  However, the QoE
   for a particular VoIP user depends on the specific context, such as
   a casual conversation, a business conference call, or an emergency
   call.

   Finally, QoS and application-specific Performance Metrics are
   quantitative, while QoE is qualitative.  Also, network QoS and
   application-specific Performance Metrics can be directly or
   indirectly evident to the user, while QoE is directly evident.

5.  Performance Metrics Development

   This section provides key definitions and qualifications of
   Performance Metrics.

5.1.  Identifying and Categorizing the Audience

   Many of the aspects of metric definition and reporting, even the
   selection or determination of the essential metrics, depend on who
   will use the results, and for what purpose.  Examples of how the
   reports may be used include maintaining service quality and
   identifying and quantifying problems.  The question "How will the
   results be used?" usually yields important factors to consider when
   developing Performance Metrics.  All documents defining Performance
   Metrics SHOULD identify the primary audience and its associated
   requirements.

   The audience can influence both the definition of metrics and the
   methods of measurement.  The key areas of variation between
   different metric users include:

   o  Suitability of passive measurements of live traffic, or active
      measurements using dedicated traffic

   o  Measurement in a laboratory environment, or on a network of
      deployed devices

   o  Accuracy of the results

   o  Access to measurement points and configuration information

   o  Measurement topology (point-to-point, point-to-multipoint)

   o  Scale of the measurement system

   o  Measurements conducted on-demand, or continuously

   o  Required reporting formats and periods

5.2.  Definitions of a Performance Metric

   A metric is a measure of an observable behavior of a networking
   technology, an application, or a service.  Most of the time, the
   metric can be measured directly.  Sometimes, however, the metric is
   computed, in which case its definition assumes some implicit or
   explicit underlying statistical process.  In such cases, the metric
   is an estimate of a parameter of this process, assuming that the
   statistical process closely models the behavior of the system.
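   To make the notion of a metric as a parameter estimate concrete, the
   following C sketch treats packet loss as an assumed Bernoulli
   process and reports the sample loss ratio as an estimate of the
   underlying loss probability.  The input array of received/lost flags
   is hypothetical data, and the Bernoulli model is exactly the kind of
   implicit statistical assumption referred to above.

   /*
    * Illustrative sketch only: a computed metric as an estimate of a
    * parameter of an assumed statistical process (Section 5.2).  Loss
    * is modeled as Bernoulli(p); the sample loss ratio estimates p.
    */
   #include <stdio.h>

   /* 1 = packet received, 0 = packet lost (hypothetical data) */
   static const int received[] = {1, 1, 0, 1, 1, 1, 0, 0, 1, 1};

   int main(void)
   {
       size_t n = sizeof(received) / sizeof(received[0]);
       size_t lost = 0;

       for (size_t i = 0; i < n; i++)
           if (!received[i])
               lost++;

       /* The metric: the sample loss ratio, an estimate of the
        * Bernoulli parameter p; its accuracy depends on n, one of the
        * "parameters linked to the method" (Section 5.4.2 (iii)). */
       printf("estimated loss probability: %.2f (n=%zu)\n",
              (double)lost / (double)n, n);
       return 0;
   }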
Tests of the "usefulness" of a metric include: (i) the degree to which its absence would cause significant loss of information on the behavior or performance of the application or system being measured (ii) the correlation between the Performance Metric, the QoS [G.1000] and QoE delivered to the user (person or other application) (iii) the degree to which the metric is able to support the identification and location of problems affecting service quality. (iv) the requirement to develop policies (Service Level Agreement, and potentially Service Level Contract) based on the metric. For example, consider a distributed application operating over a network connection that is subject to packet loss. A Packet Loss Rate (PLR) metric is defined as the mean packet loss ratio over some time period. If the application performs poorly over network connections with high packet loss ratio and always performs well when the packet loss ratio is zero then the PLR metric is useful to some degree. Some applications are sensitive to short periods of high loss (bursty loss) and are relatively insensitive to isolated packet loss events; for this type of application there would be very weak correlation between PLR and application performance. A "better" metric would consider both the packet loss ratio and the distribution of loss events. If application performance is degraded when the PLR exceeds some rate then a useful metric may be a measure of the duration and frequency of periods during which the PLR exceeds that Clark & Claise Expires August 1, 2011 [Page 9] Internet-Draft Guidelines Perf. Metric Devel. January 2011 rate. 5.3. Computed Metrics 5.3.1. Composed Metrics Some metrics may not be measured directly, but can be composed from base metrics that have been measured. A composed metric is derived from other metrics by applying a deterministic process or function (e.g., a composition function). The process may use metrics that are identical to the metric being composed, or metrics that are dissimilar, or some combination of both types. Usually the base metrics have a limited scope in time or space, and they can be combined to estimate the performance of some larger entities. Some examples of composed metrics and composed metric definitions are: Spatial composition is defined as the composition of metrics of the same type with differing spatial domains [RFC5835] [RFC6049]. For spatially composed metrics to be meaningful, the spatial domains should be non-overlapping and contiguous, and the composition operation should be mathematically appropriate for the type of metric. Temporal composition is defined as the composition of sets of metrics of the same type with differing time spans [RFC5835]. For temporally composed metrics to be meaningful, the time spans should be non- overlapping and contiguous, and the composition operation should be mathematically appropriate for the type of metric. Temporal aggregation is a summarization of metrics into a smaller number of metrics that relate to the total time span covered by the original metrics. An example would be to compute the minimum, maximum and average values of a series of time sampled values of a metric. In the context of flow records in IP Flow Informatin eXport (IPFIX), the IPFIX Mediation: Framework [I-D.ietf-ipfix-mediators-framework] also discusses some aspects of the temporal and spatial composition. 5.3.2. 
5.3.2.  Index

   An Index is a metric for which the output value range has been
   selected for convenience or clarity, and whose behavior has been
   selected to support ease of understanding; an example is the R
   Factor [G.107].  The deterministic function for an index is often
   developed after the index range and behavior have been determined.

5.4.  Performance Metric Specification

5.4.1.  Outline

   A Performance Metric definition MUST have a normative part that
   defines what the metric is and how it is measured or computed, and
   SHOULD have an informative part that describes the Performance
   Metric and its application.

5.4.2.  Normative parts of Performance Metric definition

   The normative part of a Performance Metric definition MUST define at
   least the following:

   (i)  Metric Name

   Performance Metric names MUST be unique within the set of metrics
   being defined and MAY be descriptive.

   (ii)  Metric Description

   The Performance Metric description MUST explain what the metric is,
   what is being measured, and how this relates to the performance of
   the system being measured.

   (iii)  Method of Measurement or Calculation

   The method of measurement or calculation MUST define what is being
   measured or computed and the specific algorithm to be used,
   including whether the measurement involves active or only passive
   measurements.  Terms such as "average" should be qualified (e.g., a
   running average or an average over some interval).  Exception cases
   SHOULD also be defined, with the appropriate handling method.  For
   example, there are a number of commonly used metrics related to
   packet loss; these often do not define the criteria by which a
   packet is determined to be lost (vs. very delayed) or how duplicate
   packets are handled.  For example, if the average packet loss rate
   during a time interval is reported and a packet's arrival is delayed
   from one interval to the next, was it "lost" during the interval in
   which it should have arrived, or should it be counted as received?

   Some parameters linked to the method MAY also be reported in order
   to fully interpret the Performance Metric, for example, the time
   interval, the load, the minimum packet loss, the potential
   measurement errors and their sources, the attainable accuracy of the
   metric (e.g., +/- 0.1), etc.

   (iv)  Units of Measurement

   The units of measurement MUST be clearly stated.

   (v)  Measurement Point(s)

   If the measurement is specific to a measurement point, this SHOULD
   be defined.  The measurement domain MAY also be defined.
   Specifically, if measurement points are spread across domains, the
   measurement domain (intra-domain, inter-domain) is another factor to
   consider.  In some cases, the measurement requires multiple
   measurement points: all measurement points SHOULD be defined,
   including the measurement domain(s).

   (vi)  Measurement Timing

   The acceptable range of timing intervals or sampling intervals for a
   measurement, and the timing accuracy required for such intervals,
   MUST be specified.  Short sampling intervals or frequent samples
   provide a rich source of information that can help to assess
   application performance but may lead to excessive measurement data.
   Long measurement or sampling intervals reduce the amount of reported
   and collected data, but possibly to the point where it is
   insufficient to understand application performance or service
   quality, insofar as the measured quantity may vary significantly
   with time.
   In the case of multiple measurement points, any requirement for
   synchronized clocks must be clearly specified.  In the specific
   example of the IP delay variation metric, the different aspects of
   synchronized clocks are discussed in [RFC5481].

5.4.3.  Informative parts of Performance Metric definition

   The informative part of a Performance Metric specification is
   intended to support the implementation and use of the metric.  This
   part SHOULD provide the following data:

   (i)  Implementation

   The implementation description MAY be in the form of text, an
   algorithm, or example software.  The objective of this part of the
   metric definition is to assist implementers in achieving consistent
   results.

   (ii)  Verification

   The Performance Metric definition SHOULD provide guidance on
   verification testing.  This may be in the form of test vectors, a
   formal verification test method, or informal advice.

   (iii)  Use and Applications

   The use and applications description is intended to assist the
   "user" in understanding how, when, and where the metric can be
   applied and what significance the value range for the metric may
   have.  This MAY include a definition of the "typical" and "abnormal"
   ranges of the Performance Metric, if these are not apparent from the
   nature of the metric.  The description MAY include information about
   the influence of extreme measurement values, i.e., whether the
   Performance Metric is sensitive to outliers.  The use and
   applications description SHOULD also include the security
   implications of the metric.

   For example:

   (a)  It is fairly intuitive that a lower packet loss ratio would
        equate to better performance.  However, the user may not know
        the significance of some given packet loss ratio.

   (b)  The speech level of a telephone signal is commonly expressed in
        dBm0.  If the user is presented with:

           Speech level = -7 dBm0

        this is not intuitively understandable unless the user is a
        telephony expert.  If the metric definition explains that the
        typical range is -18 to -28 dBm0, that a value higher than -18
        means the signal may be too high (loud), and that a value lower
        than -28 means the signal may be too low (quiet), the metric is
        much easier to interpret.

   (iv)  Reporting Model

   The reporting model definition is intended to make any relationship
   between the metric and the reporting model clear.  There are often
   implied relationships between the method of reporting metrics and
   the metric itself; however, these are often not made apparent to the
   implementor.  For example, if the metric is a short-term running
   average packet delay variation (e.g., RFC 3550 [RFC3550]) and this
   value is reported at intervals of 6-10 seconds, the resulting
   measurement may have limited accuracy when packet delay variation is
   non-stationary.

5.4.4.  Performance Metric Definition Template

   Normative

   o  Metric Name

   o  Metric Description

   o  Method of Measurement or Calculation

   o  Units of Measurement

   o  Measurement Point(s) with potential Measurement Domain

   o  Measurement Timing

   Informative

   o  Implementation

   o  Verification

   o  Use and Applications

   o  Reporting Model
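   As an informal illustration of how an implementor might carry the
   normative template fields alongside measurement code, the following
   C sketch records them in a structure, pre-filled with values from
   the example in Section 5.4.5 below.  The structure layout and field
   names are assumptions of this sketch; this document defines no data
   model or registry format.

   /*
    * Illustrative sketch only: one possible in-program representation
    * of the normative fields of the Section 5.4.4 template.  Field
    * names and layout are inventions of this sketch.
    */
   #include <stdio.h>

   struct perf_metric_def {
       const char *name;         /* Metric Name (unique) */
       const char *description;  /* Metric Description */
       const char *method;       /* Method of Measurement/Calculation */
       const char *units;        /* Units of Measurement */
       const char *meas_points;  /* Measurement Point(s) / Domain */
       const char *timing;       /* Measurement Timing */
   };

   int main(void)
   {
       struct perf_metric_def m = {
           .name        = "BurstPacketLossFrequency",
           .description = "Number of bursts of packet loss occurring "
                          "during a specified time interval",
           .method      = "Burst detection per RFC 3611",
           .units       = "bursts per time interval",
           .meas_points = "RTP receiver (hypothetical)",
           .timing      = "fixed intervals, duration within +/- 2%",
       };
       printf("metric %s, units: %s\n", m.name, m.units);
       return 0;
   }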
5.4.5.  Example: Burst Packet Loss Frequency

   The burst packet loss frequency can be observed at different layers.
   The following example is specific to RTP RFC 3550 [RFC3550].

   Metric Name: BurstPacketLossFrequency

   Metric Description: A burst of packet loss is defined as the longest
   period starting and ending with lost packets during which no more
   than Gmin consecutive packets are received.  The
   BurstPacketLossFrequency is defined as the number of bursts of
   packet loss occurring during a specified time interval (e.g., per
   minute, per hour, per day).  If Gmin is set to 0, then a burst of
   packet loss would comprise only consecutive lost packets, whereas a
   Gmin of 16 would define bursts as periods of both lost and received
   packets (sparse bursts) having a loss rate of greater than 5.9%.

   Method: Bursts may be detected using the Markov model algorithm
   defined in RFC 3611 [RFC3611].  The BurstPacketLossFrequency is
   calculated by counting the number of burst events within the defined
   measurement interval.  A burst that spans the boundary between two
   time intervals shall be counted within the later of the two
   intervals.

   Units of Measurement: Bursts per time interval (e.g., per second,
   per hour, per day)

   Measurement Timing: This metric can be used over a wide range of
   time intervals.  Using time intervals longer than one hour may
   prevent the detection of variations in the value of this metric due
   to time-of-day changes in network load.  Timing intervals should not
   vary in duration by more than +/- 2%.

   Implementation Guidelines: See RFC 3611 [RFC3611].

   Verification Testing: See Appendix for C code to generate test
   vectors.

   Use and Applications: This metric is useful for detecting IP network
   transients that affect the performance of applications such as Voice
   over IP or IP video.  The value of Gmin may be selected to ensure
   that bursts correspond to a packet loss ratio that would degrade the
   performance of the application of interest (e.g., 16 for VoIP).

   Reporting Model: This metric needs to be associated with a defined
   time interval, which could be defined by fixed intervals or by a
   sliding window.

5.5.  Dependencies

5.5.1.  Timing accuracy

   The accuracy of the timing of a measurement may affect the accuracy
   of the Performance Metric.  This may not materially affect a
   sampled-value metric; however, it would affect an interval-based
   metric.  Some metrics, for example, the number of events per time
   interval, would be directly affected: a 10% variation in the time
   interval would lead directly to a 10% variation in the measured
   value.  Other metrics, such as the average packet loss ratio during
   some time interval, would be affected to a lesser extent.

   If it is necessary to correlate sampled values or intervals, then it
   is essential that the accuracy of sampling times and interval
   start/stop times be sufficient for the application (for example,
   +/- 2%).

5.5.2.  Dependencies of Performance Metric definitions on related
        events or metrics

   Performance Metric definitions may explicitly or implicitly rely on
   factors that may not be obvious.  For example, the recognition of a
   packet as being "lost" relies on having some method to know that the
   packet was actually lost (e.g., an RTP sequence number) and some
   time threshold after which a non-received packet is declared lost.
   It is important that any such dependencies be recognized and
   incorporated into the metric definition.
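   To illustrate the dependency just described, the following C sketch
   counts bursts over a hypothetical sequence of received/lost flags,
   using the Gmin-based burst definition from the example in
   Section 5.4.5.  It is a deliberate simplification: a real
   implementation would derive the loss sequence from RTP sequence
   numbers plus a time threshold, RFC 3611 specifies a fuller Markov-
   model algorithm, and in this sketch an isolated loss counts as a
   degenerate one-packet burst.

   /*
    * Illustrative sketch only: count bursts of packet loss per the
    * Gmin definition of Section 5.4.5 (after RFC 3611).  A burst
    * starts and ends with a lost packet and is closed once more than
    * Gmin consecutive packets are received.  Input data hypothetical.
    */
   #include <stdio.h>

   static unsigned count_bursts(const int *received, size_t n,
                                unsigned gmin)
   {
       unsigned bursts = 0, run_rx = 0;
       int in_burst = 0;

       for (size_t i = 0; i < n; i++) {
           if (!received[i]) {          /* packet lost */
               if (!in_burst) {
                   in_burst = 1;        /* a burst begins with a loss */
                   bursts++;
               }
               run_rx = 0;
           } else if (in_burst && ++run_rx > gmin) {
               in_burst = 0;            /* > Gmin consecutive received
                                           packets close the burst */
           }
       }
       return bursts;
   }

   int main(void)
   {
       /* 1 = received, 0 = lost (hypothetical observations) */
       const int rx[] = {1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1};
       unsigned gmin = 2;

       /* Prints 2: one sparse burst and one single-loss burst. */
       printf("bursts (Gmin=%u): %u\n", gmin,
              count_bursts(rx, sizeof(rx) / sizeof(rx[0]), gmin));
       return 0;
   }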
5.5.3.  Relationship between Performance Metric and lower layer
        Performance Metrics

   Lower-layer Performance Metrics may be used to compute or infer the
   performance of higher-layer applications, potentially using an
   application performance model.  The accuracy of this approach will
   depend on many factors, including:

   (i)   the completeness of the set of metrics, i.e., are there
         metrics for all the input values to the application
         performance model?

   (ii)  the correlation between the input variables (being measured)
         and application performance, and

   (iii) the variability in the measured metrics and how this
         variability affects application performance.

5.5.4.  Middlebox presence

   The presence of a middlebox RFC 3303 [RFC3303], e.g., a proxy, a
   network address translator (NAT), a redirect server, a session
   border controller (SBC), or an application layer gateway (ALG), may
   add variability to, or restrict the scope of, measurements of a
   metric.  For example, an SBC that does not process RTP loopback
   packets may block or locally terminate this traffic rather than pass
   it through to its target.

5.6.  Organization of Results

   The IPPM Framework [RFC2330] organizes the results of metrics into
   three related notions:

   o  singleton: an elementary instance, or "atomic" value

   o  sample: a set of singletons with some common properties and some
      varying properties

   o  statistic: a value derived from a sample through deterministic
      calculation, such as the mean

   Many Performance Metrics MAY use this organization for their
   results, with or without the term names used by the IPPM WG.
   Section 11 of RFC 2330 [RFC2330] should be consulted for further
   details.

5.7.  Parameters, the variables of a Performance Metric

   Metrics are completely defined only when all options and input
   variables have been identified and considered.  These variables are
   sometimes left unspecified in a metric definition, and their general
   name indicates that the user must set them and report them with the
   results.  Such variables are called "parameters" in the IPPM metric
   template.  The scope of the metric, the time at which a measurement
   was conducted, the settings for timers, and the thresholds for
   counters are all examples of parameters.  All documents defining
   Performance Metrics SHOULD identify all key parameters for each
   Performance Metric.
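   The following C sketch ties together the notions of Sections 5.6 and
   5.7: singleton delay values (hypothetical) form a sample, a
   statistic (here the mean) is derived from the sample, and the
   measurement interval and sample size act as parameters that would be
   reported with the result.  It is a minimal sketch under those
   assumptions, not an IPPM-defined procedure.

   /*
    * Illustrative sketch only: singletons -> sample -> statistic
    * (Section 5.6), with the interval and sample size as reported
    * "parameters" (Section 5.7).  Delay values are hypothetical.
    */
   #include <stdio.h>

   #define SAMPLE_SIZE 5            /* parameter: number of singletons */

   int main(void)
   {
       /* Parameter: nominal measurement interval for the sample (s). */
       const double interval_s = 60.0;

       /* Sample: a set of singleton one-way delay values (ms). */
       const double singletons[SAMPLE_SIZE] =
           {12.1, 15.4, 11.8, 30.2, 13.0};

       /* Statistic: a value derived from the sample through a
        * deterministic calculation, here the arithmetic mean. */
       double sum = 0.0;
       for (int i = 0; i < SAMPLE_SIZE; i++)
           sum += singletons[i];

       printf("mean delay over %.0f s sample: %.2f ms (n=%d)\n",
              interval_s, sum / SAMPLE_SIZE, SAMPLE_SIZE);
       return 0;
   }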
6.  Performance Metric Development Process

6.1.  New Proposals for Metrics

   This process is intended to add additional considerations to the
   processes for adopting new work described in RFC 2026 [RFC2026] and
   RFC 2418 [RFC2418].  The following entry criteria will be considered
   for each proposal.

   Proposals SHOULD be prepared as Internet-Drafts, describing the
   Performance Metric and conforming to the qualifications above as
   much as possible.

   Proposals SHOULD be deliverables in the charters of the
   corresponding protocol development WGs.  As such, proposals SHOULD
   be vetted by those WGs prior to discussion by the Performance
   Metrics Entity.  This aspect of the process includes an assessment
   of the need for the proposed Performance Metric and of the support
   for its development in the IETF.

   Proposals SHOULD include an assessment of interaction and/or overlap
   with work in other Standards Development Organizations.

   Proposals SHOULD identify additional expertise that might be
   consulted.

   Proposals SHOULD specify the intended audience and users of the
   Performance Metrics.  The development process encourages
   participation by members of the intended audience.

   Proposals SHOULD identify any security and IANA requirements.
   Security issues could include the revealing of user-identifying data
   or the potential misuse of active test tools.  IANA considerations
   may include the need for a Performance Metrics registry.

6.2.  Reviewing Metrics

   Each Performance Metric SHOULD be assessed according to the
   following list of qualifications:

   o  Unambiguously defined?

   o  Units of measure specified?

   o  Measurement interval specified?

   o  Measurement errors identified?

   o  Repeatable?

   o  Implementable?

   o  Assumptions concerning the underlying process?

   o  Use cases?

   o  Correlation with application performance / user experience?

   o  Security impact?

6.3.  Proposal Approval

   New work item proposals SHALL be approved using the existing IETF
   process.  In all cases, the proposal will need to achieve consensus,
   in the corresponding protocol development WG (or, alternatively, an
   "Area" WG with a broad charter), that there is interest in and a
   need for the work.  The approval SHOULD include the following steps:

   o  consultation with the Performance Metrics Entity, using this
      document

   o  consultation with the Area Director(s)

   o  possibly, IESG approval of a new or revised charter for the WG

6.4.  Performance Metrics Entity Interaction with other WGs

   The Performance Metrics Entity SHALL work in partnership with the
   related protocol development WG when considering an Internet-Draft
   that specifies Performance Metrics for a protocol.  A sufficient
   number of individuals with expertise must be willing to consult on
   the draft.  If the related WG has concluded, comments on the
   proposal should still be sought from key RFC authors and former
   chairs, or from the WG mailing list if it was not closed.

   A formal review is RECOMMENDED by the time the document is reviewed
   by the Area Directors or an IETF Last Call is conducted, in the same
   way that expert reviews are performed by other directorates.

   Existing mailing lists SHOULD be used; however, a dedicated mailing
   list MAY be initiated if necessary to facilitate work on a draft.
   In some cases, it will be appropriate to hold the IETF session
   discussion during the related protocol WG session, to maximize the
   visibility of the effort to that WG and to expand the review.

6.5.  Standards Track Performance Metrics

   The Performance Metrics Entity will manage the progression of RFCs
   along the Standards Track; see [I-D.bradner-metricstest].  This may
   include the preparation of test plans to examine different
   implementations of the metrics to ensure that the metric definitions
   are clear and unambiguous (depending on the final form of the draft
   cited above).

6.6.  Recommendations

   This document recommends that the Performance Metrics Entity be
   implemented (according to this memo) as a directorate in one of the
   IETF Areas, providing advice and support as described in this
   document to all areas in the IETF.

7.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.

8.  Security Considerations

   In general, the existence of a framework for Performance Metric
   development does not constitute a security issue for the Internet.
   Performance Metric definitions may introduce security issues, and
   this framework recommends that those defining Performance Metrics
   identify any such risk factors.
   The security considerations that apply to any active measurement of
   live networks are relevant here; see [RFC4656].

   The security considerations that apply to any passive measurement of
   specific packets in live networks are relevant here as well; see the
   security considerations in [RFC5475].

9.  Acknowledgements

   The authors would like to thank Al Morton, Dan Romascanu, Daryl
   Malas, and Loki Jorgenson for their comments and contributions.  The
   authors would like to thank Aamer Akhter, Yaakov Stein, Carsten
   Schmoll, and Jan Novak for their reviews.

10.  References

10.1.  Normative References

   [RFC2026]  Bradner, S., "The Internet Standards Process -- Revision
              3", BCP 9, RFC 2026, October 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2418]  Bradner, S., "IETF Working Group Guidelines and
              Procedures", BCP 25, RFC 2418, September 1998.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

10.2.  Informative References

   [E.800]    ITU-T Recommendation E.800, "Series E: Overall Network
              Operation, Telephone Service, Service Operation and Human
              Factors".

   [G.1000]   ITU-T Recommendation G.1000, "Communications Quality of
              Service: A framework and definitions".

   [G.107]    ITU-T Recommendation G.107, "The E-model, a computational
              model for use in transmission planning".

   [I-D.bradner-metricstest]
              Bradner, S. and V. Paxson, "Advancement of metrics
              specifications on the IETF Standards Track",
              draft-bradner-metricstest-03 (work in progress),
              August 2007.

   [I-D.ietf-ipfix-mediators-framework]
              Kobayashi, A., Claise, B., Muenz, G., and K. Ishibashi,
              "IPFIX Mediation: Framework",
              draft-ietf-ipfix-mediators-framework-09 (work in
              progress), October 2010.

   [P.800]    ITU-T Recommendation P.800, "Methods for subjective
              determination of transmission quality".

   [RFC0793]  Postel, J., "Transmission Control Protocol", STD 7,
              RFC 793, September 1981.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC3303]  Srisuresh, P., Kuthan, J., Rosenberg, J., Molitor, A.,
              and A. Rayhan, "Middlebox communication architecture and
              framework", RFC 3303, August 2002.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, July 2003.

   [RFC3611]  Friedman, T., Caceres, R., and A. Clark, "RTP Control
              Protocol Extended Reports (RTCP XR)", RFC 3611,
              November 2003.

   [RFC4710]  Siddiqui, A., Romascanu, D., and E. Golovinsky,
              "Real-time Application Quality-of-Service Monitoring
              (RAQMON) Framework", RFC 4710, October 2006.

   [RFC4960]  Stewart, R., "Stream Control Transmission Protocol",
              RFC 4960, September 2007.

   [RFC5101]  Claise, B., "Specification of the IP Flow Information
              Export (IPFIX) Protocol for the Exchange of IP Traffic
              Flow Information", RFC 5101, January 2008.

   [RFC5102]  Quittek, J., Bryant, S., Claise, B., Aitken, P., and J.
              Meyer, "Information Model for IP Flow Information
              Export", RFC 5102, January 2008.

   [RFC5475]  Zseby, T., Molina, M., Duffield, N., Niccolini, S., and
              F. Raspall, "Sampling and Filtering Techniques for IP
              Packet Selection", RFC 5475, March 2009.

   [RFC5481]  Morton, A. and B. Claise, "Packet Delay Variation
              Applicability Statement", RFC 5481, March 2009.
Claise, "Packet Delay Variation Applicability Statement", RFC 5481, March 2009. Clark & Claise Expires August 1, 2011 [Page 21] Internet-Draft Guidelines Perf. Metric Devel. January 2011 [RFC5706] Harrington, D., "Guidelines for Considering Operations and Management of New Protocols and Protocol Extensions", RFC 5706, November 2009. [RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric Composition", RFC 5835, April 2010. [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, "Session Initiation Protocol Event Package for Voice Quality Reporting", RFC 6035, November 2010. [RFC6049] Morton, A. and E. Stephan, "Spatial Composition of Metrics", RFC 6049, January 2011. Authors' Addresses Alan Clark Telchemy Incorporated 2905 Premiere Parkway, Suite 280 Duluth, Georgia 30097 USA Phone: Fax: Email: alan.d.clark@telchemy.com URI: Benoit Claise Cisco Systems, Inc. De Kleetlaan 6a b1 Diegem 1831 Belgium Phone: +32 2 704 5622 Fax: Email: bclaise@cisco.com URI: Clark & Claise Expires August 1, 2011 [Page 22]