idnits 2.17.1 draft-ietf-pmol-metrics-framework-08.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document seems to contain a disclaimer for pre-RFC5378 work, and may have content which was first submitted before 10 November 2008. The disclaimer is necessary when there are original authors that you have been unable to contact, or if some do not wish to grant the BCP78 rights to the IETF Trust. If you are able to get all authors (current and original) to grant those rights, you can and should remove the disclaimer; otherwise, the disclaimer is needed and you can ignore this comment. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (January 28, 2011) is 4836 days in the past. Is this intentional? Checking references for intended status: Best Current Practice ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) -- Obsolete informational reference (is this intentional?): RFC 793 (Obsoleted by RFC 9293) -- Obsolete informational reference (is this intentional?): RFC 4960 (Obsoleted by RFC 9260) -- Obsolete informational reference (is this intentional?): RFC 5101 (Obsoleted by RFC 7011) -- Obsolete informational reference (is this intentional?): RFC 5102 (Obsoleted by RFC 7012) Summary: 0 errors (**), 0 flaws (~~), 1 warning (==), 6 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group A. Clark 3 Internet-Draft Telchemy Incorporated 4 Intended status: BCP B. Claise 5 Expires: August 1, 2011 Cisco Systems, Inc. 6 January 28, 2011 8 Guidelines for Considering New Performance Metric Development 9 draft-ietf-pmol-metrics-framework-08 11 Abstract 13 This document describes a framework and a process for developing 14 Performance Metrics of protocols and applications transported over 15 IETF-specified protocols, and that can be used to characterize 16 traffic on live networks and services. 18 Requirements Language 20 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 21 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 22 document are to be interpreted as described in RFC 2119 [RFC2119]. 24 Status of this Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 
34 Internet-Drafts are draft documents valid for a maximum of six months
35 and may be updated, replaced, or obsoleted by other documents at any
36 time. It is inappropriate to use Internet-Drafts as reference
37 material or to cite them other than as "work in progress."

39 This Internet-Draft will expire on August 1, 2011.

41 Copyright Notice

43 Copyright (c) 2011 IETF Trust and the persons identified as the
44 document authors. All rights reserved.

46 This document is subject to BCP 78 and the IETF Trust's Legal
47 Provisions Relating to IETF Documents
48 (http://trustee.ietf.org/license-info) in effect on the date of
49 publication of this document. Please review these documents
50 carefully, as they describe your rights and restrictions with respect
51 to this document. Code Components extracted from this document must
52 include Simplified BSD License text as described in Section 4.e of
53 the Trust Legal Provisions and are provided without warranty as
54 described in the Simplified BSD License.

56 This document may contain material from IETF Documents or IETF
57 Contributions published or made publicly available before November
58 10, 2008. The person(s) controlling the copyright in some of this
59 material may not have granted the IETF Trust the right to allow
60 modifications of such material outside the IETF Standards Process.
61 Without obtaining an adequate license from the person(s) controlling
62 the copyright in such materials, this document may not be modified
63 outside the IETF Standards Process, and derivative works of it may
64 not be created outside the IETF Standards Process, except to format
65 it for publication as an RFC or to translate it into languages other
66 than English.

68 Table of Contents

70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4
71 1.1. Background and Motivation . . . . . . . . . . . . . . . . 4
72 1.2. Organization of this document . . . . . . . . . . . . . . 5
73 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6
74 2.1. Performance Metrics Entity . . . . . . . . . . . . . . . . 6
75 2.2. Quality of Service . . . . . . . . . . . . . . . . . . . . 6
76 2.3. Quality of Experience . . . . . . . . . . . . . . . . . . 6
77 2.4. Performance Metric . . . . . . . . . . . . . . . . . . . . 7
78 3. Purpose and Scope . . . . . . . . . . . . . . . . . . . . . . 7
79 4. Relationship between QoS, QoE and Application-specific
80 Performance Metrics . . . . . . . . . . . . . . . . . . . . . 7
81 5. Performance Metrics Development . . . . . . . . . . . . . . . 8
82 5.1. Identifying and Categorizing the Audience . . . . . . . . 8
83 5.2. Definitions of a Performance Metric . . . . . . . . . . . 9
84 5.3. Computed Metrics . . . . . . . . . . . . . . . . . . . . . 10
85 5.3.1. Composed Metrics . . . . . . . . . . . . . . . . . . . 10
86 5.3.2. Index . . . . . . . . . . . . . . . . . . . . . . . . 10
87 5.4. Performance Metric Specification . . . . . . . . . . . . . 11
88 5.4.1. Outline . . . . . . . . . . . . . . . . . . . . . . . 11
89 5.4.2. Normative parts of Performance Metric definition . . . 11
90 5.4.3. Informative parts of Performance Metric definition . . 12
91 5.4.4. Performance Metric Definition Template . . . . . . . . 13
92 5.4.5. Example: Burst Packet Loss Frequency . . . . . . . . . 14
93 5.5. Dependencies . . . . . . . . . . . . . . . . . . . . . . . 15
94 5.5.1. Timing accuracy . . . . . . . . . . . . . . . . . . . 15
95 5.5.2. Dependencies of Performance Metric definitions on
96 related events or metrics . . . . . . . . . . . . . . 15
97 5.5.3. Relationship between Performance Metric and lower
98 layer Performance Metrics . . . . . . . . . . . . . . 16
99 5.5.4. Middlebox presence . . . . . . . . . . . . . . . . . . 16
100 5.6. Organization of Results . . . . . . . . . . . . . . . . . 16
101 5.7. Parameters, the variables of a Performance Metric . . . . 16
102 6. Performance Metric Development Process . . . . . . . . . . . . 17
103 6.1. New Proposals for Metrics . . . . . . . . . . . . . . . . 17
104 6.2. Reviewing Metrics . . . . . . . . . . . . . . . . . . . . 17
105 6.3. Proposal Approval . . . . . . . . . . . . . . . . . . . . 18
106 6.4. Performance Metrics Entity Interaction with other WGs . . 18
107 6.5. Standards Track Performance Metrics . . . . . . . . . . . 19
108 6.6. Recommendations . . . . . . . . . . . . . . . . . . . . . 19
109 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 19
110 8. Security Considerations . . . . . . . . . . . . . . . . . . . 19
111 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 20
112 10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 20
113 10.1. Normative References . . . . . . . . . . . . . . . . . . . 20
114 10.2. Informative References . . . . . . . . . . . . . . . . . . 20
115 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 22

117 1. Introduction

119 Many networking technologies, applications, or services are
120 distributed in nature, and their performance may be impacted by IP
121 impairments, server capacity, congestion, and other factors. It is
122 important to measure the performance of applications and services to
123 ensure that quality objectives are being met and to support problem
124 diagnosis. Standardized metrics help to ensure that performance
125 measurement is implemented consistently and facilitate interpretation
126 and comparison.

128 There are at least three phases in the development of performance
129 standards. They are:

131 1. Definition of a Performance Metric and its units of measure

133 2. Specification of a method of measurement

135 3. Specification of the reporting format

137 During the development of metrics, it is often useful to define
138 performance objectives and expected value ranges. However, this is
139 not defined as part of the metric specification.

141 The intended audience for this document includes, but is not limited
142 to, IETF participants who write Performance Metrics documents in the
143 IETF, reviewers of such documents, and members of the Performance
144 Metrics Entity.

146 1.1. Background and Motivation

148 Although the IETF has two active Working Groups (WGs) dedicated to
149 the development of Performance Metrics, they each have strict
150 limitations in their charters:

152 - The Benchmarking Methodology WG has addressed a range of networking
153 technologies and protocols in its long history (such as IEEE 802.3,
154 ATM, Frame Relay, and Routing Protocols), but the charter strictly
155 limits its performance characterizations to the laboratory
156 environment.

158 - The IP Performance Metrics (IPPM) WG has developed a set of
159 standard metrics that can be applied to the quality, performance, and
160 reliability of Internet data delivery services. IPPM metrics
161 development is applicable to live IP networks, but the WG is
162 specifically prohibited from developing metrics that characterize
163 traffic at upper layers, such as a VoIP stream.
165 A Birds Of a Feather (BOF) held at IETF-69 introduced the IETF
166 community to the possibility of a generalized activity to define
167 standardized Performance Metrics. The existence of a growing list of
168 Internet-Drafts on Performance Metrics (with community interest in
169 development, but in un-chartered areas) illustrates the need for
170 additional performance work. The majority of people present at the
171 BOF supported the proposition that the IETF should be working in
172 these areas, and no one objected to any of the proposals.

174 Previous IETF work related to reporting of application Performance
175 Metrics includes the "Real-time Application Quality-of-Service
176 Monitoring (RAQMON) Framework" RFC 4710 [RFC4710], which extends the
177 remote network monitoring (RMON) family of specifications to allow
178 real-time quality-of-service (QoS) monitoring of various applications
179 that run on devices such as IP phones, pagers, Instant Messaging
180 clients, mobile phones, and various other handheld computing devices.
181 Furthermore, the "RTP Control Protocol Extended Reports (RTCP XR)"
182 RFC 3611 [RFC3611] and the "SIP RTCP Summary Report Protocol"
183 [RFC6035] are protocols that support the real-time reporting of Voice
184 over IP and other applications running on devices such as IP phones
185 and mobile handsets.

187 The IETF is also actively involved in the development of reliable
188 transport protocols, such as TCP [RFC0793] or SCTP [RFC4960], which
189 affect the relationship between IP performance and application
190 performance.

192 Thus there is a gap in the currently chartered coverage of IETF WGs:
193 development of Performance Metrics for protocols above and below the
194 IP-layer that can be used to characterize performance on live
195 networks.

197 This document refers to the implementation of a Performance Metrics
198 Entity, whose goal is to advise and support Performance Metric
199 development in the IETF. A recommendation about the Performance
200 Metrics Entity is made in Section 6.6.

202 Similar to the "Guidelines for Considering Operations and
203 Management of New Protocols and Protocol Extensions" RFC 5706
204 [RFC5706], which is the reference document for the IETF Operations
205 Directorate, this document should be consulted as part of the review
206 of new Performance Metrics.

208 1.2. Organization of this document

210 This document is divided into two major sections beyond the "Purpose
211 and Scope" section. The first is a definition and description of a
212 Performance Metric and its key aspects. The second defines a process
213 to develop these metrics that is applicable to the IETF environment.

215 2. Terminology

217 2.1. Performance Metrics Entity

219 The Performance Metrics Entity is a directorate that coordinates
220 Performance Metric development in the IETF.

222 The Performance Metrics Entity should be composed of experts in the
223 performance community, potentially selected from the IPPM, BMWG, and
224 PMOL WGs.

226 2.2. Quality of Service

228 Quality of Service (QoS) is defined in a similar way to the ITU
229 definition of QoS in E.800 [E.800], i.e.:
230 "Totality of characteristics of a telecommunications service that
231 bear on its ability to satisfy stated and implied needs of the user
232 of the service."

234 2.3. Quality of Experience
236 Quality of Experience (QoE) is defined in a similar way to the ITU
237 "QoS experienced/perceived by customer/user (QoE)" E.800 [E.800],
238 i.e.: "a statement expressing the level of quality that customers/
239 users believe they have experienced."

241 NOTE 1 - The level of QoS experienced and/or perceived by the
242 customer/user may be expressed by an opinion rating.

244 NOTE 2 - QoE has two main components: quantitative and qualitative.
245 The quantitative component can be influenced by the complete end-to-
246 end system effects (including user devices and network
247 infrastructure).

249 NOTE 3 - The qualitative component can be influenced by user
250 expectations, ambient conditions, psychological factors, application
251 context, etc.

253 NOTE 4 - QoE may also be considered as QoS delivered, received, and
254 interpreted by a user with the pertinent qualitative factors
255 influencing his/her perception of the service.

257 2.4. Performance Metric

259 A quantitative measure of performance, specific to an IETF-specified
260 protocol or specific to an application transported over an IETF-
261 specified protocol. Examples of Performance Metrics are: the FTP
262 response time for a complete file download, the DNS response time to
263 resolve an IP address, a database logging time, etc.

265 3. Purpose and Scope

267 The purpose of this document is to define a framework and a process
268 for developing Performance Metrics for protocols above and below the
269 IP-layer (such as IP-based applications that operate over reliable or
270 datagram transport protocols) that can be used to characterize
271 traffic on live networks and services. As such, this document does
272 not define any Performance Metrics.

274 The scope of this document covers guidelines for considering new
275 Performance Metric development. However, this document is not
276 intended to supersede existing working methods within WGs that have
277 existing chartered work in this area.

279 This process is not intended to govern Performance Metric development
280 in existing IETF WGs that are focused on metrics development, such as
281 IPPM and BMWG. However, this guidelines document may be useful in
282 these activities, and MAY be applied where appropriate. A typical
283 example is the development of Performance Metrics to be exported with
284 the IPFIX protocol RFC 5101 [RFC5101], with specific IPFIX
285 information elements RFC 5102 [RFC5102], which would benefit from the
286 framework in this document.

288 The framework in this document applies to Performance Metrics derived
289 from both active and passive measurements.

291 4. Relationship between QoS, QoE and Application-specific Performance
292 Metrics

294 Network QoS deals with network and network protocol performance,
295 while QoE deals with the assessment of a user's experience in the
296 context of a task or a service. As a result, application-specific
297 Performance Metrics provide opportunities to quantify performance at
298 the layers between IP and the user. For
299 example, network QoS metrics (packet loss, delay, and delay variation
300 [RFC5481]) can be used to estimate application-specific Performance
301 Metrics (de-jitter buffer size and RTP-layer packet loss), then
302 combined with other known aspects of a VoIP application (such as
303 codec type) to estimate a Mean Opinion Score (MOS) [P.800]. However,
304 the QoE for a particular VoIP user depends on the specific context,
305 such as a casual conversation, a business conference call, or an
306 emergency call. Finally, QoS and application-specific Performance
307 Metrics are quantitative, while QoE is qualitative. Also, network QoS
308 and application-specific Performance Metrics can be directly or
309 indirectly evident to the user, while QoE is directly evident.
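As an illustration of this layering, the following sketch estimates a
VoIP MOS from a measured packet loss ratio using a simplified form of
the ITU-T G.107 E-model [G.107]. It is illustrative only: the codec
constants (Ie, Bpl), the burst-ratio handling, and the use of R = 93.2
as the zero-impairment baseline with delay ignored are assumptions
made for this example, not values defined by this framework.

   /*
    * Illustrative sketch only: estimate a listening-quality MOS from
    * a measured packet loss ratio, using a simplified form of the
    * ITU-T G.107 E-model.  The constants (Ie, Bpl) and the baseline
    * R = 93.2 with delay ignored are assumptions for illustration.
    */
   #include <stdio.h>

   static double estimate_mos(double loss_ratio, double burst_ratio)
   {
       const double Ie  = 0.0;   /* assumed codec impairment factor    */
       const double Bpl = 25.1;  /* assumed packet-loss robustness     */
       double ppl = loss_ratio * 100.0;          /* loss in percent    */
       double ie_eff = Ie + (95.0 - Ie) * ppl / (ppl / burst_ratio + Bpl);
       double r = 93.2 - ie_eff;                 /* delay terms ignored */

       if (r < 0.0)
           return 1.0;
       if (r > 100.0)
           return 4.5;
       return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7.0e-6;
   }

   int main(void)
   {
       /* the same 5% loss ratio scores somewhat worse when bursty */
       printf("MOS, random loss: %.2f\n", estimate_mos(0.05, 1.0));
       printf("MOS, bursty loss: %.2f\n", estimate_mos(0.05, 2.0));
       return 0;
   }

A production model would also account for delay, codec behavior, and
recency effects; the point here is only that an application-specific
Performance Metric sits between the network QoS inputs and the QoE
estimate.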
311 5. Performance Metrics Development

313 This section provides key definitions and qualifications of
314 Performance Metrics.

316 5.1. Identifying and Categorizing the Audience

318 Many of the aspects of metric definition and reporting, even the
319 selection or determination of the essential metrics, depend on who
320 will use the results, and for what purpose. Examples of how the
321 reports may be used include maintaining service quality and
322 identifying and quantifying problems. The question, "How will the
323 results be used?" usually yields important factors to consider when
324 developing Performance Metrics.

326 All documents defining Performance Metrics SHOULD identify the
327 primary audience and its associated requirements. The audience can
328 influence both the definition of metrics and the methods of
329 measurement.

331 The key areas of variation between different metric users include:

333 o Suitability of passive measurements of live traffic, or active
334 measurements using dedicated traffic

336 o Measurement in a laboratory environment, or on a network of
337 deployed devices

339 o Accuracy of the results

341 o Access to measurement points and configuration information

343 o Measurement topology (point-to-point, point-to-multipoint)

345 o Scale of the measurement system

347 o Measurements conducted on-demand, or continuously

348 o Required reporting formats and periods

350 5.2. Definitions of a Performance Metric

352 A metric is a measure of an observable behavior of a networking
353 technology, an application, or a service. Most of the time, the
354 metric can be directly measured. Sometimes, however, the metric is
355 computed: its definition assumes some implicit or explicit
356 underlying statistical process. In such a case, the metric is an
357 estimate of a parameter of this process, assuming that the
358 statistical process closely models the behavior of the system.

360 A metric should serve some defined purpose. This may include the
361 measurement of capacity, quantifying how bad some problem is,
362 measurement of service level, problem diagnosis or location, and
363 other such uses. A metric may also be an input to some other process,
364 for example, the computation of a composite metric or a model or
365 simulation of a system. Tests of the "usefulness" of a metric
366 include:

368 (i) the degree to which its absence would cause significant loss
369 of information on the behavior or performance of the application
370 or system being measured

372 (ii) the correlation between the Performance Metric and the QoS
373 [G.1000] and QoE delivered to the user (person or other
374 application)

376 (iii) the degree to which the metric is able to support the
377 identification and location of problems affecting service quality.

379 (iv) the requirement to develop policies (Service Level Agreement,
380 and potentially Service Level Contract) based on the metric.

382 For example, consider a distributed application operating over a
383 network connection that is subject to packet loss. A Packet Loss
384 Rate (PLR) metric is defined as the mean packet loss ratio over some
385 time period. If the application performs poorly over network
386 connections with a high packet loss ratio and always performs well
387 when the packet loss ratio is zero, then the PLR metric is useful to
388 some degree. Some applications are sensitive to short periods of high
389 loss (bursty loss) and are relatively insensitive to isolated packet
390 loss events; for this type of application, there would be very weak
391 correlation between PLR and application performance. A "better"
392 metric would consider both the packet loss ratio and the distribution
393 of loss events. If application performance is degraded when the PLR
394 exceeds some rate, then a useful metric may be a measure of the
395 duration and frequency of periods during which the PLR exceeds that
396 rate.
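A sketch of such a metric follows. It is illustrative only: the
per-interval data layout, the interval length, and the threshold are
assumptions, and the function simply counts high-loss periods and
their total duration.

   /*
    * Illustrative sketch: given a per-interval record of packets sent
    * and packets lost, report the number of periods during which the
    * packet loss ratio (PLR) exceeded a threshold, and the total
    * duration of those periods in intervals.  The data layout and the
    * interval length are assumptions made for this example.
    */
   #include <stddef.h>

   struct interval { unsigned sent; unsigned lost; };

   static void high_loss_periods(const struct interval *iv, size_t n,
                                 double plr_threshold,
                                 unsigned *periods, unsigned *duration)
   {
       int in_period = 0;

       *periods = 0;
       *duration = 0;
       for (size_t i = 0; i < n; i++) {
           double plr = iv[i].sent ?
                        (double)iv[i].lost / (double)iv[i].sent : 0.0;
           if (plr > plr_threshold) {
               if (!in_period)
                   (*periods)++;    /* a new high-loss period begins */
               in_period = 1;
               (*duration)++;       /* accumulate period length      */
           } else {
               in_period = 0;
           }
       }
   }

Reporting the pair (frequency, duration) preserves the
loss-distribution information that a single mean PLR discards.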
398 5.3. Computed Metrics

400 5.3.1. Composed Metrics

402 Some metrics may not be measured directly, but can be composed from
403 base metrics that have been measured. A composed metric is derived
404 from other metrics by applying a deterministic process or function
405 (e.g., a composition function). The process may use metrics that are
406 identical to the metric being composed, or metrics that are
407 dissimilar, or some combination of both types. Usually the base
408 metrics have a limited scope in time or space, and they can be
409 combined to estimate the performance of some larger entities.

411 Some examples of composed metrics and composed metric definitions
412 are:

414 Spatial composition is defined as the composition of metrics of the
415 same type with differing spatial domains [RFC5835] [RFC6049]. For
416 spatially composed metrics to be meaningful, the spatial domains
417 should be non-overlapping and contiguous, and the composition
418 operation should be mathematically appropriate for the type of
419 metric.

421 Temporal composition is defined as the composition of sets of metrics
422 of the same type with differing time spans [RFC5835]. For temporally
423 composed metrics to be meaningful, the time spans should be non-
424 overlapping and contiguous, and the composition operation should be
425 mathematically appropriate for the type of metric.

427 Temporal aggregation is a summarization of metrics into a smaller
428 number of metrics that relate to the total time span covered by the
429 original metrics. An example would be to compute the minimum,
430 maximum, and average values of a series of time-sampled values of a
431 metric (see the sketch at the end of this section).

433 In the context of flow records in IP Flow Information eXport (IPFIX),
434 the IPFIX Mediation: Framework [I-D.ietf-ipfix-mediators-framework]
435 also discusses some aspects of temporal and spatial composition.
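The following sketch illustrates the temporal aggregation example
above: a series of time-sampled metric values is summarized into its
minimum, maximum, and average over the total time span. The data
layout and type names are assumptions made for this example.

   /*
    * Illustrative sketch of temporal aggregation: summarize a series
    * of time-sampled metric values (e.g., per-minute delay samples)
    * into the minimum, maximum, and average over the total time span.
    * The structure and names are assumptions for this example.
    */
   #include <stddef.h>

   struct aggregate { double min; double max; double avg; };

   /* caller must ensure n > 0 */
   static struct aggregate aggregate_samples(const double *v, size_t n)
   {
       struct aggregate a = { v[0], v[0], 0.0 };
       double sum = 0.0;

       for (size_t i = 0; i < n; i++) {
           if (v[i] < a.min) a.min = v[i];
           if (v[i] > a.max) a.max = v[i];
           sum += v[i];
       }
       a.avg = sum / (double)n;
       return a;
   }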
437 5.3.2. Index

439 An Index is a metric for which the output value range has been
440 selected for convenience or clarity, and the behavior of which is
441 selected to support ease of understanding; for example, the R Factor
442 [G.107]. The deterministic function for an index is often developed
443 after the index range and behavior have been determined.

445 5.4. Performance Metric Specification

447 5.4.1. Outline

449 A Performance Metric definition MUST have a normative part that
450 defines what the metric is and how it is measured or computed, and
451 SHOULD have an informative part that describes the Performance Metric
452 and its application.

454 5.4.2. Normative parts of Performance Metric definition

456 The normative part of a Performance Metric definition MUST define at
457 least the following:

459 (i) Metric Name

461 Performance Metric names MUST be unique within the set of metrics
462 being defined and MAY be descriptive.

464 (ii) Metric Description

466 The Performance Metric description MUST explain what the metric is,
467 what is being measured, and how this relates to the performance of
468 the system being measured.

470 (iii) Method of Measurement or Calculation

472 The method of measurement or calculation MUST define what is being
473 measured or computed and the specific algorithm to be used. Does the
474 measurement involve active or only passive measurements? Terms such
475 as "average" should be qualified (e.g., running average or average
476 over some interval). Exception cases SHOULD also be defined with the
477 appropriate handling method. For example, there are a number of
478 commonly used metrics related to packet loss; these often do not
479 define the criteria by which a packet is determined to be lost (vs.
480 very delayed) or how duplicate packets are handled. For example, if
481 the average packet loss rate during a time interval is reported, and
482 a packet's arrival is delayed from one interval to the next, then was
483 it "lost" in the interval in which it should have arrived, or should
484 it be counted as received? (A sketch of one possible handling
485 appears at the end of this section.)

486 Some parameters linked to the method MAY also be reported, in order
487 to fully interpret the Performance Metric: for example, the time
488 interval, the load, the minimum packet loss, the potential
489 measurement errors and their sources, the attainable accuracy of the
490 metric (e.g., +/- 0.1), etc.

492 (iv) Units of measurement
493 The units of measurement MUST be clearly stated.

495 (v) Measurement Point(s)

497 If the measurement is specific to a measurement point, this SHOULD be
498 defined. The measurement domain MAY also be defined. Specifically,
499 if measurement points are spread across domains, the measurement
500 domain (intra-, inter-) is another factor to consider.

502 In some cases, the measurement requires multiple measurement points:
503 all measurement points SHOULD be defined, including the measurement
504 domain(s).

506 (vi) Measurement timing

508 The acceptable range of timing intervals or sampling intervals for a
509 measurement and the timing accuracy required for such intervals MUST
510 be specified. Short sampling intervals or frequent samples provide a
511 rich source of information that can help to assess application
512 performance but may lead to excessive measurement data. Long
513 measurement or sampling intervals reduce the amount of reported and
514 collected data, but that data may be insufficient to understand
515 application performance or service quality, since the measured
516 quantity may vary significantly with time.

518 In the case of multiple measurement points, the potential requirement
519 for synchronized clocks must be clearly specified. In the specific
520 example of the IP delay variation application metric, the different
521 aspects of synchronized clocks are discussed in [RFC5481].
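The following sketch illustrates the kind of exception handling
called for under item (iii) above: a receiver-side classifier that
uses a sequence number and a fixed lateness threshold to decide
whether an arriving packet counts as received, is ignored as a
duplicate, or has already been declared lost. The threshold value,
types, and names are assumptions for illustration, not a prescribed
method.

   /*
    * Illustrative sketch of exception handling for a packet loss
    * metric: decide, per arriving packet, whether it counts as
    * received, is a duplicate, or arrived after it was declared lost.
    * The threshold and data structures are assumed; sequence number
    * wrap-around over long sessions is ignored in this sketch.
    */
   #include <stdbool.h>
   #include <stdint.h>

   #define LATE_THRESHOLD_MS 500   /* assumed "lost vs. very delayed" cutoff */

   enum verdict { PKT_RECEIVED, PKT_DUPLICATE, PKT_TOO_LATE };

   struct rx_state {
       bool seen[65536];           /* sequence numbers already counted */
   };

   static enum verdict classify(struct rx_state *s, uint16_t seq,
                                int64_t lateness_ms)
   {
       if (s->seen[seq])
           return PKT_DUPLICATE;   /* count a sequence number only once */
       s->seen[seq] = true;
       if (lateness_ms > LATE_THRESHOLD_MS)
           return PKT_TOO_LATE;    /* already counted as lost           */
       return PKT_RECEIVED;
   }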
523 5.4.3. Informative parts of Performance Metric definition

525 The informative part of a Performance Metric specification is
526 intended to support the implementation and use of the metric. This
527 part SHOULD provide the following data:

529 (i) Implementation

531 The implementation description MAY be in the form of text, an
532 algorithm, or example software. The objective of this part of the
533 metric definition is to assist implementers in achieving consistent results.

535 (ii) Verification

537 The Performance Metric definition SHOULD provide guidance on
538 verification testing. This may be in the form of test vectors, a
539 formal verification test method, or informal advice.

541 (iii) Use and Applications

543 The use and applications description is intended to assist the "user"
544 in understanding how, when, and where the metric can be applied, and
545 what significance the value range for the metric may have. This MAY
546 include a definition of the "typical" and "abnormal" range of the
547 Performance Metric, if this is not apparent from the nature of the
548 metric. The description MAY include information about the influence
549 of extreme measurement values, i.e., whether the Performance Metric
550 is sensitive to outliers. The Use and Applications section SHOULD
551 also include the security implications in the description.

553 For example:

555 (a) it is fairly intuitive that a lower packet loss ratio would
556 equate to better performance. However, the user may not know the
557 significance of a given packet loss ratio.

559 (b) the speech level of a telephone signal is commonly expressed in
560 dBm0. If the user is presented with:

562 Speech level = -7 dBm0

564 this is not intuitively understandable, unless the user is a
565 telephony expert. If the metric definition explains that the typical
566 range is -18 to -28 dBm0, that a value higher than -18 dBm0 means the
567 signal may be too high (loud), and that a value lower than -28 dBm0
568 means the signal may be too low (quiet), then the metric is much easier to interpret.

570 (iv) Reporting Model

572 The reporting model definition is intended to make any relationship
573 between the metric and the reporting model clear. There are often
574 implied relationships between the method of reporting metrics and the
575 metric itself; however, these are often not made apparent to the
576 implementer. For example, if the metric is a short-term running
577 average of packet delay variation (e.g., RFC 3550 [RFC3550]) and this
578 value is reported at intervals of 6-10 seconds, the resulting
579 measurement may have limited accuracy when packet delay variation is
580 non-stationary.
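For instance, the RFC 3550 [RFC3550] interarrival jitter is exactly
such a short-term running estimate, updated on every received packet
with a gain of 1/16, roughly as in the sketch below. The structure,
names, and timestamp handling are illustrative assumptions; only the
update formula is taken from RFC 3550.

   /*
    * Illustrative sketch of the running interarrival jitter estimate
    * of RFC 3550, Section 6.4.1: J = J + (|D| - J)/16, updated for
    * every received packet.  Timestamps are in RTP timestamp units;
    * the structure and names are assumptions for this example.
    */
   #include <stdint.h>

   struct jitter_state {
       uint32_t prev_transit;   /* (arrival - RTP timestamp), previous packet */
       double   jitter;         /* running estimate, J                        */
       int      have_prev;
   };

   static void update_jitter(struct jitter_state *s,
                             uint32_t arrival_ts, uint32_t rtp_ts)
   {
       uint32_t transit = arrival_ts - rtp_ts;

       if (s->have_prev) {
           int32_t d = (int32_t)(transit - s->prev_transit);
           if (d < 0)
               d = -d;
           s->jitter += ((double)d - s->jitter) / 16.0;
       }
       s->prev_transit = transit;
       s->have_prev = 1;
   }

Reporting only an occasional snapshot of this estimate, as in the
example above, discards the history between reports; that is the kind
of implied relationship between metric and reporting model that the
definition should state.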
582 5.4.4. Performance Metric Definition Template

584 Normative

586 o Metric Name

587 o Metric Description

589 o Method of Measurement or Calculation

591 o Units of Measurement

593 o Measurement Point(s) with potential Measurement Domain

595 o Measurement Timing

597 Informative

599 o Implementation

601 o Verification

603 o Use and Applications

605 o Reporting Model

607 5.4.5. Example: Burst Packet Loss Frequency

609 The burst packet loss frequency can be observed at different layers.
610 The following example is specific to RTP RFC 3550 [RFC3550].

612 Metric Name: BurstPacketLossFrequency

614 Metric Description: A burst of packet loss is defined as the longest
615 period starting and ending with lost packets during which no more
616 than Gmin consecutive packets are received. The
617 BurstPacketLossFrequency is defined as the number of bursts of packet
618 loss occurring during a specified time interval (e.g., per minute,
619 per hour, per day). If Gmin is set to 0, then a burst of packet loss
620 would comprise only consecutive lost packets, whereas a Gmin of 16
621 would define bursts as periods of both lost and received packets
622 (sparse bursts) having a loss rate of greater than 5.9%.

624 Method: Bursts may be detected using the Markov Model algorithm
625 defined in RFC 3611 [RFC3611]. The BurstPacketLossFrequency is
626 calculated by counting the number of burst events within the defined
627 measurement interval. A burst that spans the boundary between two
628 time intervals shall be counted within the later of the two
629 intervals.

631 Units of Measurement: Bursts per time interval (e.g., per second,
632 per hour, per day)

634 Measurement Timing: This metric can be used over a wide range of time
635 intervals. Using time intervals of longer than one hour may prevent
636 the detection of variations in the value of this metric due to time-
637 of-day changes in network load. Timing intervals should not vary in
638 duration by more than +/- 2%.

640 Implementation Guidelines: See RFC 3611 [RFC3611].

642 Verification Testing: See Appendix for C code to generate test
643 vectors.

645 Use and Applications: This metric is useful for detecting IP network
646 transients that affect the performance of applications such as Voice
647 over IP or IP Video. The value of Gmin may be selected to ensure
648 that bursts correspond to a packet loss ratio that would degrade the
649 performance of the application of interest (e.g., 16 for VoIP).

651 Reporting Model: This metric needs to be associated with a defined
652 time interval, which could be defined by fixed intervals or by a
653 sliding window.
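A sketch of a burst counter following the definition given above (a
burst starts and ends with a lost packet and never contains more than
Gmin consecutive received packets) is shown below. It is an
illustrative interpretation of that definition, not the RFC 3611
reference implementation, and the data layout is assumed.

   /*
    * Illustrative sketch of a BurstPacketLossFrequency counter based
    * on the burst definition above: a lost packet opens a burst, and
    * a run of more than Gmin consecutively received packets ends it.
    * Not the RFC 3611 reference implementation; layout is assumed.
    */
   #include <stddef.h>

   static unsigned count_bursts(const int *lost, size_t n, unsigned gmin)
   {
       unsigned bursts = 0;
       unsigned run_received = 0;   /* consecutive received packets  */
       int in_burst = 0;

       for (size_t i = 0; i < n; i++) {
           if (lost[i]) {
               if (!in_burst)
                   bursts++;        /* a lost packet opens a burst   */
               in_burst = 1;
               run_received = 0;
           } else if (in_burst) {
               run_received++;
               if (run_received > gmin)
                   in_burst = 0;    /* > Gmin receptions end a burst */
           }
       }
       return bursts;
   }

With Gmin set to 0, only runs of consecutive lost packets are counted
as bursts, matching the description above; dividing the count by the
measurement interval yields the frequency.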
655 5.5. Dependencies

657 5.5.1. Timing accuracy

659 The accuracy of the timing of a measurement may affect the accuracy
660 of the Performance Metric. This may not materially affect a sampled-
661 value metric; however, it would affect an interval-based metric. Some
662 metrics, for example the number of events per time interval, would be
663 directly affected; for example, a 10% variation in the time interval
664 would lead directly to a 10% variation in the measured value. Other
665 metrics, such as the average packet loss ratio during some time
666 interval, would be affected to a lesser extent.

668 If it is necessary to correlate sampled values or intervals, then it
669 is essential that the accuracy of sampling time and interval start/
670 stop times is sufficient for the application (for example, +/- 2%).

672 5.5.2. Dependencies of Performance Metric definitions on related events
673 or metrics

675 Performance Metric definitions may explicitly or implicitly rely on
676 factors that may not be obvious. For example, the recognition of a
677 packet as being "lost" relies on having some method to know that the
678 packet was actually lost (e.g., an RTP sequence number) and on some
679 time threshold after which a non-received packet is declared as lost.
680 It is important that any such dependencies are recognized and
681 incorporated into the metric definition.

683 5.5.3. Relationship between Performance Metric and lower layer
684 Performance Metrics

686 Lower layer Performance Metrics may be used to compute or infer the
687 performance of higher layer applications, potentially using an
688 application performance model. The accuracy of this will depend on
689 many factors, including:

691 (i) The completeness of the set of metrics, i.e., are there metrics
692 for all the input values to the application performance model?

694 (ii) Correlation between input variables (being measured) and
695 application performance

697 (iii) Variability in the measured metrics and how this variability
698 affects application performance

700 5.5.4. Middlebox presence

702 The presence of a middlebox RFC 3303 [RFC3303] (e.g., a proxy, network
703 address translator (NAT), redirect server, session border controller
704 (SBC), or application layer gateway (ALG)) may add variability to, or
705 restrict the scope of, measurements of a metric. For example, an SBC
706 that does not process RTP loopback packets may block or locally
707 terminate this traffic rather than pass it through to its target.

709 5.6. Organization of Results

711 The IPPM Framework [RFC2330] organizes the results of metrics into
712 three related notions:

714 o singleton, an elementary instance, or "atomic" value.

716 o sample, a set of singletons with some common properties and some
717 varying properties.

719 o statistic, a value derived from a sample through deterministic
720 calculation, such as the mean.

722 Many Performance Metrics MAY use this organization for the results,
723 with or without the term names used by the IPPM WG. Section 11 of
724 RFC 2330 [RFC2330] should be consulted for further details.
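The sketch below illustrates that organization for a hypothetical
one-way delay metric: singletons are the atomic measured values, a
sample groups singletons sharing a context, and a statistic (here,
the mean) is derived from the sample by a deterministic calculation.
The types and names are illustrative assumptions, not taken from
RFC 2330.

   /*
    * Illustrative sketch of the singleton / sample / statistic
    * organization, using one-way delay values as the example metric.
    * Types and names are assumptions, not taken from RFC 2330.
    */
   #include <stddef.h>

   typedef double singleton_t;        /* one atomic measured value    */

   struct sample {                    /* singletons sharing a context */
       const singleton_t *values;
       size_t count;
   };

   static double statistic_mean(const struct sample *s)
   {
       double sum = 0.0;              /* a statistic: derived from a  */
                                      /* sample deterministically     */
       for (size_t i = 0; i < s->count; i++)
           sum += s->values[i];
       return s->count ? sum / (double)s->count : 0.0;
   }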
726 5.7. Parameters, the variables of a Performance Metric

728 Metrics are completely defined when all options and input variables
729 have been identified and considered. These variables are sometimes
730 left unspecified in a metric definition, and their general name
731 indicates that the user must set them and report them with the
732 results. Such variables are called "parameters" in the IPPM metric
733 template. The scope of the metric, the time at which it was
734 conducted, the settings for timers, and the thresholds for counters
735 are all examples of parameters.

737 All documents defining Performance Metrics SHOULD identify all key
738 parameters for each Performance Metric.

740 6. Performance Metric Development Process

742 6.1. New Proposals for Metrics

744 This process is intended to add considerations to the
745 processes for adopting new work as described in RFC 2026 [RFC2026]
746 and RFC 2418 [RFC2418]. The following entry criteria will be
747 considered for each proposal.

749 Proposals SHOULD be prepared as Internet-Drafts, describing the
750 Performance Metric and conforming to the qualifications above as much
751 as possible. Proposals SHOULD be deliverables of the corresponding
752 protocol development WG charters. As such, proposals SHOULD be
753 vetted by that WG prior to discussion by the Performance Metrics
754 Entity. This aspect of the process includes an assessment of the
755 need for the proposed Performance Metrics and an assessment of the
756 support for their development in the IETF.

758 Proposals SHOULD include an assessment of interaction and/or overlap
759 with work in other Standards Development Organizations. Proposals
760 SHOULD identify additional expertise that might be consulted.

762 Proposals SHOULD specify the intended audience and users of the
763 Performance Metrics. The development process encourages
764 participation by members of the intended audience.

766 Proposals SHOULD identify any security and IANA requirements.
767 Security issues could potentially involve revealing user-identifying
768 data or the potential misuse of active test tools. IANA
769 considerations may involve the need for a Performance Metrics
770 registry.

772 6.2. Reviewing Metrics

774 Each Performance Metric SHOULD be assessed according to the following
775 list of qualifications:

777 o Unambiguously defined?

779 o Units of Measure Specified?

781 o Measurement Interval Specified?

783 o Measurement Errors Identified?

785 o Repeatable?

787 o Implementable?

789 o Assumptions concerning underlying process?

791 o Use cases?

793 o Correlation with application performance/user experience?

795 o Security impact?

797 6.3. Proposal Approval

799 New work item proposals SHALL be approved using the existing IETF
800 process.

802 In all cases, the proposal will need to achieve consensus, in the
803 corresponding protocol development WG (or alternatively, an "Area" WG
804 with a broad charter), that there is interest and a need for the work.

806 The approval SHOULD include the following steps:

808 o consultation with the Performance Metrics Entity, using this
809 document

811 o consultation with the Area Director(s)

813 o possibly, IESG approval of a new or revised charter for the WG

815 6.4. Performance Metrics Entity Interaction with other WGs

817 The Performance Metrics Entity SHALL work in partnership with the
818 related protocol development WG when considering an Internet-Draft
819 that specifies Performance Metrics for a protocol. A sufficient
820 number of individuals with expertise must be willing to consult on
821 the draft. If the related WG has concluded, comments on the proposal
822 should still be sought from key RFC authors and former chairs, or
823 from the WG mailing list if it was not closed.

825 A formal review is RECOMMENDED by the time the document is reviewed
826 by the Area Directors or an IETF Last Call is conducted, in the same
827 way that expert reviews are performed by other directorates.

829 Existing mailing lists SHOULD be used; however, a dedicated mailing
830 list MAY be initiated if necessary to facilitate work on a draft.

832 In some cases, it will be appropriate to hold the IETF session
833 discussion during the related protocol WG session, to maximize
834 visibility of the effort to that WG and expand the review.

836 6.5. Standards Track Performance Metrics

838 The Performance Metrics Entity will manage the progression of RFCs
839 along the Standards Track. See [I-D.bradner-metricstest]. This may
840 include the preparation of test plans to examine different
841 implementations of the metrics to ensure that the metric definitions
842 are clear and unambiguous (depending on the final form of the draft
843 above).

845 6.6. Recommendations

847 This document recommends that the Performance Metrics Entity be
848 implemented (according to this memo) as a directorate in one of the
849 IETF Areas, providing advice and support as described in this
850 document to all areas in the IETF.

852 7. IANA Considerations

854 This document makes no request of IANA.

856 Note to RFC EDITOR: this section may be removed on publication as an
857 RFC.

859 8. Security Considerations

861 In general, the existence of a framework for Performance Metric
862 development does not constitute a security issue for the Internet.
863 Performance Metric definitions may introduce security issues, and
864 this framework recommends that those defining Performance Metrics
865 identify any such risk factors.

867 The security considerations that apply to any active measurement of
868 live networks are relevant here. See [RFC4656].

870 The security considerations that apply to any passive measurement of
871 specific packets in live networks are relevant here as well. See the
872 security considerations in [RFC5475].

874 9. Acknowledgements

876 The authors would like to thank Al Morton, Dan Romascanu, Daryl Malas,
877 and Loki Jorgenson for their comments and contributions. The authors
878 would like to thank Aamer Akhter, Yaakov Stein, Carsten Schmoll, and
879 Jan Novak for their reviews.

881 10. References

883 10.1. Normative References

885 [RFC2026] Bradner, S., "The Internet Standards Process -- Revision
886 3", BCP 9, RFC 2026, October 1996.

888 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
889 Requirement Levels", BCP 14, RFC 2119, March 1997.

891 [RFC2418] Bradner, S., "IETF Working Group Guidelines and
892 Procedures", BCP 25, RFC 2418, September 1998.

894 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
895 Zekauskas, "A One-way Active Measurement Protocol
896 (OWAMP)", RFC 4656, September 2006.

898 10.2. Informative References

900 [E.800] "ITU-T Recommendation E.800: SERIES E: OVERALL NETWORK
901 OPERATION, TELEPHONE SERVICE, SERVICE OPERATION AND HUMAN
902 FACTORS".

904 [G.1000] "ITU-T Recommendation G.1000: Communications Quality of
905 Service: A framework and definitions".

907 [G.107] "ITU-T Recommendation G.107: The E-model, a
908 computational model for use in transmission planning".

910 [I-D.bradner-metricstest]
911 Bradner, S. and V. Paxson, "Advancement of metrics
912 specifications on the IETF Standards Track",
913 draft-bradner-metricstest-03 (work in progress),
914 August 2007.

916 [I-D.ietf-ipfix-mediators-framework]
917 Kobayashi, A., Claise, B., Muenz, G., and K. Ishibashi,
918 "IPFIX Mediation: Framework",
919 draft-ietf-ipfix-mediators-framework-09 (work in
920 progress), October 2010.

922 [P.800] "ITU-T Recommendation P.800: Methods for subjective
923 determination of transmission quality".

925 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7,
926 RFC 793, September 1981.

928 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
929 "Framework for IP Performance Metrics", RFC 2330,
930 May 1998.

932 [RFC3303] Srisuresh, P., Kuthan, J., Rosenberg, J., Molitor, A., and
933 A. Rayhan, "Middlebox communication architecture and
934 framework", RFC 3303, August 2002.

936 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V.
937 Jacobson, "RTP: A Transport Protocol for Real-Time
938 Applications", STD 64, RFC 3550, July 2003.

940 [RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control
941 Protocol Extended Reports (RTCP XR)", RFC 3611,
942 November 2003.

944 [RFC4710] Siddiqui, A., Romascanu, D., and E. Golovinsky, "Real-time
945 Application Quality-of-Service Monitoring (RAQMON)
946 Framework", RFC 4710, October 2006.

948 [RFC4960] Stewart, R., "Stream Control Transmission Protocol",
949 RFC 4960, September 2007.

951 [RFC5101] Claise, B., "Specification of the IP Flow Information
952 Export (IPFIX) Protocol for the Exchange of IP Traffic
953 Flow Information", RFC 5101, January 2008.

955 [RFC5102] Quittek, J., Bryant, S., Claise, B., Aitken, P., and J.
956 Meyer, "Information Model for IP Flow Information Export",
957 RFC 5102, January 2008.

959 [RFC5475] Zseby, T., Molina, M., Duffield, N., Niccolini, S., and F.
960 Raspall, "Sampling and Filtering Techniques for IP Packet
961 Selection", RFC 5475, March 2009.

963 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation
964 Applicability Statement", RFC 5481, March 2009.

966 [RFC5706] Harrington, D., "Guidelines for Considering Operations and
967 Management of New Protocols and Protocol Extensions",
968 RFC 5706, November 2009.

970 [RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric
971 Composition", RFC 5835, April 2010.

973 [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich,
974 "Session Initiation Protocol Event Package for Voice
975 Quality Reporting", RFC 6035, November 2010.

977 [RFC6049] Morton, A. and E. Stephan, "Spatial Composition of
978 Metrics", RFC 6049, January 2011.

980 Authors' Addresses

982 Alan Clark
983 Telchemy Incorporated
984 2905 Premiere Parkway, Suite 280
985 Duluth, Georgia 30097
986 USA

988 Phone:
989 Fax:
990 Email: alan.d.clark@telchemy.com
991 URI:

993 Benoit Claise
994 Cisco Systems, Inc.
995 De Kleetlaan 6a b1
996 Diegem 1831
997 Belgium

999 Phone: +32 2 704 5622
1000 Fax:
1001 Email: bclaise@cisco.com
1002 URI: