2 Network Working Group A. Clark 3 Internet-Draft Telchemy Incorporated 4 Intended status: BCP B. Claise 5 Expires: January 29, 2012 Cisco Systems, Inc. 6 July 28, 2011

8 Guidelines for Considering New Performance Metric Development 9 draft-ietf-pmol-metrics-framework-12

11 Abstract

13 This document describes a framework and a process for developing 14 Performance Metrics of protocols and applications transported over 15 IETF-specified protocols; these metrics can be used to characterize 16 traffic on live networks and services.

18 Status of this Memo

20 This Internet-Draft is submitted in full conformance with the 21 provisions of BCP 78 and BCP 79.

23 Internet-Drafts are working documents of the Internet Engineering 24 Task Force (IETF). Note that other groups may also distribute 25 working documents as Internet-Drafts. The list of current 26 Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

28 Internet-Drafts are draft documents valid for a maximum of six months 29 and may be updated, replaced, or obsoleted by other documents at any 30 time.
It is inappropriate to use Internet-Drafts as reference 31 material or to cite them other than as "work in progress." 33 This Internet-Draft will expire on January 29, 2012. 35 Copyright Notice 37 Copyright (c) 2011 IETF Trust and the persons identified as the 38 document authors. All rights reserved. 40 This document is subject to BCP 78 and the IETF Trust's Legal 41 Provisions Relating to IETF Documents 42 (http://trustee.ietf.org/license-info) in effect on the date of 43 publication of this document. Please review these documents 44 carefully, as they describe your rights and restrictions with respect 45 to this document. Code Components extracted from this document must 46 include Simplified BSD License text as described in Section 4.e of 47 the Trust Legal Provisions and are provided without warranty as 48 described in the Simplified BSD License. 50 This document may contain material from IETF Documents or IETF 51 Contributions published or made publicly available before November 52 10, 2008. The person(s) controlling the copyright in some of this 53 material may not have granted the IETF Trust the right to allow 54 modifications of such material outside the IETF Standards Process. 55 Without obtaining an adequate license from the person(s) controlling 56 the copyright in such materials, this document may not be modified 57 outside the IETF Standards Process, and derivative works of it may 58 not be created outside the IETF Standards Process, except to format 59 it for publication as an RFC or to translate it into languages other 60 than English. 62 Table of Contents 64 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 65 1.1. Background and Motivation . . . . . . . . . . . . . . . . 4 66 1.2. Organization of this document . . . . . . . . . . . . . . 5 67 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 68 2.1. Performance Metrics Directorate . . . . . . . . . . . . . 5 69 2.2. Quality of Service . . . . . . . . . . . . . . . . . . . . 5 70 2.3. Quality of Experience . . . . . . . . . . . . . . . . . . 5 71 2.4. Performance Metric . . . . . . . . . . . . . . . . . . . . 6 72 3. Purpose and Scope . . . . . . . . . . . . . . . . . . . . . . 6 73 4. Relationship between QoS, QoE and Application-specific 74 Performance Metrics . . . . . . . . . . . . . . . . . . . . . 7 75 5. Performance Metrics Development . . . . . . . . . . . . . . . 7 76 5.1. Identifying and Categorizing the Audience . . . . . . . . 7 77 5.2. Definitions of a Performance Metric . . . . . . . . . . . 8 78 5.3. Computed Performance Metrics . . . . . . . . . . . . . . . 9 79 5.3.1. Composed Performance Metrics . . . . . . . . . . . . . 9 80 5.3.2. Index . . . . . . . . . . . . . . . . . . . . . . . . 10 81 5.4. Performance Metric Specification . . . . . . . . . . . . . 10 82 5.4.1. Outline . . . . . . . . . . . . . . . . . . . . . . . 10 83 5.4.2. Normative parts of Performance Metric definition . . . 10 84 5.4.3. Informative parts of Performance Metric definition . . 12 85 5.4.4. Performance Metric Definition Template . . . . . . . . 13 86 5.4.5. Example: Loss Rate . . . . . . . . . . . . . . . . . . 14 87 5.5. Dependencies . . . . . . . . . . . . . . . . . . . . . . . 15 88 5.5.1. Timing accuracy . . . . . . . . . . . . . . . . . . . 15 89 5.5.2. Dependencies of Performance Metric definitions on 90 related events or metrics . . . . . . . . . . . . . . 16 91 5.5.3. Relationship between Performance Metric and lower 92 layer Performance Metrics . . . . . . . . . . . . . . 16 93 5.5.4. 
Middlebox presence . . . . . . . . . . . . . . . . . . 16 94 5.6. Organization of Results . . . . . . . . . . . . . . . . . 16 95 5.7. Parameters, the variables of a Performance Metric . . . . 17 96 6. Performance Metric Development Process . . . . . . . . . . . . 17 97 6.1. New Proposals for Performance Metrics . . . . . . . . . . 17 98 6.2. Reviewing Metrics . . . . . . . . . . . . . . . . . . . . 18 99 6.3. Performance Metrics Directorate Interaction with other 100 WGs . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 101 6.4. Standards Track Performance Metrics . . . . . . . . . . . 19 102 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 19 103 8. Security Considerations . . . . . . . . . . . . . . . . . . . 19 104 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 20 105 10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 20 106 10.1. Normative References . . . . . . . . . . . . . . . . . . . 20 107 10.2. Informative References . . . . . . . . . . . . . . . . . . 20 108 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 22

110 1. Introduction

112 Many networking technologies, applications, and services are 113 distributed in nature, and their performance may be impacted by IP 114 impairments, server capacity, congestion, and other factors. It is 115 important to measure the performance of applications and services to 116 ensure that quality objectives are being met and to support problem 117 diagnosis. Standardized metrics help to ensure that performance 118 measurement is implemented consistently and facilitate the interpretation 119 and comparison of results.

121 There are at least three phases in the development of performance 122 standards. They are:

124 1. Definition of a Performance Metric and its units of measure

126 2. Specification of a method of measurement

128 3. Specification of the reporting format

130 During the development of metrics, it is often useful to define 131 performance objectives and expected value ranges. However, these are 132 not defined as part of the metric specification.

134 The intended audience for this document includes, but is not limited 135 to, IETF participants who write Performance Metrics documents in the 136 IETF, reviewers of such documents, and members of the Performance 137 Metrics Directorate.

139 1.1. Background and Motivation

141 Previous IETF work related to reporting of application Performance 142 Metrics includes the "Real-time Application Quality-of-Service 143 Monitoring (RAQMON) Framework" [RFC4710], which extends the remote 144 network monitoring (RMON) family of specifications to allow real-time 145 quality-of-service (QoS) monitoring of various applications that run 146 on devices such as IP phones, pagers, Instant Messaging clients, 147 mobile phones, and various other handheld computing devices. 148 Furthermore, the "RTP Control Protocol Extended Reports (RTCP XR)" 149 [RFC3611] and the "SIP RTCP Summary Report Protocol" [RFC6035] are 150 protocols that support the real-time reporting of Voice over IP and 151 other applications running on devices such as IP phones and mobile 152 handsets.

154 The IETF is also actively involved in the development of reliable 155 transport protocols, such as TCP [RFC0793] or SCTP [RFC4960], which 156 affect the relationship between IP performance and application 157 performance.
159 Thus there is a gap in the currently chartered coverage of IETF 160 Working Groups (WGs): the development of Performance Metrics for protocols 161 above and below the IP layer that can be used to characterize 162 performance on live networks.

164 Similar to the "Guidelines for Considering Operations and 165 Management of New Protocols and Protocol Extensions" [RFC5706], which 166 is the reference document for the IETF Operations Directorate, this 167 document should be consulted by the members of the Performance Metrics 168 Directorate as part of the review of new Performance Metrics.

170 1.2. Organization of this document

172 This document is divided into two major sections beyond the "Purpose 173 and Scope" section. The first is a definition and description of a 174 Performance Metric and its key aspects. The second defines a process 175 to develop these metrics that is applicable to the IETF environment.

177 2. Terminology

179 Requirements Language

181 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 182 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 183 document are to be interpreted as described in RFC 2119 [RFC2119].

185 2.1. Performance Metrics Directorate

187 The Performance Metrics Directorate is a directorate that provides 188 guidance for Performance Metrics development in the IETF.

190 The Performance Metrics Directorate should be composed of experts in 191 the performance community, potentially selected from the IPPM, BMWG, 192 and PMOL WGs.

194 2.2. Quality of Service

196 Quality of Service (QoS) is defined in a similar way to the ITU-T 197 definition of QoS in Recommendation E.800 [E.800], i.e.: 198 "Totality of characteristics of a telecommunications service that 199 bear on its ability to satisfy stated and implied needs of the user 200 of the service."

202 2.3. Quality of Experience

204 Quality of Experience (QoE) is defined in a similar way to the ITU 205 "QoS experienced/perceived by customer/user (QoE)" in E.800 [E.800], 206 i.e.: "a statement expressing the level of quality that customers/users believe they have experienced."

209 NOTE 1 - The level of QoS experienced and/or perceived by the 210 customer/user may be expressed by an opinion rating.

212 NOTE 2 - QoE has two main components: quantitative and qualitative. 213 The quantitative component can be influenced by the complete end-to-end 214 system effects (including user devices and network 215 infrastructure).

217 NOTE 3 - The qualitative component can be influenced by user 218 expectations, ambient conditions, psychological factors, application 219 context, etc.

221 NOTE 4 - QoE may also be considered as QoS delivered, received, and 222 interpreted by a user, with the pertinent qualitative factors 223 influencing his/her perception of the service.

225 2.4. Performance Metric

227 A Performance Metric is a quantitative measure of performance, specific to an IETF-specified 228 protocol or to an application transported over an IETF-specified 229 protocol. Examples of Performance Metrics are: the FTP 230 response time for a complete file download, the DNS response time to 231 resolve an IP address, a database logging time, etc.

233 3. Purpose and Scope

235 The purpose of this document is to define a framework and a process 236 for developing Performance Metrics for protocols above and below the 237 IP layer (such as IP-based applications that operate over reliable or 238 datagram transport protocols) that can be used to characterize 239 traffic on live networks and services.
As such, this document does 240 not define any Performance Metrics.

242 The scope of this document covers guidelines for the Performance 243 Metrics Directorate members for considering new Performance Metrics, 244 and suggests how the Performance Metrics Directorate will interact 245 with the rest of the IETF. However, this document is not intended to 246 supersede existing working methods within WGs that have existing 247 chartered work in this area.

249 This process is not intended to govern Performance Metric development 250 in existing IETF WGs that are focused on metrics development, such as 251 IPPM and BMWG. However, this guidelines document may be useful in 252 these activities, and MAY be applied where appropriate. A typical 253 example is the development of Performance Metrics to be exported with 254 the IPFIX protocol [RFC5101], using specific IPFIX 255 Information Elements [RFC5102]; such work would benefit from the 256 framework in this document.

258 The framework in this document applies to Performance Metrics derived 259 from both active and passive measurements.

261 4. Relationship between QoS, QoE and Application-specific Performance 262 Metrics

264 Network QoS deals with network and network protocol performance, 265 while QoE deals with the assessment of a user's experience in the 266 context of a task or a service. As a result, the topic of 267 application-specific Performance Metrics includes the measurement of 268 performance at layers between IP and the user. For example, network 269 QoS metrics (packet loss, delay, and delay variation [RFC5481]) can 270 be used to estimate application-specific Performance Metrics (de-jitter 271 buffer size and RTP-layer packet loss), which can then be combined with 272 other known aspects of a VoIP application (such as codec type) to 273 estimate a Mean Opinion Score (MOS) [P.800]. However, the QoE for a 274 particular VoIP user depends on the specific context, such as a 275 casual conversation, a business conference call, or an emergency 276 call. Finally, QoS and application-specific Performance Metrics are 277 quantitative, while QoE is qualitative. Also, network QoS and 278 application-specific Performance Metrics can be directly or 279 indirectly evident to the user, while QoE is directly evident.

281 5. Performance Metrics Development

283 This section provides key definitions and qualifications of 284 Performance Metrics.

286 5.1. Identifying and Categorizing the Audience

288 Many of the aspects of metric definition and reporting, even the 289 selection or determination of the essential metrics, depend on who 290 will use the results, and for what purpose. For example, the metric 291 description SHOULD include use cases and example reports that 292 illustrate service quality monitoring and maintenance, or the 293 identification and quantification of problems.

295 All documents defining Performance Metrics SHOULD identify the 296 primary audience and its associated requirements. The audience can 297 influence both the definition of metrics and the methods of 298 measurement.
300 The key areas of variation between different metric users include:

302 o Suitability of passive measurements of live traffic, or active 303 measurements using dedicated traffic

305 o Measurement in a laboratory environment, or on a network of deployed 306 devices

308 o Accuracy of the results

310 o Access to measurement points and configuration information

312 o Measurement topology (point-to-point, point-to-multipoint)

314 o Scale of the measurement system

316 o Measurements conducted on-demand, or continuously

318 o Required reporting formats and periods

320 o Sampling criteria, such as systematic or probabilistic sampling

322 o Period (and duration) of measurement, as live traffic can have 323 time-varying patterns

325 5.2. Definitions of a Performance Metric

327 A Performance Metric is a measure of an observable behavior of a 328 networking technology, an application, or a service. Most of the 329 time, the Performance Metric can be directly measured; however, 330 sometimes the Performance Metric value is computed. The process for 331 determining the value of a metric may assume some implicit or 332 explicit underlying statistical process; in this case, the 333 Performance Metric is an estimate of a parameter of this process, 334 assuming that the statistical process closely models the behavior of 335 the system.

337 A Performance Metric should serve some defined purpose. This may 338 include the measurement of capacity, quantifying how bad some problem 339 is, measurement of service level, problem diagnosis or location, and 340 other such uses. A Performance Metric may also be an input to some 341 other process, for example, the computation of a composite Performance 342 Metric or a model or simulation of a system. Tests of the 343 "usefulness" of a Performance Metric include:

345 (i) the degree to which its absence would cause significant loss 346 of information on the behavior or performance of the application 347 or system being measured

348 (ii) the correlation between the Performance Metric, the QoS 349 [G.1000], and the QoE delivered to the user (a person or another 350 application)

352 (iii) the degree to which the Performance Metric is able to 353 support the identification and location of problems affecting 354 service quality.

356 (iv) the requirement to develop policies (Service Level Agreements, 357 and potentially Service Level Contracts) based on the Performance 358 Metric.

360 For example, consider a distributed application operating over a 361 network connection that is subject to packet loss. A Packet Loss 362 Rate (PLR) Performance Metric is defined as the mean packet loss 363 ratio over some time period. If the application performs poorly over 364 network connections with a high packet loss ratio and always performs 365 well when the packet loss ratio is zero, then the PLR Performance 366 Metric is useful to some degree. Some applications are sensitive to 367 short periods of high loss (bursty loss) and are relatively 368 insensitive to isolated packet loss events; for this type of 369 application, there would be a very weak correlation between PLR and 370 application performance. A "better" Performance Metric would 371 consider both the packet loss ratio and the distribution of loss 372 events. If application performance is degraded when the PLR exceeds 373 some rate, then a useful Performance Metric may be a measure of the 374 duration and frequency of periods during which the PLR exceeds that 375 rate (as, for example, in [RFC3611]).
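To make the distinction concrete, here is a minimal sketch in Python (illustrative only; the function names, the per-interval structure, and the threshold are assumptions of this sketch, not definitions taken from [RFC3611] or any other specification):

   def mean_plr(total_lost, total_expected):
       # Plain PLR metric: mean packet loss ratio over the whole
       # measurement period.
       return total_lost / total_expected if total_expected else 0.0

   def high_loss_periods(interval_plrs, threshold):
       # Burst-aware metric: the frequency and durations (counted in
       # intervals) of periods whose per-interval PLR exceeds 'threshold'.
       durations = []
       run = 0
       for plr in interval_plrs:
           if plr > threshold:
               run += 1
           elif run:
               durations.append(run)
               run = 0
       if run:
           durations.append(run)
       return len(durations), durations

   # Two streams with the same mean PLR (2%) but different loss patterns:
   # high_loss_periods([0.02] * 10, 0.05)            -> (0, [])
   # high_loss_periods([0.1, 0, 0, 0, 0] * 2, 0.05)  -> (2, [1, 1])

The burst-aware function distinguishes the two streams, whereas the plain PLR does not; this is why such a metric correlates better with the performance of applications that are sensitive to bursty loss.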
377 5.3. Computed Performance Metrics

379 5.3.1. Composed Performance Metrics

381 Some Performance Metrics may not be measured directly, but can be 382 composed from base metrics that have been measured. A composed 383 Performance Metric is derived from other metrics by applying a 384 deterministic process or function (e.g., a composition function). 385 The process may use metrics that are identical to the metric being 386 composed, or metrics that are dissimilar, or some combination of both 387 types. Usually the base metrics have a limited scope in time or 388 space, and they can be combined to estimate the performance of some 389 larger entity.

391 Some examples of composed Performance Metrics and composed 392 Performance Metric definitions are:

394 Spatial composition is defined as the composition of metrics of 395 the same type with differing spatial domains [RFC5835] [RFC6049]. 397 Ideally, for spatially composed metrics to be meaningful, the 398 spatial domains should be non-overlapping and contiguous, and the 399 composition operation should be mathematically appropriate for the 400 type of metric.

402 Temporal composition is defined as the composition of sets of 403 metrics of the same type with differing time spans [RFC5835]. For 404 temporally composed metrics to be meaningful, the time spans 405 should be non-overlapping and contiguous, and the composition 406 operation should be mathematically appropriate for the type of 407 metric.

409 Temporal aggregation is a summarization of metrics into a smaller 410 number of metrics that relate to the total time span covered by 411 the original metrics. An example would be to compute the minimum, 412 maximum, and average values of a series of time-sampled values of a 413 metric (see the sketch at the end of this subsection).

415 In the context of flow records in IP Flow Information eXport (IPFIX), 416 the IPFIX Mediation Framework [RFC6183] also discusses some aspects 417 of temporal and spatial composition.
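As an illustration of temporal aggregation, the following Python sketch (the function name and sample values are invented for this example) collapses a series of time-sampled values into three metrics covering the whole time span:

   def temporal_aggregate(samples):
       # Summarize a series of time-sampled metric values into metrics
       # relating to the total time span: minimum, maximum, and average.
       return min(samples), max(samples), sum(samples) / len(samples)

   # e.g., periodic one-way delay samples in milliseconds:
   # temporal_aggregate([30.1, 32.4, 29.8, 41.0]) -> (29.8, 41.0, 33.325)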
419 5.3.2. Index

421 An Index is a metric for which the output value range has been 422 selected for convenience or clarity, and the behavior of which is 423 selected to support ease of understanding; for example, the R Factor 424 [G.107]. The deterministic function for an index is often developed 425 after the index range and behavior have been determined.

427 5.4. Performance Metric Specification

429 5.4.1. Outline

431 A Performance Metric definition MUST have a normative part that 432 defines what the metric is and how it is measured or computed, and 433 SHOULD have an informative part that describes the Performance Metric 434 and its application.

436 5.4.2. Normative parts of Performance Metric definition

438 The normative part of a Performance Metric definition MUST define at 439 least the following:

441 (i) Metric Name

443 Performance Metric names are RECOMMENDED to be unique within the set 444 of metrics being defined for the protocol layer and context. While 445 strict uniqueness may not be attainable (see the IPPM registry 446 [RFC6248] for an example of an IANA metric registry that failed to provide 447 sufficient specificity), broad review must be sought to avoid naming 448 overlap. Note that the Performance Metrics Directorate can help with 449 suggestions for IANA metric registration for unique naming. The 450 Performance Metric name MAY be descriptive.

452 (ii) Metric Description

454 The Performance Metric description MUST explain what the metric is, 455 what is being measured, and how this relates to the performance of the 456 system being measured.

458 (iii) Method of Measurement or Calculation

460 The method of measurement or calculation MUST define what is being 461 measured or computed and the specific algorithm to be used. Does the 462 measurement involve active measurements or only passive measurements? Terms such 463 as "average" should be qualified (e.g., a running average or an average 464 over some interval). Exception cases SHOULD also be defined, along with the 465 appropriate handling method. For example, there are a number of 466 commonly used metrics related to packet loss; these often do not 467 define the criteria by which a packet is determined to be lost (vs. 468 very delayed) or how duplicate packets are handled. For example, if 469 the average packet loss rate during a time interval is reported, and 470 a packet's arrival is delayed from one interval to the next, then was 471 it "lost" during the interval in which it should have arrived, or 472 should it be counted as received?

474 Some methods of calculation might require discarding some of the collected data 475 (due to outliers) so as to make the measurement parameters 476 meaningful. One example is burstable billing, which sorts the 5-minute 477 samples and discards the top 5 percent (i.e., bills on the 95th percentile sample).

479 Some parameters linked to the method MAY also be reported, in order 480 to fully interpret the Performance Metric: for example, the time 481 interval, the load, the minimum packet loss, the potential 482 measurement errors and their sources, the attainable accuracy of the 483 metric (e.g., +/- 0.1), the method of calculation, etc.

485 (iv) Units of Measurement

487 The units of measurement MUST be clearly stated.

489 (v) Measurement Point(s)

491 If the measurement is specific to a measurement point, this SHOULD be 492 defined. The measurement domain MAY also be defined. Specifically, 493 if measurement points are spread across domains, the measurement 494 domain (intra-, inter-) is another factor to consider.

496 The Performance Metric definition should discuss how the Performance 497 Metric value might vary depending on which measurement point is chosen. 498 For example, the time between a SIP request [RFC3261] and the final 499 response can be significantly different at the User Agent Client 500 (UAC) or User Agent Server (UAS).

502 In some cases, the measurement requires multiple measurement points: 503 all measurement points SHOULD be defined, including the measurement 504 domain(s).

506 (vi) Measurement Timing

508 The acceptable range of timing intervals or sampling intervals for a 509 measurement, and the timing accuracy required for such intervals, MUST 510 be specified. Short sampling intervals or frequent samples provide a 511 rich source of information that can help to assess application 512 performance but may lead to excessive measurement data. Long 513 measurement or sampling intervals reduce the amount of reported and 514 collected data, such that it may be insufficient to understand 515 application performance or service quality insofar as the measured 516 quantity may vary significantly with time.

518 In the case of multiple measurement points, the potential requirement for 519 synchronized clocks must be clearly specified. In the specific 520 example of the IP delay variation application metric, the different 521 aspects of synchronized clocks are discussed in [RFC5481].
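The six normative elements above can be viewed as the required fields of a metric definition. The short Python sketch below makes this concrete (illustrative only; the class and field names are inventions of this sketch, not identifiers defined by this or any other document):

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class MetricDefinition:
       # One field per normative element of Section 5.4.2.
       name: str                      # (i)   Metric Name
       description: str               # (ii)  Metric Description
       method: str                    # (iii) Method of Measurement or Calculation
       units: str                     # (iv)  Units of Measurement
       measurement_points: List[str]  # (v)   Measurement Point(s), with domain
       timing: str                    # (vi)  Measurement Timing

A definition that leaves any of these fields unspecified is incomplete in the sense of this section.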
523 5.4.3. Informative parts of Performance Metric definition

525 The informative part of a Performance Metric specification is 526 intended to support the implementation and use of the metric. This 527 part SHOULD provide the following data:

529 (i) Implementation

531 The implementation description MAY be in the form of text, an algorithm, 532 or example software. The objective of this part of the metric 533 definition is to assist implementers in achieving consistent results.

535 (ii) Verification

537 The Performance Metric definition SHOULD provide guidance on 538 verification testing. This may be in the form of test vectors, a 539 formal verification test method, or informal advice.

541 (iii) Use and Applications

543 The use and applications description is intended to assist the "user" 544 in understanding how, when, and where the metric can be applied, and what 545 significance the value range for the metric may have. This MAY 546 include a definition of the "typical" and "abnormal" ranges of the 547 Performance Metric, if these are not apparent from the nature of the 548 metric. The description MAY include information about the influence 549 of extreme measurement values, i.e., whether the Performance Metric is 550 sensitive to outliers. The Use and Applications section SHOULD also 551 discuss any security implications.

553 For example:

555 (a) it is fairly intuitive that a lower packet loss ratio would 556 equate to better performance. However, the user may not know the 557 significance of some given packet loss ratio.

559 (b) the speech level of a telephone signal is commonly expressed in 560 dBm0. If the user is presented with:

562 Speech level = -7 dBm0

564 this is not intuitively understandable unless the user is a 565 telephony expert. If the metric definition explains that the typical 566 range is -18 to -28 dBm0, that a value higher than -18 means the signal 567 may be too high (loud), and that a value less than -28 means the signal may be 568 too low (quiet), then the metric is much easier to interpret.

570 (iv) Reporting Model

572 The reporting model definition is intended to make any relationship 573 between the metric and the reporting model clear. There are often 574 implied relationships between the method of reporting metrics and the 575 metric itself; however, these are often not made apparent to the 576 implementor. For example, if the metric is a short-term running 577 average packet delay variation (e.g., the interarrival jitter in 578 [RFC3550]) and this value is reported at intervals of 6-10 seconds, 579 the resulting measurement may have limited accuracy when packet delay 580 variation is non-stationary.

582 5.4.4. Performance Metric Definition Template

584 Normative

586 o Metric Name

587 o Metric Description

589 o Method of Measurement or Calculation

591 o Units of Measurement

593 o Measurement Point(s) with potential Measurement Domain

595 o Measurement Timing

597 Informative

599 o Implementation

601 o Verification

603 o Use and Applications

605 o Reporting Model

607 5.4.5. Example: Loss Rate

609 The example used is the loss rate metric as specified in RFC 3611 610 [RFC3611].

612 Metric Name: LossRate

614 Metric Description: The fraction of RTP data packets from the source 615 lost since the beginning of reception.

617 Method of Measurement or Calculation: This value is calculated by 618 dividing the total number of packets lost (after the effects of 619 applying any error protection, such as FEC) by the total number of 620 packets expected, multiplying the result of the division by 256, 621 limiting the maximum value to 255 (to avoid overflow), and taking the 622 integer part.
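This calculation can be expressed compactly in code. The following Python sketch is illustrative only (the function name and the behavior when no packets are expected are assumptions of this sketch; [RFC3611] defines the calculation, not this function):

   def rtcp_xr_loss_rate(packets_lost, packets_expected):
       # Integer part of (lost / expected) * 256, capped at 255 so
       # that the result fits in a single octet without overflow.
       if packets_expected == 0:
           return 0  # assumed behavior when no packets are expected yet
       return min(int(packets_lost * 256 / packets_expected), 255)

   # 47 packets lost out of 1000 expected:
   # rtcp_xr_loss_rate(47, 1000) -> 12, i.e., a loss rate of roughly
   # 5% (12/256 = 4.7%), matching the interpretation given below.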
624 Units of Measurement: This metric is expressed as a fixed-point 625 number with the binary point at the left edge of the field. For 626 example, a metric value of 12 means a loss rate of approximately 5%.

628 Measurement Point(s): This metric is measured at the receiving end of the 629 RTP stream sent during a Voice over IP call.

631 Measurement Timing: This metric can be used over a wide range of time 632 intervals. Using time intervals longer than one hour may prevent 633 the detection of variations in the value of this metric due to time-of-day 634 changes in network load. Timing intervals should not vary in 635 duration by more than +/- 2%.

637 Implementation: The numbers of duplicated packets and discarded 638 packets do not enter into this calculation. Since receivers cannot 639 be required to maintain unlimited buffers, a receiver MAY categorize 640 late-arriving packets as lost. The degree of lateness that triggers 641 a loss SHOULD be significantly greater than that which triggers a 642 discard.

644 Verification: The metric value ranges between 0 and 255.

646 Use and Applications: This metric is useful for monitoring VoIP 647 calls; more precisely, for detecting the VoIP loss rate in the network. 648 This loss rate, along with the rate of packets discarded due to 649 jitter, has some effect on the quality of the voice stream.

651 Reporting Model: This metric needs to be associated with a defined 652 time interval, which could be defined by fixed intervals or by a 653 sliding window. In the context of RFC 3611, the metric is measured 654 continuously from the start of the RTP stream, and the value of the 655 metric is sampled and reported in RTCP XR VoIP Metrics reports.

657 5.5. Dependencies

659 This section introduces several Performance Metric dependencies, 660 which the Performance Metric designer should keep in mind during 661 Performance Metric development. These dependencies, and any others 662 not listed here, SHOULD be documented in the Performance Metric 663 specifications.

665 5.5.1. Timing accuracy

667 The accuracy of the timing of a measurement may affect the accuracy 668 of the Performance Metric. This may not materially affect a sampled-value 669 metric; however, it would affect an interval-based metric. Some 670 metrics, for example the number of events per time interval, would be 671 directly affected; for example, a 10% variation in the time interval would 672 lead directly to a 10% variation in the measured value. Other 673 metrics, such as the average packet loss ratio during some time 674 interval, would be affected to a lesser extent.

676 If it is necessary to correlate sampled values or intervals, then it 677 is essential that the accuracy of the sampling times and interval start/stop 678 times is sufficient for the application (for example, +/- 2%).

680 5.5.2. Dependencies of Performance Metric definitions on related events 681 or metrics

683 Performance Metric definitions may explicitly or implicitly rely on 684 factors that may not be obvious. For example, the recognition of a 685 packet as being "lost" relies on having some method of knowing that the 686 packet was actually lost (e.g., via the RTP sequence number), and some time 687 threshold after which a non-received packet is declared lost. It 688 is important that any such dependencies are recognized and 689 incorporated into the metric definition.

691 5.5.3.
Relationship between Performance Metric and lower layer 692 Performance Metrics

694 Lower-layer Performance Metrics may be used to compute or infer the 695 performance of higher-layer applications, potentially using an 696 application performance model. The accuracy of this approach will depend on 697 many factors, including:

699 (i) The completeness of the set of metrics - i.e., are there metrics 700 for all the input values to the application performance model?

702 (ii) The correlation between the input variables (being measured) and 703 application performance

705 (iii) The variability in the measured metrics and how this variability 706 affects application performance

708 5.5.4. Middlebox presence

710 The presence of a middlebox [RFC3303], e.g., a proxy, a network address 711 translation (NAT) device, a redirect server, a session border controller (SBC) 712 [RFC5853], or an application layer gateway (ALG), may add variability 713 to, or restrict the scope of, measurements of a metric. For example, 714 an SBC that does not process RTP loopback packets may block or 715 locally terminate this traffic rather than pass it through to its 716 target.

718 5.6. Organization of Results

720 The IPPM Framework [RFC2330] organizes the results of metrics into 721 three related notions:

723 o singleton, an elementary instance, or "atomic" value.

725 o sample, a set of singletons with some common properties and some 726 varying properties.

728 o statistic, a value derived from a sample through deterministic 729 calculation, such as the mean.

731 Performance Metrics MAY use this organization for the results, with 732 or without the term names used by the IPPM WG. Section 11 of RFC 2330 733 [RFC2330] should be consulted for further details.
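For example, a minimal Python sketch of these three notions (the values and the measured quantity are invented for illustration):

   # Singletons: individual one-way delay measurements, in milliseconds.
   singletons = [34.1, 35.0, 33.8, 40.2]

   # Sample: a set of singletons with common properties (say, the same
   # path and measurement period) and varying observation times.
   sample = singletons

   # Statistic: a value derived from the sample through deterministic
   # calculation, such as the mean.
   statistic = sum(sample) / len(sample)   # 35.775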
735 5.7. Parameters, the variables of a Performance Metric

737 Metrics are completely defined only when all options and input variables 738 have been identified and considered. These variables are sometimes 739 left unspecified in a metric definition, and their general name 740 indicates that the user must set them and report them with the 741 results. Such variables are called "parameters" in the IPPM metric 742 template. The scope of the metric, the time at which the measurement was 743 conducted, the interval length of a sliding window measurement, the 744 settings for timers, and the thresholds for counters are all examples 745 of parameters.

747 All documents defining Performance Metrics SHOULD identify all key 748 parameters for each Performance Metric.

750 6. Performance Metric Development Process

752 6.1. New Proposals for Performance Metrics

754 This process is intended to add additional considerations to the 755 processes for adopting new work described in RFC 2026 [RFC2026] 756 and RFC 2418 [RFC2418]. Note that new Performance Metrics work item 757 proposals SHALL be approved using the existing IETF process. The 758 following entry criteria will be considered for each proposal.

760 Proposals SHOULD be prepared as Internet-Drafts, describing the 761 Performance Metric and conforming to the qualifications above as much 762 as possible. Proposals SHOULD be deliverables in the corresponding 763 protocol development WG's charter. As such, the proposals SHOULD be 764 vetted by that WG prior to discussion by the Performance Metrics 765 Directorate. This aspect of the process includes an assessment of 766 the need for the proposed Performance Metrics and an assessment of the 767 support for their development in the IETF.

769 Proposals SHOULD include an assessment of interaction and/or overlap 770 with work in other Standards Development Organizations. Proposals 771 SHOULD identify additional expertise that might be consulted.

773 Proposals SHOULD specify the intended audience and users of the 774 Performance Metrics. The development process encourages 775 participation by members of the intended audience.

777 Proposals SHOULD identify any security and IANA requirements. 778 Security issues could potentially involve the revealing of user-identifying 779 data or the potential misuse of active test tools. IANA 780 considerations may involve the need for a Performance Metrics 781 registry.

783 6.2. Reviewing Metrics

785 Each Performance Metric SHOULD be assessed according to the following 786 list of qualifications:

788 o Are the Performance Metrics unambiguously defined?

790 o Are the units of measure specified?

792 o Does the metric clearly define the measurement interval, where 793 applicable?

795 o Are significant sources of measurement error identified and 796 discussed?

798 o Does the method of measurement ensure that results are repeatable?

800 o Does the metric or method of measurement appear to be implementable 801 (or is there evidence of a working implementation)?

803 o Are there any undocumented assumptions concerning the underlying 804 process that would affect an implementation or the interpretation of 805 the metric?

807 o Can the metric results be related to application performance or user 808 experience, when such a relationship is of value?

810 o Is the relationship to metrics defined elsewhere within the IETF or within 811 other SDOs clear?

813 o Do the Security Considerations adequately address denial-of-service 814 attacks, unwanted interference with the metric/measurement, and user data confidentiality (when measuring live 816 traffic)?

818 6.3. Performance Metrics Directorate Interaction with other WGs

820 The Performance Metrics Directorate SHALL provide guidance to the related 821 protocol development WG when considering an Internet-Draft that 822 specifies Performance Metrics for a protocol. A sufficient number of 823 individuals with expertise must be willing to consult on the draft. 824 If the related WG has concluded, comments on the proposal should 825 still be sought from key RFC authors and former chairs.

827 A formal review is recommended by the time the document is reviewed 828 by the Area Directors or an IETF Last Call is conducted, in the same way 829 that expert reviews are performed by other directorates.

831 Existing mailing lists SHOULD be used; however, a dedicated mailing 832 list MAY be created if necessary to facilitate work on a draft.

834 In some cases, it will be appropriate to hold the discussion 835 during the related protocol WG's session at an IETF meeting, to maximize 836 visibility of the effort to that WG and expand the review.

838 6.4. Standards Track Performance Metrics

840 The Performance Metrics Directorate will assist with the progression 841 of RFCs along the Standards Track. See [I-D.bradner-metricstest]. 842 This may include the preparation of test plans to examine different 843 implementations of the metrics, to ensure that the metric definitions 844 are clear and unambiguous (depending on the final form of the draft 845 cited above).

847 7. IANA Considerations

849 This document makes no request of IANA.

851 Note to RFC EDITOR: this section may be removed on publication as an 852 RFC.

854 8.
Security Considerations

856 In general, the existence of a framework for Performance Metric 857 development does not constitute a security issue for the Internet. 858 Performance Metric definitions may introduce security issues, however, and this 859 framework recommends that those defining Performance Metrics 860 identify any such risk factors.

862 The security considerations that apply to any active measurement of 863 live networks are relevant here. See [RFC4656].

865 The security considerations that apply to any passive measurement of 866 specific packets in live networks are relevant here as well. See the 867 security considerations in [RFC5475].

869 9. Acknowledgements

871 The authors would like to thank Al Morton, Dan Romascanu, Daryl Malas, 872 and Loki Jorgenson for their comments and contributions. The authors 873 would also like to thank Aamer Akhter, Yaakov Stein, Carsten Schmoll, and 874 Jan Novak for their reviews.

876 10. References

878 10.1. Normative References

880 [RFC2026] Bradner, S., "The Internet Standards Process -- Revision 881 3", BCP 9, RFC 2026, October 1996.

883 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 884 Requirement Levels", BCP 14, RFC 2119, March 1997.

886 [RFC2418] Bradner, S., "IETF Working Group Guidelines and 887 Procedures", BCP 25, RFC 2418, September 1998.

889 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. 890 Zekauskas, "A One-way Active Measurement Protocol 891 (OWAMP)", RFC 4656, September 2006.

893 10.2. Informative References

895 [E.800] ITU-T Recommendation E.800, Series E: Overall Network 896 Operation, Telephone Service, Service Operation and Human 897 Factors.

899 [G.1000] ITU-T Recommendation G.1000, "Communications Quality of 900 Service: A framework and definitions".

902 [G.107] ITU-T Recommendation G.107, "The E-model, a 903 computational model for use in transmission planning".

905 [I-D.bradner-metricstest] 906 Bradner, S. and V. Paxson, "Advancement of metrics 907 specifications on the IETF Standards Track", 908 draft-bradner-metricstest-03 (work in progress), 909 August 2007.

911 [P.800] ITU-T Recommendation P.800, "Methods for subjective 912 determination of transmission quality".

914 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, 915 RFC 793, September 1981.

917 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 918 "Framework for IP Performance Metrics", RFC 2330, 919 May 1998.

921 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, 922 A., Peterson, J., Sparks, R., Handley, M., and E. 923 Schooler, "SIP: Session Initiation Protocol", RFC 3261, 924 June 2002.

926 [RFC3303] Srisuresh, P., Kuthan, J., Rosenberg, J., Molitor, A., and 927 A. Rayhan, "Middlebox communication architecture and 928 framework", RFC 3303, August 2002.

930 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 931 Jacobson, "RTP: A Transport Protocol for Real-Time 932 Applications", STD 64, RFC 3550, July 2003.

934 [RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control 935 Protocol Extended Reports (RTCP XR)", RFC 3611, 936 November 2003.

938 [RFC4710] Siddiqui, A., Romascanu, D., and E. Golovinsky, "Real-time 939 Application Quality-of-Service Monitoring (RAQMON) 940 Framework", RFC 4710, October 2006.

942 [RFC4960] Stewart, R., "Stream Control Transmission Protocol", 943 RFC 4960, September 2007.
945 [RFC5101] Claise, B., "Specification of the IP Flow Information 946 Export (IPFIX) Protocol for the Exchange of IP Traffic 947 Flow Information", RFC 5101, January 2008. 949 [RFC5102] Quittek, J., Bryant, S., Claise, B., Aitken, P., and J. 950 Meyer, "Information Model for IP Flow Information Export", 951 RFC 5102, January 2008. 953 [RFC5475] Zseby, T., Molina, M., Duffield, N., Niccolini, S., and F. 954 Raspall, "Sampling and Filtering Techniques for IP Packet 955 Selection", RFC 5475, March 2009. 957 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation 958 Applicability Statement", RFC 5481, March 2009. 960 [RFC5706] Harrington, D., "Guidelines for Considering Operations and 961 Management of New Protocols and Protocol Extensions", 962 RFC 5706, November 2009. 964 [RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric 965 Composition", RFC 5835, April 2010. 967 [RFC5853] Hautakorpi, J., Camarillo, G., Penfield, R., Hawrylyshen, 968 A., and M. Bhatia, "Requirements from Session Initiation 969 Protocol (SIP) Session Border Control (SBC) Deployments", 970 RFC 5853, April 2010. 972 [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, 973 "Session Initiation Protocol Event Package for Voice 974 Quality Reporting", RFC 6035, November 2010. 976 [RFC6049] Morton, A. and E. Stephan, "Spatial Composition of 977 Metrics", RFC 6049, January 2011. 979 [RFC6183] Kobayashi, A., Claise, B., Muenz, G., and K. Ishibashi, 980 "IP Flow Information Export (IPFIX) Mediation: Framework", 981 RFC 6183, April 2011. 983 [RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics 984 (IPPM) Registry of Metrics Are Obsolete", RFC 6248, 985 April 2011. 987 Authors' Addresses 989 Alan Clark 990 Telchemy Incorporated 991 2905 Premiere Parkway, Suite 280 992 Duluth, Georgia 30097 993 USA 995 Phone: 996 Fax: 997 Email: alan.d.clark@telchemy.com 998 URI: 1000 Benoit Claise 1001 Cisco Systems, Inc. 1002 De Kleetlaan 6a b1 1003 Diegem 1831 1004 Belgium 1006 Phone: +32 2 704 5622 1007 Fax: 1008 Email: bclaise@cisco.com 1009 URI: