Network Working Group                                          A. Clark
Internet-Draft                                   Telchemy Incorporated
Intended status: BCP                                      March 9, 2009
Expires: September 9, 2009

             Framework for Performance Metric Development
                draft-ietf-pmol-metrics-framework-02

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 9, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Abstract

   This memo describes a framework and guidelines for the development
   of performance metrics that are beyond the scope of existing working
   group charters in the IETF.  In this version, the memo refers to a
   Performance Metrics Entity, or PM Entity, which may in future be a
   working group, a directorate, or a combination of the two.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
     1.1.  Background and Motivation
     1.2.  Organization of this memo
   2.  Purpose and Scope
   3.  Metrics Development
     3.1.  Audience for Metrics
     3.2.  Definitions of a Metric
     3.3.  Computed Metrics
       3.3.1.  Composed Metrics
       3.3.2.  Index
     3.4.  Metric Specification
       3.4.1.  Outline
       3.4.2.  Normative parts of metric definition
       3.4.3.  Informative parts of metric definition
       3.4.4.  Metric Definition Template
       3.4.5.  Examples
     3.5.  Dependencies
       3.5.1.  Timing accuracy
       3.5.2.  Dependencies of metric definitions on related events
               or metrics
       3.5.3.  Relationship between application performance and
               lower layer metrics
   4.  Performance Metric Development Process
     4.1.  New Proposals for Metrics
     4.2.  Reviewing Metrics
     4.3.  Proposal Approval
     4.4.  PM Entity Interaction with other WGs
     4.5.  Standards Track Performance Metrics
   5.  IANA Considerations
   6.  Security Considerations
   7.  Acknowledgements
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Author's Address

1.  Introduction

   Many applications are distributed in nature, and their performance
   may be impacted by IP impairments, server capacity, congestion, and
   other factors.  It is important to measure the performance of
   applications and services to ensure that quality objectives are
   being met and to support problem diagnosis.  Standardized metrics
   help to ensure that performance measurement is implemented
   consistently and to facilitate interpretation and comparison.

   There are at least three phases in the development of performance
   standards.  They are:

   1.  Definition of a Performance Metric and its units of measure

   2.  Specification of a Method of Measurement

   3.  Specification of the Reporting Format

   During the development of metrics it is often useful to define
   performance objectives and expected value ranges; however, these are
   not defined as part of the metric specification.

   This memo refers to a Performance Metrics Entity, or PM Entity,
   which may in future be a working group, a directorate, or a
   combination of the two.

1.1.  Background and Motivation

   Although the IETF has two active Working Groups dedicated to the
   development of performance metrics, each has strict limitations in
   its charter:

   -  The Benchmarking Methodology WG has addressed a range of
      networking technologies and protocols in its long history (such
      as IEEE 802.3, ATM, Frame Relay, and routing protocols), but its
      charter strictly limits its performance characterizations to the
      laboratory environment.

   -  The IP Performance Metrics WG has the mandate to develop metrics
      applicable to live IP networks, but it is specifically prohibited
      from developing metrics that characterize traffic (such as a VoIP
      stream).

   A BOF held at IETF-69 introduced the IETF community to the
   possibility of a generalized activity to define standardized
   performance metrics.  The existence of a growing list of Internet-
   Drafts on performance metrics (with community interest in
   development, but in unchartered areas) illustrates the need for
   additional performance work.
   The majority of people present at the BOF supported the proposition
   that the IETF should be working in these areas, and no one objected
   to any of the proposals.

   The IETF does have current and completed activities related to the
   reporting of application performance metrics (e.g., RAQMON) and is
   also actively involved in the development of reliable transport
   protocols, which would affect the relationship between IP
   performance and application performance.

   Thus there is a gap in the currently chartered coverage of IETF WGs:
   development of performance metrics for non-IP-layer protocols that
   can be used to characterize performance on live networks.

1.2.  Organization of this memo

   This memo is divided into two major sections beyond the Purpose and
   Scope section.  The first is a definition and description of a
   performance metric and its key aspects.  The second defines a
   process for developing these metrics that is applicable to the IETF
   environment.

2.  Purpose and Scope

   The purpose of this memo is to define a framework and a process for
   developing performance metrics for IP-based applications that
   operate over reliable or datagram transport protocols, and that can
   be used to characterize traffic on live networks and services.

   The scope of this memo includes the support of metric definition for
   any protocol developed by the IETF; however, this memo is not
   intended to supersede existing working methods within WGs that have
   existing chartered work in this area.

   This process is not intended to govern performance metric
   development in existing IETF WGs that are focused on metrics
   development, such as IPPM and BMWG.  However, the framework and
   guidelines may be useful in these activities, and MAY be applied
   where appropriate.

3.  Metrics Development

   This section provides key definitions and qualifications of
   performance metrics.

3.1.  Audience for Metrics

   Metrics are intended for use in measuring the performance of an
   application, network, or service.  A key first step in metric
   definition is to identify what metrics are needed by the "user" in
   order to properly maintain service quality and to identify and
   quantify problems, i.e., to consider the audience for the metrics.

3.2.  Definitions of a Metric

   A metric is a measure of an observable behavior of an application,
   protocol, or other system.  The definition of a metric often assumes
   some implicit or explicit underlying statistical process, and a
   metric is an estimate of a parameter of this process.  If the
   assumed statistical process closely models the behavior of the
   system, then the metric is "better" in the sense that it more
   accurately characterizes the state or behavior of the system.

   A metric should serve some defined purpose.  This may include the
   measurement of capacity, quantifying how bad some problem is,
   measurement of service level, problem diagnosis or location, and
   other such uses.  A metric may also be an input to some other
   process, for example the computation of a composite metric or a
   model or simulation of a system.  Tests of the "usefulness" of a
   metric include:

   (i) the degree to which its absence would cause significant loss
   of information on the behavior or state of the application or
   system being measured,

   (ii) the correlation between the metric and the QoS [G1000] /
   experience delivered to the user (a person or another application),
   and

   (iii) the degree to which the metric is able to support the
   identification and location of problems affecting service quality.

   For example, consider a distributed application operating over a
   network connection that is subject to packet loss.  A Packet Loss
   Rate (PLR) metric is defined as the mean packet loss rate over some
   time period.  If the application performs poorly over network
   connections with a high packet loss rate and always performs well
   when the packet loss rate is zero, then the PLR metric is useful to
   some degree.  Some applications are sensitive to short periods of
   high loss (bursty loss) and are relatively insensitive to isolated
   packet loss events; for this type of application there would be
   very weak correlation between PLR and application performance.  A
   "better" metric would consider both the packet loss rate and the
   distribution of loss events.  If application performance is
   degraded when the PLR exceeds some rate, then a useful metric may
   be a measure of the duration and frequency of periods during which
   the PLR exceeds that rate, as in the sketch below.
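   A minimal sketch (in C) of such a metric follows.  It is
   illustrative only: the per-interval packet counts, the 5% loss
   threshold, and the episode definition are assumptions chosen for
   the example and are not part of any standardized metric.

   #include <stdio.h>

   /*
    * Count "high-loss episodes": maximal runs of consecutive
    * measurement intervals whose packet loss rate exceeds a
    * threshold.  Reports the number of episodes (frequency) and the
    * number of high-loss intervals (a proxy for total duration).
    */
   struct interval {
       unsigned sent;   /* packets expected in this interval */
       unsigned lost;   /* packets lost in this interval     */
   };

   static void high_loss_episodes(const struct interval *iv, size_t n,
                                  double threshold, unsigned *episodes,
                                  unsigned *high_intervals)
   {
       int in_episode = 0;
       *episodes = 0;
       *high_intervals = 0;
       for (size_t i = 0; i < n; i++) {
           double plr = iv[i].sent ?
                        (double)iv[i].lost / iv[i].sent : 0.0;
           if (plr > threshold) {
               (*high_intervals)++;
               if (!in_episode) {
                   (*episodes)++;
                   in_episode = 1;
               }
           } else {
               in_episode = 0;
           }
       }
   }

   int main(void)
   {
       /* Four example intervals with loss rates 8%, 7%, 0.1%, 9%:
        * one two-interval episode plus one single-interval episode. */
       struct interval iv[] = {
           { 1000, 80 }, { 1000, 70 }, { 1000, 1 }, { 1000, 90 }
       };
       unsigned episodes, high;

       high_loss_episodes(iv, 4, 0.05, &episodes, &high);
       printf("episodes=%u high_intervals=%u\n", episodes, high);
       return 0;
   }

   Against the 5% threshold, this prints "episodes=2
   high_intervals=3": the same data summarized by PLR alone would hide
   the distinction between the two-interval episode and the isolated
   one.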
3.3.  Computed Metrics

3.3.1.  Composed Metrics

   Some metrics may not be measured directly but may be composed from
   metrics that have been measured.  Usually the contributing metrics
   have a limited scope in time or space, and they can be combined to
   estimate the performance of some larger entity.  Some examples of
   composed metrics and composed metric definitions are:

   Spatial Composition is defined as the composition of metrics of the
   same type with differing spatial domains
   [I-D.ietf-ippm-spatial-composition].  For spatially composed
   metrics to be meaningful, the spatial domains should be non-
   overlapping and contiguous, and the composition operation should be
   mathematically appropriate for the type of metric.

   Temporal Composition is defined as the composition of sets of
   metrics of the same type with differing time spans
   [I-D.ietf-ippm-framework-compagg].  For temporally composed metrics
   to be meaningful, the time spans should be non-overlapping and
   contiguous, and the composition operation should be mathematically
   appropriate for the type of metric.

   Temporal Aggregation is a summarization of metrics into a smaller
   number of metrics that relate to the total time span covered by the
   original metrics.  An example would be to compute the minimum,
   maximum, and average values of a series of time-sampled values of a
   metric, as in the sketch below.
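   A minimal sketch of this form of aggregation follows; the sample
   series and its units are invented for the example.

   #include <stdio.h>

   /* Temporal aggregation: summarize a series of time-sampled metric
    * values into min, max, and average over the total time span.
    * Assumes n >= 1. */
   static void aggregate(const double *v, size_t n,
                         double *min, double *max, double *avg)
   {
       double sum = v[0];
       *min = *max = v[0];
       for (size_t i = 1; i < n; i++) {
           if (v[i] < *min) *min = v[i];
           if (v[i] > *max) *max = v[i];
           sum += v[i];
       }
       *avg = sum / (double)n;
   }

   int main(void)
   {
       /* Example: five sampled one-way delay values in milliseconds. */
       double delay_ms[] = { 20.1, 23.4, 19.8, 40.2, 21.0 };
       double lo, hi, mean;

       aggregate(delay_ms, 5, &lo, &hi, &mean);
       printf("min=%.1f max=%.1f avg=%.1f ms\n", lo, hi, mean);
       return 0;
   }

   Note that the three aggregated values together preserve more
   information than the average alone; the 40.2 ms sample would be
   invisible in the mean of 24.9 ms.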
3.3.2.  Index

   An Index is a metric for which the output value range has been
   selected for convenience or clarity, and the behavior of which is
   selected to support ease of understanding (e.g., the G.107 R
   Factor) [I-D.ietf-ippm-framework-compagg].  The deterministic
   function for an index is often developed after the index range and
   behavior have been determined.

3.4.  Metric Specification

3.4.1.  Outline

   A metric definition MUST have a normative part that defines what
   the metric is and how it is measured or computed, and SHOULD have
   an informative part that describes the metric and its application.

3.4.2.  Normative parts of metric definition

   The normative part of a metric definition MUST define at least the
   following:

   (i) Metric Name

   Metric names MUST be unique within the set of metrics being defined
   and MAY be descriptive.

   (ii) Metric Description

   The description MUST explain what the metric is, what is being
   measured, and how this relates to the performance of the system
   being measured.

   (iii) Measurement Method

   This MUST define what is being measured, estimated, or computed and
   the specific algorithm to be used.  Terms such as "average" should
   be qualified (e.g., running average or average over some interval).
   Exception cases SHOULD also be defined, together with the
   appropriate handling method.  For example, there are a number of
   commonly used metrics related to packet loss; these often do not
   define the criteria by which a packet is determined to be lost
   (vs. very delayed) or how duplicate packets are handled.  For
   example, if the average packet loss rate during a time interval is
   reported, and a packet's arrival is delayed from one interval to
   the next, then was it "lost" during the interval in which it should
   have arrived, or should it be counted as received?  (One possible
   handling of these exception cases is sketched at the end of this
   subsection.)

   (iv) Units of measurement

   The units of measurement MUST be clearly stated.

   (v) Measurement timing

   The acceptable range of timing intervals or sampling intervals for
   a measurement, and the timing accuracy required for such intervals,
   MUST be specified.  Short sampling intervals or frequent samples
   provide a rich source of information that can help to assess
   application performance but may lead to excessive measurement data.
   Long measurement or sampling intervals reduce the amount of
   reported and collected data; however, they may be insufficient to
   truly understand potentially time-varying application performance
   or service quality.

   (vi) Measurement Point

   If the measurement is specific to a measurement point, this SHOULD
   be defined.
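   The following sketch shows one possible handling of the exception
   cases discussed under (iii) above.  The sequence-number scheme, the
   3-second lateness threshold, and the policy choices (duplicates are
   ignored; a sufficiently late packet stays counted as lost) are
   assumptions made for the example; a real metric definition would
   need to state its own choices normatively.

   #include <stdio.h>

   #define SEQ_SPACE      32     /* tiny sequence space, example only */
   #define LATE_THRESHOLD 3.0    /* seconds before declaring loss     */

   static int seen[SEQ_SPACE];   /* arrival flags, indexed by seq     */

   /* Returns 1 if this arrival counts as a received packet, or 0 if
    * it is a duplicate or arrived after the loss declaration
    * threshold (in which case it was already counted as lost in the
    * interval in which it should have arrived). */
   static int counts_as_received(unsigned seq, double expected_time,
                                 double arrival_time)
   {
       if (seen[seq])
           return 0;                 /* duplicate: never count twice */
       seen[seq] = 1;
       if (arrival_time - expected_time > LATE_THRESHOLD)
           return 0;                 /* too late: stays "lost"       */
       return 1;
   }

   int main(void)
   {
       printf("%d\n", counts_as_received(1, 10.0, 10.1)); /* on time */
       printf("%d\n", counts_as_received(1, 10.0, 10.2)); /* dup     */
       printf("%d\n", counts_as_received(2, 10.0, 14.5)); /* late    */
       return 0;
   }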
3.4.3.  Informative parts of metric definition

   The informative part of a metric specification is intended to
   support the implementation and use of the metric.  This part SHOULD
   provide the following data:

   (i) Implementation

   The implementation description MAY be in the form of text, an
   algorithm, or example software.  The objective of this part of the
   metric definition is to help implementers achieve a consistent
   result.

   (ii) Verification

   The metric definition SHOULD provide guidance on verification
   testing.  This may be in the form of test vectors, a formal
   verification test method, or informal advice.

   (iii) Use and Applications

   The Use and Applications description is intended to assist the
   "user" in understanding how, when, and where the metric can be
   applied, and what significance the value range for the metric may
   have.  This MAY include a definition of the "typical" and
   "abnormal" ranges of the metric, if these are not apparent from the
   nature of the metric.  For example:

   (a) it is fairly intuitive that a lower packet loss rate would
   equate to better performance; however, the user may not know the
   significance of some given packet loss rate;

   (b) the speech level of a telephone signal is commonly expressed in
   dBm0.  If the user is presented with:

      Speech level = -7 dBm0

   this is not intuitively understandable unless the user is a
   telephony expert.  If the metric definition explains that the
   typical range is -18 to -28 dBm0, that a value higher than -18
   means the signal may be too high (loud), and that a value lower
   than -28 means the signal may be too low (quiet), the metric is
   much easier to interpret.

   (iv) Reporting Model

   The Reporting Model definition is intended to make any relationship
   between the metric and the reporting model clear.  There are often
   implied relationships between the method of reporting metrics and
   the metric itself; however, these are often not made apparent to
   the implementer.  For example, if the metric is a short-term
   running average packet delay variation (e.g., the interarrival
   jitter defined in RFC 3550) that is reported at intervals of 6-10
   seconds, the resulting measurement may have limited accuracy if
   packet delay variation is non-stationary.

3.4.4.  Metric Definition Template

   Normative

      Metric Name
      Metric Description
      Measurement Method
      Units of measurement
      Measurement Timing
      Measurement Point

   Informative

      Implementation Guidelines
      Verification
      Use and Applications
      Reporting Model

3.4.5.  Examples

   Example definition:

   Metric Name: BurstPacketLossFrequency

   Metric Description: A burst of packet loss is defined as the
   longest period starting and ending with lost packets during which
   no more than Gmin consecutive packets are received.  The
   BurstPacketLossFrequency is defined as the number of bursts of
   packet loss occurring during a specified time interval (e.g., per
   minute, per hour, or per day).  If Gmin is set to 0, then a burst
   of packet loss would comprise only consecutive lost packets,
   whereas a Gmin of 16 would define bursts as periods of both lost
   and received packets (sparse bursts) having a loss rate of greater
   than 5.9%.

   Measurement Method: Bursts may be detected using the Markov model
   algorithm defined in RFC 3611.  The BurstPacketLossFrequency is
   calculated by counting the number of burst events within the
   defined measurement interval.  A burst that spans the boundary
   between two time intervals shall be counted within the later of the
   two intervals.

   Units of Measurement: Bursts per time interval (e.g., per minute,
   per hour, or per day)

   Measurement Timing: This metric can be used over a wide range of
   time intervals.  Using time intervals of longer than one hour may
   prevent the detection of variations in the value of this metric due
   to time-of-day changes in network load.  Timing intervals should
   not vary in duration by more than +/- 2%.

   Implementation Guidelines: See RFC 3611.

   Verification Testing: See Appendix for C code to generate test
   vectors.

   Use and Applications: This metric is useful for detecting IP
   network transients that affect the performance of applications such
   as Voice over IP or IP video.  The value of Gmin may be selected to
   ensure that bursts correspond to a packet loss rate that would
   degrade the performance of the application of interest (e.g., 16
   for VoIP).

   Reporting Model: This metric needs to be associated with a defined
   time interval, which could be defined by fixed intervals or by a
   sliding window.
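   A minimal sketch of this example metric follows.  It is a
   simplified, single-pass reading of the burst definition above (a
   burst ends once more than Gmin consecutive packets are received),
   not a normative implementation of the RFC 3611 algorithm; the input
   loss pattern is invented for the example.

   #include <stdio.h>

   /* Count bursts of packet loss in a sequence of per-packet flags
    * (1 = lost, 0 = received).  A burst begins with a lost packet and
    * ends when more than gmin consecutive packets are received. */
   static unsigned burst_count(const int *lost, size_t n,
                               unsigned gmin)
   {
       unsigned bursts = 0, run_received = 0;
       int in_burst = 0;

       for (size_t i = 0; i < n; i++) {
           if (lost[i]) {
               if (!in_burst) {
                   in_burst = 1;
                   bursts++;
               }
               run_received = 0;
           } else if (in_burst && ++run_received > gmin) {
               in_burst = 0; /* gap: more than gmin received packets */
           }
       }
       return bursts;
   }

   int main(void)
   {
       /* With Gmin = 2, the first three losses form one sparse burst;
        * the final loss is a separate burst, so the count is 2. */
       int lost[] = { 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1 };

       printf("bursts = %u\n",
              burst_count(lost, sizeof lost / sizeof *lost, 2));
       return 0;
   }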
3.5.  Dependencies

3.5.1.  Timing accuracy

   The accuracy of the timing of a measurement may affect the accuracy
   of the metric.  This may not materially affect a sampled-value
   metric; however, it would affect an interval-based metric.  Some
   metrics, for example the number of events per time interval, would
   be directly affected; a 10% variation in time interval would lead
   directly to a 10% variation in the measured value.  Other metrics,
   such as the average packet loss rate during some time interval,
   would be affected to a lesser extent.

   If it is necessary to correlate sampled values or intervals, then
   it is essential that the accuracy of sampling times and interval
   start/stop times is sufficient for the application (for example,
   +/- 2%).

3.5.2.  Dependencies of metric definitions on related events or
        metrics

   Metric definitions may explicitly or implicitly rely on factors
   that may not be obvious.  For example, the recognition of a packet
   as being "lost" relies on having some method of knowing that the
   packet was actually lost (e.g., via an RTP sequence number gap),
   and some time threshold after which a non-received packet is
   declared lost.  It is important that any such dependencies are
   recognized and incorporated into the metric definition.

3.5.3.  Relationship between application performance and lower layer
        metrics

   Lower layer metrics may be used to compute or infer the performance
   of higher layer applications, potentially using an application
   performance model.  The accuracy of this approach will depend on
   many factors, including the following (a toy illustration follows
   the list):

   (i) the completeness of the set of metrics, i.e., are there metrics
   for all the input values to the application performance model?

   (ii) the correlation between the input variables (being measured)
   and application performance;

   (iii) the variability in the measured metrics and how this
   variability affects application performance.
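   As a purely hypothetical illustration of factor (ii), the toy model
   below maps two lower layer metrics (average packet loss rate and
   burst frequency) to a 0-100 application quality index.  The
   coefficients are invented for the example and carry no standard
   meaning; a real application performance model (e.g., for VoIP)
   would be derived from subjective testing.

   #include <stdio.h>

   /* Toy model: higher loss rate and burstier loss both reduce the
    * (hypothetical) application quality index. */
   static double quality_index(double plr, double bursts_per_min)
   {
       double q = 100.0 - 400.0 * plr - 5.0 * bursts_per_min;
       return q < 0.0 ? 0.0 : q;
   }

   int main(void)
   {
       /* The same 2% average loss rate scores differently depending
        * on how bursty the loss is. */
       printf("%.1f\n", quality_index(0.02, 0.5));  /* prints 89.5 */
       printf("%.1f\n", quality_index(0.02, 6.0));  /* prints 62.0 */
       return 0;
   }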
4.  Performance Metric Development Process

4.1.  New Proposals for Metrics

   The following entry criteria will be considered for each proposal.

   Proposals SHOULD be prepared as Internet-Drafts, describing the
   metrics and conforming to the qualifications above as much as
   possible.

   Proposals SHOULD be vetted by the corresponding protocol
   development Working Group prior to discussion by the PM Entity.
   This aspect of the process includes an assessment of the need for
   the proposed metrics and of the support for their development in
   the IETF.

   Proposals SHOULD include an assessment of interaction and/or
   overlap with work in other Standards Development Organizations.

   Proposals SHOULD specify the intended audience and users of the
   metrics.  The development process encourages participation by
   members of the intended audience.

   Proposals SHOULD survey existing standards work in the area and
   identify additional expertise that might be consulted, as well as
   possible overlap with other standards development organizations.

   Proposals SHOULD identify any security and IANA requirements.
   Security issues could potentially involve the disclosure of user-
   identifying data or the potential misuse of active test tools.
   IANA considerations may involve the need for a metrics registry.

4.2.  Reviewing Metrics

   Each metric SHOULD be assessed according to the following list of
   qualifications:

   o  Unambiguously defined?

   o  Units of measure specified?

   o  Measurement interval specified?

   o  Measurement errors identified?

   o  Repeatable?

   o  Implementable?

   o  Assumptions concerning the underlying process?

   o  Use cases?

   o  Correlation with application performance / user experience?

4.3.  Proposal Approval

   New work item proposals SHALL be approved using the existing IETF
   process.

4.4.  PM Entity Interaction with other WGs

   The PM Entity SHALL work in partnership with the related protocol
   development WG when considering an Internet-Draft that specifies
   performance metrics for a protocol.  A sufficient number of
   individuals with expertise must be willing to consult on the draft.
   If the related WG has concluded, comments on the proposal should
   still be sought from key RFC authors and former chairs, or from the
   WG mailing list if it has not been closed.

   Existing mailing lists SHOULD be used; however, a dedicated mailing
   list MAY be initiated if necessary to facilitate work on a draft.

   In some cases, it will be appropriate to hold the IETF session
   discussion during the related protocol WG session, to maximize the
   visibility of the effort to that WG and to expand the review.

4.5.  Standards Track Performance Metrics

   The PM Entity will manage the progression of performance metric
   RFCs along the Standards Track; see [I-D.bradner-metricstest].
   This may include the preparation of test plans to examine different
   implementations of the metrics, to ensure that the metric
   definitions are clear and unambiguous (depending on the final form
   of the draft cited above).

5.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as
   an RFC.

6.  Security Considerations

   In general, the existence of a framework for performance metric
   development does not constitute a security issue for the Internet.
   Metric definitions may introduce security issues, and this
   framework recommends that those defining metrics identify any such
   risk factors.

   The security considerations that apply to any active measurement of
   live networks are relevant here as well; see [RFC4656].

7.  Acknowledgements

   The authors would like to thank Al Morton, Dan Romascanu, Benoit
   Claise, Daryl Malas, and Loki Jorgenson for their comments and
   contributions.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and
              M. Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [G1000]    ITU-T Recommendation G.1000, "Communications Quality of
              Service: A Framework and Definitions", November 2001.

8.2.  Informative References

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [Casner]   Casner, S., "A Fine-Grained View of High-Performance
              Networking", NANOG 22, May 2001,
              <http://www.nanog.org/mtg-0105/agenda.html>.

   [I-D.bradner-metricstest]
              Bradner, S. and V. Paxson, "Advancement of metrics
              specifications on the IETF Standards Track",
              draft-bradner-metricstest-03 (work in progress),
              August 2007.

   [I-D.ietf-ippm-framework-compagg]
              Morton, A., "Framework for Metric Composition",
              draft-ietf-ippm-framework-compagg-06 (work in progress),
              February 2008.

   [I-D.ietf-ippm-spatial-composition]
              Morton, A. and E. Stephan, "Spatial Composition of
              Metrics", draft-ietf-ippm-spatial-composition-06 (work
              in progress), February 2008.
Author's Address

   Alan Clark
   Telchemy Incorporated
   2905 Premiere Parkway, Suite 280
   Duluth, Georgia 30097
   USA

   Email: alan.d.clark@telchemy.com