Network Working Group                                           A. Clark
Internet-Draft                                    Telchemy Incorporated
Intended status: BCP                                        July 4, 2008
Expires: January 5, 2009

             Framework for Performance Metric Development
                 draft-ietf-pmol-metrics-framework-00

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on January 5, 2009.

Abstract

This memo describes a framework and guidelines for the development of performance metrics that are beyond the scope of existing working group charters in the IETF.  In this version, the memo refers to a Performance Metrics Entity, or PM Entity, which may in future be a working group or directorate or a combination of these two.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
  1.1.  Background and Motivation . . . . . . . . . . . . . . . . 3
  1.2.  Organization of this memo . . . . . . . . . . . . . . . . 4
2.  Purpose and Scope . . . . . . . . . . . . . . . . . . . . . . 4
3.  Metrics Development . . . . . . . . . . . . . . . . . . . . . 4
  3.1.  Audience for Metrics . . . . . . . . . . . . . . . . . . . 5
  3.2.  Definitions of a Metric . . . . . . . . . . . . . . . . . 5
  3.3.  Composed Metrics . . . . . . . . . . . . . . . . . . . . . 6
  3.4.  Metric Specification . . . . . . . . . . . . . . . . . . . 6
    3.4.1.  Outline . . . . . . . . . . . . . . . . . . . . . . . 6
    3.4.2.  Normative parts of metric definition . . . . . . . . . 6
    3.4.3.  Informative parts of metric definition . . . . . . . . 7
    3.4.4.  Metric Definition Template . . . . . . . . . . . . . . 8
    3.4.5.  Examples . . . . . . . . . . . . . . . . . . . . . . . 9
  3.5.  Qualifying Metrics . . . . . . . . . . . . . . . . . . . . 10
  3.6.  Reporting Models . . . . . . . . . . . . . . . . . . . . . 10
  3.7.  Dependencies . . . . . . . . . . . . . . . . . . . . . . . 10
    3.7.1.  Timing accuracy . . . . . . . . . . . . . . . . . . . 10
    3.7.2.  Dependencies of metric definitions on related
            events or metrics . . . . . . . . . . . . . . . . . . 11
    3.7.3.  Relationship between application performance and
            lower layer metrics . . . . . . . . . . . . . . . . . 11
4.  Performance Metric Development Process . . . . . . . . . . . . 11
  4.1.  New Proposals for Metrics . . . . . . . . . . . . . . . . 11
  4.2.  Proposal Approval . . . . . . . . . . . . . . . . . . . . 12
  4.3.  PM Entity Interaction with other WGs . . . . . . . . . . . 12
  4.4.  Standards Track Performance Metrics . . . . . . . . . . . 12
5.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13
6.  Security Considerations . . . . . . . . . . . . . . . . . . . 13
7.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 13
8.  References . . . . . . . . . . . . . . . . . . . . . . . . . . 13
  8.1.  Normative References . . . . . . . . . . . . . . . . . . . 13
  8.2.  Informative References . . . . . . . . . . . . . . . . . . 13
Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 14
Intellectual Property and Copyright Statements . . . . . . . . . . 15

1.  Introduction

Many applications are distributed in nature, and their performance may be impacted by IP impairments, server capacity, congestion and other factors.  It is important to measure the performance of applications and services to ensure that quality objectives are being met and to support problem diagnosis.
Standardized metrics help to ensure that performance measurement is implemented consistently and to facilitate interpretation and comparison.

There are at least three phases in the development of performance standards.  They are:

1.  Definition of a Performance Metric and its units of measure

2.  Specification of a Method of Measurement

3.  Specification of the Reporting Format

During the development of metrics it is often useful to define performance objectives and expected value ranges; however, these are not defined as part of the metric specification.

This memo refers to a Performance Metrics Entity, or PM Entity, which may in future be a working group or directorate or a combination of these two.

1.1.  Background and Motivation

Although the IETF has two Working Groups dedicated to the development of performance metrics, each has strict limitations in its charter:

- The Benchmarking Methodology WG has addressed a range of networking technologies and protocols in its long history (such as IEEE 802.3, ATM, Frame Relay, and Routing Protocols), but its charter strictly limits performance characterizations to the laboratory environment.

- The IP Performance Metrics WG has the mandate to develop metrics applicable to live IP networks, but it is specifically prohibited from developing metrics that characterize traffic (such as a VoIP stream).

A BOF held at IETF-69 introduced the IETF community to the possibility of a generalized activity to define standardized performance metrics.  The existence of a growing list of Internet-Drafts on performance metrics (with community interest in development, but in un-chartered areas) illustrates the need for additional performance work.  The majority of people present at the BOF supported the proposition that the IETF should be working in these areas, and no one objected to any of the proposals.
The IETF does have current and completed activities related to the reporting of application performance metrics (e.g. RAQMON) and is also actively involved in the development of reliable transport protocols which would affect the relationship between IP performance and application performance.

Thus there is a gap in the currently chartered coverage of IETF WGs: development of performance metrics for non-IP-layer protocols that can be used to characterize performance on live networks.

1.2.  Organization of this memo

This memo is divided into two major sections beyond the Purpose and Scope section.  The first is a definition and description of a performance metric and its key aspects.  The second defines a process to develop these metrics that is applicable to the IETF environment.

2.  Purpose and Scope

The purpose of this memo is to define a framework and a process for developing performance metrics for IP-based applications that operate over reliable or datagram transport protocols, and that can be used to characterize traffic on live networks and services.

The scope of this memo includes the support of metric definition for any protocol developed by the IETF; however, this memo is not intended to supersede existing working methods within WGs that have existing chartered work in this area.

This process is not intended to govern performance metric development in existing IETF WGs that are focused on metrics development, such as IPPM and BMWG.  However, the framework and guidelines may be useful in these activities, and MAY be applied where appropriate.

3.  Metrics Development

This section provides key definitions and qualifications of performance metrics.

3.1.  Audience for Metrics

Metrics are intended for use in measuring the performance of an application, network or service.
A key first step in metric definition is to identify what metrics are needed by the "user" in order to properly maintain service quality and to identify and quantify problems, i.e. to consider the audience for the metrics.

3.2.  Definitions of a Metric

A metric is a measure of an observable behavior of an application, protocol or other system.  The definition of a metric often assumes some implicit or explicit underlying statistical process, and a metric is an estimate of a parameter of this process.  If the assumed statistical process closely models the behavior of the system, then the metric is "better" in the sense that it more accurately characterizes the state or behavior of the system.

A metric should serve some defined purpose.  This may include the measurement of capacity, quantifying how bad some problem is, measurement of service level, problem diagnosis or location, and other such uses.  A metric may also be an input to some other process, for example the computation of a composite metric or a model or simulation of a system.  Tests of the "usefulness" of a metric include:

(i) the degree to which its absence would cause significant loss of information on the behavior or state of the application or system being measured

(ii) the correlation between the metric and the quality of service / experience delivered to the user (person or other application)

(iii) the degree to which the metric is able to support the identification and location of problems affecting service quality.

For example, consider a distributed application operating over a network connection that is subject to packet loss.  A Packet Loss Rate (PLR) metric is defined as the mean packet loss rate over some time period.
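As a simple illustration (a sketch, not part of any IETF metric definition; the function and parameter names are invented), such a mean PLR over a measurement interval might be computed as:

```python
def packet_loss_rate(packets_sent: int, packets_received: int) -> float:
    """Mean Packet Loss Rate (PLR) over a measurement interval:
    the fraction of sent packets that were not received."""
    if packets_sent <= 0:
        raise ValueError("interval contains no sent packets")
    return (packets_sent - packets_received) / packets_sent
```

For example, 1000 packets sent with 985 received over the interval gives a PLR of 0.015 (1.5%).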
If the application performs poorly over network connections with a high packet loss rate and always performs well when the packet loss rate is zero, then the PLR metric is useful to some degree.  Some applications are sensitive to short periods of high loss (bursty loss) and are relatively insensitive to isolated packet loss events; for this type of application there would be very weak correlation between PLR and application performance.  A "better" metric would consider both the packet loss rate and the distribution of loss events.  If application performance is degraded when the PLR exceeds some rate, then a useful metric may be a measure of the duration and frequency of periods during which the PLR exceeds that rate.

3.3.  Composed Metrics

Some metrics may not be measured directly, but may be composed from metrics that have been measured.  Usually the contributing metrics have a limited scope in time or space, and they can be combined to estimate the performance of some larger entity.  Some examples of composed metrics and composed metric definitions are:

Spatial Composition is defined as the composition of metrics of the same type with differing spatial domains [Ref ?].  For spatially composed metrics to be meaningful, the spatial domains should be non-overlapping and contiguous, and the composition operation should be mathematically appropriate for the type of metric.

Temporal Composition is defined as the composition of metrics of the same type with differing time spans [Ref ?].  For temporally composed metrics to be meaningful, the time spans should be non-overlapping and contiguous, and the composition operation should be mathematically appropriate for the type of metric.

Temporal Aggregation is a summarization of metrics into a smaller number of metrics, each of which has a greater time span than the original metrics.
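The temporal composition rule above can be sketched for a packet loss rate metric (a minimal illustration with invented names; the cited composition drafts define the actual methods).  Summing raw per-interval counts, rather than averaging per-interval rates, is the mathematically appropriate composition here because intervals may carry different amounts of traffic:

```python
def compose_plr(intervals: list) -> float:
    """Temporal composition of a packet loss rate metric: combine
    per-interval (sent, lost) counts from non-overlapping, contiguous
    time spans into a single PLR for the overall span."""
    total_sent = sum(sent for sent, _ in intervals)
    total_lost = sum(lost for _, lost in intervals)
    if total_sent == 0:
        raise ValueError("no packets sent in any interval")
    return total_lost / total_sent
```

For instance, two contiguous intervals with (1000 sent, 10 lost) and (500 sent, 20 lost) compose to 30/1500 = 0.02, whereas naively averaging the two per-interval rates would give a different (and misleading) value.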
An example would be to compute the minimum, maximum and average values of a series of time-sampled values of a metric.

3.4.  Metric Specification

3.4.1.  Outline

A metric definition MUST have a normative part that defines what the metric is and how it is measured or computed, and SHOULD have an informative part that describes the metric and its application.

3.4.2.  Normative parts of metric definition

The normative part of a metric definition MUST define at least the following:

(i) Metric Name

Metric names MUST be unique within the set of metrics being defined and MAY be descriptive.

(ii) Metric Description

The description MUST explain what the metric is, what is being measured and how this relates to the performance of the system being measured.

(iii) Measurement Method

This MUST define what is being measured, estimated or computed and the specific algorithm to be used.  Terms such as "average" should be qualified (e.g. running average or average over some interval).  It is important to also define exception cases and how these are handled.  For example, there are a number of commonly used metrics related to packet loss; these often do not define the criteria by which a packet is determined to be lost (vs. very delayed) or how duplicate packets are handled.  For example, if the average packet loss rate during a time interval is reported, and a packet's arrival is delayed from one interval to the next, then was it "lost" during the interval in which it should have arrived, or should it be counted as received?

(iv) Units of measurement

The units of measurement must be clearly stated.

(v) Measurement timing

The acceptable range of timing intervals or sampling intervals for a measurement, and the timing accuracy required for such intervals, must be specified.
Short intervals or frequent sampling provide a richer source of information that can be helpful in assessing application performance, but can lead to excessive measurement data.  Long measurement or sampling intervals reduce the amount of reported and collected data, but may be insufficient to truly understand application performance or service quality if these vary with time.

3.4.3.  Informative parts of metric definition

The informative part of a metric specification is intended to support the implementation and use of the metric.  This part SHOULD provide the following data:

(i) Implementation

The implementation description may be in the form of text, an algorithm or example software.  The objective of this part of the metric definition is to assist implementers to achieve a consistent result.

(ii) Conformance Testing

The metric definition SHOULD provide guidance on conformance testing.  This may be in the form of test vectors, a formal conformance test plan or informal advice.

(iii) Use and Applications

The Use and Applications description is intended to assist the "user" to understand how, when and where the metric can be applied, and what significance the value range for the metric may have.  This would typically involve a definition of the "typical" and "abnormal" range of the metric, if this is not apparent from the nature of the metric.  For example, it is fairly intuitive that a lower packet loss rate would equate to better performance; however, the user may not know the significance of some given packet loss rate.  For example, the speech level of a telephone signal is commonly expressed in dBm0.  If the user is presented with "Speech level = -7 dBm0", this is not intuitively understandable unless the user is a telephony expert.
If the metric definition explains that the typical range is -18 to -28 dBm0, that a value higher than -18 means the signal may be too high (loud), and that a value less than -28 means the signal may be too low (quiet), it is much easier to interpret the metric.

(iv) Reporting Model

There are often implied relationships between the method of reporting metrics and the metric itself.  For example, if the metric is a short-term running average packet delay variation (e.g. PPDV as defined in RFC 3550) but this value is reported at intervals of 6-10 seconds, the result is a sampling model which may have limited accuracy if packet delay variation is non-stationary.

3.4.4.  Metric Definition Template

Normative

   Metric Name

   Metric Description

   Measurement Method

   Units of measurement

   Measurement Timing

Informative

   Implementation Guidelines

   Conformance Testing

   Use and Applications

   Reporting Model

3.4.5.  Examples

Example definition

Metric Name: BurstPacketLossFrequency

Metric Description: A burst of packet loss is defined as the longest period starting and ending with lost packets during which no more than Gmin consecutive packets are received.  The BurstPacketLossFrequency is defined as the number of bursts of packet loss occurring during a specified time interval (e.g. per minute, per hour, per day).  If Gmin is set to 0 then a burst of packet loss would comprise only consecutive lost packets, whereas a Gmin of 16 would define bursts as periods of both lost and received packets (sparse bursts) having a loss rate of greater than 5.9%.

Measurement Method: Bursts may be detected using the Markov Model algorithm defined in RFC 3611.  The BurstPacketLossFrequency is calculated by counting the number of burst events within the defined measurement interval.
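A simplified sketch of this burst counting follows (it is not the RFC 3611 Markov-model implementation, it ignores measurement-interval boundaries, and the names are invented):

```python
def count_loss_bursts(lost, gmin=16):
    """Count loss bursts: a burst starts and ends with a lost packet
    and contains no run of more than `gmin` consecutive received
    packets.  `lost` is a sequence of booleans, True meaning the
    packet at that position was lost."""
    bursts = 0
    in_burst = False
    received_run = 0
    for was_lost in lost:
        if was_lost:
            if not in_burst:
                bursts += 1       # a lost packet opens a new burst
                in_burst = True
            received_run = 0      # any loss resets the received run
        else:
            received_run += 1
            if received_run > gmin:
                in_burst = False  # too many received packets: burst over
    return bursts
```

With gmin=0, only runs of consecutive lost packets count as bursts; with gmin=16, up to 16 received packets between losses keep a sparse burst alive.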
A burst that spans the boundary between two time intervals shall be counted within the later of the two intervals.

Units of Measurement: Bursts per time interval (e.g. per second, per hour, per day)

Measurement Timing: This metric can be used over a wide range of time intervals.  Using time intervals of longer than one hour may prevent the detection of variations in the value of this metric due to time-of-day network load changes.  Timing intervals should not vary in duration by more than +/- 2%.

Implementation Guidelines: See RFC 3611.

Conformance Testing: See Appendix for C code to generate test vectors.

Use and Applications: This metric is useful to detect IP network transient problems that affect the quality of applications such as Voice over IP or IP Video.  The value of Gmin may be selected to ensure that bursts correspond to a packet loss rate that would degrade the performance of the application of interest (e.g. 16 for VoIP).

Reporting Model: This metric needs to be associated with a defined time interval, which could be defined by fixed intervals or by a sliding window.

3.5.  Qualifying Metrics

Each metric SHOULD be assessed according to the following list of qualifications:

o  Unambiguously defined?

o  Units of Measure Specified?

o  Measurement Interval Specified?

o  Measurement Errors Identified?

o  Repeatable?

o  Implementable?

o  Assumptions concerning underlying process?

o  Use cases?

o  Correlation with application performance / user experience?

3.6.  Reporting Models

A metric, or some set of metrics, may be measured over some time period, measured continuously, sampled, aggregated and/or combined into composite metrics, and then reported using a "push" or "pull" model.  Reporting protocols typically introduce some limitations and assumptions with regard to the definition of a metric.
3.7.  Dependencies

3.7.1.  Timing accuracy

The accuracy of the timing of a measurement may affect the accuracy of the metric.  This may not materially affect a sampled-value metric, but it would affect an interval-based metric.  Some metrics, for example the number of events per time interval, would be directly affected; a 10 percent variation in the time interval would lead directly to a 10 percent variation in the measured value.  Other metrics, such as the average packet loss rate during some time interval, would be affected to a lesser extent.

If it is necessary to correlate sampled values or intervals, then it is essential that the accuracy of sampling times and interval start/stop times is sufficient for the application (for example, +/- 2%).

3.7.2.  Dependencies of metric definitions on related events or metrics

Metric definitions may explicitly or implicitly rely on factors that may not be obvious.  For example, the recognition of a packet as being "lost" relies on having some method to know the packet was actually lost (e.g. an RTP sequence number), and some time threshold after which a non-received packet is declared lost.  It is important that any such dependencies are recognized and incorporated into the metric definition.

3.7.3.  Relationship between application performance and lower layer metrics

Lower layer metrics may be used to compute or infer the performance of higher layer applications, potentially using an application performance model.  The accuracy of this will depend on many factors, including:

(i) The completeness of the set of metrics - i.e. are there metrics for all the input values to the application performance model?

(ii) Correlation between the input variables (being measured) and application performance

(iii) Variability in the measured metrics and how this variability affects application performance
4.  Performance Metric Development Process

4.1.  New Proposals for Metrics

The following entry criteria will be considered for each proposal.

Proposals SHOULD be prepared as Internet-Drafts, describing the metrics and conforming to the qualifications above as much as possible.

Proposals SHOULD be vetted by the corresponding protocol development Working Group prior to discussion by the PM Entity.  This aspect of the process includes an assessment of the need for the metrics proposed and of the support for their development in the IETF.

Proposals SHOULD include an assessment of interaction and/or overlap with work in other Standards Development Organizations.

Proposals SHOULD specify the intended audience and users of the metrics.  The development process encourages participation by members of the intended audience.

Proposals SHOULD survey the existing standards work in the area and identify additional expertise that might be consulted, or possible overlap with other standards development organizations.

Proposals SHOULD identify any security and IANA requirements.  Security issues could potentially involve the revealing of user-identifying data or the potential misuse of active test tools.  IANA considerations may involve the need for a metrics registry.

4.2.  Proposal Approval

Who does this???

The IETF/IESG/Relevant ADs/Relevant WG/PM Entity ???

This section depends on the direction of the solution, or form that the PM Entity takes.

4.3.  PM Entity Interaction with other WGs

The PM Entity SHALL work in partnership with the related protocol development WG when considering an Internet-Draft that specifies performance metrics for a protocol.  A sufficient number of individuals with expertise must be willing to consult on the draft.
If the related WG has concluded, comments on the proposal should still be sought from key RFC authors and former chairs, or from the WG mailing list if it has not been closed.

A dedicated mailing list MAY be initiated for each work area, so that protocol experts can subscribe to and receive the message traffic that is relevant to their work.

In some cases, it will be appropriate to hold the IETF session discussion during the related protocol WG session, to maximize visibility of the effort to that WG and expand the review.

4.4.  Standards Track Performance Metrics

The PM Entity will manage the progression of PM RFCs along the Standards Track.  See [I-D.bradner-metricstest].  This may include the preparation of test plans to examine different implementations of the metrics to ensure that the metric definitions are clear and unambiguous (depending on the final form of the draft above).

5.  IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

6.  Security Considerations

In general, the existence of a framework for performance metric development does not constitute a security issue for the Internet.  Metric definitions may introduce security issues, and this framework recommends that those defining metrics should identify any such risk factors.

The security considerations that apply to any active measurement of live networks are relevant here as well.  See [RFC4656].

7.  Acknowledgements

The authors would like to thank Al Morton and Benoit Claise for their comments and contributions.

8.  References

8.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
Zekauskas, "A One-way Active Measurement Protocol (OWAMP)", RFC 4656, September 2006.

8.2.  Informative References

[RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, "Framework for IP Performance Metrics", RFC 2330, May 1998.

[Casner]   "A Fine-Grained View of High Performance Networking", NANOG 22 Conf., http://www.nanog.org/mtg-0105/agenda.html, May 20-22, 2001.

[I-D.bradner-metricstest]  Bradner, S. and V. Paxson, "Advancement of metrics specifications on the IETF Standards Track", draft-bradner-metricstest-03 (work in progress), August 2007.

[I-D.ietf-ippm-framework-compagg]  Morton, A., "Framework for Metric Composition", draft-ietf-ippm-framework-compagg-06 (work in progress), February 2008.

[I-D.ietf-ippm-spatial-composition]  Morton, A. and E. Stephan, "Spatial Composition of Metrics", draft-ietf-ippm-spatial-composition-06 (work in progress), February 2008.

Author's Address

Alan Clark
Telchemy Incorporated
2905 Premiere Parkway, Suite 280
Duluth, Georgia 30097
USA

Phone:
Fax:
Email: alan.d.clark@telchemy.com
URI:

Full Copyright Statement

Copyright (C) The IETF Trust (2008).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights.  Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard.  Please address the information to the IETF at ietf-ipr@ietf.org.