Network Working Group                                          A. Morton
Internet-Draft                                           G. Ramachandran
Intended status: Informational                               G. Maguluri
Expires: May 11, 2008                                          AT&T Labs
                                                        November 8, 2007

             Reporting Metrics: Different Points of View
                draft-morton-ippm-reporting-metrics-03

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 11, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   Consumers of IP network performance metrics have many different uses
   in mind.  This memo categorizes the different audience points of
   view.  It describes how the categories affect the selection of
   metric parameters and options when seeking information that serves
   their needs.  The memo then discusses "long-term" reporting
   considerations (e.g., days, weeks, or months, as opposed to 10
   seconds).

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
   2.  Purpose and Scope
   3.  Effect of POV on the Loss Metric
       3.1.  Loss Threshold
       3.2.  Errored Packet Designation
       3.3.  Causes of Lost Packets
   4.  Effect of POV on the Delay Metric
       4.1.  Treatment of Lost Packets
             4.1.1.  Application Performance
             4.1.2.  Network Characterization
             4.1.3.  Delay Variation
             4.1.4.  Reordering
       4.2.  Preferred Statistics
       4.3.  Summary for Delay
   5.  Test Streams and Sample Size
       5.1.  Test Stream Characteristics
       5.2.  Sample Size
   6.  Reporting Results
       6.1.  Overview of Metric Statistics
       6.2.  Long-Term Reporting Considerations
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. References
       10.1.  Normative References
       10.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   When designing measurements of IP networks and presenting the
   results, knowledge of the audience is a key consideration.  To
   present a useful and relevant portrait of network conditions, one
   must answer the following question:

   "How will the results be used?"
   There are two main audience categories:

   1.  Network Characterization - describes conditions in an IP network
       for quality assurance, troubleshooting, modeling, etc.  This
       point of view looks inward, toward the network, where the
       consumer intends to act.

   2.  Application Performance Estimation - describes the network
       conditions in a way that facilitates determining the effects on
       user applications, and ultimately on the users themselves.  This
       point of view looks outward, toward the user(s), accepting the
       network as-is.  This consumer intends to estimate a network-
       dependent aspect of performance, or to design some aspect of an
       application's accommodation of the network.  (These are *not*
       application metrics; they are defined at the IP layer.)

   This memo considers how these different points of view affect both
   the measurement design (parameters and options of the metrics) and
   the statistics reported to serve each audience's needs.

   The IPPM framework [RFC2330] and the other RFCs describing IPPM
   metrics provide background for this memo.

2.  Purpose and Scope

   The purpose of this memo is to clearly delineate two points of view
   (POV) for using measurements, and to describe their effects on the
   test design, including the selection of metric parameters and the
   reporting of results.

   The current scope of this memo is primarily limited to the design
   and reporting of the loss and delay metrics [RFC2680] [RFC2679],
   but it also discusses the delay variation and reordering metrics
   where applicable.  Sampling, or the design of the active packet
   stream that is the basis for the measurements, is also discussed.

3.  Effect of POV on the Loss Metric

   This section describes the ways in which the Loss metric can be
   tuned to reflect the preferences of the two audience categories, or
   different POV.

3.1.  Loss Threshold

   RFC 2680 [RFC2680] defines the concept of a waiting time for packets
   to arrive, beyond which they are declared lost.  The text of the RFC
   declines to recommend a value, instead saying that "good
   engineering, including an understanding of packet lifetimes, will be
   needed in practice."  Later, in the methodology, the RFC gives
   reasons for waiting "a reasonable period of time", leaving the
   definition of "reasonable" intentionally vague.

   Practical measurement experience has shown that unusual network
   circumstances can cause long delays.  One such circumstance is when
   routing loops form during IGP re-convergence following a failure or
   a drastic link cost change.  Packets will loop between two routers
   until new routes are installed, or until the IPv4 Time-to-Live (TTL)
   field (or the IPv6 Hop Limit) decrements to zero.  Very long delays
   on the order of several seconds have been measured [Casner] [Cia03].

   Therefore, network characterization activities prefer a long waiting
   time, in order to distinguish these events from other causes of loss
   (such as packet discard at a full queue, or tail drop).  This way,
   the metric design helps to distinguish more reliably between packets
   that might yet arrive and those that are no longer traversing the
   network.

   It is possible to calculate a worst-case waiting time, assuming that
   a routing loop is the cause.  We model the path between Source and
   Destination as a series of delays in links (t) and queues (q), as
   these two are the dominant contributors to delay.
The normal path
   delay across n hops without encountering a loop, D, is

      D = t_0 + Sum[i=1..n] (t_i + q_i)

                     Figure 1: Normal Path Delay

   and the time spent in a loop of L hops is

      R = C * Sum[j=i..i+L-1] (t_j + q_j),
          where C = (TTL_max - n) / L

              Figure 2: Delay due to Rotations in a Loop

   and where C is the number of times a packet circles the loop.

   If we take the delays of all links and queues as 100 ms each,
   TTL_max = 255, the number of hops n = 5, and the hops in the loop
   L = 4, then

      D = 1.1 sec and R ~= 50 sec, so D + R ~= 51.1 seconds

   (a calculation illustrated in the sketch at the end of this
   section).  We note that link delays of 100 ms would span most
   continents, and a constant queueing delay of 100 ms is also very
   generous.  When a loop occurs, it is almost certain to be resolved
   in 10 seconds or less.  The value calculated above is therefore an
   upper limit for almost any realistic circumstance.

   A waiting time threshold parameter, dT, set consistent with this
   calculation would not truncate the delay distribution (possibly
   causing a change in its mathematical properties), because the
   packets that might arrive have been given sufficient time to
   traverse the network.

   It is worth noting that packets that are stored and deliberately
   forwarded at a much later time constitute a replay attack on the
   measurement system, and are beyond the scope of normal performance
   reporting.

   Fortunately, application performance estimation activities are not
   adversely affected by the estimated worst-case transfer time.

   Although the designer's tendency might be to set the Loss Threshold
   at a value equivalent to a particular application's threshold, this
   specific threshold can be applied when post-processing the
   measurements.  A shorter waiting time can be enforced by locating
   packets with delays longer than the application's threshold, and
   re-designating such packets as lost.

3.2.  Errored Packet Designation

   RFC 2680 designates packets that arrive containing errors as lost
   packets.  Many packets that are corrupted by bit errors are
   discarded within the network and do not reach their intended
   destination.

   This is consistent with applications that would check the payload
   integrity at higher layers and discard the packet.  However, some
   applications prefer to deal with errored payloads on their own, and
   for them even a corrupted payload is better than no packet at all.

   To address this possibility, and to make network characterization
   more complete, it is recommended to distinguish between packets that
   do not arrive (lost) and errored packets that arrive (conditionally
   lost).

3.3.  Causes of Lost Packets

   Although many measurement systems use a waiting time to determine
   whether a packet is lost or not, most of the waiting is in vain: the
   packets are no longer traversing the network, and they have not
   reached their destination.

   There are many causes of packet loss, including:

   1.  Queue drop, or discard

   2.  Corruption of the IP header, or other essential header
       information

   3.  TTL expiration (or use of a TTL value that is too small)

   4.  Link or router failure

   After waiting a sufficient time, packet loss can probably be
   attributed to one of these causes.
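   The arithmetic of Section 3.1 is simple enough to check directly.
   The following Python sketch is ours, for illustration only (the
   variable names are not taken from any RFC); it reproduces the
   worst-case waiting time calculation:

      # Worst-case waiting time from Section 3.1, assuming a routing
      # loop is the cause of the delay.  All delays are in seconds.

      t = q = 0.100   # per-hop link (t) and queue (q) delay: 100 ms each
      TTL_MAX = 255   # initial IPv4 TTL (or IPv6 Hop Limit)
      n = 5           # hops on the normal path
      L = 4           # hops in the loop

      # Normal path delay: the source link plus (t + q) at each of n hops.
      D = t + n * (t + q)

      # Number of rotations around the loop before the TTL reaches
      # zero, and the time spent circling: C rotations over L hops.
      C = (TTL_MAX - n) / L
      R = C * L * (t + q)

      print(f"D = {D:.1f} s, R = {R:.1f} s, D + R = {D + R:.1f} s")
      # -> D = 1.1 s, R = 50.0 s, D + R = 51.1 s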
4.  Effect of POV on the Delay Metric

   This section describes the ways in which the Delay metric can be
   tuned to reflect the preferences of the two consumer categories, or
   different POV.

4.1.  Treatment of Lost Packets

   The Delay Metric [RFC2679] specifies the treatment of packets that
   do not successfully traverse the network: their delay is undefined.

   ">>The *Type-P-One-way-Delay* from Src to Dst at T is undefined
   (informally, infinite)<< means that Src sent the first bit of a
   Type-P packet to Dst at wire-time T and that Dst did not receive
   that packet."

   It is an accepted, but informal, practice to assign infinite delay
   to lost packets.  We next look at how these two different treatments
   align with the needs of measurement consumers who wish to
   characterize networks or estimate application performance.  We also
   look at the way that lost packets have been treated in other
   metrics: delay variation and reordering.

4.1.1.  Application Performance

   Applications need to perform different functions, depending on
   whether or not each packet arrives within some finite tolerance.  In
   other words, a receiver's packet processing forks on packet arrival:

   o  Packets that arrive within the expected tolerance are handled by
      processes that remove headers, restore smooth delivery timing (as
      in a de-jitter buffer), restore sending order, check for errors
      in payloads, and perform many other operations.

   o  Packets that do not arrive when expected spawn other processes
      that attempt recovery from the apparent loss, such as
      retransmission requests, loss concealment, or forward error
      correction to replace the missing packet.

   So, it is important to maintain a distinction between packets that
   actually arrive and those that do not.  Therefore, it is preferable
   to leave the delay of lost packets undefined, and to characterize
   the delay distribution as a conditional distribution (conditioned on
   arrival).

4.1.2.  Network Characterization

   In this discussion, we assume that both loss and delay metrics will
   be reported for network characterization (at least).

   Assume that packets which do not arrive are reported as lost,
   usually as a fraction of all sent packets.  If these lost packets
   are assigned undefined delay, then the network's inability to
   deliver them (in a timely way) is captured only in the loss metric
   when we report statistics on the Delay distribution conditioned on
   the event of packet arrival (within the Loss waiting time
   threshold).  We can say that the Delay and Loss metrics are
   orthogonal, in that they convey non-overlapping information about
   the network under test.

   However, if we assign infinite delay to all lost packets, then:

   o  The delay metric results are influenced both by packets that
      arrive and by those that do not.

   o  The delay singleton and the loss singleton do not appear to be
      orthogonal (Delay is finite when Loss=0; Delay is infinite when
      Loss=1).

   o  The network is penalized in both the loss and delay metrics,
      effectively double-counting the lost packets.

   As further evidence of overlap, consider the Cumulative Distribution
   Function (CDF) of Delay when the value positive infinity is assigned
   to all lost packets.  Figure 3 shows a CDF where a small fraction of
   packets are lost.

    1 | - - - - - - - - - - - - - - - - - -+
      |                                    |
      |          _..----''''''''''''''''''''
      |       ,-''
      |     ,'
      |    /                   Mass at
      |   /                    +infinity
      |  /                     = fraction
      | |                        lost
      |/
    0 |_____________________________________
      0               Delay               +oo

       Figure 3: Cumulative Distribution Function for Delay when
                           Loss = +Infinity

   We note that a Delay CDF that is conditioned on packet arrival would
   not exhibit this apparent overlap with loss.
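   A small numerical sketch (ours; the delay values are made up)
   illustrates the contrast between the two treatments: reporting a
   loss ratio alongside delay statistics conditioned on arrival keeps
   the two metrics orthogonal, whereas assigning +infinity to lost
   packets contaminates every delay statistic at or above the loss
   fraction:

      import statistics
      import math

      # Hypothetical delay singletons (seconds) from one measurement
      # interval; None marks a packet that never arrived within dT.
      delays = [0.021, 0.022, None, 0.025, 0.021, None, 0.023, 0.024]

      # Orthogonal reporting: a loss ratio, plus delay statistics
      # conditioned on packet arrival.
      arrived = [d for d in delays if d is not None]
      loss_ratio = 1 - len(arrived) / len(delays)
      cond_mean = statistics.mean(arrived)

      # Assigning +infinity to lost packets instead: the sample mean
      # (and any percentile above the loss fraction) becomes infinite,
      # double-counting the lost packets.
      with_inf = [math.inf if d is None else d for d in delays]
      inf_mean = sum(with_inf) / len(with_inf)

      print(f"loss ratio = {loss_ratio:.2f}")         # 0.25
      print(f"conditional mean = {cond_mean:.4f} s")  # 0.0227 s
      print(f"mean with lost = +inf: {inf_mean}")     # inf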
   Although infinity is a familiar mathematical concept, it is somewhat
   disconcerting to see any time-related metric reported as infinity,
   in the opinion of the authors.  Questions are bound to arise, and
   they tend to detract from the goal of informing the consumer with a
   performance report.

4.1.3.  Delay Variation

   [RFC3393] excludes lost packets from samples, effectively assigning
   an undefined delay to packets that do not arrive in a reasonable
   time.  Section 4.1 of [RFC3393] describes this specification and its
   rationale:

   "The treatment of lost packets as having "infinite" or "undefined"
   delay complicates the derivation of statistics for ipdv.
   Specifically, when packets in the measurement sequence are lost,
   simple statistics such as sample mean cannot be computed.  One
   possible approach to handling this problem is to reduce the event
   space by conditioning.  That is, we consider conditional statistics;
   namely we estimate the mean ipdv (or other derivative statistic)
   conditioned on the event that selected packet pairs arrive at the
   destination (within the given timeout).  While this itself is not
   without problems (what happens, for example, when every other packet
   is lost), it offers a way to make some (valid) statements about
   ipdv, at the same time avoiding events with undefined outcomes."

4.1.4.  Reordering

   [RFC4737] defines metrics that are based on the evaluation of packet
   arrival order, and it includes a waiting time to declare a packet
   lost (to exclude such packets from further processing).

   If lost packets were assigned a delay value of infinity instead,
   then the reordering metric would declare any packets with infinite
   delay to be reordered, because their sequence numbers will surely be
   less than the "Next Expected" threshold when (or if) they arrive.
   This practice would fail to maintain orthogonality between the
   reordering metric and the loss metric.  Confusion can be avoided by
   designating the delay of non-arriving packets as undefined, and by
   reserving delay values only for packets that arrive within a
   sufficiently long waiting time.

4.2.  Preferred Statistics

   Today, in network characterization, the sample mean is one statistic
   that is almost ubiquitously reported.  It is easily computed and
   understood by virtually everyone in this audience category.  Also,
   the sample is usually filtered on packet arrival, so that the mean
   is based on a conditional distribution.

   The median is another statistic that summarizes a distribution, with
   somewhat different properties from those of the sample mean.  The
   median is stable in distributions with or without a few outliers.
   However, the median's stability prevents it from indicating when a
   large fraction of the distribution changes value: 50% or more of the
   values would need to change for the median to capture the change.

   Both the median and the sample mean have difficulty with bimodal
   distributions: the median will reside in only one of the modes, and
   the mean may not lie in either mode's range.  For this and other
   reasons, additional statistics such as the minimum, maximum, and
   95th percentile have value when summarizing a distribution, as the
   sketch below illustrates.
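   A brief sketch (ours; the sample is contrived, and
   statistics.quantiles requires Python 3.8 or later) shows how this
   set of statistics summarizes a bimodal conditional delay sample that
   the mean or median alone would misrepresent:

      import statistics

      # A bimodal, hypothetical conditional delay sample (seconds):
      # 90 packets take a short path and 10 take a much longer one.
      delays = [0.020] * 90 + [0.080] * 10

      mean = statistics.mean(delays)        # 0.026 s, between the modes
      median = statistics.median(delays)    # 0.020 s, in the low mode
      p95 = statistics.quantiles(delays, n=100)[94]   # 95th pctile, 0.080 s

      # Together with the minimum and maximum, the high percentile
      # reveals the second mode that the mean and median both hide.
      print(f"min={min(delays):.3f} max={max(delays):.3f} "
            f"mean={mean:.3f} median={median:.3f} p95={p95:.3f}")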
   When both the sample mean and the median are available, a comparison
   will sometimes be informative, because these two statistics are
   equal only when the delay distribution is perfectly symmetrical.

   Also, these statistics are generally useful from the Application
   Performance POV, so there is a common set that should satisfy both
   audiences.

4.3.  Summary for Delay

   From the perspectives of:

   1.  application/receiver analysis, where processing forks on packet
       arrival or time-out,

   2.  straightforward network characterization without double-counting
       defects, and

   3.  consistency with the Delay Variation and Reordering metric
       definitions,

   the most efficient practice is to distinguish between truly lost and
   delayed packets with a sufficiently long waiting time, and to
   designate the delay of non-arriving packets as undefined.

5.  Test Streams and Sample Size

   This section discusses two key aspects of measurement that are
   sometimes omitted from the report: the description of the test
   stream on which the measurements are based, and the sample size.

5.1.  Test Stream Characteristics

   Network characterization has traditionally used Poisson-distributed
   inter-packet spacing, as this provides an unbiased sample.  The
   average inter-packet spacing may be selected to allow observation of
   specific network phenomena.  Other test streams are designed to
   sample some property of the network, such as the presence of
   congestion, link bandwidth, or packet reordering.

   If one is measuring a network in order to make inferences about
   application or receiver performance, then there are usually
   efficiencies derived from a test stream with characteristics similar
   to the sender's stream.  In some cases, it is essential to
   synthesize the sender's stream, as with Bulk Transfer Capacity
   estimates.  In other cases, it may be sufficient to sample with a
   "known bias", e.g., a Periodic stream to estimate real-time
   application performance.

5.2.  Sample Size

   Sample size is directly related to the accuracy of the results, and
   it plays a critical role in the report.  Even if only the sample
   size (in terms of the number of packets) is given for each value or
   summary statistic, it imparts a notion of the confidence in the
   result.

   In practice, the sample size will be selected taking both
   statistical and practical factors into account.  Among these factors
   are:

   1.  The estimated variability of the quantity being measured,

   2.  The desired confidence in the result (although this may be
       dependent on assumptions about the underlying distribution of
       the measured quantity),

   3.  The effects of active measurement traffic on user traffic,

   4.  etc.

   A sample size may sometimes be referred to as "large".  This is a
   relative and qualitative term.  It is preferable to describe what
   one is attempting to achieve with the sample.  For example, stating
   an implication may be helpful: this sample is large enough that a
   single outlier value at ten times the "typical" sample mean (the
   mean without the outlier) would influence the mean by no more than
   X, as the sketch below illustrates.
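   To make the outlier example concrete, a short sketch (our
   arithmetic, not taken from the original text) computes the sample
   size needed for a given influence X:

      import math

      # If n values have sample mean m and one outlier of value 10*m
      # joins the sample, the new mean is m*(n + 10)/(n + 1): a
      # relative increase of 9/(n + 1).  Requiring that increase to be
      # at most X gives n >= 9/X - 1.

      def min_sample_size(x: float) -> int:
          """Smallest n such that one outlier at ten times the
          'typical' mean shifts the sample mean by at most x."""
          return math.ceil(9 / x - 1)

      for x in (0.10, 0.01, 0.001):
          print(f"X = {x}: n >= {min_sample_size(x)}")
      # X = 0.1: n >= 89,  X = 0.01: n >= 899,  X = 0.001: n >= 8999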
6.  Reporting Results

   This section gives an overview of recommendations, followed by
   additional considerations for reporting results in the "long-term".

6.1.  Overview of Metric Statistics

   This section gives an overview of reporting recommendations for the
   loss, delay, and delay variation metrics, based on the discussion
   and conclusions of the preceding sections.

   The minimal report on measurements MUST include both the Loss and
   Delay metrics.

   For Packet Loss, the loss ratio defined in [RFC2680] is a sufficient
   starting point, especially with the guidance given there for setting
   the loss threshold waiting time.  Above, we calculated a waiting
   time of 51 seconds that should be sufficient to differentiate
   between packets that are truly lost and packets with long finite
   delays, under general measurement circumstances.  Knowledge of
   specific conditions can help to reduce this threshold, but 51
   seconds is considered to be manageable in practice.

   We note that a loss ratio calculated according to [Y.1540] would
   exclude errored packets from the numerator.  In practice, the
   difference between these two loss metrics is small, if any,
   depending on whether the last link prior to the destination
   contributes errored packets.

   For Packet Delay, we recommend providing both the mean delay and the
   median delay, with lost packets designated undefined (as permitted
   by [RFC2679]).  Both statistics are based on a conditional
   distribution, where the condition is packet arrival prior to a
   waiting time dT, and dT has been set to take maximum packet
   lifetimes into account, as discussed above.  Using a long dT helps
   to ensure that delay distributions are not truncated.

   For Packet Delay Variation, the minimum delay of the conditional
   distribution should be used as the reference delay for computing
   IPDV according to [RFC3393].  A useful value to report is a
   pseudo-range of delay variation, based on calculating the difference
   between a high percentile of delay and the minimum delay.  For
   example, the 99.9th percentile minus the minimum will give a value
   that can be compared with the objectives in [Y.1541].

6.2.  Long-Term Reporting Considerations

   [I-D.ietf-ippm-reporting] describes methods to conduct measurements
   and report the results on a near-immediate time scale (10 seconds,
   which we consider to be "short-term").

   Measurement intervals and reporting intervals need not be the same
   length.  Sometimes, the user is only concerned with the performance
   levels achieved over a relatively long interval of time (e.g., days,
   weeks, or months, as opposed to 10 seconds).  However, there can be
   risks involved with running a measurement continuously over a long
   period without recording intermediate results:

   o  A temporary power failure may cause the loss of all results to
      date.

   o  Measurement system timing synchronization signals may experience
      a temporary outage, causing sub-sets of measurements to be in
      error or invalid.

   o  Maintenance may be necessary on the measurement system, or on its
      connectivity to the network under test.

   For these and other reasons, such as

   o  the constraint to collect measurements on intervals similar to
      user session length, or

   o  the dual use of measurements in monitoring activities where
      results are needed on a period of a few minutes,

   there is value in conducting measurements on intervals that are much
   shorter than the reporting interval.

   There are several approaches for aggregating a series of measurement
   results over time in order to make a statement about the longer
   reporting interval.  One approach requires the storage of all metric
   singletons collected throughout the reporting interval, even though
   the measurement interval stops and starts many times.

   Another approach is described in [I-D.ietf-ippm-framework-compagg]
   as "temporal aggregation".  This approach would estimate the results
   for the reporting interval based on many individual measurement
   interval statistics (results) alone.  The result would ideally
   appear in the same form as though a continuous measurement had been
   conducted.  A memo to address the details of temporal aggregation is
   yet to be prepared.

   Yet another approach requires a numerical objective for the metric,
   and the results of each measurement interval are compared with the
   objective.  Every measurement interval where the results meet the
   objective contributes to the fraction of time with performance as
   specified.  When the reporting interval contains many measurement
   intervals, it is possible to present the results as "metric A was
   less than or equal to objective X during Y% of the time".
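   The objective-comparison approach reduces to a simple counting
   procedure, as in this sketch (ours; the interval results and the
   objective X are invented for illustration, since the IETF sets no
   numerical objectives, as noted below):

      # Per-measurement-interval summaries for "metric A" (say, the
      # conditional mean delay in seconds), and a hypothetical
      # objective X.  Both are invented for this example.
      interval_results = [0.021, 0.019, 0.054, 0.022, 0.020, 0.048,
                          0.021]
      objective_x = 0.030

      meeting = sum(1 for r in interval_results if r <= objective_x)
      pct = 100 * meeting / len(interval_results)

      print(f"metric A <= {objective_x} during {pct:.0f}% "
            f"of measurement intervals")   # 71%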
   NOTE that numerical thresholds are not set in IETF performance work
   and are explicitly excluded from the IPPM charter.

7.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.

8.  Security Considerations

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656].

9.  Acknowledgements

   The authors would like to thank Phil Chimento for his suggestion to
   employ conditional distributions for Delay.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Delay Metric for IPPM", RFC 2679, September 1999.

   [RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Packet Loss Metric for IPPM", RFC 2680, September 1999.

   [RFC3393]  Demichelis, C. and P. Chimento, "IP Packet Delay
              Variation Metric for IP Performance Metrics (IPPM)",
              RFC 3393, November 2002.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              November 2006.

10.2.  Informative References

   [Casner]   "A Fine-Grained View of High Performance Networking",
              NANOG 22 Conf.,
              http://www.nanog.org/mtg-0105/agenda.html,
              May 20-22, 2001.

   [Cia03]    "Standardized Active Measurements on a Tier 1 IP
              Backbone", IEEE Communications Mag., pp. 90-97,
              June 2003.

   [I-D.ietf-ippm-framework-compagg]
              Morton, A., "Framework for Metric Composition",
              draft-ietf-ippm-framework-compagg-05 (work in progress),
              November 2007.

   [I-D.ietf-ippm-reporting]
              Shalunov, S., "Reporting IP Performance Metrics to
              Users", draft-ietf-ippm-reporting-01 (work in progress),
              October 2006.

   [Y.1540]   ITU-T Recommendation Y.1540, "Internet protocol data
              communication service - IP packet transfer and
              availability performance parameters", December 2002.
   [Y.1541]   ITU-T Recommendation Y.1541, "Network Performance
              Objectives for IP-Based Services", February 2006.

Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/

   Gomathi Ramachandran
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 2353
   Email: gomathi@att.com

   Ganga Maguluri
   AT&T Labs
   200 Laurel Avenue
   Middletown, New Jersey 07748
   USA

   Phone: +1 732 420 2486
   Email: gmaguluri@att.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).