Network Working Group                                          A. Morton
Internet-Draft                                           G. Ramachandran
Intended status: Informational                               G. Maguluri
Expires: May 20, 2008                                          AT&T Labs
                                                       November 17, 2007

             Reporting Metrics: Different Points of View
                 draft-morton-ippm-reporting-metrics-04

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 20, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   Consumers of IP network performance metrics have many different uses
   in mind.  This memo categorizes the different audience points of
   view.  It describes how the categories affect the selection of
   metric parameters and options when seeking information that serves
   their needs.  The memo then discusses "long-term" reporting
   considerations (e.g., days, weeks, or months, as opposed to 10
   seconds).

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
   2.  Purpose and Scope
   3.  Effect of POV on the Loss Metric
     3.1.  Loss Threshold
       3.1.1.  Network Characterization
       3.1.2.  Application Performance
     3.2.  Errored Packet Designation
     3.3.  Causes of Lost Packets
     3.4.  Summary for Loss
   4.  Effect of POV on the Delay Metric
     4.1.  Treatment of Lost Packets
       4.1.1.  Application Performance
       4.1.2.  Network Characterization
       4.1.3.  Delay Variation
       4.1.4.  Reordering
     4.2.  Preferred Statistics
     4.3.  Summary for Delay
   5.  Test Streams and Sample Size
     5.1.  Test Stream Characteristics
     5.2.  Sample Size
   6.  Reporting Results
     6.1.  Overview of Metric Statistics
     6.2.  Long-Term Reporting Considerations
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements
1.  Introduction

   When designing measurements of IP networks and presenting the
   results, knowledge of the audience is a key consideration.  To
   present a useful and relevant portrait of network conditions, one
   must answer the following question:

   "How will the results be used?"

   There are two main audience categories:

   1.  Network Characterization - describes conditions in an IP
       network for quality assurance, troubleshooting, modeling, etc.
       This point of view looks inward, toward the network, and the
       consumer intends to act on conditions there.

   2.  Application Performance Estimation - describes the network
       conditions in a way that facilitates determining effects on
       user applications, and ultimately on the users themselves.
       This point of view looks outward, toward the user(s), accepting
       the network as-is.  This consumer intends to estimate a
       network-dependent aspect of performance, or to design some
       aspect of an application's accommodation of the network.
       (These are *not* application metrics; they are defined at the
       IP layer.)

   This memo considers how these different points of view affect both
   the measurement design (parameters and options of the metrics) and
   the statistics reported to serve each audience's needs.

   The IPPM framework [RFC2330] and other RFCs describing IPPM metrics
   provide the background for this memo.

2.  Purpose and Scope

   The purpose of this memo is to clearly delineate two points of view
   (POV) for using measurements, and to describe their effects on test
   design, including the selection of metric parameters and the
   reporting of results.

   The current scope of this memo is primarily limited to the design
   and reporting of the loss and delay metrics [RFC2680] [RFC2679],
   but it also discusses the delay variation and reordering metrics
   where applicable.  Sampling, or the design of the active packet
   stream that is the basis for the measurements, is also discussed.

3.  Effect of POV on the Loss Metric

   This section describes the ways in which the Loss metric can be
   tuned to reflect the preferences of the two audience categories, or
   different POV.  The waiting time to declare a packet lost, or loss
   threshold, is one area where there would appear to be a difference,
   but the ability to post-process the results may resolve it.

3.1.  Loss Threshold

   RFC 2680 [RFC2680] defines the concept of a waiting time for
   packets to arrive, beyond which they are declared lost.  The text
   of the RFC declines to recommend a value, instead saying that "good
   engineering, including an understanding of packet lifetimes, will
   be needed in practice."  Later, in the methodology, the RFC gives
   reasons for waiting "a reasonable period of time", and leaves the
   definition of "reasonable" intentionally vague.

3.1.1.  Network Characterization

   Practical measurement experience has shown that unusual network
   circumstances can cause long delays.  One such circumstance is when
   routing loops form during IGP re-convergence following a failure or
   a drastic link cost change.  Packets will loop between two routers
   until new routes are installed, or until the IPv4 Time-to-Live
   (TTL) field (or the IPv6 Hop Limit) decrements to zero.  Very long
   delays on the order of several seconds have been measured [Casner]
   [Cia03].

   Therefore, network characterization activities prefer a long
   waiting time in order to distinguish these events from other causes
   of loss (such as packet discard at a full queue, or tail drop).
   This way, the metric design helps to distinguish more reliably
   between packets that might yet arrive and those that are no longer
   traversing the network.

   It is possible to calculate a worst-case waiting time, assuming
   that a routing loop is the cause.  We model the path between Source
   and Destination as a series of delays in links (t) and queues (q),
   as these two are the dominant contributors to delay.  The normal
   path delay across n hops without encountering a loop, D, is

      D = t_0 + Sum(i=1..n) (t_i + q_i)

                     Figure 1: Normal Path Delay

   and the time spent in a loop of L hops is

      R = C * Sum(i..i+L-1) (t_i + q_i),  with C_max = (TTL - n) / L

               Figure 2: Delay due to Rotations in a Loop

   where C is the number of times a packet circles the loop.

   If we take the delays of all links and queues as 100 ms each, the
   TTL = 255, the number of hops n = 5, and the hops in the loop
   L = 4, then

      D = 1.1 sec and R ~= 50 sec, and D + R ~= 51.1 seconds

   We note that link delays of 100 ms would span most continents, and
   a constant queue length of 100 ms is also very generous.  When a
   loop occurs, it is almost certain to be resolved in 10 seconds or
   less.  The value calculated above is an upper limit for almost any
   realistic circumstance.
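   As an illustration, the following Python sketch reproduces the
   arithmetic above (the function name and default values are ours,
   chosen to match the example in this memo):

      # Worst-case waiting time when a routing loop is the cause of
      # delay.  Per-hop link delay t and queue delay q are 100 ms
      # each, per the example above.
      def worst_case_wait(t=0.1, q=0.1, ttl=255, n=5, loop_hops=4):
          """Return (D, R, D + R) in seconds for the loop model."""
          d = t + n * (t + q)              # D = t_0 + Sum(t_i + q_i)
          c_max = (ttl - n) / loop_hops    # rotations before TTL hits 0
          r = c_max * loop_hops * (t + q)  # delay accumulated in loop
          return d, r, d + r

      d, r, total = worst_case_wait()
      print("D = %.1f s, R = %.1f s, D + R = %.1f s" % (d, r, total))
      # -> D = 1.1 s, R = 50.0 s, D + R = 51.1 s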
   A waiting time threshold parameter, dT, set consistent with this
   calculation would not truncate the delay distribution (which could
   change its mathematical properties), because the packets that might
   yet arrive have been given sufficient time to traverse the network.

   It is worth noting that packets that are stored and deliberately
   forwarded at a much later time constitute a replay attack on the
   measurement system, and are beyond the scope of normal performance
   reporting.

3.1.2.  Application Performance

   Fortunately, application performance estimation activities are not
   adversely affected by the estimated worst-case transfer time.
   Although the designer's tendency might be to set the Loss Threshold
   at a value equivalent to a particular application's threshold, this
   specific threshold can be applied when post-processing the
   measurements.  A shorter waiting time can be enforced by locating
   packets with delays longer than the application's threshold, and
   re-designating such packets as lost.  Thus, the measurement system
   can use a single loss threshold and support both application and
   network performance POVs simultaneously.
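   A minimal post-processing sketch in Python follows (the record
   format is an assumption: one delay value per packet, with None
   marking a packet already lost within the long waiting time dT):

      def apply_app_threshold(delays, app_threshold):
          """Re-designate packets as lost when their delay exceeds an
          application's threshold; the delay of lost packets remains
          undefined (None)."""
          return [None if d is None or d > app_threshold else d
                  for d in delays]

      measured = [0.021, 0.180, None, 0.043]  # hypothetical singletons
      print(apply_app_threshold(measured, app_threshold=0.150))
      # -> [0.021, None, None, 0.043]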
3.2.  Errored Packet Designation

   RFC 2680 designates packets that arrive containing errors as lost
   packets.  Many packets that are corrupted by bit errors are
   discarded within the network and do not reach their intended
   destination.

   This is consistent with applications that would check the payload
   integrity at higher layers and discard the packet.  However, some
   applications prefer to deal with errored payloads on their own, and
   for them even a corrupted payload is better than no packet at all.

   To address this possibility, and to make network characterization
   more complete, it is recommended to distinguish between packets
   that do not arrive (lost) and errored packets that arrive
   (conditionally lost).

3.3.  Causes of Lost Packets

   Although many measurement systems use a waiting time to determine
   whether a packet is lost or not, most of the waiting is in vain.
   The packets are no longer traversing the network, and have not
   reached their destination.

   There are many causes of packet loss, including:

   1.  Queue drop, or discard

   2.  Corruption of the IP header, or other essential header
       information

   3.  TTL expiration (or use of a TTL value that is too small)

   4.  Link or router failure

   After waiting a sufficient time, packet loss can probably be
   attributed to one of these causes.

3.4.  Summary for Loss

   Given that measurement post-processing is possible (and even
   encouraged in the definitions of IPPM metrics), measurements of
   loss can easily serve both points of view:

   o  Use a long waiting time to serve network characterization, and
      revise results for specific application delay thresholds as
      needed.

   o  Distinguish between errored packets and lost packets when
      possible to aid network characterization, and combine the
      results for application performance if appropriate.
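   The bookkeeping suggested by this summary can be sketched as
   follows (the record format is an assumption: each arriving packet
   carries a delay and a payload-integrity flag, and packets that
   never arrive within the waiting time have no record at all):

      def classify(sent_ids, received):
          """received: dict mapping packet id -> (delay_s, payload_ok).
          Returns ids grouped as lost, errored, and arrived intact."""
          lost, errored, arrived = [], [], []
          for pid in sent_ids:
              if pid not in received:
                  lost.append(pid)       # no arrival within dT
              elif not received[pid][1]:
                  errored.append(pid)    # arrived, but corrupted
              else:
                  arrived.append(pid)
          return lost, errored, arrived

   Network characterization would report the lost and errored groups
   separately, while an application performance report might count
   both as lost.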
4.  Effect of POV on the Delay Metric

   This section describes the ways in which the Delay metric can be
   tuned to reflect the preferences of the two consumer categories, or
   different POV.

4.1.  Treatment of Lost Packets

   The Delay Metric [RFC2679] specifies the treatment of packets that
   do not successfully traverse the network: their delay is undefined.

      ">>The *Type-P-One-way-Delay* from Src to Dst at T is undefined
      (informally, infinite)<< means that Src sent the first bit of a
      Type-P packet to Dst at wire-time T and that Dst did not receive
      that packet."

   It is an accepted, but informal, practice to assign infinite delay
   to lost packets.  We next look at how these two different
   treatments align with the needs of measurement consumers who wish
   to characterize networks or estimate application performance.  We
   also look at the way that lost packets have been treated in other
   metrics: delay variation and reordering.

4.1.1.  Application Performance

   Applications need to perform different functions, depending on
   whether or not each packet arrives within some finite tolerance.
   In other words, a receiver's packet processing takes one of two
   directions (or "forks" in the road):

   o  Packets that arrive within expected tolerance are handled by
      processes that remove headers, restore smooth delivery timing
      (as in a de-jitter buffer), restore sending order, check for
      errors in payloads, and perform many other operations.

   o  Packets that do not arrive when expected spawn other processes
      that attempt recovery from the apparent loss, such as
      retransmission requests, loss concealment, or forward error
      correction to replace the missing packet.

   So, it is important to maintain a distinction between packets that
   actually arrive and those that do not.  Therefore, it is preferable
   to leave the delay of lost packets undefined, and to characterize
   the delay distribution as a conditional distribution (conditioned
   on arrival).

4.1.2.  Network Characterization

   In this discussion, we assume that both loss and delay metrics will
   be reported for network characterization (at least).

   Assume that packets that do not arrive are reported as lost,
   usually as a fraction of all sent packets.  If these lost packets
   are assigned undefined delay, and we report statistics on the Delay
   distribution conditioned on the event of packet arrival (within the
   Loss waiting time threshold), then the network's inability to
   deliver packets in a timely way is captured only in the loss
   metric.  We can say that the Delay and Loss metrics are orthogonal,
   in that they convey non-overlapping information about the network
   under test.

   However, if we assign infinite delay to all lost packets, then:

   o  The delay metric results are influenced both by packets that
      arrive and by those that do not.

   o  The delay singleton and the loss singleton do not appear to be
      orthogonal (Delay is finite when Loss=0, and Delay is infinite
      when Loss=1).

   o  The network is penalized in both the loss and delay metrics,
      effectively double-counting the lost packets.

   As further evidence of overlap, consider the Cumulative
   Distribution Function (CDF) of Delay when the value positive
   infinity is assigned to all lost packets.  Figure 3 shows a CDF
   where a small fraction of packets is lost.

    1 |- - - - - - - - - - - - - - - - - -+
      |                                   |
      |        _..----''''''''''''''''''''
      |      ,-''
      |    ,'
      |   /        Mass at
      |  /         +infinity
      | /          = fraction
      ||             lost
      |/
    0 |___________________________________
      0             Delay               +oo

       Figure 3: Cumulative Distribution Function for Delay when
                          Loss = +Infinity

   We note that a Delay CDF that is conditioned on packet arrival
   would not exhibit this apparent overlap with loss.

   Although infinity is a familiar mathematical concept, it is, in the
   opinion of the authors, somewhat disconcerting to see any
   time-related metric reported as infinity.  Questions are bound to
   arise, and they tend to detract from the goal of informing the
   consumer with a performance report.

4.1.3.  Delay Variation

   [RFC3393] excludes lost packets from samples, effectively assigning
   an undefined delay to packets that do not arrive in a reasonable
   time.  Section 4.1 of that RFC describes this specification and its
   rationale (ipdv = inter-packet delay variation in the quote below):

      "The treatment of lost packets as having "infinite" or
      "undefined" delay complicates the derivation of statistics for
      ipdv.  Specifically, when packets in the measurement sequence
      are lost, simple statistics such as sample mean cannot be
      computed.  One possible approach to handling this problem is to
      reduce the event space by conditioning.  That is, we consider
      conditional statistics; namely we estimate the mean ipdv (or
      other derivative statistic) conditioned on the event that
      selected packet pairs arrive at the destination (within the
      given timeout).  While this itself is not without problems (what
      happens, for example, when every other packet is lost), it
      offers a way to make some (valid) statements about ipdv, at the
      same time avoiding events with undefined outcomes."
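   The conditioning described in the quote can be sketched in a few
   lines of Python (None marks an undefined, i.e., lost, delay; the
   pair selection used here, consecutive packets, is one common
   choice and is our assumption):

      def conditional_ipdv(delays):
          """ipdv values for consecutive pairs that both arrived."""
          return [b - a for a, b in zip(delays, delays[1:])
                  if a is not None and b is not None]

      sample = [0.050, 0.052, None, 0.049, 0.055]
      ipdv = conditional_ipdv(sample)
      print(ipdv, sum(ipdv) / len(ipdv))   # conditional mean ipdv
      # pairs that include the lost packet are simply excluded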
4.1.4.  Reordering

   [RFC4737] defines metrics that are based on the evaluation of
   packet arrival order, and it includes a waiting time to declare a
   packet lost (to exclude such packets from further processing).

   If lost packets were assigned a delay value (such as infinity), the
   reordering metric would declare them all reordered, because their
   sequence numbers would surely be less than the "Next Expected"
   threshold when (or if) they arrive.  This practice would fail to
   maintain orthogonality between the reordering metric and the loss
   metric.  Confusion can be avoided by designating the delay of
   non-arriving packets as undefined, and reserving delay values only
   for packets that arrive within a sufficiently long waiting time.

4.2.  Preferred Statistics

   Today, in network characterization, the sample mean is one
   statistic that is almost ubiquitously reported.  It is easily
   computed and understood by virtually everyone in this audience
   category.  Also, the sample is usually filtered on packet arrival,
   so that the mean is based on a conditional distribution.

   The median is another statistic that summarizes a distribution,
   with somewhat different properties from the sample mean.  The
   median is stable in distributions with or without a few outliers.
   However, the median's stability prevents it from indicating when a
   large fraction of the distribution changes value; 50% or more of
   the values would need to change for the median to capture the
   change.

   Both the median and the sample mean have difficulty with bimodal
   distributions.  The median will reside in only one of the modes,
   and the mean may not lie in either mode range.  For this and other
   reasons, additional statistics such as the minimum, maximum, and
   95%-ile have value when summarizing a distribution.

   When both the sample mean and the median are available, a
   comparison will sometimes be informative, because these two
   statistics are equal only when the delay distribution is perfectly
   symmetrical.

   Also, these statistics are generally useful from the Application
   Performance POV, so there is a common set that should satisfy both
   audiences.
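   For illustration, a Python sketch of such a summary over the
   conditional delay distribution follows (lost packets are marked
   None and excluded; the simple percentile index is our choice, not
   a specification):

      from statistics import mean, median

      def delay_summary(delays):
          """Summary statistics of delays (seconds); None marks loss."""
          arrived = sorted(d for d in delays if d is not None)
          p95 = arrived[max(0, int(0.95 * len(arrived)) - 1)]
          return {"loss_ratio": 1 - len(arrived) / len(delays),
                  "mean": mean(arrived),      # conditional sample mean
                  "median": median(arrived),  # conditional median
                  "min": arrived[0], "max": arrived[-1], "p95": p95}

   A large gap between the mean and the median signals an
   asymmetrical (for example, heavy-tailed) delay distribution.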
4.3.  Summary for Delay

   From the perspectives of:

   1.  application/receiver analysis, where subsequent processing
       depends on whether the packet arrives or times out,

   2.  straightforward network characterization without
       double-counting defects, and

   3.  consistency with the Delay Variation and Reordering metric
       definitions,

   the most efficient practice is to distinguish between truly lost
   and delayed packets with a sufficiently long waiting time, and to
   designate the delay of non-arriving packets as undefined.

5.  Test Streams and Sample Size

   This section discusses two key aspects of measurement that are
   sometimes omitted from the report: the description of the test
   stream on which the measurements are based, and the sample size.

5.1.  Test Stream Characteristics

   Network Characterization has traditionally used Poisson-distributed
   inter-packet spacing, as this provides an unbiased sample.  The
   average inter-packet spacing may be selected to allow observation
   of specific network phenomena.  Other test streams are designed to
   sample some property of the network, such as the presence of
   congestion, link bandwidth, or packet reordering.

   If measuring a network in order to make inferences about
   application or receiver performance, there are usually efficiencies
   to be gained from a test stream with characteristics similar to
   those of the sender.  In some cases, it is essential to synthesize
   the sender stream, as with Bulk Transfer Capacity estimates.  In
   other cases, it may be sufficient to sample with a "known bias",
   e.g., a Periodic stream to estimate real-time application
   performance.

5.2.  Sample Size

   Sample size is directly related to the accuracy of the results, and
   it plays a critical role in the report.  Even if only the sample
   size (in terms of the number of packets) is given for each value or
   summary statistic, it imparts a notion of the confidence in the
   result.

   In practice, the sample size will be selected taking both
   statistical and practical factors into account.  Among these
   factors are:

   1.  the estimated variability of the quantity being measured,

   2.  the desired confidence in the result (although this may depend
       on an assumption about the underlying distribution of the
       measured quantity),

   3.  the effects of active measurement traffic on user traffic,

   4.  etc.

   A sample size may sometimes be referred to as "large".  This is a
   relative and qualitative term.  It is preferable to describe what
   one is attempting to achieve with the sample.  For example, stating
   an implication may be helpful: this sample is large enough that a
   single outlying value at ten times the "typical" sample mean (the
   mean without the outlying value) would influence the mean by no
   more than X.
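   To make that implication concrete (a worked example of ours, not
   part of any metric definition): if n packets have mean m before
   the outlier, adding one value of 10*m gives a new mean of
   m*(n + 10)/(n + 1), a shift of 9*m/(n + 1).  In Python:

      def outlier_influence(n):
          """Relative shift in the mean caused by one outlier at
          ten times the typical mean of n samples."""
          return 9.0 / (n + 1)

      for n in (100, 1000, 10000):
          print(n, "%.2f%%" % (100 * outlier_influence(n)))
      # -> 100 8.91%   1000 0.90%   10000 0.09%

   Under these assumptions, a claim of X = 1% implies a sample of
   roughly 900 packets or more.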
6.  Reporting Results

   This section gives an overview of recommendations, followed by
   additional considerations for reporting results over the "long
   term".

6.1.  Overview of Metric Statistics

   This section gives an overview of reporting recommendations for the
   loss, delay, and delay variation metrics, based on the discussion
   and conclusions of the preceding sections.

   The minimal report on measurements MUST include both Loss and Delay
   metrics.

   For Packet Loss, the loss ratio defined in [RFC2680] is a
   sufficient starting point, especially with the guidance above for
   setting the loss threshold waiting time.  We have calculated a
   waiting time that should be sufficient to differentiate between
   packets that are truly lost and those with long finite delays under
   general measurement circumstances: 51 seconds.  Knowledge of
   specific conditions can help to reduce this threshold, but 51
   seconds is considered manageable in practice.

   We note that a loss ratio calculated according to [Y.1540] would
   exclude errored packets from the numerator.  In practice, the
   difference between these two loss metrics is small, if any,
   depending on whether the last link prior to the destination
   contributes errored packets.

   For Packet Delay, we recommend providing both the mean delay and
   the median delay, with lost packets designated undefined (as
   permitted by [RFC2679]).  Both statistics are based on a
   conditional distribution, where the condition is packet arrival
   prior to a waiting time dT, and dT has been set to take maximum
   packet lifetimes into account, as discussed above.  Using a long dT
   helps to ensure that delay distributions are not truncated.

   For Packet Delay Variation (PDV), the minimum delay of the
   conditional distribution should be used as the reference delay for
   computing PDV according to [Y.1540] or [RFC3393].  A useful value
   to report is a pseudo-range of delay variation, calculated as the
   difference between a high percentile of delay and the minimum
   delay.  For example, the 99.9%-ile minus the minimum will give a
   value that can be compared with the objectives in [Y.1541].

6.2.  Long-Term Reporting Considerations

   [I-D.ietf-ippm-reporting] describes methods to conduct measurements
   and report the results on a near-immediate time scale (10 seconds,
   which we consider to be "short-term").

   Measurement intervals and reporting intervals need not be the same
   length.  Sometimes, the user is only concerned with the performance
   levels achieved over a relatively long interval of time (e.g.,
   days, weeks, or months, as opposed to 10 seconds).  However, there
   can be risks involved with running a measurement continuously over
   a long period without recording intermediate results:

   o  A temporary power failure may cause the loss of all results to
      date.

   o  Measurement system timing synchronization signals may experience
      a temporary outage, causing subsets of measurements to be in
      error or invalid.

   o  Maintenance may be necessary on the measurement system, or on
      its connectivity to the network under test.

   For these and other reasons, such as

   o  the constraint to collect measurements on intervals similar to
      user session length, or

   o  the dual use of measurements in monitoring activities where
      results are needed on intervals of a few minutes,

   there is value in conducting measurements on intervals that are
   much shorter than the reporting interval.

   There are several approaches for aggregating a series of
   measurement results over time in order to make a statement about
   the longer reporting interval.  One approach requires the storage
   of all metric singletons collected throughout the reporting
   interval, even though the measurement interval stops and starts
   many times.

   Another approach is described in [I-D.ietf-ippm-framework-compagg]
   as "temporal aggregation".  This approach would estimate the
   results for the reporting interval based on many individual
   measurement interval statistics (results) alone.  The result would
   ideally appear in the same form as though a continuous measurement
   had been conducted.  A memo addressing the details of temporal
   aggregation has yet to be prepared.

   Yet another approach requires a numerical objective for the metric,
   and the results of each measurement interval are compared with the
   objective.  Every measurement interval where the results meet the
   objective contributes to the fraction of time with performance as
   specified.  When the reporting interval contains many measurement
   intervals, it is possible to present the results as "metric A was
   less than or equal to objective X during Y% of the time", as
   sketched below.

   NOTE that numerical thresholds are not set in IETF performance work
   and are explicitly excluded from the IPPM charter.
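   The objective-comparison approach can be sketched in Python (the
   objective and the interval results here are purely hypothetical,
   since numerical thresholds are outside IETF scope, as noted above):

      def fraction_meeting_objective(interval_results, objective):
          """Fraction of measurement intervals whose result met the
          objective (metric value less than or equal to objective)."""
          met = sum(1 for r in interval_results if r <= objective)
          return met / len(interval_results)

      loss_per_interval = [0.0, 0.001, 0.03, 0.0, 0.002]  # hypothetical
      print("%.0f%% of time" %
            (100 * fraction_meeting_objective(loss_per_interval, 0.01)))
      # -> 80% of time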
7.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as
   an RFC.

8.  Security Considerations

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656].

9.  Acknowledgements

   The authors would like to thank Phil Chimento for his suggestion to
   employ conditional distributions for Delay, and Steve Konish Jr.
   for his careful review and suggestions.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Delay Metric for IPPM", RFC 2679, September 1999.

   [RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Packet Loss Metric for IPPM", RFC 2680, September 1999.

   [RFC3393]  Demichelis, C. and P. Chimento, "IP Packet Delay
              Variation Metric for IP Performance Metrics (IPPM)",
              RFC 3393, November 2002.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and
              M. Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics",
              RFC 4737, November 2006.

10.2.  Informative References

   [Casner]   Casner, S., "A Fine-Grained View of High Performance
              Networking", NANOG 22 Conf.,
              http://www.nanog.org/mtg-0105/agenda.html, May 20-22,
              2001.

   [Cia03]    Ciavattone, L., Morton, A., and G. Ramachandran,
              "Standardized Active Measurements on a Tier 1 IP
              Backbone", IEEE Communications Magazine, pp. 90-97,
              June 2003.

   [I-D.ietf-ippm-framework-compagg]
              Morton, A., "Framework for Metric Composition",
              draft-ietf-ippm-framework-compagg-05 (work in progress),
              November 2007.

   [I-D.ietf-ippm-reporting]
              Shalunov, S., "Reporting IP Performance Metrics to
              Users", draft-ietf-ippm-reporting-01 (work in progress),
              October 2006.

   [Y.1540]   ITU-T Recommendation Y.1540, "Internet protocol data
              communication service - IP packet transfer and
              availability performance parameters", December 2002.

   [Y.1541]   ITU-T Recommendation Y.1541, "Network Performance
              Objectives for IP-Based Services", February 2006.

Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/

   Gomathi Ramachandran
   AT&T Labs
   200 Laurel Avenue South
   Middletown, New Jersey 07748
   USA

   Phone: +1 732 420 2353
   Email: gomathi@att.com

   Ganga Maguluri
   AT&T Labs
   200 Laurel Avenue
   Middletown, New Jersey 07748
   USA

   Phone: +1 732 420 2486
   Email: gmaguluri@att.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.
Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).