Network Working Group                                          A. Morton
Internet-Draft                                           G. Ramachandran
Intended status: Informational                               G. Maguluri
Expires: November 20, 2008                                     AT&T Labs
                                                             May 19, 2008

              Reporting Metrics: Different Points of View
                 draft-morton-ippm-reporting-metrics-05

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on November 20, 2008.

Abstract

   Consumers of IP network performance metrics have many different uses
   in mind.  This memo categorizes the different audience points of
   view.  It describes how the categories affect the selection of
   metric parameters and options when seeking information that serves
   their needs.  The memo then proceeds to discuss "long-term"
   reporting considerations (e.g., days, weeks, or months, as opposed
   to 10 seconds).

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
   2.  Purpose and Scope
   3.  Effect of POV on the Loss Metric
       3.1.  Loss Threshold
             3.1.1.  Network Characterization
             3.1.2.  Application Performance
       3.2.  Errored Packet Designation
       3.3.  Causes of Lost Packets
       3.4.  Summary for Loss
   4.  Effect of POV on the Delay Metric
       4.1.  Treatment of Lost Packets
             4.1.1.  Application Performance
             4.1.2.  Network Characterization
             4.1.3.  Delay Variation
             4.1.4.  Reordering
       4.2.  Preferred Statistics
       4.3.  Summary for Delay
   5.  Test Streams and Sample Size
       5.1.  Test Stream Characteristics
       5.2.  Sample Size
   6.  Reporting Results
       6.1.  Overview of Metric Statistics
       6.2.  Long-Term Reporting Considerations
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. References
       10.1.  Normative References
       10.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   When designing measurements of IP networks and presenting the
   results, knowledge of the audience is a key consideration.
   To present a useful and relevant portrait of network conditions, one
   must answer the following question:

   "How will the results be used?"

   There are two main audience categories:

   1.  Network Characterization - describes conditions in an IP network
       for quality assurance, troubleshooting, modeling, etc.  This
       point-of-view looks inward, toward the network, and the consumer
       intends to act there.

   2.  Application Performance Estimation - describes the network
       conditions in a way that facilitates determining the effects on
       user applications, and ultimately on the users themselves.  This
       point-of-view looks outward, toward the user(s), accepting the
       network as-is.  This consumer intends to estimate a network-
       dependent aspect of performance, or to design some aspect of an
       application's accommodation of the network.  (These are *not*
       application metrics; they are defined at the IP layer.)

   This memo considers how these different points-of-view affect both
   the measurement design (parameters and options of the metrics) and
   the statistics reported to serve their needs.

   The IPPM framework [RFC2330] and other RFCs describing IPPM metrics
   provide the background for this memo.

2.  Purpose and Scope

   The purpose of this memo is to clearly delineate two points-of-view
   (POV) for using measurements, and to describe their effects on the
   test design, including the selection of metric parameters and the
   reporting of results.

   The current scope of this memo is primarily limited to the design
   and reporting of the loss and delay metrics [RFC2680] [RFC2679],
   but it also discusses the delay variation and reordering metrics
   where applicable.  Sampling, or the design of the active packet
   stream that is the basis for the measurements, is also discussed.

3.  Effect of POV on the Loss Metric

   This section describes the ways in which the Loss metric can be
   tuned to reflect the preferences of the two audience categories, or
   different POV.  The waiting time to declare a packet lost, or loss
   threshold, is one area where the two POVs would appear to differ,
   but the ability to post-process the results may resolve the
   difference.

3.1.  Loss Threshold

   RFC 2680 [RFC2680] defines the concept of a waiting time for packets
   to arrive, beyond which they are declared lost.  The text of the RFC
   declines to recommend a value, instead saying that "good
   engineering, including an understanding of packet lifetimes, will be
   needed in practice."  Later, in the methodology, the RFC gives
   reasons for waiting "a reasonable period of time", and leaves the
   definition of "reasonable" intentionally vague.

3.1.1.  Network Characterization

   Practical measurement experience has shown that unusual network
   circumstances can cause long delays.  One such circumstance is when
   routing loops form during IGP re-convergence following a failure or
   drastic link cost change.  Packets will loop between two routers
   until new routes are installed, or until the IPv4 Time-to-Live (TTL)
   field (or the IPv6 Hop Limit) decrements to zero.  Very long delays
   on the order of several seconds have been measured [Casner] [Cia03].

   Therefore, network characterization activities prefer a long waiting
   time in order to distinguish these events from other causes of loss
   (such as packet discard at a full queue, or tail drop).
   This way, the metric design helps to distinguish more reliably
   between packets that might yet arrive, and those that are no longer
   traversing the network.

   It is possible to calculate a worst-case waiting time, assuming that
   a routing loop is the cause.  We model the path between Source and
   Destination as a series of delays in links (t) and queues (q), as
   these two are the dominant contributors to delay.  The normal path
   delay across n hops without encountering a loop, D, is:

      D = t_0 + Sum[i=1..n] (t_i + q_i)

                      Figure 1: Normal Path Delay

   and the time spent in a loop with L hops is:

      R = C * Sum[j=i..i+L-1] (t_j + q_j),  with C = (TTL_max - n) / L

               Figure 2: Delay due to Rotations in a Loop

   where C is the number of times a packet circles the loop, and
   TTL_max is the initial value of the TTL field (or Hop Limit).

   If we take the delays of all links and queues as 100ms each, the
   initial TTL=255, the number of hops n=5, and the hops in the loop
   L=4, then

      D = 1.1 sec and R ~= 50 sec, and D + R ~= 51.1 seconds

   We note that link delays of 100ms would span most continents, and a
   constant queue length of 100ms is also very generous.  When a loop
   occurs, it is almost certain to be resolved in 10 seconds or less.
   The value calculated above is an upper limit for almost any
   realistic circumstance.
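
   As a cross-check of this arithmetic, the short Python sketch below
   reproduces the example values.  The function names and argument
   defaults are ours and purely illustrative; only the formulas are
   taken from Figures 1 and 2.

      # Worst-case waiting time sketch (illustrative names only).

      def normal_path_delay(t, q, n, t0=0.100):
          # Figure 1: D = t_0 + Sum[i=1..n](t_i + q_i), here with
          # equal per-hop link delay t and queue delay q.
          return t0 + n * (t + q)

      def loop_delay(t, q, n, loop_hops, ttl_max=255):
          # Figure 2: R = C * Sum over the loop's L hops, where
          # C = (TTL_max - n) / L is the number of rotations.
          c = (ttl_max - n) / loop_hops
          return c * loop_hops * (t + q)

      t = q = 0.100                            # 100ms links and queues
      d = normal_path_delay(t, q, n=5)         # 1.1 s
      r = loop_delay(t, q, n=5, loop_hops=4)   # ~50 s
      print(f"D = {d:.1f} s, R = {r:.1f} s, D + R = {d + r:.1f} s")
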
   A waiting time threshold parameter, dT, set consistent with this
   calculation would not truncate the delay distribution (possibly
   causing a change in its mathematical properties), because the
   packets that might arrive have been given sufficient time to
   traverse the network.

   It is worth noting that packets that are stored and deliberately
   forwarded at a much later time constitute a replay attack on the
   measurement system, and are beyond the scope of normal performance
   reporting.

3.1.2.  Application Performance

   Fortunately, application performance estimation activities are not
   adversely affected by the estimated worst-case transfer time.
   Although the designer's tendency might be to set the Loss Threshold
   at a value equivalent to a particular application's threshold, this
   specific threshold can be applied when post-processing the
   measurements.  A shorter waiting time can be enforced by locating
   packets with delays longer than the application's threshold, and
   re-designating such packets as lost.  Thus, the measurement system
   can use a single loss threshold and support both application and
   network performance POVs simultaneously.
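
   A minimal sketch of this post-processing step follows, assuming
   per-packet delay records in which the value None marks a packet
   that never arrived within the long loss threshold dT; the function
   name and data layout are illustrative, not part of any IPPM metric
   definition.

      def apply_app_threshold(delays, app_threshold):
          # Re-designate packets that arrived later than the
          # application's threshold as lost; None = never received.
          arrived = [d for d in delays
                     if d is not None and d <= app_threshold]
          lost = len(delays) - len(arrived)
          return arrived, lost

      # Measured once with dT ~= 51 s, re-evaluated for a 100 ms
      # application threshold:
      sample = [0.020, 0.035, None, 2.500, 0.041]
      arrived, lost = apply_app_threshold(sample, app_threshold=0.100)
      # arrived = [0.020, 0.035, 0.041]; lost = 2 (one true loss, plus
      # the 2.5 s packet re-designated as lost)

   The same stored sample can be re-evaluated for as many application
   thresholds as needed, without repeating the measurement.
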
3.2.  Errored Packet Designation

   RFC 2680 designates packets that arrive containing errors as lost
   packets.  Many packets that are corrupted by bit errors are
   discarded within the network and do not reach their intended
   destination.

   This is consistent with applications that would check the payload
   integrity at higher layers and discard the packet.  However, some
   applications prefer to deal with errored payloads on their own, and
   for them even a corrupted payload is better than no packet at all.

   To address this possibility, and to make network characterization
   more complete, it is recommended to distinguish between packets that
   do not arrive (lost) and errored packets that arrive (conditionally
   lost).

3.3.  Causes of Lost Packets

   Although many measurement systems use a waiting time to determine if
   a packet is lost or not, most of the waiting is in vain.  The
   packets are no longer traversing the network, and have not reached
   their destination.

   There are many causes of packet loss, including:

   1.  Queue drop, or discard

   2.  Corruption of the IP header, or other essential header
       information

   3.  TTL expiration (or use of a TTL value that is too small)

   4.  Link or router failure

   After waiting sufficient time, packet loss can probably be
   attributed to one of these causes.

3.4.  Summary for Loss

   Given that measurement post-processing is possible (even encouraged
   in the definitions of IPPM metrics), measurements of loss can easily
   serve both points of view:

   o  Use a long waiting time to serve network characterization, and
      revise the results for specific application delay thresholds as
      needed.

   o  Distinguish between errored packets and lost packets when
      possible to aid network characterization, and combine the results
      for application performance if appropriate.

4.  Effect of POV on the Delay Metric

   This section describes the ways in which the Delay metric can be
   tuned to reflect the preferences of the two consumer categories, or
   different POV.

4.1.  Treatment of Lost Packets

   The Delay Metric [RFC2679] specifies the treatment of packets that
   do not successfully traverse the network: their delay is undefined.

   " >>The *Type-P-One-way-Delay* from Src to Dst at T is undefined
   (informally, infinite)<< means that Src sent the first bit of a
   Type-P packet to Dst at wire-time T and that Dst did not receive
   that packet."

   It is an accepted, but informal, practice to assign infinite delay
   to lost packets.  We next look at how these two different treatments
   align with the needs of measurement consumers who wish to
   characterize networks or estimate application performance.  Also, we
   look at the way that lost packets have been treated in other
   metrics: delay variation and reordering.

4.1.1.  Application Performance

   Applications need to perform different functions, dependent on
   whether or not each packet arrives within some finite tolerance.  In
   other words, a receiver's packet processing takes one of two
   directions (or "forks" in the road):

   o  Packets that arrive within the expected tolerance are handled by
      processes that remove headers, restore smooth delivery timing (as
      in a de-jitter buffer), restore sending order, check for errors
      in payloads, and perform many other operations.

   o  Packets that do not arrive when expected spawn other processes
      that attempt recovery from the apparent loss, such as
      retransmission requests, loss concealment, or forward error
      correction to replace the missing packet.

   So, it is important to maintain a distinction between packets that
   actually arrive and those that do not.  Therefore, it is preferable
   to leave the delay of lost packets undefined, and to characterize
   the delay distribution as a conditional distribution (conditioned on
   arrival).

4.1.2.  Network Characterization

   In this discussion, we assume that both loss and delay metrics will
   be reported for network characterization (at least).

   Assume that packets that do not arrive are reported as Lost, usually
   as a fraction of all sent packets.  If these lost packets are
   assigned undefined delay, then the network's inability to deliver
   them (in a timely way) is captured only in the loss metric when we
   report statistics on the Delay distribution conditioned on the event
   of packet arrival (within the Loss waiting time threshold).  We can
   say that the Delay and Loss metrics are orthogonal, in that they
   convey non-overlapping information about the network under test.

   However, if we assign infinite delay to all lost packets, then:

   o  The delay metric results are influenced both by packets that
      arrive and by those that do not.

   o  The delay singleton and the loss singleton do not appear to be
      orthogonal (Delay is finite when Loss=0, Delay is infinite when
      Loss=1).

   o  The network is penalized in both the loss and delay metrics,
      effectively double-counting the lost packets.

   As further evidence of overlap, consider the Cumulative Distribution
   Function (CDF) of Delay when the value positive infinity is assigned
   to all lost packets.  Figure 3 shows a CDF where a small fraction of
   packets are lost.

      1 | - - - - - - - - - - - - - - - - - -+
        |                                    |
        |            _..----''''''''''''''''''
        |         ,-''
        |       ,'
        |      /              Mass at
        |     /               +infinity
        |    /                = fraction
        |   |                 lost
        |  /
      0 |_____________________________________
        0               Delay               +oo

       Figure 3: Cumulative Distribution Function for Delay when
                            Loss = +Infinity

   We note that a Delay CDF that is conditioned on packet arrival would
   not exhibit this apparent overlap with loss.

   Although infinity is a familiar mathematical concept, it is somewhat
   disconcerting to see any time-related metric reported as infinity,
   in the opinion of the authors.  Questions are bound to arise, and
   they tend to detract from the goal of informing the consumer with a
   performance report.

4.1.3.  Delay Variation

   [RFC3393] excludes lost packets from samples, effectively assigning
   an undefined delay to packets that do not arrive in a reasonable
   time.  Section 4.1 of [RFC3393] describes this specification and its
   rationale (ipdv = IP packet delay variation in the quote below).

   "The treatment of lost packets as having "infinite" or "undefined"
   delay complicates the derivation of statistics for ipdv.
   Specifically, when packets in the measurement sequence are lost,
   simple statistics such as sample mean cannot be computed.  One
   possible approach to handling this problem is to reduce the event
   space by conditioning.  That is, we consider conditional statistics;
   namely we estimate the mean ipdv (or other derivative statistic)
   conditioned on the event that selected packet pairs arrive at the
   destination (within the given timeout).  While this itself is not
   without problems (what happens, for example, when every other packet
   is lost), it offers a way to make some (valid) statements about
   ipdv, at the same time avoiding events with undefined outcomes."

4.1.4.  Reordering

   [RFC4737] defines metrics that are based on evaluation of packet
   arrival order, and it includes a waiting time to declare a packet
   lost (to exclude lost packets from further processing).

   If lost packets were assigned a delay value (infinity), then the
   reordering metric would declare any such packets reordered, because
   their sequence numbers will surely be less than the "Next Expected"
   threshold when (or if) they arrive.  But this practice would fail to
   maintain orthogonality between the reordering metric and the loss
   metric.  Confusion can be avoided by designating the delay of non-
   arriving packets as undefined, and by reserving delay values only
   for packets that arrive within a sufficiently long waiting time.

4.2.  Preferred Statistics

   Today in network characterization, the sample mean is one statistic
   that is almost ubiquitously reported.  It is easily computed and
   understood by virtually everyone in this audience category.  Also,
   the sample is usually filtered on packet arrival, so that the mean
   is based on a conditional distribution.

   The median is another statistic that summarizes a distribution,
   having somewhat different properties from the sample mean.  The
   median is stable in distributions with or without a few outliers.
   However, the median's stability prevents it from indicating when a
   large fraction of the distribution changes value: 50% or more of the
   values would need to change for the median to capture the change.

   Both the median and the sample mean have difficulty with bimodal
   distributions.  The median will reside in only one of the modes, and
   the mean may not lie in either mode range.  For this and other
   reasons, additional statistics such as the minimum, maximum, and
   95%-ile have value when summarizing a distribution.

   When both the sample mean and median are available, a comparison
   will sometimes be informative, because these two statistics are
   equal only when the delay distribution is perfectly symmetrical.

   Also, these statistics are generally useful from the Application
   Performance POV, so there is a common set that should satisfy both
   audiences.
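
   The sketch below illustrates computing this common set of statistics
   on the conditional distribution.  The sample values are invented,
   and lost packets are carried as None (undefined) rather than
   infinity, per Section 4.1.

      from statistics import mean, median

      delays = [0.021, 0.023, 0.022, None, 0.180, 0.024, None, 0.022]
      arrived = [d for d in delays if d is not None]  # condition on
                                                      # arrival

      cond_mean = mean(arrived)     # pulled upward by the 0.180 s value
      cond_median = median(arrived) # stable near the body of the sample
      lo, hi = min(arrived), max(arrived)

      # A mean well above the median signals an asymmetric (e.g.,
      # heavy-tailed) delay distribution; the two are equal only when
      # the distribution is perfectly symmetrical.
      print(cond_mean, cond_median, lo, hi)
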
4.3.  Summary for Delay

   From the perspectives of:

   1.  application/receiver analysis, where subsequent processing
       depends on whether the packet arrives or times out,

   2.  straightforward network characterization without double-counting
       defects, and

   3.  consistency with the Delay Variation and Reordering metric
       definitions,

   the most efficient practice is to distinguish between truly lost and
   delayed packets with a sufficiently long waiting time, and to
   designate the delay of non-arriving packets as undefined.

5.  Test Streams and Sample Size

   This section discusses two key aspects of measurement that are
   sometimes omitted from the report: the description of the test
   stream on which the measurements are based, and the sample size.

5.1.  Test Stream Characteristics

   Network Characterization has traditionally used Poisson-distributed
   inter-packet spacing, as this provides an unbiased sample.  The
   average inter-packet spacing may be selected to allow observation of
   specific network phenomena.  Other test streams are designed to
   sample some property of the network, such as the presence of
   congestion, link bandwidth, or packet reordering.

   If the goal is measuring a network in order to make inferences about
   application or receiver performance, then there are usually
   efficiencies derived from a test stream whose characteristics are
   similar to the sender's stream.  In some cases, it is essential to
   synthesize the sender's stream, as with Bulk Transfer Capacity
   estimates.  In other cases, it may be sufficient to sample with a
   "known bias", e.g., a Periodic stream to estimate real-time
   application performance.

5.2.  Sample Size

   Sample size is directly related to the accuracy of the results, and
   it plays a critical role in the report.  Even if only the sample
   size (in terms of number of packets) is given for each value or
   summary statistic, it imparts a notion of the confidence in the
   result.

   In practice, the sample size will be selected taking both
   statistical and practical factors into account.  Among these factors
   are:

   1.  The estimated variability of the quantity being measured

   2.  The desired confidence in the result (although this may depend
       on assumptions about the underlying distribution of the measured
       quantity)

   3.  The effects of active measurement traffic on user traffic

   A sample size may sometimes be referred to as "large".  This is a
   relative and qualitative term.  It is preferable to describe what
   one is attempting to achieve with the sample.  For example, stating
   an implication may be helpful: this sample is large enough such that
   a single outlying value at ten times the "typical" sample mean (the
   mean without the outlying value) would influence the mean by no more
   than X.
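
   The influence of a single outlier on the sample mean can be bounded
   as a simple function of the sample size, which makes the statement
   above concrete.  The following sketch uses our own notation; the
   factor of ten and the candidate sample sizes are assumptions for
   illustration.

      def outlier_influence(n, factor=10.0):
          # One of n values equals factor * m, where m is the "typical"
          # mean of the other n - 1 values.  The new mean is
          # ((n - 1) * m + factor * m) / n, so the relative shift from
          # m is (factor - 1) / n.
          return (factor - 1.0) / n

      for n in (100, 1000, 10000):
          print(n, outlier_influence(n))  # 9%, 0.9%, and 0.09% shifts

   Read in reverse, a target influence X translates directly into a
   minimum sample size, n >= (factor - 1) / X.
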
6.  Reporting Results

   This section gives an overview of recommendations, followed by
   additional considerations for reporting results in the "long-term".

6.1.  Overview of Metric Statistics

   This section gives an overview of reporting recommendations for the
   loss, delay, and delay variation metrics, based on the discussion
   and conclusions of the preceding sections.

   The minimal report on measurements MUST include both Loss and Delay
   Metrics.

   For Packet Loss, the loss ratio defined in [RFC2680] is a sufficient
   starting point, especially the guidance for setting the loss
   threshold waiting time.  We calculated a waiting time above (51
   seconds) that should be sufficient to differentiate between packets
   that are truly lost and those that have long finite delays, under
   general measurement circumstances.  Knowledge of specific conditions
   can help to reduce this threshold, but 51 seconds is considered to
   be manageable in practice.

   We note that a loss ratio calculated according to [Y.1540] would
   exclude errored packets from the numerator.  In practice, the
   difference between these two loss metrics is small, if any,
   depending on whether the last link prior to the destination
   contributes errored packets.

   For Packet Delay, we recommend providing both the mean delay and the
   median delay, with lost packets designated undefined (as permitted
   by [RFC2679]).  Both statistics are based on a conditional
   distribution, where the condition is packet arrival prior to a
   waiting time dT, and dT has been set to take maximum packet
   lifetimes into account, as discussed above.  Using a long dT helps
   to ensure that the delay distributions are not truncated.

   For Packet Delay Variation (PDV), the minimum delay of the
   conditional distribution should be used as the reference delay for
   computing PDV according to [Y.1540] or [RFC3393].  A useful value to
   report is a pseudo range of delay variation, based on calculating
   the difference between a high percentile of delay and the minimum
   delay.  For example, the 99.9%-ile minus the minimum will give a
   value that can be compared with objectives in [Y.1541].
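
   A sketch of the pseudo-range computation appears below.  The
   nearest-rank percentile helper is our simplification; a real report
   would follow the [RFC3393] and [Y.1540] definitions, and a
   99.9%-ile is only meaningful when the sample contains well over a
   thousand packets.

      import math

      def percentile(values, p):
          # Nearest-rank p-quantile (0 < p <= 1).
          vals = sorted(values)
          k = max(1, math.ceil(p * len(vals)))
          return vals[k - 1]

      def pdv_pseudo_range(arrived_delays, p=0.999):
          # High percentile of the conditional delay distribution minus
          # its minimum (the reference delay).
          return percentile(arrived_delays, p) - min(arrived_delays)

      # Example comparison against a hypothetical 50 ms objective:
      # pdv_pseudo_range(arrived) <= 0.050
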
6.2.  Long-Term Reporting Considerations

   [I-D.ietf-ippm-reporting] describes methods to conduct measurements
   and report the results on a near-immediate time scale (10 seconds,
   which we consider to be "short-term").

   Measurement intervals and reporting intervals need not be the same
   length.  Sometimes, the user is only concerned with the performance
   levels achieved over a relatively long interval of time (e.g., days,
   weeks, or months, as opposed to 10 seconds).  However, there can be
   risks involved with running a measurement continuously over a long
   period without recording intermediate results:

   o  A temporary power failure may cause the loss of all the results
      to date.

   o  Measurement system timing synchronization signals may experience
      a temporary outage, causing sub-sets of measurements to be in
      error or invalid.

   o  Maintenance may be necessary on the measurement system, or on its
      connectivity to the network under test.

   For these and other reasons, such as

   o  the constraint to collect measurements on intervals similar to
      user session length, or

   o  the dual use of measurements in monitoring activities where
      results are needed on an interval of a few minutes,

   there is value in conducting measurements on intervals that are much
   shorter than the reporting interval.

   There are several approaches for aggregating a series of measurement
   results over time in order to make a statement about the longer
   reporting interval.  One approach requires the storage of all metric
   singletons collected throughout the reporting interval, even though
   the measurement interval stops and starts many times.

   Another approach is described in [I-D.ietf-ippm-framework-compagg]
   as "temporal aggregation".  This approach would estimate the results
   for the reporting interval based on many individual measurement
   interval statistics (results) alone.  The result would ideally
   appear in the same form as though a continuous measurement had been
   conducted.  A memo to address the details of temporal aggregation is
   yet to be prepared.

   Yet another approach requires a numerical objective for the metric,
   and the results of each measurement interval are compared with the
   objective.  Every measurement interval where the results meet the
   objective contributes to the fraction of time with performance as
   specified.  When the reporting interval contains many measurement
   intervals, it is possible to present the results as "metric A was
   less than or equal to objective X during Y% of the time", as
   illustrated in the sketch below.
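
   The sketch below illustrates this approach with assumed inputs: a
   list of per-interval summary results (here, loss ratios from
   5-minute measurement intervals) and a hypothetical objective.
   Neither the interval length nor the objective value comes from IETF
   work (see the NOTE that follows).

      def fraction_meeting_objective(interval_results, objective):
          # Fraction of measurement intervals whose summary result met
          # the numerical objective (result <= objective).
          met = sum(1 for r in interval_results if r <= objective)
          return met / len(interval_results)

      # A day of 5-minute loss-ratio results (288 intervals), invented:
      day = [0.0, 0.0005, 0.002, 0.0, 0.0001] * 57 + [0.0, 0.0, 0.0]
      y = fraction_meeting_objective(day, objective=0.001)
      print(f"loss ratio <= objective during {y:.1%} of the time")
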
   NOTE that numerical thresholds are not set in IETF performance work
   and are explicitly excluded from the IPPM charter.

7.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.

8.  Security Considerations

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656].

9.  Acknowledgements

   The authors would like to thank Phil Chimento for his suggestion to
   employ conditional distributions for Delay, and Steve Konish Jr. for
   his careful review and suggestions.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Delay Metric for IPPM", RFC 2679, September 1999.

   [RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Packet Loss Metric for IPPM", RFC 2680, September 1999.

   [RFC3393]  Demichelis, C. and P. Chimento, "IP Packet Delay
              Variation Metric for IP Performance Metrics (IPPM)",
              RFC 3393, November 2002.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              November 2006.

10.2.  Informative References

   [Casner]   "A Fine-Grained View of High Performance Networking",
              NANOG 22 Conf., May 20-22, 2001,
              <http://www.nanog.org/mtg-0105/agenda.html>.

   [Cia03]    "Standardized Active Measurements on a Tier 1 IP
              Backbone", IEEE Communications Magazine, pp. 90-97,
              June 2003.

   [I-D.ietf-ippm-framework-compagg]
              Morton, A., "Framework for Metric Composition",
              draft-ietf-ippm-framework-compagg-06 (work in progress),
              February 2008.

   [I-D.ietf-ippm-reporting]
              Shalunov, S., "Reporting IP Performance Metrics to
              Users", draft-ietf-ippm-reporting-01 (work in progress),
              October 2006.

   [Y.1540]   ITU-T Recommendation Y.1540, "Internet protocol data
              communication service - IP packet transfer and
              availability performance parameters", December 2002.

   [Y.1541]   ITU-T Recommendation Y.1541, "Network Performance
              Objectives for IP-Based Services", February 2006.

Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ  07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/

   Gomathi Ramachandran
   AT&T Labs
   200 Laurel Avenue South
   Middletown, New Jersey  07748
   USA

   Phone: +1 732 420 2353
   Email: gomathi@att.com

   Ganga Maguluri
   AT&T Labs
   200 Laurel Avenue
   Middletown, New Jersey  07748
   USA

   Phone: +1 732 420 2486
   Email: gmaguluri@att.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.