Network Working Group                                          A. Morton
Internet-Draft                                           G. Ramachandran
Intended status: Informational                               G. Maguluri
Expires: January 8, 2010                                        AT&T Labs
                                                             July 7, 2009

              Reporting Metrics: Different Points of View
                 draft-morton-ippm-reporting-metrics-07

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.  This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008.  The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on January 8, 2010.

Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info).  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

Consumers of IP network performance metrics have many different uses in mind.  This memo categorizes the different audience points of view.  It describes how the categories affect the selection of metric parameters and options when seeking information that serves their needs.  The memo then discusses "long-term" reporting considerations (e.g., days, weeks, or months, as opposed to 10 seconds).

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1.  Introduction
2.  Purpose and Scope
3.  Effect of POV on the Loss Metric
    3.1.  Loss Threshold
          3.1.1.  Network Characterization
          3.1.2.  Application Performance
    3.2.  Errored Packet Designation
    3.3.  Causes of Lost Packets
    3.4.  Summary for Loss
4.  Effect of POV on the Delay Metric
    4.1.  Treatment of Lost Packets
          4.1.1.  Application Performance
          4.1.2.  Network Characterization
          4.1.3.  Delay Variation
          4.1.4.  Reordering
    4.2.  Preferred Statistics
    4.3.  Summary for Delay
5.  Test Streams and Sample Size
    5.1.  Test Stream Characteristics
    5.2.  Sample Size
6.  Reporting Results
    6.1.  Overview of Metric Statistics
    6.2.  Long-Term Reporting Considerations
7.  IANA Considerations
8.  Security Considerations
9.  Acknowledgements
10. References
    10.1.  Normative References
    10.2.  Informative References
Authors' Addresses

1.  Introduction

When designing measurements of IP networks and presenting the results, knowledge of the audience is a key consideration.  To present a useful and relevant portrait of network conditions, one must answer the following question:

"How will the results be used?"

There are two main audience categories:

1.  Network Characterization - describes conditions in an IP network for quality assurance, troubleshooting, modeling, etc.  This point of view looks inward, toward the network, and the consumer intends to act on the network itself.

2.  Application Performance Estimation - describes the network conditions in a way that facilitates determining effects on user applications, and ultimately on the users themselves.  This point of view looks outward, toward the user(s), accepting the network as-is.  This consumer intends to estimate a network-dependent aspect of performance, or to design some aspect of an application's accommodation of the network.  (These are *not* application metrics; they are defined at the IP layer.)

This memo considers how these different points of view affect both the measurement design (parameters and options of the metrics) and the statistics reported to serve their needs.

The IPPM framework [RFC2330] and other RFCs describing IPPM metrics provide a background for this memo.

2.  Purpose and Scope

The purpose of this memo is to clearly delineate two points of view (POV) for using measurements, and to describe their effects on the test design, including the selection of metric parameters and the reporting of results.

The current scope of this memo is primarily limited to the design and reporting of the loss and delay metrics [RFC2680] [RFC2679], but it also discusses the delay variation and reordering metrics where applicable.  Sampling, or the design of the active packet stream that is the basis for the measurements, is also discussed.

3.  Effect of POV on the Loss Metric

This section describes the ways in which the Loss metric can be tuned to reflect the preferences of the two audience categories, or different POV.  The waiting time to declare a packet lost, or loss threshold, is one area where there would appear to be a difference, but the ability to post-process the results may resolve it.

3.1.  Loss Threshold

RFC 2680 [RFC2680] defines the concept of a waiting time for packets to arrive, beyond which they are declared lost.  The text of the RFC declines to recommend a value, instead saying that "good engineering, including an understanding of packet lifetimes, will be needed in practice."
Later, in the methodology, the RFC gives reasons for waiting "a reasonable period of time", while leaving the definition of "reasonable" intentionally vague.

3.1.1.  Network Characterization

Practical measurement experience has shown that unusual network circumstances can cause long delays.  One such circumstance is when routing loops form during IGP re-convergence following a failure or drastic link cost change.  Packets will loop between two routers until new routes are installed, or until the IPv4 Time-to-Live (TTL) field (or the IPv6 Hop Limit) decrements to zero.  Very long delays on the order of several seconds have been measured [Casner] [Cia03].

Therefore, network characterization activities prefer a long waiting time in order to distinguish these events from other causes of loss (such as packet discard at a full queue, or tail drop).  This way, the metric design helps to distinguish more reliably between packets that might yet arrive, and those that are no longer traversing the network.

It is possible to calculate a worst-case waiting time, assuming that a routing loop is the cause.  We model the path between Source and Destination as a series of delays in links (t) and queues (q), as these two are the dominant contributors to delay.  The normal path delay across n hops without encountering a loop, D, is

                 n
                ---
                \
     D = t   +   >   t  + q
          0     /     i    i
                ---
               i = 1

                       Figure 1: Normal Path Delay

and the time spent in the loop with L hops is

          i + L-1
            ---
            \                             (TTL - n)
     R = C   >   t  + q     where  C    = ---------
            /     i    i            max       L
            ---
             i

               Figure 2: Delay due to Rotations in a Loop

and where C is the number of times a packet circles the loop.

If we take the delays of all links and queues as 100 ms each, the TTL = 255, the number of hops n = 5, and the hops in the loop L = 4, then

     D = 1.1 sec and R ~= 50 sec, and D + R ~= 51.1 seconds

We note that link delays of 100 ms would span most continents, and a constant queue length of 100 ms is also very generous.  When a loop occurs, it is almost certain to be resolved in 10 seconds or less.  The value calculated above is an upper limit for almost any realistic circumstance.

A waiting time threshold parameter, dT, set consistent with this calculation would not truncate the delay distribution (possibly causing a change in its mathematical properties), because the packets that might arrive have been given sufficient time to traverse the network.

It is worth noting that packets that are stored and deliberately forwarded at a much later time constitute a replay attack on the measurement system, and are beyond the scope of normal performance reporting.

3.1.2.  Application Performance

Fortunately, application performance estimation activities are not adversely affected by the estimated worst-case transfer time.  Although the designer's tendency might be to set the Loss Threshold at a value equivalent to a particular application's threshold, this specific threshold can be applied when post-processing the measurements.  A shorter waiting time can be enforced by locating packets with delays longer than the application's threshold, and re-designating such packets as lost.  Thus, the measurement system can use a single loss threshold and support both application and network performance POVs simultaneously.
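The worst-case waiting time of Section 3.1.1 and the post-processing described above can be illustrated with a minimal sketch in Python.  The uniform 100 ms link and queue delays and the 150 ms application threshold are illustrative assumptions for this sketch only; none of the names or values below come from the referenced RFCs.

   # Minimal sketch (illustrative only): verifies the worst-case waiting
   # time from Section 3.1.1 and applies the post-processing of 3.1.2.

   def normal_path_delay(t, q, n):
       # Figure 1: D = t_0 + sum over i=1..n of (t_i + q_i),
       # here with every link delay t and every queue delay q equal.
       return t + n * (t + q)

   def worst_case_loop_delay(t, q, ttl, n, loop_len):
       # Figure 2: the packet circles the loop C_max = (TTL - n) / L
       # times, accumulating (t_i + q_i) at each of the L hops.
       c_max = (ttl - n) / loop_len
       return c_max * loop_len * (t + q)

   t = q = 0.100                   # 100 ms per link and per queue (generous)
   ttl, n_hops, loop_hops = 255, 5, 4

   D = normal_path_delay(t, q, n_hops)
   R = worst_case_loop_delay(t, q, ttl, n_hops, loop_hops)
   print(round(D, 3), round(R, 3), round(D + R, 3))   # -> 1.1 50.0 51.1 s

   # Post-processing for an application POV: with a single long loss
   # threshold dT, packets can later be re-designated as lost against a
   # shorter, application-specific threshold.  'delays' holds one-way
   # delays (seconds) for arriving packets, or None for packets never
   # received within dT.
   APP_THRESHOLD = 0.150           # hypothetical application limit, 150 ms
   delays = [0.021, 0.019, None, 0.380, 0.024]

   app_view = [None if (d is None or d > APP_THRESHOLD) else d
               for d in delays]
   app_loss_ratio = app_view.count(None) / len(app_view)
   print(app_loss_ratio)           # 0.4: the 380 ms packet is lost for
                                   # this application

Under these assumptions the sketch reproduces the 51.1 second bound above, and shows how one stored sample can serve both points of view.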
3.2.  Errored Packet Designation

RFC 2680 designates packets that arrive containing errors as lost packets.  Many packets that are corrupted by bit errors are discarded within the network and do not reach their intended destination.

This is consistent with applications that would check the payload integrity at higher layers, and discard the packet.  However, some applications prefer to deal with errored payloads on their own, and even a corrupted payload is better than no packet at all.

To address this possibility, and to make network characterization more complete, it is recommended to distinguish between packets that do not arrive (lost) and errored packets that arrive (conditionally lost).

3.3.  Causes of Lost Packets

Although many measurement systems use a waiting time to determine if a packet is lost or not, most of the waiting is in vain.  The packets are no longer traversing the network, and have not reached their destination.

There are many causes of packet loss, including:

1.  Queue drop, or discard

2.  Corruption of the IP header, or other essential header information

3.  TTL expiration (or use of a TTL value that is too small)

4.  Link or router failure

After waiting sufficient time, packet loss can probably be attributed to one of these causes.

3.4.  Summary for Loss

Given that measurement post-processing is possible (even encouraged in the definitions of IPPM metrics), measurements of loss can easily serve both points of view:

o  Use a long waiting time to serve network characterization, and revise the results for specific application delay thresholds as needed.

o  Distinguish between errored packets and lost packets when possible to aid network characterization, and combine the results for application performance if appropriate (see the sketch below).
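The dual accounting in the second bullet can be sketched briefly.  The record format and values below are invented for this illustration and are not defined by RFC 2680 or Y.1540.

   # Minimal sketch: keep lost and errored packets distinct, then combine
   # them as needed.  Each record is ('ok', delay), ('errored', delay),
   # or ('lost', None) for packets never received within the waiting time.

   records = [
       ('ok', 0.021), ('errored', 0.025), ('lost', None),
       ('ok', 0.019), ('ok', 0.023),
   ]

   sent = len(records)
   lost = sum(1 for outcome, _ in records if outcome == 'lost')
   errored = sum(1 for outcome, _ in records if outcome == 'errored')

   # RFC 2680 designates errored arrivals as lost packets:
   loss_ratio_errored_as_lost = (lost + errored) / sent    # 0.4
   # A ratio that excludes errored packets from the numerator (the
   # approach attributed to [Y.1540] in Section 6.1) keeps them visible
   # as a separate count:
   loss_ratio_lost_only = lost / sent                      # 0.2

   print(loss_ratio_errored_as_lost, loss_ratio_lost_only, errored)

Keeping the three outcomes separate allows either loss ratio to be produced in post-processing.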
4.  Effect of POV on the Delay Metric

This section describes the ways in which the Delay metric can be tuned to reflect the preferences of the two consumer categories, or different POV.

4.1.  Treatment of Lost Packets

The Delay Metric [RFC2679] specifies the treatment of packets that do not successfully traverse the network: their delay is undefined.

" >>The *Type-P-One-way-Delay* from Src to Dst at T is undefined (informally, infinite)<< means that Src sent the first bit of a Type-P packet to Dst at wire-time T and that Dst did not receive that packet."

It is an accepted, but informal, practice to assign infinite delay to lost packets.  We next look at how these two different treatments align with the needs of measurement consumers who wish to characterize networks or estimate application performance.  We also look at the way that lost packets have been treated in other metrics: delay variation and reordering.

4.1.1.  Application Performance

Applications need to perform different functions, dependent on whether or not each packet arrives within some finite tolerance.  In other words, a receiver's packet processing takes one of two directions (or "forks" in the road):

o  Packets that arrive within the expected tolerance are handled by processes that remove headers, restore smooth delivery timing (as in a de-jitter buffer), restore sending order, check for errors in payloads, and perform many other operations.

o  Packets that do not arrive when expected spawn other processes that attempt recovery from the apparent loss, such as retransmission requests, loss concealment, or forward error correction to replace the missing packet.

So, it is important to maintain a distinction between packets that actually arrive, and those that do not.  Therefore, it is preferable to leave the delay of lost packets undefined, and to characterize the delay distribution as a conditional distribution (conditioned on arrival).

4.1.2.  Network Characterization

In this discussion, we assume that both loss and delay metrics will be reported for network characterization (at least).

Assume that packets which do not arrive are reported as Lost, usually as a fraction of all sent packets.  If these lost packets are assigned undefined delay, then the network's inability to deliver them (in a timely way) is captured only in the loss metric when we report statistics on the Delay distribution conditioned on the event of packet arrival (within the Loss waiting time threshold).  We can say that the Delay and Loss metrics are orthogonal, in that they convey non-overlapping information about the network under test.

However, if we assign infinite delay to all lost packets, then:

o  The delay metric results are influenced both by packets that arrive and by those that do not.

o  The delay singleton and the loss singleton do not appear to be orthogonal (Delay is finite when Loss=0, Delay is infinite when Loss=1).

o  The network is penalized in both the loss and delay metrics, effectively double-counting the lost packets.

As further evidence of overlap, consider the Cumulative Distribution Function (CDF) of Delay when the value positive infinity is assigned to all lost packets.  Figure 3 shows a CDF where a small fraction of packets are lost.

     1 |- - - - - - - - - - - - - - - - - -+
       |                                   |
       |           _..----'''''''''''''''''
       |        ,-''
       |      ,'
       |     /         Mass at
       |    /          +infinity
       |   /           = fraction
       |  |              lost
       | /
     0 |_____________________________________
       0              Delay                +oo

          Figure 3: Cumulative Distribution Function for Delay
                          when Loss = +Infinity

We note that a Delay CDF that is conditioned on packet arrival would not exhibit this apparent overlap with loss.

Although infinity is a familiar mathematical concept, it is somewhat disconcerting to see any time-related metric reported as infinity, in the opinion of the authors.  Questions are bound to arise, and tend to detract from the goal of informing the consumer with a performance report.
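A small numerical illustration of this overlap may help.  The sketch below (Python, with invented delay values) contrasts the two treatments: leaving the delay of lost packets undefined and conditioning on arrival, versus assigning them positive infinity.

   # Minimal sketch (illustrative values only) contrasting the two
   # treatments of lost packets.  Delays are in seconds; None marks a
   # packet that did not arrive within the waiting time dT.
   import math
   import statistics

   observations = [0.020, 0.022, None, 0.021, 0.019, None, 0.023]

   sent = len(observations)
   arrived = [d for d in observations if d is not None]

   # Treatment 1: lost packets have undefined delay -> orthogonal metrics.
   loss_ratio = (sent - len(arrived)) / sent      # reported by loss metric
   conditional_mean = statistics.mean(arrived)    # delay metric, on arrival

   # Treatment 2: assign +infinity to lost packets -> overlapping metrics.
   padded = [math.inf if d is None else d for d in observations]
   unconditional_mean = sum(padded) / len(padded)

   print(round(loss_ratio, 3), round(conditional_mean, 3))   # 0.286 0.021
   print(unconditional_mean)      # inf: the lost packets are counted twice

With undefined delays, the loss ratio and the conditional mean each carry distinct information; with infinite delays, a single lost packet drives the mean (and the upper tail of the CDF) to infinity, duplicating what the loss metric already reports.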
4.1.3.  Delay Variation

[RFC3393] excludes lost packets from samples, effectively assigning an undefined delay to packets that do not arrive in a reasonable time.  Section 4.1 of [RFC3393] describes this specification and its rationale (ipdv = inter-packet delay variation in the quote below).

"The treatment of lost packets as having "infinite" or "undefined" delay complicates the derivation of statistics for ipdv.  Specifically, when packets in the measurement sequence are lost, simple statistics such as sample mean cannot be computed.  One possible approach to handling this problem is to reduce the event space by conditioning.  That is, we consider conditional statistics; namely we estimate the mean ipdv (or other derivative statistic) conditioned on the event that selected packet pairs arrive at the destination (within the given timeout).  While this itself is not without problems (what happens, for example, when every other packet is lost), it offers a way to make some (valid) statements about ipdv, at the same time avoiding events with undefined outcomes."

4.1.4.  Reordering

[RFC4737] defines metrics that are based on evaluation of packet arrival order, and include a waiting time to declare a packet lost (to exclude such packets from further processing).

If lost packets are assigned a delay value, then the reordering metric would declare any packets with infinite delay to be reordered, because their sequence numbers will surely be less than the "Next Expected" threshold when (or if) they arrive.  But this practice would fail to maintain orthogonality between the reordering metric and the loss metric.  Confusion can be avoided by designating the delay of non-arriving packets as undefined, and reserving delay values only for packets that arrive within a sufficiently long waiting time.

4.2.  Preferred Statistics

Today in network characterization, the sample mean is one statistic that is almost ubiquitously reported.  It is easily computed and understood by virtually everyone in this audience category.  Also, the sample is usually filtered on packet arrival, so that the mean is based on a conditional distribution.

The median is another statistic that summarizes a distribution, having somewhat different properties from the sample mean.  The median is stable in distributions with or without a few outliers.  However, the median's stability prevents it from indicating when a large fraction of the distribution changes value: 50% or more of the values would need to change for the median to capture the change.

Both the median and the sample mean have difficulty with bimodal distributions.  The median will reside in only one of the modes, and the mean may not lie in either mode range.  For this and other reasons, additional statistics such as the minimum, maximum, and 95th percentile have value when summarizing a distribution.

When both the sample mean and the median are available, a comparison will sometimes be informative: the two statistics are equal when the delay distribution is symmetrical, so a substantial difference between them indicates a skewed distribution.

Also, these statistics are generally useful from the Application Performance POV, so there is a common set that should satisfy both audiences.
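A brief sketch of this common set of statistics, computed on the conditional (arrived-only) sample, follows.  The delay values, the nearest-rank percentile method, and the mean-versus-median comparison are illustrative choices for this sketch, not recommendations taken from the referenced RFCs.

   # Minimal sketch: summary statistics of the conditional delay
   # distribution (packets that arrived within the waiting time), seconds.
   import statistics

   arrived_delays = sorted([0.020, 0.021, 0.019, 0.022, 0.160, 0.023,
                            0.021, 0.020, 0.024, 0.022])

   def percentile(sorted_values, p):
       # Simple nearest-rank percentile; real implementations differ in
       # how they interpolate between ranks.
       k = max(0, min(len(sorted_values) - 1,
                      round(p / 100 * len(sorted_values)) - 1))
       return sorted_values[k]

   summary = {
       "min": arrived_delays[0],
       "max": arrived_delays[-1],
       "mean": statistics.mean(arrived_delays),
       "median": statistics.median(arrived_delays),
       "95th percentile": percentile(arrived_delays, 95),
   }
   print(summary)

   # A mean well above the median (here, a single 160 ms value pulls the
   # mean to about 35 ms against a 21.5 ms median) signals a skewed or
   # possibly bimodal distribution that deserves more detail in the report.
   print("mean/median ratio:",
         round(summary["mean"] / summary["median"], 2))

Reporting the minimum, maximum, and a high percentile alongside the mean and median makes the shape of the conditional distribution visible to both audiences.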
4.3.  Summary for Delay

From the perspectives of:

1.  application/receiver analysis, where subsequent processing depends on whether the packet arrives or times out,

2.  straightforward network characterization without double-counting defects, and

3.  consistency with the Delay Variation and Reordering metric definitions,

the most efficient practice is to distinguish between truly lost and delayed packets with a sufficiently long waiting time, and to designate the delay of non-arriving packets as undefined.

5.  Test Streams and Sample Size

This section discusses two key aspects of measurement that are sometimes omitted from the report: the description of the test stream on which the measurements are based, and the sample size.

5.1.  Test Stream Characteristics

Network Characterization has traditionally used Poisson-distributed inter-packet spacing, as this provides an unbiased sample.  The average inter-packet spacing may be selected to allow observation of specific network phenomena.  Other test streams are designed to sample some property of the network, such as the presence of congestion, link bandwidth, or packet reordering.

If a network is measured in order to make inferences about application or receiver performance, then there are usually efficiencies derived from a test stream that has characteristics similar to the sender's stream.  In some cases, it is essential to synthesize the sender stream, as with Bulk Transfer Capacity estimates.  In other cases, it may be sufficient to sample with a "known bias", e.g., a Periodic stream to estimate real-time application performance.

5.2.  Sample Size

Sample size is directly related to the accuracy of the results, and plays a critical role in the report.  Even if only the sample size (in terms of number of packets) is given for each value or summary statistic, it imparts a notion of the confidence in the result.

In practice, the sample size will be selected taking both statistical and practical factors into account.  Among these factors are:

1.  The estimated variability of the quantity being measured.

2.  The desired confidence in the result (although this may depend on assumptions about the underlying distribution of the measured quantity).

3.  The effects of active measurement traffic on user traffic.

4.  Etc.

A sample size may sometimes be referred to as "large".  This is a relative and qualitative term.  It is preferable to describe what one is attempting to achieve with the sample.  For example, stating an implication may be helpful: this sample is large enough such that a single outlying value at ten times the "typical" sample mean (the mean without the outlying value) would influence the mean by no more than X (see the sketch below).
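The following sketch works out that implication under one simple reading (the outlier is included in the sample of n packets); the derivation and the Python code are illustrative only and are not part of this memo's recommendations.

   # Minimal sketch: how much can a single outlier at ten times the
   # "typical" sample mean move the overall mean of n packets?
   # With n - 1 values of mean m plus one value of 10 * m:
   #   new mean = ((n - 1) * m + 10 * m) / n = m * (n + 9) / n,
   # so the relative shift is 9 / n.

   def outlier_influence(n):
       """Relative shift of the sample mean caused by one outlier at 10x
       the mean of the remaining n - 1 values."""
       return 9.0 / n

   for n in (100, 1000, 10000):
       print(n, f"{outlier_influence(n):.2%}")
   # 100 -> 9.00%, 1000 -> 0.90%, 10000 -> 0.09%
   # A report could therefore state, for a 1000-packet sample, that a
   # single such outlier influences the mean by no more than about 1%
   # (X ~= 0.009).

Stating X this way ties the qualitative word "large" to a quantitative, checkable property of the sample.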
6.  Reporting Results

This section gives an overview of recommendations, followed by additional considerations for reporting results in the "long term".

6.1.  Overview of Metric Statistics

This section gives an overview of reporting recommendations for the loss, delay, and delay variation metrics, based on the discussion and conclusions of the preceding sections.

The minimal report on measurements MUST include both Loss and Delay Metrics.

For Packet Loss, the loss ratio defined in [RFC2680] is a sufficient starting point, especially the guidance for setting the loss threshold waiting time.  We have calculated a waiting time above (51 seconds) that should be sufficient to differentiate between packets that are truly lost and packets that have long finite delays under general measurement circumstances.  Knowledge of specific conditions can help to reduce this threshold, but 51 seconds is considered to be manageable in practice.

We note that a loss ratio calculated according to [Y.1540] would exclude errored packets from the numerator.  In practice, the difference between these two loss metrics is small, if any, depending on whether the last link prior to the destination contributes errored packets.

For Packet Delay, we recommend providing both the mean delay and the median delay, with lost packets designated undefined (as permitted by [RFC2679]).  Both statistics are based on a conditional distribution, and the condition is packet arrival prior to a waiting time dT, where dT has been set to take maximum packet lifetimes into account, as discussed above.  Using a long dT helps to ensure that delay distributions are not truncated.

For Packet Delay Variation (PDV), the minimum delay of the conditional distribution should be used as the reference delay for computing PDV according to [Y.1540] or [RFC3393].  A useful value to report is a pseudo-range of delay variation, calculated as the difference between a high percentile of delay and the minimum delay.  For example, the 99.9th percentile minus the minimum will give a value that can be compared with objectives in [Y.1541].

6.2.  Long-Term Reporting Considerations

[I-D.ietf-ippm-reporting] describes methods to conduct measurements and report the results on a near-immediate time scale (10 seconds, which we consider to be "short-term").

Measurement intervals and reporting intervals need not be the same length.  Sometimes, the user is only concerned with the performance levels achieved over a relatively long interval of time (e.g., days, weeks, or months, as opposed to 10 seconds).  However, there can be risks involved with running a measurement continuously over a long period without recording intermediate results:

o  A temporary power failure may cause the loss of all results to date.

o  Measurement system timing synchronization signals may experience a temporary outage, causing sub-sets of measurements to be in error or invalid.

o  Maintenance may be necessary on the measurement system, or on its connectivity to the network under test.

For these and other reasons, such as

o  the constraint to collect measurements on intervals similar to user session length, or

o  the dual use of measurements in monitoring activities where results are needed on a time scale of a few minutes,

there is value in conducting measurements on intervals that are much shorter than the reporting interval.

There are several approaches for aggregating a series of measurement results over time in order to make a statement about the longer reporting interval.  One approach requires the storage of all metric singletons collected throughout the reporting interval, even though the measurement interval stops and starts many times.

Another approach is described in [I-D.ietf-ippm-framework-compagg] as "temporal aggregation".  This approach would estimate the results for the reporting interval based on many individual measurement interval statistics (results) alone.  The result would ideally appear in the same form as though a continuous measurement had been conducted.  A memo to address the details of temporal aggregation is yet to be prepared.

Yet another approach requires a numerical objective for the metric, and the results of each measurement interval are compared with the objective.  Every measurement interval where the results meet the objective contributes to the fraction of time with performance as specified.  When the reporting interval contains many measurement intervals, it is possible to present the results as "metric A was less than or equal to objective X during Y% of the time" (see the sketch below).

NOTE that numerical thresholds are not set in IETF performance work and are explicitly excluded from the IPPM charter.
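A minimal sketch of that last aggregation approach follows; the objective value and per-interval results are purely illustrative, since the IPPM charter sets no numerical thresholds.

   # Minimal sketch: compare each measurement interval's result with a
   # numerical objective and report the fraction of time it was met.

   # Hypothetical per-interval results over the reporting interval, e.g.
   # the mean one-way delay (seconds) of each 5-minute measurement interval.
   interval_results = [0.021, 0.019, 0.035, 0.022, 0.020, 0.055, 0.021]
   OBJECTIVE = 0.030   # illustrative objective "X" for "metric A"

   met = sum(1 for result in interval_results if result <= OBJECTIVE)
   percent_of_time = 100.0 * met / len(interval_results)

   print(f"metric A was <= {OBJECTIVE} s during "
         f"{percent_of_time:.1f}% of the time")
   # -> metric A was <= 0.03 s during 71.4% of the time

Because only a per-interval pass/fail decision is retained, this approach also limits the exposure to the data-loss risks listed above.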
7.  IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

8.  Security Considerations

The security considerations that apply to any active measurement of live networks are relevant here as well.  See [RFC4656].

9.  Acknowledgements

The authors would like to thank Phil Chimento for his suggestion to employ conditional distributions for Delay, and Steve Konish Jr. for his careful review and suggestions.

10.  References

10.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, "Framework for IP Performance Metrics", RFC 2330, May 1998.

[RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Delay Metric for IPPM", RFC 2679, September 1999.

[RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Packet Loss Metric for IPPM", RFC 2680, September 1999.

[RFC3393]  Demichelis, C. and P. Chimento, "IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)", RFC 3393, November 2002.

[RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. Zekauskas, "A One-way Active Measurement Protocol (OWAMP)", RFC 4656, September 2006.

[RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, S., and J. Perser, "Packet Reordering Metrics", RFC 4737, November 2006.

10.2.  Informative References

[Casner]   "A Fine-Grained View of High Performance Networking", NANOG 22 Conf., http://www.nanog.org/mtg-0105/agenda.html, May 20-22, 2001.

[Cia03]    "Standardized Active Measurements on a Tier 1 IP Backbone", IEEE Communications Mag., pp. 90-97, June 2003.

[I-D.ietf-ippm-framework-compagg]
           Morton, A., "Framework for Metric Composition", draft-ietf-ippm-framework-compagg-08 (work in progress), June 2009.

[I-D.ietf-ippm-reporting]
           Shalunov, S. and M. Swany, "Reporting IP Performance Metrics to Users", draft-ietf-ippm-reporting-03 (work in progress), March 2009.

[Y.1540]   ITU-T Recommendation Y.1540, "Internet protocol data communication service - IP packet transfer and availability performance parameters", December 2002.

[Y.1541]   ITU-T Recommendation Y.1541, "Network Performance Objectives for IP-Based Services", February 2006.

Authors' Addresses

Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ  07748
USA

Phone: +1 732 420 1571
Fax:   +1 732 368 1192
Email: acmorton@att.com
URI:   http://home.comcast.net/~acmacm/

Gomathi Ramachandran
AT&T Labs
200 Laurel Avenue South
Middletown, New Jersey  07748
USA

Phone: +1 732 420 2353
Email: gomathi@att.com

Ganga Maguluri
AT&T Labs
200 Laurel Avenue
Middletown, New Jersey  07748
USA

Phone: +1 732 420 2486
Email: gmaguluri@att.com