Network Working Group                                           G. Almes
Internet Draft                                              S. Kalidindi
Expiration Date: March 1999                                 M. Zekauskas
                                             Advanced Network & Services
                                                             August 1998

                     A One-way Delay Metric for IPPM

1. Status of this Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months, and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   To view the entire list of current Internet-Drafts, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts shadow
   directories on ftp.is.co.za (Africa), nic.nordu.net (Northern
   Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific
   Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast).

   This memo provides information for the Internet community.  This memo
   does not specify an Internet standard of any kind.  Distribution of
   this memo is unlimited.

2. Introduction

   This memo defines a metric for one-way delay of packets across
   Internet paths.  It builds on notions introduced and discussed in the
   IPPM Framework document, RFC 2330 [1]; the reader is assumed to be
   familiar with that document.

   This memo is intended to be parallel in structure to a companion
   document for Packet Loss ("A Packet Loss Metric for IPPM") [2].
   The structure of the memo is as follows:

   +    A 'singleton' analytic metric, called Type-P-One-way-Delay, will
        be introduced to measure a single observation of one-way delay.

   +    Using this singleton metric, a 'sample', called Type-P-One-way-
        Delay-Poisson-Stream, will be introduced to measure a sequence
        of singleton delays measured at times taken from a Poisson
        process.

   +    Using this sample, several 'statistics' of the sample will be
        defined and discussed.

   This progression from singleton to sample to statistics, with clear
   separation among them, is important.

   Whenever a technical term from the IPPM Framework document is first
   used in this memo, it will be tagged with a trailing asterisk.  For
   example, "term*" indicates that "term" is defined in the Framework.

2.1. Motivation:

   One-way delay of a Type-P* packet from a source host* to a
   destination host is useful for several reasons:

   +    Some applications do not perform well (or at all) if end-to-end
        delay between hosts is large relative to some threshold value.

   +    Erratic variation in delay makes it difficult (or impossible) to
        support many real-time applications.

   +    The larger the value of delay, the more difficult it is for
        transport-layer protocols to sustain high bandwidths.

   +    The minimum value of this metric provides an indication of the
        delay due only to propagation and transmission delay.

   +    The minimum value of this metric provides an indication of the
        delay that will likely be experienced when the path* traversed
        is lightly loaded.

   +    Values of this metric above the minimum provide an indication of
        the congestion present in the path.

   It is outside the scope of this document to say precisely how delay
   metrics would be applied to specific problems.

2.2. General Issues Regarding Time

   Whenever a time (i.e., a moment in history) is mentioned here, it is
   understood to be measured in seconds (and fractions) relative to UTC.
   As described more fully in the Framework document, there are four
   distinct, but related notions of clock uncertainty:

   synchronization*

        measures the extent to which two clocks agree on what time it
        is.  For example, the clock on one host might be 5.4 msec ahead
        of the clock on a second host.

   accuracy*

        measures the extent to which a given clock agrees with UTC.  For
        example, the clock on a host might be 27.1 msec behind UTC.

   resolution*

        measures the precision of a given clock.  For example, the clock
        on an old Unix host might tick only once every 10 msec, and thus
        have a resolution of only 10 msec.

   skew*

        measures the change of accuracy, or of synchronization, with
        time.  For example, the clock on a given host might gain 1.3
        msec per hour and thus be 27.1 msec behind UTC at one time and
        only 25.8 msec an hour later.  In this case, we say that the
        clock of the given host has a skew of 1.3 msec per hour relative
        to UTC, and this threatens accuracy.  We might also speak of the
        skew of one clock relative to another clock, and this threatens
        synchronization.

3. A Singleton Definition for One-way Delay

3.1. Metric Name:

   Type-P-One-way-Delay

3.2. Metric Parameters:

   +    Src, the IP address of a host

   +    Dst, the IP address of a host

   +    T, a time

3.3. Metric Units:

   The value of a Type-P-One-way-Delay is either a non-negative real
   number, or an undefined (informally, infinite) number of seconds.

3.4. Definition:

   For a non-negative real number dT, >>the *Type-P-One-way-Delay* from
   Src to Dst at T is dT<< means that Src sent the first bit of a Type-P
   packet to Dst at wire-time* T and that Dst received the last bit of
   that packet at wire-time T+dT.
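   The two cases of this definition can be sketched as follows (an
   illustrative fragment, not a measurement methodology; the wire-times
   are assumed to be already known, and None models a packet that Dst
   did not receive):

```python
# Sketch of the Type-P-One-way-Delay singleton.  t_sent is the
# wire-time T at which Src sent the first bit; t_received is the
# wire-time at which Dst received the last bit, or None if Dst
# never received the packet.

def one_way_delay(t_sent, t_received):
    """Return dT in seconds, or None for undefined (informally, infinite)."""
    if t_received is None:
        return None                  # packet not received: delay is undefined
    dT = t_received - t_sent
    if dT < 0:
        raise ValueError("last bit cannot arrive before the first bit is sent")
    return dT
```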
   >>The *Type-P-One-way-Delay* from Src to Dst at T is undefined
   (informally, infinite)<< means that Src sent the first bit of a
   Type-P packet to Dst at wire-time T and that Dst did not receive that
   packet.

   Suggestions for what to report along with metric values appear in
   Section 3.8 after a discussion of the metric, methodologies for
   measuring the metric, and error analysis.

3.5. Discussion:

   Type-P-One-way-Delay is a relatively simple analytic metric, and one
   that we believe will afford effective methods of measurement.

   The following issues are likely to come up in practice:

   +    Since delay values will often be as low as the 100 usec to 10
        msec range, it will be important for Src and Dst to synchronize
        very closely.  GPS systems afford one way to achieve
        synchronization to within several 10s of usec.  Ordinary
        application of NTP may allow synchronization to within several
        msec, but this depends on the stability and symmetry of delay
        properties among those NTP agents used, and this delay is what
        we are trying to measure.  A combination of some GPS-based NTP
        servers and a conservatively designed and deployed set of other
        NTP servers should yield good results, but this is yet to be
        tested.

   +    A given methodology will have to include a way to determine
        whether a delay value is infinite or whether it is merely very
        large (and the packet is yet to arrive at Dst).  As noted by
        Mahdavi and Paxson [4], simple upper bounds (such as the 255
        seconds theoretical upper bound on the lifetimes of IP
        packets [5]) could be used, but good engineering, including an
        understanding of packet lifetimes, will be needed in practice.
        {Comment: Note that, for many applications of these metrics,
        the harm in treating a large delay as infinite might be zero or
        very small.  A TCP data packet, for example, that arrives only
        after several multiples of the RTT may as well have been lost.}

   +    If the packet is duplicated along the path (or paths) so that
        multiple non-corrupt copies arrive at the destination, then the
        packet is counted as received, and the first copy to arrive
        determines the packet's one-way delay.

   +    If the packet is fragmented and if, for whatever reason,
        reassembly does not occur, then the packet will be deemed lost.

3.6. Methodologies:

   As with other Type-P-* metrics, the detailed methodology will depend
   on the Type-P (e.g., protocol number, UDP/TCP port number, size,
   precedence).

   Generally, for a given Type-P, the methodology would proceed as
   follows:

   +    Arrange that Src and Dst are synchronized; that is, that they
        have clocks that are very closely synchronized with each other
        and each fairly close to the actual time.

   +    At the Src host, select Src and Dst IP addresses, and form a
        test packet of Type-P with these addresses.  Any 'padding'
        portion of the packet needed only to make the test packet a
        given size should be filled with randomized bits to avoid a
        situation in which the measured delay is lower than it would
        otherwise be due to compression techniques along the path.

   +    At the Dst host, arrange to receive the packet.

   +    At the Src host, place a timestamp in the prepared Type-P
        packet, and send it towards Dst.

   +    If the packet arrives within a reasonable period of time, take a
        timestamp as soon as possible upon the receipt of the packet.
        By subtracting the two timestamps, an estimate of one-way delay
        can be computed.  Error analysis of a given implementation of
        the method must take into account the closeness of
        synchronization between Src and Dst.
        If the delay between Src's timestamp and the actual sending of
        the packet is known, then the estimate could be adjusted by
        subtracting this amount; uncertainty in this value must be taken
        into account in error analysis.  Similarly, if the delay between
        the actual receipt of the packet and Dst's timestamp is known,
        then the estimate could be adjusted by subtracting this amount;
        uncertainty in this value must be taken into account in error
        analysis.  See the next section, "Errors and Uncertainties", for
        a more detailed discussion.

   +    If the packet fails to arrive within a reasonable period of
        time, the one-way delay is taken to be undefined (informally,
        infinite).  Note that the threshold of 'reasonable' is a
        parameter of the methodology.

   Issues such as the packet format, the means by which Dst knows when
   to expect the test packet, and the means by which Src and Dst are
   synchronized are outside the scope of this document.  {Comment: We
   plan to document elsewhere our own work in describing such more
   detailed implementation techniques and we encourage others to do so
   as well.}

3.7. Errors and Uncertainties:

   The description of any specific measurement method should include an
   accounting and analysis of various sources of error or uncertainty.
   The Framework document provides general guidance on this point, but
   we note here the following specifics related to delay metrics:

   +    Errors or uncertainties due to uncertainties in the clocks of
        the Src and Dst hosts.

   +    Errors or uncertainties due to the difference between 'wire
        time' and 'host time'.

   In addition, the loss threshold may affect the results.  Each of
   these is discussed in more detail below, along with a section
   ("Calibration") on accounting for these errors and uncertainties.

3.7.1. Errors or uncertainties related to Clocks

   The uncertainty in a measurement of one-way delay is related, in
   part, to uncertainties in the clocks of the Src and Dst hosts.  In
   the following, we refer to the clock used to measure when the packet
   was sent from Src as the source clock, we refer to the clock used to
   measure when the packet was received by Dst as the dest clock, we
   refer to the observed time when the packet was sent by the source
   clock as Tsource, and the observed time when the packet was received
   by the dest clock as Tdest.  Alluding to the notions of
   synchronization, accuracy, resolution, and skew mentioned in the
   Introduction, we note the following:

   +    Any error in the synchronization between the source clock and
        the dest clock will contribute to error in the delay
        measurement.  We say that the source clock and the dest clock
        have a synchronization error of Tsynch if the source clock is
        Tsynch ahead of the dest clock.  Thus, if we know the value of
        Tsynch exactly, we could correct for clock synchronization by
        adding Tsynch to the uncorrected value of Tdest-Tsource.

   +    The accuracy of a clock is important only in identifying the
        time at which a given delay was measured.  Accuracy, per se,
        has no importance to the accuracy of the measurement of delay.
        When computing delays, we are interested only in the
        differences between clock values, not the values themselves.

   +    The resolution of a clock adds to uncertainty about any time
        measured with it.  Thus, if the source clock has a resolution
        of 10 msec, then this adds 10 msec of uncertainty to any time
        value measured with it.  We will denote the resolution of the
        source clock and the dest clock as Rsource and Rdest,
        respectively.

   +    The skew of a clock is not so much an additional issue as it is
        a realization of the fact that Tsynch is itself a function of
        time.
        Thus, if we attempt to measure or to bound Tsynch, this needs
        to be done periodically.  Over some periods of time, this
        function can be approximated as a linear function plus some
        higher order terms; in these cases, one option is to use
        knowledge of the linear component to correct the clock.  Using
        this correction, the residual Tsynch is made smaller, but
        remains a source of uncertainty that must be accounted for.  We
        use the function Esynch(t) to denote an upper bound on the
        uncertainty in synchronization.  Thus, |Tsynch(t)| <= Esynch(t).

   Taking these items together, we note that the naive computation
   Tdest-Tsource will be off by Tsynch(t) +/- (|Rsource|+|Rdest|).
   Using the notion of Esynch(t), we note that these clock-related
   problems introduce a total uncertainty of
   Esynch(t)+|Rsource|+|Rdest|.  This estimate of total clock-related
   uncertainty should be included in the error/uncertainty analysis of
   any measurement implementation.

3.7.2. Errors or uncertainties related to Wire-time vs Host-time

   As we have defined one-way delay, we would like to measure the time
   between when the test packet leaves the network interface of Src and
   when it (completely) arrives at the network interface of Dst, and we
   refer to this as 'wire time'.  If the timings are themselves
   performed by software on Src and Dst, however, then this software
   can only directly measure the time between when Src grabs a
   timestamp just prior to sending the test packet and when Dst grabs a
   timestamp just after having received the test packet, and we refer
   to this as 'host time'.

   To the extent that the difference between wire time and host time is
   accurately known, this knowledge can be used to correct for host
   time measurements and the corrected value more accurately estimates
   the desired (wire time) metric.
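   Under the assumption that the host-vs-wire offsets at each end have
   been characterized, the correction just described might look like
   this (a sketch; d_send and d_recv are hypothetical calibration
   constants, not parameters defined by this memo):

```python
# Correct a host-time delay estimate toward the desired wire-time value.
# t_source, t_dest: host-time timestamps taken by Src and Dst.
# d_send: known delay between Src's timestamp and the actual sending.
# d_recv: known delay between the actual receipt and Dst's timestamp.

def corrected_delay(t_source, t_dest, d_send=0.0, d_recv=0.0):
    host_time_delay = t_dest - t_source
    # wire send time = t_source + d_send; wire arrival = t_dest - d_recv
    return host_time_delay - d_send - d_recv
```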
   To the extent, however, that the difference between wire time and
   host time is uncertain, this uncertainty must be accounted for in an
   analysis of a given measurement method.  We denote by Hsource an
   upper bound on the uncertainty in the difference between wire time
   and host time on the Src host, and similarly define Hdest for the
   Dst host.  We then note that these problems introduce a total
   uncertainty of Hsource+Hdest.  This estimate of total wire-vs-host
   uncertainty should be included in the error/uncertainty analysis of
   any measurement implementation.

3.7.3. Calibration

   Generally, the measured values can be decomposed as follows:

      measured value = true value + systematic error + random error

   If the systematic error (the constant bias in measured values) can
   be determined, it can be compensated for in the reported results.

      reported value = measured value - systematic error

   therefore

      reported value = true value + random error

   The goal of calibration is to determine the systematic and random
   error in as much detail as possible.  At a minimum, a bound ("e")
   should be found such that the reported value is in the range (true
   value - e) to (true value + e) at least 95 percent of the time.  We
   call "e" the error bar for the measurements.  {Comment: 95 percent
   was chosen because (1) some confidence level is desirable to be able
   to remove outliers which will be found in measuring any physical
   property; (2) a particular confidence level should be specified so
   that the results of independent implementations can be compared; and
   (3) even with a prototype user-level implementation, 95% was loose
   enough to exclude outliers.}

   From the discussion in the previous two sections, the error in
   measurements could be bounded by determining all the individual
   uncertainties, and adding them together to form

      Esynch(t) + |Rsource| + |Rdest| + Hsource + Hdest.
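   For concreteness, the combined bound can be written as a one-line
   helper (a sketch; each argument is assumed to be a bound obtained as
   described in the two preceding sections):

```python
# Total per-singleton uncertainty, in seconds:
#   Esynch(t) + |Rsource| + |Rdest| + Hsource + Hdest

def total_uncertainty(e_synch, r_source, r_dest, h_source, h_dest):
    return e_synch + abs(r_source) + abs(r_dest) + h_source + h_dest
```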
   However, reasonable bounds on both the clock-related uncertainty
   captured by the first three terms and the host-related uncertainty
   captured by the last two terms should be possible by careful design
   techniques and calibrating the instruments using a known, isolated,
   network in a lab.

   For example, the clock-related uncertainties are greatly reduced
   through the use of a GPS time source.  The sum of Esynch(t) +
   |Rsource| + |Rdest| is small, and is also bounded for the duration
   of the measurement because of the global time source.

   The host-related uncertainties, Hsource + Hdest, could be bounded by
   connecting two instruments back-to-back with a high-speed serial
   link or isolated LAN (depending on the intended network connection
   for actual measurement), and performing repeated measurements.  In
   this case, unlike measuring live networks, repeated measurements are
   measuring the same wire time.  (When measuring live networks, the
   wire time is what you are measuring, and varies with the load
   encountered on the path traversed by the test packets.)

   If the test packets are small, such a network connection has a
   minimal wire time that may be approximated by zero.  The measured
   delay therefore contains only systematic and random error in the
   instrumentation.  The "average value" of repeated measurements is
   the systematic error, and the variation is the random error.

   One way to compute the systematic error, and the random error to a
   95% confidence, is to repeat the experiment many times - at least
   hundreds of tests.  The systematic error would then be the median,
   and likely the mode (the most frequently occurring value).
   {Comment: It's likely the systematic error is represented by the
   minimum value (which is also the median and the mode); with unloaded
   instruments on a single test path all the random error will tend to
   be increased time due to host processing.
   The only error resulting in a delay less than the systematic error
   would be due to clock-related uncertainties (resolution and relative
   skew).}  The random error could then be found by removing the
   systematic error from the measured values.  The 95% confidence
   interval would be the range from the 2nd percentile to the 97th
   percentile of these deviations from the true value.  The error bar
   "e" could then be taken to be the largest absolute value of these
   two numbers, plus the clock-related uncertainty.  If all of the
   deviations are positive, then the 95% confidence interval is simply
   the 95th percentile, and that value should be used instead of the
   larger of the 2nd and 97th percentiles.  {Comment: as described,
   this bound is relatively loose since the uncertainties are added,
   and the absolute value of the largest deviation is used.  As long as
   the resulting value is not a significant fraction of the measured
   values, it is a reasonable bound.  If the resulting value is a
   significant fraction of the measured values, then more exact methods
   will be needed to compute an error bar.}

   Note that random error is a function of measurement load.  For
   example, if many paths will be measured by one instrument, this
   might increase interrupts, process scheduling, and disk I/O (for
   example, recording the measurements), all of which may increase the
   random error in measured singletons.  Therefore, in addition to
   minimal load measurements to find the systematic error, calibration
   measurements should be performed with the same measurement load that
   the instruments will see in the field.

   In addition to calibrating the instruments for finite one-way delay,
   two checks should be made to ensure that packets reported as losses
   were really lost.  First, the threshold for loss should be verified.
   In particular, ensure the "reasonable" threshold is reasonable: that
   it is very unlikely a packet will arrive after the threshold value,
   and therefore the number of packets lost over an interval is not
   sensitive to the error bound on measurements.  Second, consider the
   probability that a packet arrives at the network interface, but is
   lost due to congestion on that interface or to other resource
   exhaustion (e.g., buffers) in the instrument.

3.8. Reporting the metric:

   The calibration and context in which the metric is measured must be
   carefully considered, and should always be reported along with
   metric results.  We now present four items to consider: the Type-P
   of test packets, the threshold of infinite delay (if any), error
   calibration, and the path traversed by the test packets.  This list
   is not exhaustive; any additional information that could be useful
   in interpreting applications of the metrics should also be reported.

3.8.1. Type-P

   As noted in the Framework document [1], the value of the metric may
   depend on the type of IP packets used to make the measurement, or
   "type-P".  The value of Type-P-One-way-Delay could change if the
   protocol (UDP or TCP), port number, size, or arrangement for special
   treatment (e.g., IP precedence or RSVP) changes.  The exact Type-P
   used to make the measurements must be accurately reported.

3.8.2. Loss threshold

   In addition, the threshold (or methodology to distinguish) between a
   large finite delay and loss should be reported.

3.8.3. Calibration results

   +    If the systematic error can be determined, it should be removed
        from the measured values.

   +    Report an error bar, e, such that the true value is the
        reported value plus or minus e, with 95% confidence.

   +    If possible, report the probability that a test packet with
        finite delay is reported as lost due to resource exhaustion on
        the measurement instrument.
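   The calibration arithmetic of Section 3.7.3 can be sketched as
   follows (an illustration under the back-to-back assumption that wire
   time is approximately zero; clock_uncertainty stands for the
   Esynch(t) + |Rsource| + |Rdest| term, assumed bounded separately):

```python
import statistics

def calibrate(measurements, clock_uncertainty):
    """Return (systematic error, error bar e) from repeated measurements."""
    systematic = statistics.median(measurements)      # likely also the mode
    deviations = sorted(m - systematic for m in measurements)
    n = len(deviations)
    if all(d >= 0 for d in deviations):
        # all deviations positive: use the 95th percentile alone
        spread = deviations[min(n - 1, int(0.95 * n))]
    else:
        p02 = deviations[max(0, int(0.02 * n) - 1)]   # ~2nd percentile
        p97 = deviations[min(n - 1, int(0.97 * n))]   # ~97th percentile
        spread = max(abs(p02), abs(p97))
    return systematic, spread + clock_uncertainty
```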
3.8.4. Path

   Finally, the path traversed by the packet should be reported, if
   possible.  In general it is impractical to know the precise path a
   given packet takes through the network.  The precise path may be
   known for certain Type-P on short or stable paths.  If Type-P
   includes the record route (or loose-source route) option in the IP
   header, and the path is short enough, and all routers* on the path
   support record (or loose-source) route, then the path will be
   precisely recorded.  This is impractical because the route must be
   short enough, many routers do not support (or are not configured
   for) record route, and use of this feature would often artificially
   worsen the performance observed by removing the packet from common-
   case processing.  However, partial information is still valuable
   context.  For example, if a host can choose between two links* (and
   hence two separate routes from Src to Dst), then the initial link
   used is valuable context.  {Comment: For example, with Merit's
   NetNow setup, a Src on one NAP can reach a Dst on another NAP by
   either of several different backbone networks.}

4. A Definition for Samples of One-way Delay

   Given the singleton metric Type-P-One-way-Delay, we now define one
   particular sample of such singletons.  The idea of the sample is to
   select a particular binding of the parameters Src, Dst, and Type-P,
   then define a sample of values of parameter T.  The means for
   defining the values of T is to select a beginning time T0, a final
   time Tf, and an average rate lambda, then define a pseudo-random
   Poisson arrival process of rate lambda, whose values fall between T0
   and Tf.  The time interval between successive values of T will then
   average 1/lambda.

4.1. Metric Name:

   Type-P-One-way-Delay-Poisson-Stream

4.2. Metric Parameters:

   +    Src, the IP address of a host

   +    Dst, the IP address of a host

   +    T0, a time

   +    Tf, a time

   +    lambda, a rate in reciprocal seconds

4.3. Metric Units:

   A sequence of pairs; the elements of each pair are:

   +    T, a time, and

   +    dT, either a non-negative real number or an undefined number of
        seconds.

   The values of T in the sequence are monotonic increasing.  Note that
   T would be a valid parameter to Type-P-One-way-Delay, and that dT
   would be a valid value of Type-P-One-way-Delay.

4.4. Definition:

   Given T0, Tf, and lambda, we compute a pseudo-random Poisson process
   beginning at or before T0, with average arrival rate lambda, and
   ending at or after Tf.  Those time values greater than or equal to
   T0 and less than or equal to Tf are then selected.  At each of the
   times in this process, we obtain the value of Type-P-One-way-Delay
   at this time.  The value of the sample is the sequence made up of
   the resulting <time, delay> pairs.  If there are no such pairs, the
   sequence is of length zero and the sample is said to be empty.

4.5. Discussion:

   Note first that, since a pseudo-random number sequence is employed,
   the sequence of times, and hence the value of the sample, is not
   fully specified.  Pseudo-random number generators of good quality
   will be needed to achieve the desired qualities.

   The sample is defined in terms of a Poisson process both to avoid
   the effects of self-synchronization and also to capture a sample
   that is statistically as unbiased as possible.  {Comment: there is,
   of course, no claim that real Internet traffic arrives according to
   a Poisson arrival process.}

   All the singleton Type-P-One-way-Delay metrics in the sequence will
   have the same values of Src, Dst, and Type-P.
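   The selection of sample times in Section 4.4 can be sketched using
   exponentially distributed inter-arrival times (mean 1/lambda), which
   characterize a Poisson process; by memorylessness, starting the
   process exactly at T0 is statistically equivalent to one begun
   earlier.  (A sketch only: it relies on Python's default pseudo-random
   generator rather than one vetted for measurement use.)

```python
import random

def poisson_sample_times(t0, tf, lam, seed=None):
    """Return the Poisson arrival times falling in [t0, tf]."""
    rng = random.Random(seed)
    times = []
    t = t0
    while True:
        t += rng.expovariate(lam)   # inter-arrival times average 1/lam
        if t > tf:
            break
        times.append(t)
    return times
```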
   Note also that, given one sample that runs from T0 to Tf, and given
   new time values T0' and Tf' such that T0 <= T0' <= Tf' <= Tf, the
   subsequence of the given sample whose time values fall between T0'
   and Tf' is also a valid Type-P-One-way-Delay-Poisson-Stream sample.

4.6. Methodologies:

   The methodologies follow directly from:

   +    the selection of specific times, using the specified Poisson
        arrival process, and

   +    the methodologies discussion already given for the singleton
        Type-P-One-way-Delay metric.

   Care must, of course, be given to correctly handle out-of-order
   arrival of test packets; it is possible that the Src could send one
   test packet at TS[i], then send a second one (later) at TS[i+1],
   while the Dst could receive the second test packet at TR[i+1], and
   then receive the first one (later) at TR[i].

4.7. Errors and Uncertainties:

   In addition to sources of errors and uncertainties associated with
   methods employed to measure the singleton values that make up the
   sample, care must be given to analyze the accuracy of the Poisson
   arrival process of the wire-time of the sending of the test packets.
   Problems with this process could be caused by several things,
   including problems with the pseudo-random number techniques used to
   generate the Poisson arrival process, or with jitter in the value of
   Hsource (mentioned above as uncertainty in the singleton delay
   metric).  The Framework document shows how to use the Anderson-
   Darling test to verify the accuracy of the Poisson process.

4.8. Reporting the metric:

   You should report the calibration and context for the underlying
   singletons along with the stream.  (See "Reporting the metric" for
   Type-P-One-way-Delay.)

5. Some Statistics Definitions for One-way Delay

   Given the sample metric Type-P-One-way-Delay-Poisson-Stream, we now
   offer several statistics of that sample.
These statistics are
   offered mostly to be illustrative of what could be done.

5.1. Type-P-One-way-Delay-Percentile

   Given a Type-P-One-way-Delay-Poisson-Stream and a percent X between
   0% and 100%, the Xth percentile of all the dT values in the Stream.
   In computing this percentile, undefined values are treated as
   infinitely large.  Note that this means that the percentile could
   thus be undefined (informally, infinite).  In addition, the Type-P-
   One-way-Delay-Percentile is undefined if the sample is empty.

   Example: suppose we take a sample and the results are:

      Stream1 = <
      <T1, 100 msec>
      <T2, 110 msec>
      <T3, undefined>
      <T4, 90 msec>
      <T5, 500 msec>
      >

   Then the 50th percentile would be 110 msec, since 90 msec and 100
   msec are smaller and 500 msec and 'undefined' are larger.

   Note that if the probability that a packet with finite delay is
   reported as lost is significant, then a high percentile (90th or
   95th) might be reported as infinite instead of finite.

5.2. Type-P-One-way-Delay-Median

   Given a Type-P-One-way-Delay-Poisson-Stream, the median of all the
   dT values in the Stream.  In computing the median, undefined values
   are treated as infinitely large.

   As noted in the Framework document, the median differs from the
   50th percentile only when the sample contains an even number of
   values, in which case the mean of the two central values is used.

   Example: suppose we take a sample and the results are:

      Stream2 = <
      <T1, 100 msec>
      <T2, 110 msec>
      <T3, undefined>
      <T4, 90 msec>
      >

   Then the median would be 105 msec, the mean of 100 msec and 110
   msec, the two central values.

5.3. Type-P-One-way-Delay-Minimum

   Given a Type-P-One-way-Delay-Poisson-Stream, the minimum of all the
   dT values in the Stream.  In computing this, undefined values are
   treated as infinitely large.  Note that this means that the minimum
   could thus be undefined (informally, infinite) if all the dT values
   are undefined.
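   The undefined-as-infinite convention used by the statistics in this
   section can be made concrete with IEEE floating-point infinity.
   The sketch below uses our own function names, and its percentile
   follows the empirical-distribution convention of the Framework
   document (the smallest dT with at least X% of the sample at or
   below it); for the stream of the median example above, it yields a
   median of 105 msec:

   ```python
   import math

   UNDEF = math.inf  # undefined delays are treated as infinitely large

   def percentile(stream, x):
       """Xth percentile of the dT values of a <T, dT> stream: the
       smallest dT such that at least x% of the values are <= it.
       An infinite result means the percentile is undefined."""
       dts = sorted(dt for (t, dt) in stream)
       if not dts:
           raise ValueError("percentile of an empty sample is undefined")
       k = math.ceil(x / 100.0 * len(dts)) - 1
       return dts[max(k, 0)]

   def median(stream):
       """Median of the dT values; with an even number of values,
       the mean of the two central values."""
       dts = sorted(dt for (t, dt) in stream)
       n = len(dts)
       if n % 2 == 1:
           return dts[n // 2]
       return (dts[n // 2 - 1] + dts[n // 2]) / 2.0

   def minimum(stream):
       """Minimum of the dT values; infinite (i.e., undefined) when
       all values are undefined."""
       dts = [dt for (t, dt) in stream]
       if not dts:
           raise ValueError("minimum of an empty sample is undefined")
       return min(dts)

   def inverse_percentile(stream, threshold):
       """Percentage of the dT values at or below the threshold."""
       dts = [dt for (t, dt) in stream]
       return 100.0 * sum(1 for dt in dts if dt <= threshold) / len(dts)
   ```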
In addition, the Type-P-One-way-Delay-Minimum is
   undefined if the sample is empty.

   In the above example, the minimum would be 90 msec.

5.4. Type-P-One-way-Delay-Inverse-Percentile

   Given a Type-P-One-way-Delay-Poisson-Stream and a non-negative time
   duration threshold, the fraction of all the dT values in the Stream
   less than or equal to the threshold.  The result could be as low as
   0% (if all the dT values exceed the threshold) or as high as 100%.

   In the above example, the Inverse-Percentile of 103 msec would be
   50%.

6. Security Considerations

   Conducting Internet measurements raises both security and privacy
   concerns.  This memo does not specify an implementation of the
   metrics, so it does not directly affect the security of the
   Internet or of applications which run on the Internet.  However,
   implementations of these metrics must be mindful of security and
   privacy concerns.

   There are two types of security concerns: potential harm caused by
   the measurements, and potential harm to the measurements.  The
   measurements could cause harm because they are active and inject
   packets into the network.  The measurement parameters must be
   carefully selected so that the measurements inject trivial amounts
   of additional traffic into the networks they measure.  If they
   inject "too much" traffic, they can skew the results of the
   measurement, and in extreme cases cause congestion and denial of
   service.

   The measurements themselves could be harmed by routers giving
   measurement traffic a different priority than "normal" traffic, or
   by an attacker injecting artificial measurement traffic.  If
   routers can recognize measurement traffic and treat it separately,
   the measurements will not reflect actual user traffic.  If an
   attacker injects artificial traffic that is accepted as legitimate,
   the loss rate will be artificially lowered.
Therefore, the measurement
   methodologies should include appropriate techniques to reduce the
   probability that measurement traffic can be distinguished from
   "normal" traffic.  Authentication techniques, such as digital
   signatures, may be used where appropriate to guard against injected
   traffic attacks.

   The privacy concerns of network measurement are limited by the
   active measurements described in this memo.  Unlike passive
   measurements, there can be no release of existing user data.

7. Acknowledgements

   Special thanks are due to Vern Paxson of Lawrence Berkeley Labs for
   his helpful comments on issues of clock uncertainty and statistics.
   Thanks also to Will Leland, Sean Shapira, and Roland Wittig for
   several useful suggestions.

8. References

   [1] V. Paxson, G. Almes, J. Mahdavi, and M. Mathis, "Framework for
       IP Performance Metrics", RFC 2330, May 1998.

   [2] G. Almes, S. Kalidindi, and M. Zekauskas, "A Packet Loss Metric
       for IPPM", Internet-Draft, August 1998.

   [3] D. Mills, "Network Time Protocol (v3)", RFC 1305, April 1992.

   [4] J. Mahdavi and V. Paxson, "IPPM Metrics for Measuring
       Connectivity", Internet-Draft, August 1998.

   [5] J. Postel, "Internet Protocol", RFC 791, September 1981.

9. Authors' Addresses

   Guy Almes
   Advanced Network & Services, Inc.
   200 Business Park Drive
   Armonk, NY  10504
   USA

   Phone: +1 914 765 1120
   EMail: almes@advanced.org

   Sunil Kalidindi
   Advanced Network & Services, Inc.
   200 Business Park Drive
   Armonk, NY  10504
   USA

   Phone: +1 914 765 1128
   EMail: kalidindi@advanced.org

   Matthew J. Zekauskas
   Advanced Network & Services, Inc.
   200 Business Park Drive
   Armonk, NY  10504
   USA

   Phone: +1 914 765 1112
   EMail: matt@advanced.org

Expiration date: March, 1999