IP Performance Measurement Working Group                     V. Raisanen
Internet Draft                                                     Nokia
Document: draft-ietf-ippm-npmps-05.txt                      G. Grotefeld
Category: Informational                                         Motorola
                                                               A. Morton
                                                               AT&T Labs

         Network performance measurement with periodic streams

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026 [1].

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or made obsolete by other documents at
any time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

1. Abstract

This memo describes a periodic sampling method and relevant metrics
for assessing the performance of IP networks. First, the memo
motivates periodic sampling and addresses the question of its value as
an alternative to the Poisson sampling described in RFC 2330. The
benefits include applicability to active and passive measurements,
simulation of constant bit rate (CBR) traffic (typical of multimedia
communication, or nearly CBR, as found with voice activity detection),
and several instances where analysis can be simplified. The sampling
method avoids predictability by mandating random start times and
finite length tests. Following descriptions of the sampling method and
sample metric parameters, measurement methods and errors are
discussed.
Finally, we give additional information on periodic measurements,
including security considerations.

2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [2]. Although
RFC 2119 was written with protocols in mind, the key words are used in
this document for similar reasons. They are used to ensure that the
results of measurements from two different implementations are
comparable, and to note instances when an implementation could perturb
the network.

3. Introduction

This memo describes a sampling method and performance metrics relevant
to certain applications of IP networks. The original driver for this
work was the Quality of Service of interactive periodic streams, such
as multimedia conferencing over IP, but the idea of periodic sampling
and measurement has wider applicability. Interactive multimedia
traffic is used as an example below to illustrate the concept.

Transmitting equal-size packets (or mostly same-size packets) through
a network at regular intervals simulates a constant bit-rate (CBR), or
nearly CBR, multimedia bit stream. Hereafter, these packet streams are
called periodic streams. Cases of "mostly same-size packets" may be
found in applications that have multiple coding methods (e.g.,
digitally coded comfort noise during silence gaps in speech).

In the following sections, a sampling methodology and metrics are
presented for periodic streams. The measurement results may be used in
derivative metrics such as average and maximum delays. The memo seeks
to formalize periodic stream measurements to achieve comparable
results between independent implementations.
3.1 Motivation

As noted in the IPPM framework RFC 2330 [3], a sample metric using
regularly spaced singleton tests has some limitations when considered
from a general measurement point of view: only part of the network
performance spectrum is sampled. However, some applications also
sample this limited performance spectrum, and their performance may be
of critical interest.

Periodic sampling is useful for the following reasons:

+ It is applicable to passive measurement, as well as active
  measurement.

+ An active measurement can be configured to match the characteristics
  of media flows, and simplifies the estimation of application
  performance.

+ Measurements of many network impairments (e.g., delay variation,
  consecutive loss, reordering) are sensitive to the sampling
  frequency. When the impairments themselves are time-varying (and the
  variations are somewhat rare, yet important), a constant sampling
  frequency simplifies analysis.

+ Frequency-domain analysis is simplified when the samples are equally
  spaced.

Simulation of CBR flows with periodic streams encourages dense
sampling of network performance, since typical multimedia flows have
10 to 100 packets in each second. Dense sampling permits the
characterization of network phenomena of short duration.

4. Periodic Sampling Methodology

The Framework RFC [3] points out the following potential problems with
periodic sampling:

1. The performance sampled may be synchronized with some other
periodic behavior, or the samples may be anticipated and the results
manipulated. Unpredictable sampling is preferred.

2. Active measurements can cause congestion, and periodic sampling
might drive congestion-aware senders into a synchronized state,
producing atypical results.
Poisson sampling produces an unbiased sample for the various IP
performance metrics, yet there are situations where alternative
sampling methods are advantageous (as discussed under Motivation). We
can prescribe periodic sampling methods that address the problems
listed above. Predictability and some forms of synchronization can be
mitigated through the use of random start times and limited stream
duration over a test interval. The periodic sampling parameters
produce bias, and judicious selection can produce a known bias of
interest. The total traffic generated by this or any sampling method
should be limited to avoid adverse effects on non-test traffic (packet
size, packet rate, and sample duration and frequency should all be
considered).

The configuration parameters of periodic sampling are:

+ T, the beginning of a time interval where a periodic sample is
  desired.

+ dT, the duration of the interval for allowed sample start times.

+ T0, a time that MUST be selected at random from the interval [T,
  T+dT] to start generating packets and taking measurements.

+ Tf, a time, greater than T0, for stopping generation of packets for
  a sample (Tf may be relative to T0 if desired).

+ incT, the nominal duration of the inter-packet interval, first bit
  to first bit.

T0 may be drawn from a uniform distribution, i.e., T0 = T + Unif(0,dT).
Other distributions may also be appropriate. Start times in successive
time intervals MUST use an independent value drawn from the
distribution. In passive measurement, the arrival of user media flows
may have sufficient randomness, or a randomized start time of the
measurement during a flow may be needed to meet this requirement.
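As a non-normative illustration, the random start-time selection and
nominal send schedule described above might be sketched as follows.
The function and parameter names are invented for this example and are
not part of the metric definition; a uniform draw for T0 is one
acceptable choice among the distributions permitted above.

```python
import random

def periodic_schedule(T, dT, duration, incT):
    """Sketch of one periodic sample's send schedule.

    T        -- beginning of the interval of allowed start times
    dT       -- duration of the interval of allowed start times
    duration -- how long to generate packets after T0 (so Tf = T0 + duration)
    incT     -- nominal inter-packet interval, first bit to first bit
    """
    # T0 MUST be selected at random from [T, T+dT]; here, uniformly.
    T0 = T + random.uniform(0, dT)
    Tf = T0 + duration
    # Nominal send times: one packet every incT seconds until Tf.
    times = []
    t = T0
    while t < Tf:
        times.append(t)
        t += incT
    return T0, Tf, times
```

Successive samples would call this again so that each interval gets an
independent random draw, as the methodology requires.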
When a mix of packet sizes is desired, passive measurements usually
possess the sequence and statistics of sizes in actual use, while
active measurements would need to reproduce the intended distribution
of sizes.

5. Sample metrics for periodic streams

The sample metric presented here is similar to the sample metric
Type-P-One-way-Delay-Poisson-Stream presented in RFC 2679 [4].
Singletons defined in [3] and [4] are applicable here.

5.1 Metric name

Type-P-One-way-Delay-Periodic-Stream

5.2 Metric parameters

5.2.1 Global metric parameters

These parameters apply in all the sub-sections that follow (5.2.2,
5.2.3, and 5.2.4).

Parameters that each Singleton usually includes:

+ Src, the IP address of a host

+ Dst, the IP address of a host

+ IPV, the IP version (IPv4/IPv6) used in the measurement

+ dTloss, a time interval, the maximum waiting time for a packet
  before declaring it lost.

+ packet size p(j), the desired number of bytes in the Type-P packet,
  where j is the size index.

Optional parameters:

+ PktType, any additional qualifiers (transport address)

+ Tcons, a time interval for consolidating parameters collected at the
  measurement points.

While a number of applications will use one packet size (j = 1), other
applications may use packets of different sizes (j > 1). Especially in
cases of congestion, it may be useful to use packets smaller than the
maximum or predominant size of packets in the periodic stream.

A topology where Src and Dst are separate from the measurement points
is assumed.

5.2.2 Parameters collected at the measurement point MP(Src)

Parameters that each Singleton usually includes:

+ Tstamp(Src)[i], for each packet [i], the time of the packet as
  measured at MP(Src)

Additional parameters:

+ PktID(Src)[i], for each packet [i], a unique identification or
  sequence number.
+ PktSi(Src)[i], for each packet [i], the actual packet size. Some
  applications may use packets of different sizes, either because of
  application requirements or in response to IP performance
  experienced.

5.2.3 Parameters collected at the measurement point MP(Dst)

+ Tstamp(Dst)[i], for each packet [i], the time of the packet as
  measured at MP(Dst)

+ PktID(Dst)[i], for each packet [i], a unique identification or
  sequence number.

+ PktSi(Dst)[i], for each packet [i], the actual packet size.

Optional parameters:

+ dTstop, a time interval, added to time Tf to determine when to stop
  collecting metrics for a sample

+ PktStatus[i], for each packet [i], the status of the packet
  received. Possible status values include OK, packet header corrupt,
  packet payload corrupt, duplicate, and fragment. The criteria used
  to determine the status MUST be specified, if this parameter is
  used.

5.2.4 Sample Metrics resulting from combining parameters at MP(Src)
and MP(Dst)

Using the parameters above, a delay singleton would be calculated as
follows:

+ Delay[i], for each packet [i], the time interval

  Delay[i] = Tstamp(Dst)[i] - Tstamp(Src)[i]

For the following conditions, it will not be possible to compute delay
singletons:

  Spurious:               There will be no Tstamp(Src)[i]
  Not received:           There will be no Tstamp(Dst)[i]
  Corrupt packet header:  There will be no Tstamp(Dst)[i]
  Duplicate:              Only the first non-corrupt copy of the
                          packet received at Dst should have Delay[i]
                          computed.

A sample metric for average delay is as follows:

  AveDelay = (1/N)Sum(from i=1 to N, Delay[i])

assuming all packets i = 1 through N have valid singletons.
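As a non-normative sketch, the delay singletons and AveDelay defined
above might be computed as follows, assuming (purely for illustration)
that the collected timestamps are available as dictionaries keyed by
packet ID; packets lacking either timestamp (spurious, not received,
or header-corrupt) yield no singleton:

```python
def delay_singletons(tstamp_src, tstamp_dst):
    """Compute Delay[i] = Tstamp(Dst)[i] - Tstamp(Src)[i] where possible.

    tstamp_src, tstamp_dst -- dicts mapping packet IDs to timestamps.
    A packet missing from tstamp_dst was not received (or its header was
    corrupt); a packet missing from tstamp_src is spurious.  In either
    case the delay singleton is undefined and is simply omitted.
    """
    return {i: tstamp_dst[i] - tstamp_src[i]
            for i in tstamp_src if i in tstamp_dst}

def ave_delay(delays):
    """AveDelay = (1/N) * Sum(Delay[i]) over the N valid singletons."""
    valid = list(delays.values())
    return sum(valid) / len(valid)
```

For duplicates, the dictionaries are assumed to hold only the first
non-corrupt copy received at Dst, matching the rule above.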
A delay variation [5] singleton can also be computed:

+ IPDV[i], for each packet [i] except the first one, the delay
  variation between successive packets, calculated as

  IPDV[i] = Delay[i] - Delay[i-1]

IPDV[i] may be negative, zero, or positive. Delay singletons for
packets i and i-1 must be calculable, or IPDV[i] is undefined.

An example metric for the IPDV sample is the range:

  RangeIPDV = max(IPDV[]) - min(IPDV[])

5.3 High level description of the procedure to collect a sample

Beginning on or after time T0, Type-P packets are generated by Src and
sent to Dst until time Tf is reached, with a nominal interval between
the first bits of successive packets of incT, as measured at MP(Src).
incT may vary from its nominal value for a number of reasons:
variation in packet generation at Src, clock issues (see section 5.6),
etc. MP(Src) records the parameters above only for packets with
timestamps between and including T0 and Tf that have the required Src,
Dst, and any other qualifiers. MP(Dst) also records packets with
timestamps between T0 and (Tf + dTstop).

Optionally, at a time Tf + Tcons (but eventually in all cases), the
data from MP(Src) and MP(Dst) are consolidated to derive the sample
metric results. To prevent stopping data collection too soon, Tcons
should be greater than or equal to dTstop. Conversely, to keep data
collection reasonably efficient, dTstop should be some reasonable time
interval (seconds/minutes/hours), even if dTloss is infinite or
extremely long.

5.4 Discussion

This sampling methodology is intended to quantify the delays and the
delay variation as experienced by the multimedia streams of an
application. Due to the definitions of these metrics, packet loss
status is also recorded. The nominal interval between packets assesses
network performance variations on a specific time scale.
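For illustration only, the IPDV singleton and RangeIPDV defined in
section 5.2.4 might be computed as follows; in this sketch, delays are
held in sending order, and None stands in for an undefined singleton
(e.g., a lost packet):

```python
def ipdv(delays):
    """IPDV[i] = Delay[i] - Delay[i-1] for successive packets.

    delays -- list of delay singletons in sending order; None marks a
    packet whose delay singleton is undefined.  IPDV[i] is undefined
    (None) unless Delay[i] and Delay[i-1] are both defined.
    """
    return [delays[i] - delays[i - 1]
            if delays[i] is not None and delays[i - 1] is not None
            else None
            for i in range(1, len(delays))]

def range_ipdv(ipdv_values):
    """RangeIPDV = max(IPDV[]) - min(IPDV[]) over the defined values."""
    defined = [v for v in ipdv_values if v is not None]
    return max(defined) - min(defined)
```

Note that a single lost packet makes two IPDV singletons undefined,
since it appears once as packet i and once as packet i-1.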
There are a number of factors that should be taken into account when
collecting a sample metric of Type-P-One-way-Delay-Periodic-Stream.

+ The interval T0 to Tf should be specified to cover a long enough
  time interval to represent a reasonable use of the application under
  test, yet not be excessively long in the same context (e.g., phone
  calls last longer than 100 ms, but less than one week).

+ The nominal interval between packets (incT) and the packet size(s)
  (p(j)) should not define an equivalent bit rate that exceeds the
  capacity of the egress port of Src, the ingress port of Dst, or the
  capacity of the intervening network(s), if known. There may be
  exceptional cases to test the response of the application to
  overload conditions in the transport networks, but these cases
  should be strictly controlled.

+ Real delay values will be positive. Therefore, it does not make
  sense to report a negative value as a real delay. However, an
  individual zero or negative delay value might be useful as part of a
  stream when trying to discover a distribution of the delay errors.

+ Depending on the measurement topology, delay values may be as low as
  100 usec to 10 msec, so it may be important for Src and Dst to
  synchronize very closely. GPS systems afford one way to achieve
  synchronization to within several tens of usec. Ordinary application
  of NTP may allow synchronization to within several msec, but this
  depends on the stability and symmetry of delay properties among the
  NTP agents used, and this delay is what we are trying to measure.

+ A given methodology will have to include a way to determine whether
  a packet was lost or whether its delay is merely very large (and the
  packet is yet to arrive at Dst). The global metric parameter dTloss
  defines a time interval such that delays larger than dTloss are
  interpreted as losses.
{Comment: For many applications, the treatment of a large delay as
infinite/loss will be inconsequential. A TCP data packet, for example,
that arrives only after several multiples of the usual RTT may as well
have been lost.}

5.5 Additional Methodology Aspects

As with other Type-P-* metrics, the detailed methodology will depend
on the Type-P (e.g., protocol number, UDP/TCP port number, size,
precedence).

5.6 Errors and uncertainties

The description of any specific measurement method should include an
accounting and analysis of various sources of error or uncertainty.
The Framework RFC [3] provides general guidance on this point, but we
note here the following specifics related to periodic streams and
delay metrics:

+ Error due to variation of incT. Causes of this variation include
  uneven process scheduling, possibly due to CPU load.

+ Errors or uncertainties due to uncertainties in the clocks of the
  MP(Src) and MP(Dst) measurement points.

+ Errors or uncertainties due to the difference between 'wire time'
  and 'host time'.

5.6.1. Errors or uncertainties related to Clocks

The uncertainty in a measurement of one-way delay is related, in part,
to uncertainties in the clocks of MP(Src) and MP(Dst). In the
following, we refer to the clock used to measure when the packet was
measured at MP(Src) as the MP(Src) clock, and we refer to the clock
used to measure when the packet was received at MP(Dst) as the MP(Dst)
clock. Alluding to the notions of synchronization, accuracy,
resolution, and skew, we note the following:

+ Any error in the synchronization between the MP(Src) clock and the
  MP(Dst) clock will contribute to error in the delay measurement. We
  say that the MP(Src) clock and the MP(Dst) clock have a
  synchronization error of Tsynch if the MP(Src) clock is Tsynch ahead
  of the MP(Dst) clock.
  Thus, if we knew the value of Tsynch exactly, we could correct for
  clock synchronization by adding Tsynch to the uncorrected value of
  Tstamp(Dst)[i] - Tstamp(Src)[i].

+ The resolution of a clock adds to the uncertainty about any time
  measured with it. Thus, if the MP(Src) clock has a resolution of 10
  msec, then this adds 10 msec of uncertainty to any time value
  measured with it. We will denote the resolutions of the MP(Src)
  clock and the MP(Dst) clock as ResMP(Src) and ResMP(Dst),
  respectively.

+ The skew of a clock is not so much an additional issue as it is a
  realization of the fact that Tsynch is itself a function of time.
  Thus, if we attempt to measure or to bound Tsynch, this needs to be
  done periodically. Over some periods of time, this function can be
  approximated as a linear function plus some higher order terms; in
  these cases, one option is to use knowledge of the linear component
  to correct the clock. Using this correction, the residual Tsynch is
  made smaller, but remains a source of uncertainty that must be
  accounted for. We use the function Esynch(t) to denote an upper
  bound on the uncertainty in synchronization. Thus,
  |Tsynch(t)| <= Esynch(t).

Taking these items together, we note that the naive computation
Tstamp(Dst)[i] - Tstamp(Src)[i] will be off by Tsynch(t) +/-
(ResMP(Src) + ResMP(Dst)). Using the notion of Esynch(t), we note that
these clock-related problems introduce a total uncertainty of
Esynch(t) + ResMP(Src) + ResMP(Dst). This estimate of total
clock-related uncertainty should be included in the error/uncertainty
analysis of any measurement implementation.

5.6.2. Errors or uncertainties related to Wire-time vs Host-time

We would like to measure the time between when a packet is measured
and time-stamped at MP(Src) and when it arrives and is time-stamped at
MP(Dst); we refer to these as "wire times."
If timestamps are applied by software on Src and Dst, however, then
this software can only directly measure the time between when Src
generates the packet just prior to sending the test packet and when
Dst has started to process the packet after having received the test
packet; we refer to these two points as "host times."

To the extent that the difference between wire time and host time is
accurately known, this knowledge can be used to correct host time
measurements, and the corrected value more accurately estimates the
desired (wire time) metric.

To the extent, however, that the difference between wire time and host
time is uncertain, this uncertainty must be accounted for in an
analysis of a given measurement method. We denote by Hsource an upper
bound on the uncertainty in the difference between the wire time at
MP(Src) and the host time on the Src host, and similarly define Hdest
for the difference between the host time on the Dst host and the wire
time at MP(Dst). We then note that these problems introduce a total
uncertainty of Hsource + Hdest. This estimate of total wire-vs-host
uncertainty should be included in the error/uncertainty analysis of
any measurement implementation.

5.6.3. Calibration

Generally, the measured values can be decomposed as follows:

  measured value = true value + systematic error + random error

If the systematic error (the constant bias in measured values) can be
determined, it can be compensated for in the reported results:

  reported value = measured value - systematic error

and therefore

  reported value = true value + random error

The goal of calibration is to determine the systematic and random
error generated by the instruments themselves in as much detail as
possible.
At a minimum, a bound ("e") should be found such that the reported
value is in the range (true value - e) to (true value + e) at least 95
percent of the time. We call "e" the calibration error for the
measurements. It represents the degree to which the values produced by
the measurement instrument are repeatable; that is, how closely an
actual delay of 30 ms is reported as 30 ms.

{Comment: 95 percent was chosen due to reasons discussed in [4],
briefly summarized as: (1) some confidence level is desirable to be
able to remove outliers, which will be found in measuring any physical
property; (2) a particular confidence level should be specified so
that the results of independent implementations can be compared.}

From the discussion in the previous two sections, the error in
measurements could be bounded by determining all the individual
uncertainties and adding them together to form

  Esynch(t) + ResMP(Src) + ResMP(Dst) + Hsource + Hdest.

However, reasonable bounds on both the clock-related uncertainty
captured by the first three terms and the host-related uncertainty
captured by the last two terms should be possible through careful
design techniques and by calibrating the instruments using a known,
isolated network in a lab.

For example, the clock-related uncertainties are greatly reduced
through the use of a GPS time source. The sum of Esynch(t) +
ResMP(Src) + ResMP(Dst) is small, and is also bounded for the duration
of the measurement because of the global time source.

The host-related uncertainties, Hsource + Hdest, could be bounded by
connecting two instruments back-to-back with a high-speed serial link
or an isolated LAN segment. In this case, repeated measurements are
measuring the same one-way delay.

If the test packets are small, such a network connection has a minimal
delay that may be approximated by zero.
The measured delay therefore contains only the systematic and random
error of the instrumentation. The "average value" of repeated
measurements is the systematic error, and the variation is the random
error.

One way to compute the systematic error, and the random error to a 95%
confidence level, is to repeat the experiment many times - at least
hundreds of tests. The systematic error would then be the median. The
random error could then be found by removing the systematic error from
the measured values. The 95% confidence interval would be the range
from the 2.5th percentile to the 97.5th percentile of these deviations
from the true value. The calibration error "e" could then be taken to
be the largest absolute value of these two numbers, plus the
clock-related uncertainty. {Comment: as described, this bound is
relatively loose, since the uncertainties are added and the absolute
value of the largest deviation is used. As long as the resulting value
is not a significant fraction of the measured values, it is a
reasonable bound. If the resulting value is a significant fraction of
the measured values, then more exact methods will be needed to compute
the calibration error.}

Note that random error is a function of measurement load. For example,
if many paths will be measured by one instrument, this might increase
interrupts, process scheduling, and disk I/O (for example, recording
the measurements), all of which may increase the random error in
measured singletons. Therefore, in addition to minimal-load
measurements to find the systematic error, calibration measurements
should be performed with the same measurement load that the
instruments will see in the field.

We wish to reiterate that this statistical treatment refers to the
calibration of the instrument; it is used to "calibrate the meter
stick" and say how well the meter stick reflects reality.
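The percentile-based calibration procedure above might be sketched as
follows. This is a non-normative illustration; the function name, the
interface, and the simple nearest-rank percentile rule are all
assumptions of this example:

```python
import statistics

def calibrate(measured, clock_uncertainty=0.0):
    """Estimate the systematic error and the calibration error "e"
    from repeated back-to-back measurements (true delay ~ zero).

    measured          -- delay values from many repeated tests
                         (at least hundreds, per the text above)
    clock_uncertainty -- bound on residual clock-related uncertainty
    """
    # The systematic error is the median of the measured values.
    systematic = statistics.median(measured)
    # Deviations from the systematic error approximate the random error.
    deviations = sorted(m - systematic for m in measured)
    n = len(deviations)
    # Simple nearest-rank 2.5th and 97.5th percentiles of the deviations.
    p2_5 = deviations[int(0.025 * (n - 1))]
    p97_5 = deviations[int(0.975 * (n - 1))]
    # "e" is the largest absolute percentile value, plus the
    # clock-related uncertainty, as described above.
    e = max(abs(p2_5), abs(p97_5)) + clock_uncertainty
    return systematic, e
```

As the {Comment} above notes, this bound is relatively loose; a
production instrument might use a more exact percentile method.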
5.6.4 Errors in incT

The nominal interval between packets, incT, can vary during either
active or passive measurements. In passive measurement, packet headers
may include a timestamp applied prior to most of the protocol stack,
and the actual sending time may vary due to processor scheduling. For
example, H.323 systems are required to have packets ready for the
network stack within 5 ms of their ideal time. There may be additional
variation from the network between the Src and the MP(Src). Active
measurement systems may encounter similar errors, but to a lesser
extent. These errors must be accounted for in some types of analysis.

5.7 Reporting

The calibration and context in which the method is used MUST be
carefully considered, and SHOULD always be reported along with metric
results. We next present five items to consider: the Type-P of test
packets, the threshold of delay equivalent to loss, error calibration,
the path traversed by the test packets, and background conditions at
Src, Dst, and the intervening networks during a sample. This list is
not exhaustive; any additional information that could be useful in
interpreting applications of the metrics should also be reported.

5.7.1. Type-P

As noted in the Framework document [3], the value of a metric may
depend on the type of IP packets used to make the measurement, or
"Type-P". The value of Type-P-One-way-Periodic-Delay could change if
the protocol (UDP or TCP), port number, size, or arrangement for
special treatment (e.g., IP precedence or RSVP) changes. The exact
Type-P used to make the measurements MUST be reported.

5.7.2. Threshold for delay equivalent to loss

In addition, the threshold for delay equivalent to loss (or the
methodology used to determine this threshold) MUST be reported.

5.7.3. Calibration results
+ If the systematic error can be determined, it SHOULD be removed from
  the measured values.

+ You SHOULD also report the calibration error, e, such that the true
  value is the reported value plus or minus e, with 95% confidence
  (see the previous section).

+ If possible, the conditions under which a test packet with finite
  delay is reported as lost due to resource exhaustion on the
  measurement instrument SHOULD be reported.

5.7.4. Path

The path traversed by the packets SHOULD be reported, if possible. In
general, it is impractical to know the precise path a given packet
takes through the network. The precise path may be known for certain
Type-P packets on short or stable paths. If Type-P includes the record
route (or loose-source route) option in the IP header, and the path is
short enough, and all routers on the path support record (or
loose-source) route, then the path will be precisely recorded.

This may be impractical because the route must be short enough, many
routers do not support (or are not configured for) record route, and
use of this feature would often artificially worsen the performance
observed by removing the packet from common-case processing. However,
partial information is still valuable context. For example, if a host
can choose between two links (and hence two separate routes from Src
to Dst), then the initial link used is valuable context. {Comment: For
example, with one commercial setup, a Src on one NAP can reach a Dst
on another NAP by either of several different backbone networks.}

6. Additional discussion on periodic sampling

Fig. 1 illustrates measurements at the multiple protocol levels that
are relevant to this memo. The user's focus is on transport quality
evaluation from the application point of view.
However, to properly separate the quality contribution of the
operating system and codec on packet voice, for example, it is
beneficial to be able to measure quality at the IP level [6]. Link
layer monitoring provides a way of accounting for link layer
characteristics such as bit error rates.

      ---------------
      | application |
      ---------------
      |  transport  | <--
      ---------------
      |   network   | <--
      ---------------
      |    link     | <--
      ---------------
      |  physical   |
      ---------------

Fig. 1: Different possibilities for performing measurements: a
protocol view. Above, "application" refers to all layers above L4
and is not used in the OSI sense.

In general, the results of measurements may be influenced by
individual application requirements/responses related to the
following issues:

+ Lost packets: Applications may have varying tolerance to lost
  packets. Another consideration is the distribution of lost
  packets (i.e., random or bursty).

+ Long delays: Many applications will consider packets delayed
  longer than a certain value to be equivalent to lost packets
  (e.g., real-time applications).

+ Duplicate packets: Some applications may be perturbed if
  duplicate packets are received.

+ Reordering: Some applications may be perturbed if packets arrive
  out of sequence. This may be in addition to the possibility of
  exceeding the "long" delay threshold as a result of being out of
  sequence.

+ Corrupt packet header: Most applications will probably treat a
  packet with a corrupt header as equivalent to a lost packet.

+ Corrupt packet payload: Some applications (e.g., digital voice
  codecs) may accept a corrupt packet payload. In some cases, the
  packet payload may contain application-specific forward error
  correction (FEC) that can compensate for some level of
  corruption.

+ Spurious packets: Dst may receive spurious packets (i.e., packets
  that are not sent by the Src as part of the metric). Many
  applications may be perturbed by spurious packets.

Depending, e.g., on the observed protocol level, some issues listed
above may be indistinguishable from others by the application;
nevertheless, it may be important to preserve the distinction for
the operators of Src, Dst, and/or the intermediate network(s).

6.1 Measurement applications

This sampling method provides a way to perform measurements
irrespective of the possible QoS mechanisms utilized in the IP
network. As an example, for a QoS mechanism without hard
guarantees, measurements may be used to ascertain that the "best"
class gets the service that has been promised for the traffic class
in question. Moreover, an operator could study the quality of a
cheap, low-guarantee service implemented using possible slack
bandwidth in other classes. Such measurements could be made either
in studying the feasibility of a new service, or on a regular
basis.

IP delivery service measurements have been discussed within the
International Telecommunication Union (ITU). A framework for IP
service level measurements (with references to the framework for IP
performance [3]) that is intended to be suitable for service
planning has been approved as I.380 [7]. ITU-T Recommendation I.380
covers abstract definitions of performance metrics. This memo
describes a method that is useful both for service planning and
end-user testing purposes, in both active and passive measurements.

Delay measurements can be one-way [3,4], paired one-way, or round-
trip [8]. Accordingly, the measurements may be performed either
with synchronized or unsynchronized Src/Dst host clocks. Different
possibilities are listed below.

The reference measurement setup for all measurement types is shown
in Fig. 2.
         ----------------<  IP  >--------------------
          |                |            |           |
       -------          -------      --------   --------
       | Src |          | MP  |      |  MP  |   | Dst  |
       -------          |(Src)|      |(Dst) |   --------
                        -------      --------

Fig. 2: Example measurement setup.

An example of the use of the method is a setup with a source host
(Src), a destination host (Dst), and corresponding measurement
points (MP(Src) and MP(Dst)) as shown in Figure 2. Separate
equipment for measurement points may be used if having Src and/or
Dst conduct the measurement may significantly affect the delay
performance to be measured. MP(Src) should be placed/measured close
to the egress point of packets from Src. MP(Dst) should be
placed/measured close to the ingress point of packets for Dst.
"Close" is defined as a distance sufficiently small that the
application-level performance characteristics measured (such as
delay) can be expected to follow the corresponding performance
characteristics between Src and Dst to an adequate accuracy. The
basic principle here is that measurement results between MP(Src)
and MP(Dst) should be the same as for a measurement between Src and
Dst, within the general error margin target of the measurement
(e.g., < 1 ms; the number of lost packets is the same). If this is
not possible, the difference between the MP-MP measurement and the
Src-Dst measurement should preferably be systematic.

The test setup just described fulfills two important criteria:

1) The test is made with realistic stream metrics, emulating - for
   example - a full-duplex Voice over IP (VoIP) call.
2) Either one-way or round-trip characteristics may be obtained.

It is also possible to have intermediate measurement points between
MP(Src) and MP(Dst), but that is beyond the scope of this document.
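The periodic sending process used in this setup can be sketched as
follows. This is a non-normative illustration, not a reference
implementation: the function name and parameters are hypothetical,
and the defaults of incT = 20 ms with 160-byte payloads merely
approximate one direction of a G.711 VoIP stream.

```python
import socket
import time

def send_periodic_stream(dst_addr, dst_port, t0, inc_t, num_packets,
                         payload_size=160):
    """Send num_packets UDP packets; packet n is scheduled at t0 + n*inc_t.

    Returns each packet's deviation of actual send time from its ideal
    time; such deviations contribute to the "errors in incT" discussed
    in Section 5.6.4.  (Hypothetical sketch; a real system would also
    timestamp at MP(Src), close to the egress point of packets.)
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    deviations = []
    try:
        for n in range(num_packets):
            ideal = t0 + n * inc_t
            wait = ideal - time.monotonic()
            if wait > 0:
                time.sleep(wait)  # OS scheduling makes this inexact
            # Sequence number first, zero padding up to the target size.
            payload = n.to_bytes(4, "big").ljust(payload_size, b"\x00")
            sock.sendto(payload, (dst_addr, dst_port))
            deviations.append(time.monotonic() - ideal)
    finally:
        sock.close()
    return deviations
```

Note that the deviations returned above capture only the Src-side
scheduling error; variation added between Src and MP(Src), or by the
network, is not visible to the sender.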
6.1.1 One-way measurement

In the interest of specifying metrics that are as generally usable
as possible, application-level measurements based on one-way delays
are used in the example metrics. The implication of application-
level measurement for bi-directional applications such as
interactive multimedia conferencing is discussed below.

Performing a single one-way measurement only yields information on
network behavior in one direction. Moreover, the stream at the
network transport level does not accurately emulate a full-duplex
multimedia connection.

6.1.2 Paired one-way measurement

Paired one-way delay refers to two multimedia streams: Src to Dst
and Dst to Src for the same Src and Dst. By way of example, for
some applications, the delay performance of each one-way path is
more important than the round-trip delay. This is the case for
delay-limited signals such as VoIP. A possible reason for a
difference between the one-way delays is different routing of the
streams from Src to Dst vs. Dst to Src.

For example, a paired one-way measurement may show that Src to Dst
has an average delay of 30 ms while Dst to Src has an average delay
of 120 ms. To a round-trip delay measurement, this example would
look like an average delay of 150 ms. Without knowledge of the
asymmetry, we might miss a problem that the application at either
end may have with delays averaging more than 100 ms.

Moreover, a paired one-way delay measurement emulates a full-duplex
VoIP call more accurately than a single one-way measurement.

6.1.3 Round-trip measurement

From the point of view of periodic multimedia streams, round-trip
measurements have two advantages: they avoid the need for host
clock synchronization, and they allow for a simulation of
full-duplex communication.
The former aspect means that a measurement is easily performed,
since no special equipment or NTP setup is needed. The latter
property means that measurement streams are transmitted in both
directions. Thus, the measurement provides information on the
quality of service as experienced by two-way applications.

The downsides of round-trip measurement are the need for more
bandwidth than a one-way test and more complex accounting of packet
loss. Moreover, the stream returning towards the original sender
may be more bursty than the one on the first "leg" of the round-
trip journey. The last issue, however, means in practice that the
returning stream may experience worse QoS than the outgoing one,
and the performance estimates thus obtained are pessimistic ones.
The possibility of asymmetric routing and queuing must be taken
into account during analysis of the results.

Note that with suitable arrangements, round-trip measurements may
be performed using paired one-way measurements.

6.2 Statistics calculable from one sample

Some statistics may be particularly relevant to applications
simulated by periodic streams, such as the range of delay values
recorded during the sample.

For example, a sample metric generates 100 packets at MP(Src) with
the following measurements at MP(Dst):

+ 80 packets received with delay [i] <= 20 ms
+ 8 packets received with delay [i] > 20 ms
+ 5 packets received with corrupt packet headers
+ 4 packets from MP(Src) with no matching packet recorded
  at MP(Dst) (effectively lost)
+ 3 packets received with corrupt packet payload and delay [i] <=
  20 ms
+ 2 packets that duplicate one of the 80 packets received
  correctly as indicated in the first item

For this example, packets are considered acceptable if they are
received with a delay less than or equal to 20 ms and without
corrupt packet headers or packet payload.
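The acceptability tallies for this sample can be reproduced from the
counts above. Only the counts come from the example; the category
names and the two application profiles are illustrative labels, not
part of the metric definition.

```python
# Packet counts from the example sample of 100 packets at MP(Src).
counts = {
    "on_time": 80,          # delay <= 20 ms, intact header and payload
    "late": 8,              # delay > 20 ms
    "corrupt_header": 5,
    "lost": 4,              # no matching packet recorded at MP(Dst)
    "corrupt_payload": 3,   # corrupt payload, delay <= 20 ms
}
duplicates = 2              # duplicates of correctly received packets
total = 100                 # duplicates do not add to the sample size

# Application 1: requires delay <= 20 ms and intact header/payload.
pct_strict = counts["on_time"] / total

# Application 2: tolerates corrupt payload and has no delay bound.
pct_lenient = (counts["on_time"] + counts["late"]
               + counts["corrupt_payload"]) / total

print(pct_strict, pct_lenient)   # 0.8 0.91
```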
In this case, the percentage of acceptable packets is 80/100 = 80%.

For a different application which will accept packets with a
corrupt packet payload and no delay bound (so long as the packet is
received), the percentage of acceptable packets is (80+8+3)/100 =
91%.

6.3 Statistics calculable from multiple samples

There may be value in running multiple tests using this method to
collect a "sample of samples". For example, it may be more
appropriate to simulate 1,000 two-minute VoIP calls rather than a
single 2,000-minute call. When considering the collection of
multiple samples, issues like the interval between samples (e.g.,
minutes, hours), the composition of samples (e.g., equal Tf-T0
duration, different packet sizes), and network considerations
(e.g., running different samples over different intervening
link-host combinations) should be taken into account. For items
like the interval between samples, the usage pattern for the
application of interest should be considered.

When computing statistics for multiple samples, more general
statistics (e.g., median, percentile, etc.) may have relevance with
a larger number of packets.

6.4 Background conditions

In many cases, the results may be influenced by conditions at Src,
Dst, and/or any intervening networks. Factors that may affect the
results include: traffic levels and/or bursts during the sample,
link and/or host failures, etc. Information about the background
conditions may only be available by external means (e.g., phone
calls, television) and may only become available days after samples
are taken.

6.5 Considerations related to delay

For interactive multimedia sessions, end-to-end delay is an
important factor. Too large a delay reduces the quality of the
multimedia session as perceived by the participants.
One approach for managing end-to-end delays on an Internet path
involving heterogeneous link layer technologies is to use
per-domain delay quotas (e.g., 50 ms for a particular IP domain).
However, this scheme has clear inefficiencies, and can
over-constrain the problem of achieving some end-to-end delay
objective. A more flexible implementation ought to address issues
like the possibility of asymmetric delays on paths, and the
sensitivity of an application to delay variations in a given
domain. There are several alternatives as to the delay statistic
one ought to use in managing end-to-end QoS. This question,
although very interesting, is not within the scope of this memo and
is not discussed further here.

7. Security Considerations

7.1 Denial of Service Attacks

This metric generates a periodic stream of packets from one host
(Src) to another host (Dst) through intervening networks. This
method could be abused for denial of service attacks directed at
Dst and/or the intervening network(s).

Administrators of Src, Dst, and the intervening network(s) should
establish bilateral or multi-lateral agreements regarding the
timing, size, and frequency of collection of sample metrics. Use of
this method in excess of the terms agreed between the participants
may be cause for immediate rejection or discard of packets or other
escalation procedures defined between the affected parties.

7.2 User data confidentiality

Active use of this method generates packets for a sample, rather
than taking samples based on user data, and does not threaten user
data confidentiality. Passive measurement must restrict attention
to the headers of interest. Since user payloads may be temporarily
stored for length analysis, suitable precautions MUST be taken to
keep this information safe and confidential.
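A minimal sketch of the precaution above, assuming a passive monitor
that receives raw packet bytes: the capture records the full length
needed for length analysis while storing only a leading header
portion, never the user payload. The function name and the snap
length are hypothetical.

```python
def sanitize_captured_packet(packet: bytes, snap_len: int = 64):
    """Record the original packet length for length analysis, but keep
    only the first snap_len bytes (assumed here to cover the headers
    of interest), so user payload is never stored."""
    return len(packet), packet[:snap_len]

# Example: a 200-byte packet is stored as its length plus 64 header bytes.
length, headers = sanitize_captured_packet(bytes(200))
```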
7.3 Interference with the metric

It may be possible to identify that a certain packet or stream of
packets is part of a sample. With that knowledge at Dst and/or the
intervening networks, it is possible to change the processing of
the packets (e.g., increasing or decreasing delay) in a way that
may distort the measured performance. It may also be possible to
generate additional packets that appear to be part of the sample
metric. These additional packets are likely to perturb the results
of the sample measurement.

To discourage the kind of interference mentioned above, packet
interference checks, such as a cryptographic hash, may be used.

8. IANA Considerations

Since this method and metric do not define a protocol or well-known
values, there are no IANA considerations in this memo.

9. References

1  Bradner, S., "The Internet Standards Process -- Revision 3", BCP
   9, RFC 2026, October 1996.

2  Bradner, S., "Key words for use in RFCs to Indicate Requirement
   Levels", RFC 2119, March 1997.

3  Paxson, V., Almes, G., Mahdavi, J., and Mathis, M., "Framework
   for IP Performance Metrics", RFC 2330, May 1998.

4  Almes, G., Kalidindi, S., and Zekauskas, M., "A One-way Delay
   Metric for IPPM", RFC 2679, September 1999.

5  Demichelis, C., and Chimento, P., "IP Packet Delay Variation
   Metric for IPPM", work in progress.

6  ETSI TIPHON document TS-101329-5 (to be published in July).

7  International Telecommunication Union, "Internet protocol data
   communication service - IP packet transfer and availability
   performance parameters", Telecommunication Sector Recommendation
   I.380, February 1999.

8  Almes, G., Kalidindi, S., and Zekauskas, M., "A Round-trip Delay
   Metric for IPPM", RFC 2681, September 1999.

10. Acknowledgments

The authors wish to thank the chairs of the IPPM WG (Matt Zekauskas
and Merike Kaeo) for comments that have made the present draft
clearer and more focused. Howard Stanislevic and Will Leland have
also presented useful comments and questions. We also acknowledge
Henk Uijterwaal's continued challenge to develop the motivation for
this method. The authors have built on the substantial foundation
laid by the authors of the framework for IP performance [3].

11. Authors' Addresses

Vilho Raisanen
Nokia Networks
P.O. Box 300
FIN-00045 Nokia Group
Finland
Phone: +358 9 4376 1
Fax:   +358 9 4376 6852

Glenn Grotefeld
Motorola, Inc.
1501 W. Shure Drive, MS 2F1
Arlington Heights, IL 60004 USA
Phone: +1 847 435-0730
Fax:   +1 847 632-6800

Al Morton
AT&T Labs
Room D3 - 3C06
200 Laurel Ave. South
Middletown, NJ 07748 USA
Phone: +1 732 420 1571
Fax:   +1 732 368 1192

Full Copyright Statement

Copyright (C) The Internet Society (date). All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain
it or assist in its implementation may be prepared, copied,
published and distributed, in whole or in part, without restriction
of any kind, provided that the above copyright notice and this
paragraph are included on all such copies and derivative works.
However, this document itself may not be modified in any way, such
as by removing the copyright notice or references to the Internet
Society or other Internet organizations, except as needed for the
purpose of developing Internet standards in which case the
procedures for copyrights defined in the Internet Standards process
must be followed, or as required to translate it into languages
other than English.
The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on
an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.