1 Network Working Group                                      V. Raisanen
2 INTERNET-DRAFT                                                   Nokia
3 Expiration Date: May 2001                                 G. Grotefeld
4                                                               Motorola
5                                                          November 2000

7       Network performance measurement for periodic streams

10 1. Status of this Memo

12 This document is an Internet-Draft and is in full conformance with
13 all provisions of Section 10 of RFC 2026.

15 Internet-Drafts are working documents of the Internet Engineering
16 Task Force (IETF), its areas, and its working groups. Note that
17 other groups may also distribute working documents as Internet-
18 Drafts.
20 Internet-Drafts are draft documents valid for a maximum of six months
21 and may be updated, replaced, or obsoleted by other documents at any
22 time. It is inappropriate to use Internet-Drafts as reference
23 material or to cite them other than as "work in progress."

25 The list of current Internet-Drafts can be accessed at
26 http://www.ietf.org/ietf/1id-abstracts.txt

28 The list of Internet-Draft shadow directories can be accessed at
29 http://www.ietf.org/shadow.html

31 This memo provides information for the Internet community. This
32 memo does not specify an Internet standard of any
33 kind. Distribution of this memo is unlimited.

35 2. Abstract

37 This document describes a sample metric suitable for application-
38 level IP network transport measurement for periodic streams, such as
39 VoIP or streaming multimedia over IP. The reader is assumed to be
40 familiar with the terminology of the Framework for IP Performance
41 Metrics, RFC 2330. This document is parallel to A One-way Delay
42 Metric for IPPM, RFC 2679. Although this document is based on the
43 delay metrics, other characteristics can be measured with this
44 approach as well. For example, packet loss rate, reordering/out-of-
45 sequence delivery, and successive delay variation are all additional
46 metrics that can be built from this baseline set of measurements.

48 3. Introduction

50 This document discusses concepts relevant to application-level
51 performance measurements of an IP network. The original driver for
52 this work is the Quality of Service of interactive periodic streams
53 such as multimedia conferencing over IP, but the idea of application-
54 level measurement may have a wider scope. In the following,
55 interactive multimedia traffic is used as an example to illustrate
the concept.
57 A constant bit-rate (CBR), or nearly CBR, streaming (hereinafter
58 called periodic) multimedia bit stream may be simulated by
59 transmitting uniformly sized packets (or mostly uniformly sized
60 packets) at regular intervals through the network to be evaluated.
61 The "mostly uniformly sized packets" may be found in applications
62 that may use smaller packets during a portion of the stream (e.g.
63 digitally coded voice during silence periods). As noted in the
64 framework document [1], a sample metric using regularly spaced
65 singleton tests has some limitations when considered from a
66 general measurement point of view: only part of the network
67 performance spectrum is sampled. However, from the point of view of
68 application-level performance, this is actually good news, as
69 explained below.

71 IP delivery service measurements have been discussed within the
72 International Telecommunication Union (ITU). A framework for IP
73 service level measurements (with references to the framework for IP
74 performance [1]) that is intended to be suitable for service planning
75 has been approved as I.380 [3]. The emphasis in the ITU
76 recommendation is on passive measurements, though active measurements
77 are not explicitly forbidden. The present contribution proposes a
78 method that is usable both for service planning and end-user testing
79 purposes, and is based on active measurements.

81 3.1 Terminology

83 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
84 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
85 document are to be interpreted as described in RFC 2119 [4].
86 Although RFC 2119 was written with protocols in mind, the key words
87 are used in this document for similar reasons. They are used to
88 ensure that the results of measurements from two different
89 implementations are comparable, and to note instances when an
90 implementation could perturb the network.
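The periodic stream described above can be sketched in code. The following is an illustrative sketch only, not part of the metric definition; the 20 ms interval and 160-byte size are hypothetical values chosen to resemble a G.711-style voice stream.

```python
# Sketch: the nominal send schedule of a periodic test stream.
# The 20 ms interval and 160-byte size are illustrative (G.711-like).

def build_schedule(t0, tf, inc_t, size):
    """Return (packet_id, send_time, size) tuples for send times in [t0, tf]."""
    n = int(round((tf - t0) / inc_t)) + 1    # packet count, endpoints included
    return [(i, round(t0 + i * inc_t, 6), size) for i in range(n)]

# Two seconds of a 20 ms / 160-byte stream:
pkts = build_schedule(t0=0.0, tf=2.0, inc_t=0.020, size=160)
print(len(pkts))   # 101
print(pkts[1])     # (1, 0.02, 160)
```

Computing each send time from t0 (rather than accumulating increments) keeps the nominal schedule free of cumulative floating-point drift.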
92 3.2 Considerations related to delay

94 For interactive multimedia sessions, end-to-end delay is an
95 important factor. Too large a delay reduces the quality of the
96 multimedia session as perceived by the participants. One approach for
97 managing end-to-end delays on an Internet path involving
98 heterogeneous link layer technologies is to use per-domain delay
99 quotas (e.g. 50 ms for a particular IP domain). The 50 ms would
100 then be included in a calculation of an end-to-end delay bound. A
101 practical implementation of such a scheme ought to address issues
102 such as the possibility of asymmetric delays in a route in different
103 directions, and the sensitivity of an application to delay variations
104 in a given domain. There are several alternatives as to which kind of
105 derivative delay metric one ought to use in managing end-to-end QoS.
106 This question, although very interesting, is not within the scope of
107 this draft and is not discussed further here.

109 In the following, a methodology and metric are presented for
110 measuring media stream transport QoS in an IP domain. The
111 measurement results may be used in derivative metrics such as
112 average and maximum delays. A metric is presented that provides a
113 standard way of performing a measurement irrespective of the QoS
114 mechanism utilized in the core network. As an example, for a QoS
115 mechanism without hard guarantees, measurements may be used to
116 ascertain that the "best" class gets the service that has been
117 promised for the traffic class in question. Moreover, an operator
118 could study the quality of a cheap, low-guarantee service
119 implemented using possible slack bandwidth in other classes. Such
120 measurements could be made either in studying the feasibility of a
121 new service, or on a regular basis.

123 The present draft seeks to formalize the measurements in such a way
124 that interoperable results are achieved.
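The derivative metrics mentioned above (average and maximum delay) and the per-domain quota check can be sketched as follows. This is an illustrative sketch only; the 50 ms quota and the sampled delay values are hypothetical.

```python
# Sketch: derivative delay metrics checked against a per-domain quota.
# The 50 ms quota and the sampled delays are illustrative values only.

def delay_stats(delays):
    """Return (average, maximum) of a list of one-way delays in seconds."""
    return sum(delays) / len(delays), max(delays)

samples = [0.031, 0.029, 0.044, 0.038, 0.033]   # hypothetical one-way delays
avg, worst = delay_stats(samples)
quota = 0.050                                    # e.g. 50 ms for this domain
print(worst <= quota)                            # True: domain within quota
```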
126 3.3 Protocol level issues

128 The version of the Internet Protocol used in the measurement affects
129 (at least) packet sizes, and should be reported.

131 Fig. 1 illustrates measurements on multiple protocol levels that
132 are relevant to this draft. The major focus of the present draft
133 is on transport quality evaluation from the application point of
134 view. However, to properly account for quality effects of, e.g., the
135 operating system and codec on packet voice, it is beneficial to be
136 able to measure quality at the IP level [5]. Link layer monitoring
137 provides a way of accounting for link layer characteristics such
138 as bit error rates.

140      ---------------
141      | application |
142      ---------------
143      |  transport  | <--
144      ---------------
145      |   network   | <--
146      ---------------
147      |    link     | <--
148      ---------------
149      |  physical   |
150      ---------------

152 Fig. 1: Different possibilities for performing measurements: a
153 protocol view. Above, "application" refers to all layers above
154 L4 and is not used in the OSI sense.

156 In general, the results of measurements may be influenced by
157 individual application requirements/responses related to the
158 following issues:

160 + Lost packets: Applications may have varying tolerance to lost
161   packets. Another consideration is the distribution of lost
162   packets (i.e. random or bursty).
163 + Long delays: Many applications will consider packets delayed
164   longer than a certain value to be equivalent to lost packets
165   (i.e. real-time applications).
166 + Duplicate packets: Some applications may be perturbed if
167   duplicate packets are received.
168 + Out-of-sequence: Some applications may be perturbed if packets
169   are received out of sequence. This may be in addition to the
170   possibility of exceeding the "long" delay threshold as a result
171   of being out of sequence.
An out-of-sequence packet outcome
172   occurs when a single IP packet received at the Dst measurement
173   point has a sequence number higher than the one that is
174   expected; the packet is therefore out of sequence (OOS) due to
      re-ordering.
175 + Corrupt packet header: Most applications will probably treat a
176   packet with a corrupt header as equivalent to a lost packet.
177 + Corrupt packet payload: Some applications (e.g. digital voice
178   codecs) may accept a corrupt packet payload. In some cases, the
179   packet payload may contain application-specific forward error
180   correction (FEC) that can compensate for some level of
181   corruption.
182 + Spurious packet: Dst may receive spurious packets (i.e. packets
183   that are not sent by the Src as part of the metric). Many
184   applications may be perturbed by spurious packets.

186 Although, depending, e.g., on the observed protocol level, some
187 issues listed above may be indistinguishable from others by the
188 application, it may be important to preserve the distinction for
189 the operators of Src, Dst, and/or the intermediate network(s).

191 Because of the possible errors listed above, it is in most cases
192 recommended to use a packet identifier for each packet generated at
193 Src. Identifiers for the metric sample may be those used by the
194 underlying transport layer (e.g. the RTP sequence number) or the
195 identifiers used by an application, if the application to be modeled
196 by the metric uses one. The possibility of identifier roll-over
197 (reuse, if intentional) during a metric collected over a "long"
198 (application-dependent) time should be taken into account.

200 If the application does not use an identifier, it may still be
201 useful to add identifiers to the packets in the metric sample to
202 help identify possible anomalies such as out-of-sequence packets.
203 This would be most useful in the case where the application
204 expects to receive packets in sequence, but has no capability to
205 identify the sequence of packets received at Dst.

207 3.4 Application-level measurement

209 In what follows, a metric is proposed for application-level network
210 performance measurement. In effect, the metric is an emulation of
211 periodic multimedia stream performance. The justification for using
212 realistic application metrics in the measurement is as follows:

214 + The results of the measurement are automatically relevant to the
215   performance as perceived by the application in question.
216 + All the packets in the measurement contribute to the accuracy of
217   the estimation of performance variation at the timescale that is
218   important to the multimedia application (the packetization
219   interval).
220 + Effects of elastic traffic (TCP) on measurement packets are
221   different for a sustained stream than for single packets during
222   overloading situations, as discussed in [3].

224 3.5 Measurement types

226 Delay measurements can be one-way [2,3], paired one-way, or
227 round-trip [6]. Accordingly, the measurements may be performed
228 either with synchronized or unsynchronized Src/Dst host clocks.
229 Different possibilities are listed below.

231 The reference measurement setup for all measurement types is
232 shown in Fig. 2.

234      ----------------< IP >--------------------
235      |        |                  |           |
236   -------  -------          --------     --------
237   | Src |  | MP  |          |  MP  |     | Dst  |
238   -------  |(Src)|          |(Dst) |     --------
239            -------          --------

241 Fig. 2: Example setup for the metric usage.

243 An example of the use of the metric is a setup with a source host
244 (Src), a destination host (Dst), and corresponding measurement
245 points (MP(Src) and MP(Dst)), as shown in Fig. 2. Separate equipment
246 for measurement points may be used if having Src and/or Dst conduct
247 the measurement would significantly affect the delay performance to
248 be measured.
MP(Src) should be placed/measure close to the egress point
249 of packets from Src. MP(Dst) should be placed/measure close to
250 the ingress point of packets for Dst. "Close" is defined as a
251 distance sufficiently small that the application-level performance
252 characteristics measured (such as delay) can be expected to follow
253 the corresponding performance characteristics between Src and Dst
254 to an adequate accuracy. The basic principle here is that
255 measurement results between MP(Src) and MP(Dst) should be the same
256 as for a measurement between Src and Dst, within the general error
257 margin target of the measurement (e.g., < 1 ms; the number of lost
258 packets is the same). If this is not possible, the difference
259 between the MP-MP measurement and the Src-Dst measurement should
    preferably be systematic.

261 The test setup just described fulfills two important criteria:
262 1) The test is made with realistic stream metrics, emulating, for
263    example, a full-duplex Voice over IP (VoIP) call.
264 2) Either one-way or round-trip characteristics may be obtained.

266 It is also possible to have intermediate measurement points between
267 MP(Src) and MP(Dst), but that is beyond the scope of this document.

269 3.5.1 One way measurement

271 In the interests of specifying metrics that are as generally usable
272 as possible, application-level measurements based on one-way delays
273 are used in the example metrics. The implication of application-level
274 measurement for bi-directional applications such as interactive
275 multimedia conferencing is discussed below.

277 Performing a single one-way measurement only yields information on
278 network behavior in one direction. Moreover, the stream at the
279 network transport level does not accurately emulate a full-duplex
280 multimedia connection.

282 3.5.2 Paired one way measurement

284 Paired one way delay refers to two multimedia streams: Src to Dst
285 and Dst to Src for the same Src and Dst.
By way of example, for
286 some applications, the delay performance of each one way path is
287 more important than the round trip delay. This is the case for
288 delay-limited signals such as VoIP. A possible reason for a
289 difference between one-way delays is different routing of streams
290 from Src to Dst vs. Dst to Src.

292 For example, a paired one way measurement may show that Src to Dst
293 has an average delay of 30 ms while Dst to Src has an average delay
294 of 120 ms. To a round trip delay measurement, this example would
295 look like an average delay of 150 ms. Without the knowledge of the
296 asymmetry, we might miss a problem that the application at either
297 end may have with delays averaging more than 100 ms.

299 Moreover, a paired one way delay measurement emulates a full-duplex
300 VoIP call more accurately than a single one-way measurement.

302 3.5.3 Round trip measurement

304 From the point of view of periodic multimedia streams,
305 round-trip measurements have two advantages: they avoid the need for
306 host clock synchronization and they allow for a simulation of
307 full-duplex connections. The former aspect means that a measurement
308 is easily performed, since no special equipment or NTP setup is
309 needed. The latter property means that measurement streams are
310 transmitted in both directions. Thus, the measurement provides
311 information on the quality of service as experienced by the
312 application in question.

314 The downsides of round-trip measurement are the need for more
315 bandwidth than a one-way test and more complex accounting of
316 packet loss. Moreover, the stream that is returning towards the
317 original sender may be more bursty than the one on the first "leg" of
318 the round-trip journey. In practice, the latter issue means that the
319 returning stream experiences worse QoS than the outgoing one, and
320 the performance estimates thus obtained are pessimistic ones.
The
321 possibility of asymmetric routing and queuing must be taken into
322 account during the analysis of the results.

324 Please note that, with suitable arrangements, round-trip measurements
325 may be performed using paired one way measurements.

327 4 Sample metric for multimedia stream simulation

329 The sample metric presented here is similar to the sample metric
330 Type-P-One-way-Delay-Poisson-Stream presented in [2]. "Singletons",
331 as defined in [1] and [2], are not directly used in this document
332 because certain key results (such as duplicate or out-of-sequence
333 packets) cannot be identified in the context of a singleton, but
334 only as part of a sample.

336 4.1 Metric name

338 Type-P-One-way-Delay-Periodic-Stream

340 4.2 Metric parameters

342 4.2.1 Global metric parameters

344 These parameters are applicable to the metrics collected in the
345 following sections (4.2.2, 4.2.3, and 4.2.4).

347 + Src, the IP address of a host
348 + Dst, the IP address of a host
349 + IPV, the IP version (IPv4/IPv6) used in the measurement
350 + T0, a time, for starting to generate packets and taking
351   measurements for a sample
352 + Tf, a time, greater than T0, for stopping generation of packets
353   for a sample
354 + incT, the duration of the nominal periodic packet interval
355 + p(j), the packet size, i.e., the number of bytes in each packet
356   of Type-P of size j
357 + dTloss, a time interval, used for determining whether a packet
358   should be considered lost
359 + Tcons, a time interval [optional]

361 While a number of applications will use one packet size (j = 1),
362 other applications may use packets of different sizes (j > 1).
363 Especially in cases of congestion, it may be useful to have
364 packets smaller than the maximum or predominant size of packets
365 in the periodic stream.
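The global parameters above can be grouped into a single record. The following sketch is illustrative only; the field names and example values (addresses, 20 ms interval, 2 s loss threshold) are hypothetical, not mandated by this memo.

```python
# Sketch: the global metric parameters of Section 4.2.1 as one record.
# All example field values are illustrative, not mandated by this memo.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GlobalParams:
    src: str                    # Src, the IP address of a host
    dst: str                    # Dst, the IP address of a host
    ipv: int                    # IPV, the IP version (4 or 6)
    t0: float                   # T0, start of packet generation (s)
    tf: float                   # Tf, end of packet generation, > T0 (s)
    inc_t: float                # incT, periodic packet interval (s)
    sizes: List[int] = field(default_factory=lambda: [160])  # p(j), bytes
    dt_loss: float = 2.0        # dTloss, loss threshold (s)
    t_cons: Optional[float] = None  # Tcons, optional consolidation delay (s)

p = GlobalParams(src="192.0.2.1", dst="192.0.2.2", ipv=4,
                 t0=0.0, tf=60.0, inc_t=0.020)
assert p.tf > p.t0 and p.inc_t > 0   # basic sanity checks on the sample
```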
367 4.2.2 Metrics collected at MP(Src)

369 + Tstamp(Src)[i], for each packet [i], the time of the packet as
370   measured at MP(Src)
371 + PktID [i], for each packet [i], an identification number for the
372   packet sent from Src to Dst.

374 + PktSiTy [i], for each packet [i], the packet size and/or type.
375   Some applications may use packets of different size, either
376   because of application requirements or in response to IP
377   performance experienced.

379 4.2.3 Metrics collected at MP(Dst)

381 + dTstop, a time interval, used to add to time Tf to determine when
382   to stop collecting metrics for a sample
383 + Tstamp(Dst)[i], for each packet [i], the time of the packet as
384   measured at MP(Dst)
385 + PktID [i], for each packet [i], an identification number for the
386   packet received at Dst from Src.
387 + PktSiTy [i], for each packet [i], the packet size and/or type.
388   Some applications may use packets of different size, either
389   because of application requirements or in response to IP
390   performance experienced.
391 + PktStatus [i], for each packet [i], the status of the packet
392   received. Possible statuses include: OK, packet header corrupt,
393   packet payload corrupt, spurious, duplicate, out-of-sequence.

395 4.2.4 Metrics resulting when metrics collected at MP(Src) and MP(Dst)
396 are merged

398 These parameters are only available as a complete set when the
399 parameters from the preceding sections (4.2.1, 4.2.2, and 4.2.3) are
400 combined.

402 + Tstamp(Src)[i], for each packet [i], the time of the packet as
403   measured at MP(Src). This entry may be blank or noted as N/A
404   for spurious packets received at MP(Dst)
405 + Tstamp(Dst)[i], for each packet [i], the time of the packet as
406   measured at MP(Dst). This entry may be blank or noted as N/A
407   for packets not received at MP(Dst), received with corrupt
408   packet headers, or for duplicate packets received at MP(Dst).
409 + PktID [i], for each packet [i], an identification number for the
410   packet received. This identification number may be corrupted
411   for certain packets received at MP(Dst).
412 + PktSiTy [i], for each packet [i], the packet size and/or type.
413 + PktStatus [i], for each packet [i], the status of the packet
414   received. Possible statuses include: OK, packet header corrupt,
415   packet payload corrupt, spurious, duplicate, out-of-sequence.

417 + Delay [i], for each packet [i], the time interval Tstamp(Dst)[i] -
418   Tstamp(Src)[i]. For the following conditions, it will not be
419   possible to compute delay:
420   Spurious: There will be no Tstamp(Src)[i] time
421   Not received: There will be no Tstamp(Dst)[i]
422   Corrupt packet header: There will be no Tstamp(Dst)[i]
423   Duplicate: Only the first non-corrupt copy of the packet
424   received at Dst should have Delay [i] computed.
425 + SDV[i] [optional], for each packet [i] except the first one: the
426   momentary delay variation between successive packets, i.e., the
427   time interval Delay[i] - Delay[i-1]. SDV[i] may be negative,
428   zero, or positive. Delay for both packets i-1 and i must be
429   calculable according to the definition above or SDV[i] is
430   undefined.

432 4.3 High level description of the procedure to collect a sample

434 Beginning on or after time T0, Type-P packets are generated
435 by Src and sent to Dst until time Tf is reached, with a nominal
436 interval of incT between the first bits of successive packets, as
437 measured at MP(Src). The actual interval may deviate from incT for
438 a number of reasons: variation in packet generation at Src, clock
439 issues (see section 4.6), etc.

441 MP(Src) records the following information only for packets with
442 timestamps between and including T0 and Tf: the timestamp,
443 packet identifier, and packet size/type of each packet sent from Src
444 to Dst that is part of the sample.
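The derivation of Delay[i] and SDV[i] in Section 4.2.4 can be sketched as follows. This is an illustrative sketch only: records are assumed to be keyed by PktID, only the lost/OK outcomes are modeled, and all timestamp values are hypothetical.

```python
# Sketch: deriving Delay[i] and SDV[i] when MP(Src) and MP(Dst) data are
# consolidated. Records are keyed by PktID; all values are illustrative.

def merge(src_ts, dst_ts, dt_loss):
    """src_ts/dst_ts map PktID -> timestamp (s); return per-ID results."""
    results = {}
    prev_delay = None
    for pkt_id in sorted(src_ts):
        if pkt_id not in dst_ts:
            results[pkt_id] = ("lost", None, None)   # no Tstamp(Dst)[i]
            prev_delay = None
            continue
        delay = round(dst_ts[pkt_id] - src_ts[pkt_id], 6)
        if delay > dt_loss:
            results[pkt_id] = ("lost", None, None)   # dTloss exceeded
            prev_delay = None
            continue
        # SDV[i] is defined only when Delay[i-1] and Delay[i] both exist.
        sdv = None if prev_delay is None else round(delay - prev_delay, 6)
        results[pkt_id] = ("ok", delay, sdv)
        prev_delay = delay
    return results

src = {0: 0.000, 1: 0.020, 2: 0.040, 3: 0.060}   # Tstamp(Src), 20 ms stream
dst = {0: 0.030, 1: 0.055, 3: 0.092}             # packet 2 never arrives
r = merge(src, dst, dt_loss=2.0)
print(r[1])   # ('ok', 0.035, 0.005)
print(r[2])   # ('lost', None, None)
```

Note that SDV for the packet following a loss is left undefined, matching the rule above that both Delay[i-1] and Delay[i] must be calculable.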
446 MP(Dst) records the following information only for packets with
447 timestamps between T0 and (Tf + dTstop): the timestamp, packet
448 identifier, packet size/type, and received status of each packet
449 received from Src at Dst that is part of the sample. Optionally, at
450 a time Tf + Tcons, the data from MP(Src) and MP(Dst) are
451 consolidated to derive the results of the sample metric.

453 To prevent stopping data collection too soon, Tcons should be
454 greater than or equal to dTstop. Conversely, to keep data collection
455 reasonably efficient, dTstop should be some reasonable time interval
456 (seconds/minutes/hours), even if dTloss is infinite or extremely long.

458 4.4 Discussion

460 The sample metric thus defined is intended to probe the delays and
461 the delay variation as experienced by the multimedia streams of
462 an application. Accordingly, the delay is assumed to be measured at
463 the transport layer. Since a given range of packet sizes and a
464 nominal interval between packets are used, the method probes only a
465 specific time scale of network QoS variations.

467 There are a number of factors that should be taken into account when
468 collecting a sample metric of Type-P-One-way-Delay-Periodic-Stream.

470 + T0 and (Tf + dTloss) should specify a long enough time interval to
471   represent a reasonable use of the application under test (e.g. do
472   not provide only a 100 ms time interval for a phone call).

474 + T0 and (Tf + dTloss) should specify a time interval that is not
475   excessively long compared to the usage of the application under
476   test (e.g. do not provide a one-week continuous phone call).

478 + The nominal interval between packets (incT) and the packet size(s)
479   (p(j)) should not define an equivalent bit rate that is in excess
480   of the capacity of the egress port of Src, the ingress port of
481   Dst, or the carrying capacity of the intervening network(s).
There may
482 be exceptional cases to test the response of the application to
483 overload conditions in the transport networks, but these cases
484 should be strictly controlled.

486 + Real delay values will be positive. Therefore, it does not make
487   sense to report a negative value as a real delay. However, an
488   individual zero or negative delay value might be useful as part of
489   a stream when trying to discover the distribution of the delay
490   values of a stream.

492 + Depending on the measurement topology, delay values may be as low
493   as 100 usec to 10 msec, in which case it may be important for Src
494   and Dst to synchronize very closely. GPS systems afford one way to
495   achieve synchronization to within several tens of usec. Ordinary
496   application of NTP may allow synchronization to within several
497   msec, but this depends on the stability and symmetry of delay
498   properties among the NTP agents used, and this delay is what we
499   are trying to measure. A combination of some GPS-based NTP servers
500   and a conservatively designed and deployed set of other NTP
501   servers should yield good results, but this is yet to be tested.

503 + Reordering of packets is best discussed in terms of the entire
504   set of measurement packets received, i.e. it should be addressed
505   in Sec. 4.9.1.

507 + A given methodology will have to include a way to determine
508   whether a packet was lost or whether its delay is merely very
509   large (and the packet is yet to arrive at Dst). The global metric
510   parameter dTloss defines a time interval such that delays larger
511   than dTloss are interpreted as losses.
512   {Comment: Note that, for many applications of these metrics, the
513   harm in treating a large delay as infinite might be zero or very
514   small.
A TCP data packet, for example, that arrives only after
515 several multiples of the RTT may as well have been lost.}

517 4.5 Additional Methodology Aspects

519 As with other Type-P-* metrics, the detailed methodology will depend
520 on the Type-P (e.g., protocol number, UDP/TCP port number, size,
521 precedence).

523 4.6 Errors and uncertainties

525 The description of any specific measurement method should include an
526 accounting and analysis of various sources of error or uncertainty.
527 The Framework document [1] provides general guidance on this point,
528 but we note here the following specifics related to delay metrics:

530 + Errors or uncertainties due to uncertainties in the clocks of the
531   MP(Src) and MP(Dst) measurement points.

533 + Errors or uncertainties due to the difference between 'wire time'
534   and 'host time'.

536 4.6.1. Errors or uncertainties related to Clocks

538 The uncertainty in a measurement of one-way delay is related, in
539 part, to uncertainties in the clocks of MP(Src) and MP(Dst). In
540 the following, we refer to the clock used to measure when the packet
541 was measured at MP(Src) as the MP(Src) clock, and we refer to the
542 clock used to measure when the packet was received at MP(Dst) as the
543 MP(Dst) clock. Alluding to the notions of synchronization, accuracy,
544 resolution, and skew, we note the following:

546 + Any error in the synchronization between the MP(Src) clock and
547   the MP(Dst) clock will contribute to error in the delay
548   measurement. We say that the MP(Src) clock and the MP(Dst)
549   clock have a synchronization error of Tsynch if the MP(Src) clock
550   is Tsynch ahead of the MP(Dst) clock. Thus, if we knew the
551   value of Tsynch exactly, we could correct for clock
552   synchronization by adding Tsynch to the uncorrected value of
553   Tstamp(Dst)[i] - Tstamp(Src)[i].

555 + The accuracy of a clock is important only in identifying the time
556   at which a given delay was measured.
Accuracy, per se, has no
557   importance to the accuracy of the measurement of delay. When
558   computing delays, we are interested only in the differences
559   between clock values, not the values themselves.

561 + The resolution of a clock adds to the uncertainty about any time
562   measured with it. Thus, if the MP(Src) clock has a resolution of
563   10 msec, then this adds 10 msec of uncertainty to any time value
564   measured with it. We will denote the resolution of the MP(Src)
565   clock and the MP(Dst) clock as ResMP(Src) and ResMP(Dst),
566   respectively.
567 + The skew of a clock is not so much an additional issue as it is a
568   realization of the fact that Tsynch is itself a function of time.
569   Thus, if we attempt to measure or to bound Tsynch, this needs to
570   be done periodically. Over some periods of time, this function
571   can be approximated as a linear function plus some higher order
572   terms; in these cases, one option is to use knowledge of the
573   linear component to correct the clock. Using this correction, the
574   residual Tsynch is made smaller, but remains a source of
575   uncertainty that must be accounted for. We use the function
576   Esynch(t) to denote an upper bound on the uncertainty in
577   synchronization. Thus, |Tsynch(t)| <= Esynch(t).

579 Taking these items together, we note that the naive computation
580 Tstamp(Dst)[i] - Tstamp(Src)[i] will be off by Tsynch(t) +/-
581 (ResMP(Src) + ResMP(Dst)). Using the notion of Esynch(t), we note
582 that these clock-related problems introduce a total uncertainty of
583 Esynch(t) + ResMP(Src) + ResMP(Dst). This estimate of total
584 clock-related uncertainty should be included in the
585 error/uncertainty analysis of any measurement implementation.

587 4.6.2.
4.6.2. Errors or uncertainties related to Wire-time vs Host-time

As we have defined one-way periodic delay, we would like to measure the time between when a packet is measured and time-stamped at MP(Src) and when it arrives and is time-stamped at MP(Dst); we refer to these as "wire times." If the timings are themselves performed by software on Src and Dst, however, then this software can only directly measure the time between when Src generates the packet just prior to sending the test packet and when Dst has started to process the packet after having received it; we refer to these two points as "host times."

To the extent that the difference between wire time and host time is accurately known, this knowledge can be used to correct the host time measurements, and the corrected value more accurately estimates the desired (wire time) metric.

To the extent, however, that the difference between wire time and host time is uncertain, this uncertainty must be accounted for in an analysis of a given measurement method. We denote by Hsource an upper bound on the uncertainty in the difference between the wire time at MP(Src) and the host time on the Src host, and similarly define Hdest for the difference between the host time on the Dst host and the wire time at MP(Dst). We then note that these problems introduce a total uncertainty of Hsource + Hdest. This estimate of total wire-vs-host uncertainty should be included in the error/uncertainty analysis of any measurement implementation.

4.6.3. Calibration

Generally, the measured values can be decomposed as follows:

measured value = true value + systematic error + random error

If the systematic error (the constant bias in measured values) can be determined, it can be compensated for in the reported results.
reported value = measured value - systematic error

therefore

reported value = true value + random error

The goal of calibration is to determine the systematic and random error generated by the instruments themselves in as much detail as possible. At a minimum, a bound ("e") should be found such that the reported value is in the range (true value - e) to (true value + e) at least 95 percent of the time. We call "e" the calibration error for the measurements. It represents the degree to which the values produced by the measurement instrument are repeatable; that is, how closely an actual delay of 30 ms is reported as 30 ms. {Comment: 95 percent was chosen for the reasons discussed in [2], briefly summarized as (1) some confidence level is desirable to be able to remove outliers, which will be found in measuring any physical property; (2) a particular confidence level should be specified so that the results of independent implementations can be compared.}

From the discussion in the previous two sections, the error in measurements could be bounded by determining all the individual uncertainties and adding them together to form

Esynch(t) + ResMP(Src) + ResMP(Dst) + Hsource + Hdest.

However, reasonable bounds on both the clock-related uncertainty captured by the first three terms and the host-related uncertainty captured by the last two terms should be possible by careful design techniques and by calibrating the instruments using a known, isolated network in a lab.

For example, the clock-related uncertainties are greatly reduced through the use of a GPS time source. The sum of Esynch(t) + ResMP(Src) + ResMP(Dst) is small, and is also bounded for the duration of the measurement because of the global time source.
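The decomposition and the worst-case error budget above can be sketched as follows; all numeric values are hypothetical and purely illustrative:

```python
# Hypothetical sketch of the error decomposition and budget above
# (all values in seconds; the numbers are made up for illustration).
import random

TRUE_VALUE = 0.030   # actual one-way delay
SYSTEMATIC = 0.002   # constant instrument bias

def measure():
    # measured value = true value + systematic error + random error;
    # the random error here is a made-up +/- 1 ms uniform jitter.
    return TRUE_VALUE + SYSTEMATIC + random.uniform(-0.001, 0.001)

# With the systematic error known, it is subtracted before reporting,
# leaving: reported value = true value + random error.
reported = measure() - SYSTEMATIC

# Worst-case bound from the individual uncertainty terms:
# Esynch(t) + ResMP(Src) + ResMP(Dst) + Hsource + Hdest
total_uncertainty = 0.001 + 0.00001 + 0.00001 + 0.0005 + 0.0005
```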
The host-related uncertainties, Hsource + Hdest, could be bounded by connecting two instruments back-to-back with a high-speed serial link or an isolated LAN segment. In this case, repeated measurements are measuring the same one-way delay.

If the test packets are small, such a network connection has a minimal delay that may be approximated by zero. The measured delay therefore contains only the systematic and random error of the instrumentation. The "average value" of repeated measurements is the systematic error, and the variation is the random error.

One way to compute the systematic error, and the random error to 95% confidence, is to repeat the experiment many times - at least hundreds of tests. The systematic error would then be the median. The random error could then be found by removing the systematic error from the measured values. The 95% confidence interval would be the range from the 2.5th percentile to the 97.5th percentile of these deviations from the true value. The calibration error "e" could then be taken to be the largest absolute value of these two numbers, plus the clock-related uncertainty. {Comment: as described, this bound is relatively loose since the uncertainties are added, and the absolute value of the largest deviation is used. As long as the resulting value is not a significant fraction of the measured values, it is a reasonable bound. If the resulting value is a significant fraction of the measured values, then more exact methods will be needed to compute the calibration error.}

Note that random error is a function of measurement load. For example, if many paths will be measured by one instrument, this might increase interrupts, process scheduling, and disk I/O (for example, recording the measurements), all of which may increase the random error in measured singletons.
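The median/percentile procedure described above might be sketched as follows; this is a simplified illustration (with a crude nearest-rank percentile rule and hypothetical inputs), not a prescribed implementation:

```python
# Sketch of the calibration computation above: back-to-back
# measurements of an approximately zero delay, the median as the
# systematic error, 2.5th/97.5th percentile deviations, and
# e = max(|p2.5|, |p97.5|) + clock-related uncertainty.
import statistics

def calibration_error(measurements, clock_uncertainty):
    systematic = statistics.median(measurements)
    deviations = sorted(m - systematic for m in measurements)
    n = len(deviations)
    # crude nearest-rank approximation of the percentiles
    p025 = deviations[int(0.025 * (n - 1))]
    p975 = deviations[int(0.975 * (n - 1))]
    e = max(abs(p025), abs(p975)) + clock_uncertainty
    return systematic, e
```

With several hundred back-to-back measurements as input, `systematic` would estimate the instrument bias to subtract from reported values, and `e` the 95% calibration error.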
Therefore, in addition to minimal-load measurements to find the systematic error, calibration measurements should be performed with the same measurement load that the instruments will see in the field.

We wish to reiterate that this statistical treatment refers to the calibration of the instrument; it is used to "calibrate the meter stick" and say how well the meter stick reflects reality.

4.7 Reporting the metric

The calibration and context in which the metric is measured MUST be carefully considered, and SHOULD always be reported along with metric results. We now present five items to consider: the Type-P of test packets, the threshold of delay equivalent to loss, error calibration, the path traversed by the test packets, and background conditions at Src, Dst, and the intervening networks during a sample. This list is not exhaustive; any additional information that could be useful in interpreting applications of the metrics should also be reported.

4.7.1. Type-P

As noted in the Framework document [1], the value of the metric may depend on the type of IP packets used to make the measurement, or "Type-P". The value of Type-P-One-way-Periodic-Delay could change if the protocol (UDP or TCP), port number, size, or arrangement for special treatment (e.g., IP precedence or RSVP) changes. The exact Type-P used to make the measurements MUST be accurately reported.

4.7.2. Threshold for delay equivalent to loss

In addition, the threshold for delay equivalent to loss (or the methodology used to determine this threshold) MUST be reported.

4.7.3. Calibration results

+ If the systematic error can be determined, it SHOULD be removed from the measured values.

+ The calibration error, e, SHOULD also be reported, such that the true value is the reported value plus or minus e, with 95% confidence (see the previous section).
+ If possible, the conditions under which a test packet with finite delay is reported as lost due to resource exhaustion on the measurement instrument SHOULD be reported.

4.7.4. Path

The path traversed by the packets SHOULD be reported, if possible. In general it is impractical to know the precise path a given packet takes through the network. The precise path may be known for certain Type-P packets on short or stable paths. If Type-P includes the record route (or loose-source route) option in the IP header, and the path is short enough, and all routers on the path support record (or loose-source) route, then the path will be precisely recorded.

This may be impractical because the route must be short enough, many routers do not support (or are not configured for) record route, and use of this feature would often artificially worsen the performance observed by removing the packet from common-case processing. However, partial information is still valuable context. For example, if a host can choose between two links (and hence two separate routes from Src to Dst), then the initial link used is valuable context. {Comment: For example, with Merit's NetNow setup, a Src on one NAP can reach a Dst on another NAP by any of several different backbone networks.}

4.7.5 Background conditions

In many cases, the results of a sample may be influenced by conditions at Src, Dst, and/or any intervening networks. Some things that may affect the results of a sample include traffic levels and/or bursts during the sample, link and/or host failures, etc. Information about the background conditions may only be available by non-Internet means (e.g., phone calls, television) and may only become available days after the samples are taken.

4.8 Single sample vs. a "sample of samples"
Because this metric represents a periodic stream as one sample, there may be value in running multiple tests using this metric to collect a "sample of samples". For example, it may be more appropriate to test 1,000 two-minute VoIP calls rather than a single 2,000-minute VoIP call. When considering collection of a sample of samples, issues like the interval between samples (e.g., Poisson vs. periodic, time of day/day of week), the composition of samples (e.g., equal Tf-T0 duration, different packet sizes), and network considerations (e.g., running different samples over different intervening link-host combinations) should be taken into account. For items like the interval between samples, the pattern of use of the application being measured should be considered.

4.9 Statistics based on Type-P-One-way-Delay-Periodic-Stream

4.9.1 Statistics calculable from one sample

As a metric based on a sample representative of certain applications, some general-purpose statistics (e.g., median and percentile) may be less applicable than ways to characterize the range of delay values recorded during the sample.

For example, suppose a sample metric generates 100 packets as measured at MP(Src), with the following measurements at MP(Dst):

+ 80 packets received with delay [i] <= 20 ms
+ 8 packets received with delay [i] > 20 ms
+ 5 packets received with corrupt packet headers
+ 4 packets from MP(Src) with no matching packet recorded at MP(Dst) (effectively lost)
+ 3 packets received with corrupt packet payload and delay [i] <= 20 ms
+ 2 packets that duplicate one of the 80 packets received correctly in the first line

For this example, packets are considered acceptable if they are received with delay less than or equal to 20 ms and without corrupt packet headers or payload. In this case, the percentage of acceptable packets is 80/100 = 80%.
For a different application which will accept packets with corrupt packet payload and no delay bound (so long as the packet is received), the percentage of acceptable packets is (80+8+3)/100 = 91%.

4.9.2 Statistics calculable from multiple samples

For computing statistics, a "sample of samples" series of measurements may be performed. As discussed in section 4.8, under these conditions general-purpose statistics (e.g., median, percentile, etc.) may be more relevant, as a more statistically significant number of packets is used.

5. Security Considerations

5.1 Denial of Service Attacks

This metric generates a periodic stream of packets from one host (Src) to another host (Dst) through intervening networks. This metric could be abused for denial-of-service attacks directed at Dst and/or the intervening network(s).

Administrators of Src, Dst, and the intervening network(s) should establish bilateral or multilateral agreements regarding the timing, size, and frequency of collection of sample metrics. Use of this metric in excess of the terms agreed upon by the participants may be cause for immediate rejection or discard of packets, or for other escalation procedures defined between the affected parties.

5.2 User data confidentiality

This metric generates packets for a sample metric, rather than taking samples based on user data. Thus, this metric does not threaten user data confidentiality.

5.3 Interference with the metric

It may be possible to identify that a certain packet or stream of packets is part of a sample metric. With that knowledge at Dst and/or in the intervening networks, it is possible to change the processing of the packets (e.g., increasing or decreasing delay) in a way that may distort the measured performance. It may also be possible to generate additional packets that appear to be part of the sample metric.
These additional packets are likely to perturb the results of the sample measurement.

To discourage the kind of interference mentioned above, packet interference checks, such as a cryptographic hash, MAY be used.

6. Acknowledgements

The authors wish to thank the chairs of the IPPM WG for comments that have made the present draft clearer and more focused. Howard Stanislevic and Al Morton have presented useful comments and questions. The authors have also built on the substantial foundations laid by the authors of the framework for IP performance [1].

7. References

[1] V. Paxson, G. Almes, J. Mahdavi, and M. Mathis: Framework for IP Performance Metrics, IETF RFC 2330, May 1998.
[2] G. Almes, S. Kalidindi, and M. Zekauskas: A one-way delay metric for IPPM, IETF RFC 2679, September 1999.
[3] International Telecommunications Union recommendation I.380, February 1999.
[4] S. Bradner: Key words for use in RFCs to Indicate Requirement Levels, IETF RFC 2119, March 1997.
[5] ETSI TIPHON document TS-101329-5 (to be published in July).
[6] G. Almes, S. Kalidindi, and M. Zekauskas: A round-trip delay metric for IPPM, IETF RFC 2681, September 1999.

8. Authors' contact information

Vilho Raisanen
P.O. Box 407
Communication Systems Laboratory
Nokia Research Center
FIN-00045 Nokia Group
Finland
Phone +358 9 4376 1
Fax +358 9 4376 6852

Glenn Grotefeld
Motorola, Inc.
1303 E. Algonquin Road
4th Floor
Schaumburg, IL 60196
USA
Phone +1 847 576-5992
Fax +1 847 538-7455

EXPIRES May 2001