Network Working Group                                     L. Ciavattone
Internet-Draft                                                 AT&T Labs
Intended status: Informational                                   R. Geib
Expires: December 31, 2011                              Deutsche Telekom
                                                               A. Morton
                                                               AT&T Labs
                                                               M. Wieser
                                         University of Applied Sciences
                                                               Darmstadt
                                                           June 29, 2011

  Test Plan and Results for Advancing RFC 2679 on the Standards Track
                  draft-morton-ippm-testplan-rfc2679-01

Abstract

   This memo proposes to advance a performance metric RFC along the
   standards track, specifically RFC 2679 on One-way Delay Metrics.
   Observing that the metric definitions themselves should be the
   primary focus rather than the implementations of metrics, this memo
   describes the test procedures to evaluate specific metric
   requirement clauses to determine if the requirement has been
   interpreted and implemented as intended.  Two completely independent
   implementations have been tested against the key specifications of
   RFC 2679.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 31, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
     1.1.  RFC 2679 Coverage
   2.  A Definition-centric metric advancement process
   3.  Test configuration
   4.  Error Calibration, RFC 2679
     4.1.  NetProbe Error and Type-P
     4.2.  Perfas Error and Type-P
   5.  Pre-determined Limits on Equivalence
   6.  Tests to evaluate RFC 2679 Specifications
     6.1.  One-way Delay, ADK Sample Comparison - Same & Cross
           Implementation
       6.1.1.  NetProbe Same-implementation results
       6.1.2.  Perfas Same-implementation results
       6.1.3.  One-way Delay, Cross-Implementation ADK Comparison
       6.1.4.  Conclusions on the ADK Results for One-way Delay
     6.2.  One-way Delay, Loss threshold, RFC 2679
       6.2.1.  NetProbe results for Loss Threshold
       6.2.2.  Perfas Results for Loss Threshold
       6.2.3.  Conclusions for Loss Threshold
     6.3.  One-way Delay, First-bit to Last-bit, RFC 2679
       6.3.1.  NetProbe and Perfas Results for Serialization
       6.3.2.  Conclusions for Serialization
     6.4.  One-way Delay, Difference Sample Metric (Lab)
       6.4.1.  NetProbe results for Differential Delay
       6.4.2.  Perfas results for Differential Delay
       6.4.3.  Conclusions for Differential Delay
     6.5.  Implementation of Statistics for One-way Delay
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   The IETF (IP Performance Metrics working group, IPPM) has considered
   how to advance their metrics along the standards track since 2001,
   with the initial publication of Bradner/Paxson/Mankin's memo [ref to
   work in progress, draft-bradner-metricstest-].  The original
   proposal was to compare the results of implementations of the
   metrics, because the usual procedures for advancing protocols did
   not appear to apply.  It was found to be difficult to achieve
   consensus on exactly how to compare implementations, since there
   were many legitimate sources of variation that would emerge in the
   results despite the best attempts to keep the network paths equal,
   and because considerable variation was allowed in the parameters
   (and therefore implementation) of each metric.  Flexibility in
   metric definitions, essential for customization and broad appeal,
   made the comparison task quite difficult.

   A renewed work effort sought to investigate ways in which the
   measurement variability could be reduced, and thereby simplify the
   problem of comparison for equivalence.

   There is *preliminary* consensus [I-D.ietf-ippm-metrictest] that the
   metric definitions should be the primary focus of evaluation rather
   than the implementations of metrics, and equivalent results are
   deemed to be evidence that the metric specifications are clear and
   unambiguous.  This is the metric specification equivalent of
   protocol interoperability.  The advancement process either produces
   confidence that the metric definitions and supporting material are
   clearly worded and unambiguous, OR identifies ways in which the
   metric definitions should be revised to achieve clarity.
   The process should also permit identification of options that were
   not implemented, so that they can be removed from the advancing
   specification (this is an aspect more typical of protocol
   advancement along the standards track).

   This memo's purpose is to implement the current approach for
   [RFC2679].  It was prepared to help progress discussions on the
   topic of metric advancement, both through e-mail and at the upcoming
   IPPM meeting at IETF.

   In particular, consensus is sought on the extent of tolerable errors
   when assessing equivalence in the results.  In discussions, the IPPM
   working group agreed that the test plan and procedures should
   include the threshold for determining equivalence, and that this
   information should be available in advance of cross-implementation
   comparisons.  This memo includes procedures for same-implementation
   comparisons to help set the equivalence threshold.

   Another aspect of the metric RFC advancement process is the
   requirement to document the work and results.  The procedures of
   [RFC2026] are expanded in [RFC5657], including sample implementation
   and interoperability reports.  This memo follows the template in
   [I-D.morton-ippm-advance-metrics] for the report that accompanies
   the protocol action request submitted to the Area Director,
   including a description of the test set-up, procedures, results for
   each implementation, and conclusions.

1.1.  RFC 2679 Coverage

   This plan, in its first draft version, does not cover all critical
   requirements and sections of [RFC2679].  Material will be added as
   it is "discovered" (not all requirements use requirements language).

2.  A Definition-centric metric advancement process

   The process described in Section 3.5 of [I-D.ietf-ippm-metrictest]
   takes as a first principle that the metric definitions, embodied in
   the text of the RFCs, are the objects that require evaluation and
   possible revision in order to advance to the next step on the
   standards track.

   IF two implementations do not measure an equivalent singleton or
   sample, or produce an equivalent statistic,

   AND sources of measurement error do not adequately explain the lack
   of agreement,

   THEN the details of each implementation should be audited along with
   the exact definition text, to determine if there is a lack of
   clarity that has caused the implementations to vary in a way that
   affects the correspondence of the results.

   IF there was a lack of clarity or multiple legitimate
   interpretations of the definition text,

   THEN the text should be modified and the resulting memo proposed for
   consensus and advancement along the standards track.

   Finally, all the findings MUST be documented in a report that can
   support advancement on the standards track, similar to those
   described in [RFC5657].  The list of measurement devices used in
   testing satisfies the implementation requirement, while the test
   results provide information on the quality of each specification in
   the metric RFC (the surrogate for feature interoperability).

   The figure below illustrates this process:

            ,---.
           /     \
          ( Start )
           \     /  Implementations
            `-+-'    +-------+
              |     /| 1     `.
     +---+----+    / +-------+ `.-----------+      ,-------.
     | RFC    |   /            |Check for   |   ,' was RFC `. YES
     |        |  /             |Equivalence.....  clause x   -------+
     |        |/     +-------+ |under       |   `. clear? ,'       |
     | Metric \......| 2     |....relevant  |     `---+---'   +----+---+
     | Metric |\     +-------+ |identical   |      No |       |Report  |
     | Metric | \              |network     |     +---+---.   |results+|
     | ...    |  \             |conditions  |     |Modify |   |Advance |
     |        |   \  +-------+ |            |     |Spec   +---+ RFC    |
     +--------+    \ | n     |.'+-----------+     +-------+   |request?|
                     +-------+                                +--------+
3.  Test configuration

   One metric implementation used was NetProbe version 5.8.5 (an
   earlier version is used in the WIPM system and deployed world-wide).
   NetProbe uses UDP packets of variable size, and can produce test
   streams with Periodic [RFC3432] or Poisson [RFC2330] sample
   distributions.

   The other metric implementation used was Perfas+ version 3.1,
   developed by Deutsche Telekom.  Perfas+ uses UDP unicast packets of
   variable size (but also supports TCP and multicast).  Test streams
   with periodic, Poisson, or uniform sample distributions may be used.

   Figure 1 shows a view of the test path as each implementation's
   test flows pass through the Internet and the L2TPv3 tunnel IDs (1
   and 2), based on Figure 1 of [I-D.ietf-ippm-metrictest].

   +----+ +----+              ,---.                +----+ +----+
   |Imp1| |Imp1|             /     \   +-------+   |Imp2| |Imp2|
   +----+ +----+            /       \  | Tunnel|   +----+ +----+
      | V100 | V200        (         ) | Head  |      | V300 | V400
      |      |             (         )_| Router|      |      |
   +--------+ +------+     |         |  +--B---+    +----------+
   |Ethernet| |Tunnel|     |Internet |     |        |Ethernet  |
   |Switch  |-|Head  |-----|         |     |        |Switch    |
   +-+--+---+ |Router|     |         |  +--+----+---+--+--+----+
     |__|     +--A---+     (         )  |Network|      |__|
                            \       /   |Emulat.|
     U-turn                  \     /    |"netem"|     U-turn
     V300 to V400             `-+-'     +-------+     V100 to V200

   Implementations           ,---.                    +--------+
                +~~~~~~~~~~~/     \~~~~~~             | Remote |
   +------->-----F2->------|       |----------->---.  |        |
   |  +---------+          | Tunnel(         )     |  |        |
   |  | transmit|-F1->-----| ID 1  (         )--->.|  |        |
   |  | Imp 1   |          +~~~~~~~~~|   |~~~~+   ||  |        |
   |  | receive |-<--+     (         )            F1  F2       |
   |  +---------+    |     | Internet|            ||  |        |
   *-------<-----+  F1     |         |            ||  |        |
   +---------+   |   |     +~~~~~~~~~|   |~~~~+   ||  |        |
   | transmit|-*  *--|-----|         )        |<-ated*|        |
   | Imp 2   |       |     | Tunnel  (        )    |  |        |
   | receive |-<-F2--+-----| ID 2     \      /     |<----*     |
   +---------+             +~~~~~~~~~~~\    /~~~~~~   | Switch |
                                        `-+-'         +--------+

   Illustrations of a test setup with a bi-directional tunnel.  The
   upper diagram emphasizes the VLAN connectivity and geographical
   location.  The lower diagram shows example flows traveling between
   two measurement implementations (for simplicity, only two flows are
   shown).

                                 Figure 1

   The testing employs a Layer 2 Tunnel Protocol, version 3 (L2TPv3)
   [RFC3931] tunnel between test sites on the Internet.  The tunnel IP
   and L2TPv3 headers are intended to conceal the test equipment
   addresses and ports from hash functions that would tend to spread
   different test streams across parallel network resources, with
   likely variation in performance as a result.

   At each end of the tunnel, one pair of VLANs encapsulated in the
   tunnel are looped-back so that test traffic is returned to each test
   site.  Thus, test streams traverse the L2TP tunnel twice, but appear
   to be one-way tests from the test equipment point of view.

   The network emulator is a host running Fedora 14 Linux
   [http://fedoraproject.org/] with IP forwarding enabled and the
   "netem" network emulator, part of the Fedora kernel 2.6.35.11
   [http://www.linuxfoundation.org/collaborate/workgroups/networking/
   netem], loaded and operating.  Connectivity across the netem/Fedora
   host was accomplished by bridging Ethernet VLAN interfaces together
   with "brctl" commands (e.g., eth1.100 <-> eth2.100).  The netem
   emulator was activated on one interface (eth1) and only operates on
   test streams traveling in one direction.  In some tests, independent
   netem instances operated separately on each VLAN.

   The links between the netem emulator host and the router and switch
   were found to be 100baseTx-HD (100 Mbps, half duplex), as reported
   by "mii-tool" when the testing was complete.  Use of half duplex was
   not intended, but probably added a small amount of delay variation
   that could have been avoided in full-duplex mode.

   Each individual test was run with common packet rates (1 pps and
   10 pps), Poisson/Periodic distributions, and IP packet sizes of 64,
   340, and 500 Bytes.

   For these tests, a stream of at least 300 packets was sent from
   Source to Destination in each implementation.  Periodic streams (as
   per [RFC3432]) with 1 second spacing were used, except as noted.
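   As an illustration of the two stream types described above, the
   following sketch in R (the statistical environment used for the
   analyses later in this memo; all names are illustrative only)
   generates send schedules for both:

      lambda <- 1                    # packets per second
      n <- 300                       # packets per stream
      periodic <- (1:n) / lambda     # 1 second spacing [RFC3432]
      # Poisson stream: cumulative sum of exponential inter-arrival
      # times at rate lambda [RFC2330]
      poisson <- cumsum(rexp(n, rate = lambda))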
   With the L2TPv3 tunnel in use, the metric name for the testing
   configured here (with respect to the IP header exposed to Internet
   processing) is:

      Type-IP-protocol-115-One-way-Delay--Stream

   With (Section 4.2 of [RFC2679]) Metric Parameters:

      + Src, the IP address of a host (12.3.167.16 or 193.159.144.8)

      + Dst, the IP address of a host (193.159.144.8 or 12.3.167.16)

      + T0, a time

      + Tf, a time

      + lambda, a rate in reciprocal seconds

      + Thresh, a maximum waiting time in seconds (see Section 3.8.2
        of [RFC2679])

   And (Section 4.3 of [RFC2679]) Metric Units: a sequence of pairs;
   the elements of each pair are:

      + T, a time, and

      + dT, either a real number or an undefined number of seconds.

   The values of T in the sequence are monotonic increasing.  Note that
   T would be a valid parameter to Type-P-One-way-Delay, and that dT
   would be a valid value of Type-P-One-way-Delay.
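   For illustration only, the sample defined by these Metric Units maps
   naturally onto an R data structure (a sketch with hypothetical
   values; an undefined dT is represented by NA):

      T  <- c(0.98, 2.03, 3.01)        # send times, seconds
      dT <- c(0.100132, NA, 0.100087)  # one-way delays; NA = undefined
      owd.sample <- data.frame(T = T, dT = dT)
      stopifnot(!is.unsorted(owd.sample$T))  # T is monotonic increasing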
   Also, Section 3.8.4 of [RFC2679] recommends that the path SHOULD be
   reported.  In this test set-up, most of the path details will be
   concealed from the implementations by the L2TPv3 tunnels, thus a
   more informative path traceroute can be conducted by the routers at
   each location.

   When NetProbe is used in production, a traceroute is conducted in
   parallel with, and at the outset of, measurements.

   Perfas+ does not support traceroute.

   IPLGW#traceroute 193.159.144.8

   Type escape sequence to abort.
   Tracing the route to 193.159.144.8

     1 12.126.218.245 [AS 7018] 0 msec 0 msec 4 msec
     2 cr84.n54ny.ip.att.net (12.123.2.158) [AS 7018] 4 msec 4 msec
       cr83.n54ny.ip.att.net (12.123.2.26) [AS 7018] 4 msec
     3 cr1.n54ny.ip.att.net (12.122.105.49) [AS 7018] 4 msec
       cr2.n54ny.ip.att.net (12.122.115.93) [AS 7018] 0 msec
       cr1.n54ny.ip.att.net (12.122.105.49) [AS 7018] 0 msec
     4 n54ny02jt.ip.att.net (12.122.80.225) [AS 7018] 4 msec 0 msec
       n54ny02jt.ip.att.net (12.122.80.237) [AS 7018] 4 msec
     5 192.205.34.182 [AS 7018] 0 msec
       192.205.34.150 [AS 7018] 0 msec
       192.205.34.182 [AS 7018] 4 msec
     6 da-rg12-i.DA.DE.NET.DTAG.DE (62.154.1.30) [AS 3320] 88 msec
       88 msec 88 msec
     7 217.89.29.62 [AS 3320] 88 msec 88 msec 88 msec
     8 217.89.29.55 [AS 3320] 88 msec 88 msec 88 msec
     9 * * *

   It was only possible to conduct the traceroute for the measured path
   on one of the tunnel-head routers (the normal trace facilities of
   the measurement systems are confounded by the L2TPv3 tunnel
   encapsulation).

4.  Error Calibration, RFC 2679

   An implementation is required to report on its error calibration in
   Section 3.8 of [RFC2679] (also required in Section 4.8 for sample
   metrics).  Sections 3.6, 3.7, and 3.8 of [RFC2679] give the detailed
   formulation of the errors and uncertainties for calibration.  In
   summary, Section 3.7.1 of [RFC2679] describes the total time-varying
   uncertainty as:

      Esynch(t) + Rsource + Rdest

   where:

      Esynch(t) denotes an upper bound on the magnitude of clock
      synchronization uncertainty, and

      Rsource and Rdest denote the resolution of the source clock and
      the destination clock, respectively.

   Further, Section 3.7.2 of [RFC2679] describes the total wire-time
   uncertainty as:

      Hsource + Hdest

   referring to the upper bounds on the host-time to wire-time
   difference for the source and destination, respectively.

   Section 3.7.3 of [RFC2679] describes a test with small packets over
   an isolated minimal network, where the results can be used to
   estimate the systematic and random components of the sum of the
   above errors or uncertainties.  In a test with hundreds of
   singletons, the median is the systematic error, and when the median
   is subtracted from all singletons, the remaining variability is the
   random error.

   The test context, or Type-P of the test packets, must also be
   reported, as required in Section 3.8 of [RFC2679] and all metrics
   defined there.  Type-P is defined in Section 13 of [RFC2330] (as are
   many terms used below).
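   The Section 3.7.3 decomposition takes only a few lines of R (a
   sketch; "cal" stands for a hypothetical vector of calibration
   singletons, in microseconds):

      systematic <- median(cal)    # systematic error component
      random <- cal - systematic   # remaining variability: random error
      summary(random)              # range of the random component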
4.1.  NetProbe Error and Type-P

   Type-P for this test was IP-UDP with Best Effort DSCP.  These
   headers were encapsulated according to the L2TPv3 specifications
   [RFC3931], and thus may not influence the treatment received as the
   packets traversed the Internet.

   In general, NetProbe error is dependent on the specific version and
   installation details.

   NetProbe operates using host time above the UDP layer, which is
   different from the wire-time preferred in [RFC2330], but can be
   identified as a source of error according to Section 3.7.2 of
   [RFC2679].

   Accuracy of NetProbe measurements is usually limited by NTP
   synchronization performance (which is typically taken as ~+/-1 ms
   error or greater), although the installation used in this testing
   often exhibits errors much less than typical for NTP.  The primary
   stratum 1 NTP server is closely located on a sparsely utilized
   network management LAN, thus it avoids many concerns raised in
   Section 10 of [RFC2330] (in fact, smooth adjustment, long-term
   drift analysis and compensation, and infrequent adjustment all lead
   to stability during measurement intervals, the main concern).

   The resolution of the reported results is 1 us (us = microsecond)
   in the version of NetProbe tested here, which contributes at least
   +/-1 us error.

   NetProbe implements a time-keeping sanity check on the sending and
   receiving time-stamping processes.  When a significant process
   interruption takes place, individual test packets are flagged as
   possibly containing unusual time errors, and are excluded from the
   sample used for all "time" metrics.

   We performed a NetProbe calibration of the type described in
   Section 3.7.3 of [RFC2679], using 64 Byte packets over a cross-
   connect cable.  The results estimate the systematic and random
   components of the sum of the Hsource + Hdest errors or
   uncertainties.  In a test with 300 singletons conducted over 30
   seconds (periodic sample with 100 ms spacing), the median is the
   systematic error and the remaining variability is the random error.
   One set of results is tabulated below:

   (Results from the "R" software environment for statistical computing
   and graphics - http://www.r-project.org/ )

   > summary(XD4CAL)
         CAL1            CAL2             CAL3
    Min.   : 89.0   Min.   : 68.00   Min.   : 54.00
    1st Qu.: 99.0   1st Qu.: 77.00   1st Qu.: 63.00
    Median :110.0   Median : 79.00   Median : 65.00
    Mean   :116.8   Mean   : 83.74   Mean   : 69.65
    3rd Qu.:127.0   3rd Qu.: 88.00   3rd Qu.: 74.00
    Max.   :205.0   Max.   :177.00   Max.   :163.00
   >

   NetProbe Calibration with Cross-Connect Cable, one-way delay values
   in microseconds (us)

   The median or systematic error can be as high as 110 us, and the
   range of the random error is also on the order of 116 us for all
   streams.

   Also, anticipating the Anderson-Darling K-sample (ADK) comparisons
   to follow, we corrected the CAL2 values for the difference between
   the means of CAL2 and CAL3 (as specified in
   [I-D.ietf-ippm-metrictest]), and found strong support for the Null
   Hypothesis that the samples are from the same distribution (at a
   resolution of 1 us and alpha equal to 0.05 and 0.01):

   > XD4CVCAL2 <- XD4CAL$CAL2 - (mean(XD4CAL$CAL2)-mean(XD4CAL$CAL3))
   > boxplot(XD4CVCAL2,XD4CAL$CAL3)
   > XD4CV2_ADK <- adk.test(XD4CVCAL2, XD4CAL$CAL3)
   > XD4CV2_ADK
   Anderson-Darling k-sample test.

   Number of samples:  2
   Sample sizes:  300 300
   Total number of values:  600
   Number of unique values:  97

   Mean of Anderson Darling Criterion:  1
   Standard deviation of Anderson Darling Criterion:  0.75896

   T = (Anderson Darling Criterion - mean)/sigma

   Null Hypothesis: All samples come from a common population.

                         t.obs  P-value  extrapolation
   not adj. for ties   0.71734  0.17042              0
   adj. for ties      -0.39553  0.44589              1
   >
4.2.  Perfas Error and Type-P

   Perfas+ is configured to use GPS synchronisation and uses NTP
   synchronization as a fall-back or default.  GPS synchronisation
   worked throughout this test with the exception of the calibration
   stated here (one implementation was NTP synchronised only).  The
   time stamp accuracy typically is 0.1 ms.

   The resolution of the results reported by Perfas+ is 1 us (us =
   microsecond) in the version tested here, which contributes at least
   +/-1 us error.

   Port      5001   5002   5003
   Min.      -227   -226    294
   Median    -169   -167    323
   Mean      -159   -157    335
   Max.         6    -52    376
   s          102    102     93

   Perfas Calibration with Cross-Connect Cable, one-way delay values
   in microseconds (us)

   The median or systematic error can be as high as 323 us, and the
   range of the random error is also less than 232 us for all streams.

5.  Pre-determined Limits on Equivalence

   In this section, we provide the numerical limits on comparisons
   between implementations, in order to declare that the results are
   equivalent and, therefore, that the tested specification is clear.

   A key point is that the allowable errors, corrections, and
   confidence levels only need to be sufficient to detect
   mis-interpretation of the tested specification resulting in
   diverging implementations.

   Also, the allowable error must be sufficient to compensate for
   measured path differences.  It was simply not possible to measure
   fully identical paths in the VLAN-loopback test configuration used,
   and this practical compromise must be taken into account.

   For Anderson-Darling K-sample (ADK) comparisons, the required
   confidence factor for the cross-implementation comparisons SHALL be
   the smallest of:

   o  0.95 confidence factor at 1 ms resolution, or

   o  the smallest confidence factor (in combination with resolution)
      of the two same-implementation comparisons for the same test
      conditions.

   A constant time accuracy error of as much as +/-0.5 ms MAY be
   removed from one implementation's distributions (all singletons)
   before the ADK comparison is conducted.

   A constant propagation delay error (due to use of different sub-nets
   between the switch and measurement devices at each location) of as
   much as +2 ms MAY be removed from one implementation's distributions
   (all singletons) before the ADK comparison is conducted.

   For comparisons involving the mean of a sample or other central
   statistics, the limits on both the time accuracy error and the
   propagation delay error constants given above also apply.
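   As a sketch of how these limits can be applied (in R, with the adk
   package used elsewhere in this memo; x and y are hypothetical
   singleton samples, in seconds, from the two implementations):

      library(adk)
      shift <- mean(y) - mean(x)   # constant offset to be removed
      # Section 5 limits: +/-0.5 ms time accuracy, +2 ms propagation
      stopifnot(shift >= -0.0005, shift <= 0.0005 + 0.002)
      adk.test(x, y - shift)       # then apply the 0.95 / 1 ms criterion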
6.  Tests to evaluate RFC 2679 Specifications

   This section describes some results from real-world (cross-Internet)
   tests with measurement devices implementing IPPM metrics and a
   network emulator to create relevant conditions, to determine whether
   the metric definitions were interpreted consistently by
   implementors.

   The procedures are slightly modified from the original procedures
   contained in Appendix A.1 of [I-D.ietf-ippm-metrictest].  The
   modifications include the use of the mean statistic for comparisons.

   Note that there are only five instances of the requirement term
   "MUST" in [RFC2679] outside of the boilerplate and [RFC2119]
   reference.

6.1.  One-way Delay, ADK Sample Comparison - Same & Cross
      Implementation

   This test determines if implementations produce results that appear
   to come from a common delay distribution, as an overall evaluation
   of Section 4 of [RFC2679], "A Definition for Samples of One-way
   Delay".  Same-implementation comparison results help to set the
   threshold of equivalence that will be applied to cross-
   implementation comparisons.

   This test is intended to evaluate measurements in sections 3 and 4
   of [RFC2679].

   By testing the extent to which the distributions of one-way delay
   singletons from two implementations of [RFC2679] appear to be from
   the same distribution, we economize on comparisons, because
   comparing a set of individual summary statistics (as defined in
   Section 5 of [RFC2679]) would require another set of individual
   evaluations of equivalence.  Instead, we can simply check which
   statistics were implemented, and report on those facts.

   1.  Configure an L2TPv3 path between test sites, and each pair of
       measurement devices to operate tests in their designated pair
       of VLANs.

   2.  Measure a sample of one-way delay singletons with 2 or more
       implementations, using identical options and network emulator
       settings (if used).

   3.  Measure a sample of one-way delay singletons with *four*
       instances of the *same* implementations, using identical
       options, noting that connectivity differences SHOULD be the
       same as for the cross-implementation testing.

   4.  Apply the ADK comparison procedures (see Appendix C of
       [I-D.ietf-ippm-metrictest]) and determine the resolution and
       confidence factor for distribution equivalence of each same-
       implementation comparison and each cross-implementation
       comparison.

   5.  Take the coarsest resolution and confidence factor for
       distribution equivalence from the same-implementation pairs, or
       the limit defined in Section 5 above, as a limit on the
       equivalence threshold for these experimental conditions.

   6.  Apply constant correction factors to all singletons of the
       sample distributions, as described and limited in Section 5
       above.

   7.  Compare the cross-implementation ADK performance with the
       equivalence threshold determined in step 5 to determine if
       equivalence can be declared.

   The common parameters used for tests in this section are:

   o  IP header + payload = 64 octets

   o  Periodic sampling at 1 packet per second

   o  Test duration = 300 seconds (March 29)

   The netem emulator was set for 100 ms average delay, with uniform
   delay variation of +/-50 ms.  In this experiment, the netem emulator
   was configured to operate independently on each VLAN, and thus the
   emulator itself is a potential source of error when comparing
   streams that traverse the test path in different directions.

   In the result analysis of this section:

   o  All comparisons used 1 microsecond resolution.

   o  No Correction Factors were applied.

   o  The 0.95 confidence factor (1.960 for paired stream comparison)
      was used.
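   The pairwise matrices shown in the next two subsections can be
   produced with a short R loop (a sketch; the stream vectors s1, s2,
   sA, and sB are hypothetical names matching the tables):

      library(adk)
      streams <- list(s1 = s1, s2 = s2, sA = sA, sB = sB)
      for (i in 2:length(streams))
        for (j in 1:(i - 1))
          print(adk.test(streams[[j]], streams[[i]]))  # ti.obs and P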
6.1.1.  NetProbe Same-implementation results

   A single same-implementation comparison fails the ADK criterion (s1
   <-> sB).  We note that these streams traversed the test path in
   opposite directions, making the live network factors a possibility
   to explain the difference.

   All other pair comparisons pass the ADK criterion.

   +------------+-------------+-------------+-------------+
   | ti.obs (P) |     s1      |     s2      |     sA      |
   +------------+-------------+-------------+-------------+
   |     s2     | 0.25 (0.28) |             |             |
   +------------+-------------+-------------+-------------+
   |     sA     | 0.60 (0.19) |-0.80 (0.57) |             |
   +------------+-------------+-------------+-------------+
   |     sB     | 2.64 (0.03) | 0.07 (0.31) |-0.52 (0.48) |
   +------------+-------------+-------------+-------------+

   NetProbe ADK Results for same-implementation

6.1.2.  Perfas Same-implementation results

   All pair comparisons pass the ADK criterion.

   +------------+-------------+-------------+-------------+
   | ti.obs (P) |     p1      |     p2      |     p3      |
   +------------+-------------+-------------+-------------+
   |     p2     | 0.06 (0.32) |             |             |
   +------------+-------------+-------------+-------------+
   |     p3     | 1.09 (0.12) | 0.37 (0.24) |             |
   +------------+-------------+-------------+-------------+
   |     p4     |-0.81 (0.57) |-0.13 (0.37) | 1.36 (0.09) |
   +------------+-------------+-------------+-------------+

   Perfas ADK Results for same-implementation

6.1.3.  One-way Delay, Cross-Implementation ADK Comparison

   The cross-implementation results are compared using a combined ADK
   analysis [ref], where all NetProbe results are compared with all
   Perfas results after testing that the combined same-implementation
   results pass the ADK criterion.

   When 4 (same) samples are compared, the ADK criterion for 0.95
   confidence is 1.915, and when all 8 (cross) samples are compared it
   is 1.85.

   Combination of Anderson-Darling K-Sample Tests.

   Sample sizes within each data set:
   Data set 1 :  299 297 298 300 (NetProbe)
   Data set 2 :  300 300 298 300 (Perfas)
   Total sample size per data set:  1194 1198
   Number of unique values per data set:  1188 1192
   ...
   Null Hypothesis:
   All samples within a data set come from a common distribution.
   The common distribution may change between data sets.

   NetProbe              ti.obs  P-value  extrapolation
     not adj. for ties  0.64999  0.21355              0
     adj. for ties      0.64833  0.21392              0
   Perfas
     not adj. for ties  0.55968  0.23442              0
     adj. for ties      0.55840  0.23473              0

   Combined Anderson-Darling Criterion:
                         tc.obs  P-value  extrapolation
     not adj. for ties  0.85537  0.17967              0
     adj. for ties      0.85329  0.18010              0

   The combined same-implementation samples and the combined cross-
   implementation comparison all pass the ADK criteria at P >= 0.18 and
   support the Null Hypothesis (both data sets come from a common
   distribution).

   We also see that the paired ADK comparisons are rather critical.
   Although the NetProbe s1-sB comparison failed, the combined data set
   from 4 streams passed the ADK criterion easily.
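   The combined analysis above can be reproduced along the following
   lines (a sketch using the kSamples R package, the successor of the
   adk package used earlier; the sample vectors are hypothetical
   names):

      library(kSamples)
      netprobe <- list(s1, s2, sA, sB)    # data set 1
      perfas <- list(p1, p2, p3, p4)      # data set 2
      ad.test.combined(netprobe, perfas)  # per-set and combined criteria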
6.1.4.  Conclusions on the ADK Results for One-way Delay

   Similar testing was repeated many times in the months of March and
   April 2011.  There were many experiments where a single test stream
   from NetProbe or Perfas proved to be different from the others in
   paired comparisons (even same-implementation comparisons).  When the
   outlier stream was removed from the comparison, the remaining
   streams passed the combined ADK criterion.  Also, the application of
   correction factors resulted in higher comparison success.

   We conclude that the two implementations are capable of producing
   equivalent one-way delay distributions based on their interpretation
   of [RFC2679].

6.2.  One-way Delay, Loss threshold, RFC 2679

   This test determines if implementations use the same configured
   maximum waiting time delay from one measurement to another under
   different delay conditions, and correctly declare packets arriving
   in excess of the waiting time threshold as lost.

   See Section 3.5 of [RFC2679], 3rd bullet point, and also Section
   3.8.2 of [RFC2679].

   1.  Configure an L2TPv3 path between test sites, and each pair of
       measurement devices to operate tests in their designated pair
       of VLANs.

   2.  Configure the network emulator to add 1.0 sec one-way constant
       delay in one direction of transmission.

   3.  Measure (average) one-way delay with 2 or more implementations,
       using identical waiting time thresholds (Thresh) for loss set
       at 3 seconds.

   4.  Configure the network emulator to add 3 sec one-way constant
       delay in one direction of transmission, equivalent to 2 seconds
       of additional one-way delay (or change the path delay while the
       test is in progress, when there are sufficient packets at the
       first delay setting).

   5.  Repeat/continue measurements.

   6.  Observe that the increase measured in step 5 caused all packets
       with 2 sec additional delay to be declared lost, and that all
       packets that arrive successfully in step 3 are assigned a valid
       one-way delay.

   The common parameters used for tests in this section are:

   o  IP header + payload = 64 octets

   o  Poisson sampling at lambda = 1 packet per second

   o  Test duration = 900 seconds total (March 21)

   The netem emulator was set to add constant delays as specified in
   the procedure above.

6.2.1.  NetProbe results for Loss Threshold

   In NetProbe, the Loss Threshold is implemented uniformly over all
   packets as a post-processing routine.  With the Loss Threshold set
   at 3 seconds, all packets with one-way delay >3 seconds are marked
   "Lost" and included in the Lost Packet list with their transmission
   time (as required in Section 3.3 of [RFC2680]).  This resulted in
   342 packets designated as lost in one of the test streams (with
   average delay = 3.091 sec).

6.2.2.  Perfas Results for Loss Threshold

   Perfas uses a fixed Loss Threshold which was not adjustable during
   this study.  The Loss Threshold is approximately one minute, and
   emulation of a delay of this size was not attempted.  However, it is
   possible to implement any delay threshold desired with a post-
   processing routine and subsequent analysis.  Using this method, 195
   packets would be declared lost (with average delay = 3.091 sec).

6.2.3.  Conclusions for Loss Threshold

   Both implementations assume that any constant delay value desired
   can be used as the Loss Threshold, since all delays are stored as a
   pair (T, dT) as required in [RFC2679].  This is a simple way to
   enforce the constant loss threshold envisioned in [RFC2679] (see
   specific section references above).  We take the position that the
   assumption of post-processing is compliant, and that the text of the
   RFC should be revised slightly to include this point.
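   The post-processing interpretation described above amounts to a
   simple operation on the stored sample; a sketch in R (dT in seconds,
   with NA for packets that never arrived; names are illustrative):

      Thresh <- 3                       # maximum waiting time, seconds
      lost <- !is.na(dT) & dT > Thresh  # arrivals beyond the threshold
      dT[lost] <- NA                    # their delay becomes undefined
      sum(is.na(dT))                    # count of packets declared lost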
6.3.  One-way Delay, First-bit to Last-bit, RFC 2679

   This test determines if implementations register the same relative
   change in delay from one packet size to another, indicating that the
   first-to-last time-stamping convention has been followed.  This test
   tends to cancel the sources of error which may be present in an
   implementation.

   See Section 3.7.2 of [RFC2679], and Section 10.2 of [RFC2330].

   1.  Configure an L2TPv3 path between test sites, and each pair of
       measurement devices to operate tests in their designated pair
       of VLANs, ideally including a low-speed link (it was not
       possible to change the link configuration during testing, so
       the lowest-speed link present was the basis for serialization
       time comparisons).

   2.  Measure (average) one-way delay with 2 or more implementations,
       using identical options and equal-size small packets (64 octet
       IP header and payload).

   3.  Maintain the same path with an additional emulated 100 ms one-
       way delay.

   4.  Measure (average) one-way delay with 2 or more implementations,
       using identical options and equal-size large packets (500 octet
       IP header and payload).

   5.  Observe that the increase measured between steps 2 and 4 is
       equivalent to the increase in ms expected due to the larger
       serialization time for each implementation.  Most of the
       measurement errors in each system should cancel, if they are
       stationary.

   The common parameters used for tests in this section are:

   o  IP header + payload = 64 octets

   o  Periodic sampling at 1 packet per second

   o  Test duration = 300 seconds total (April 12)

   The netem emulator was set to add constant 100 ms delay.

6.3.1.  NetProbe and Perfas Results for Serialization

   When the IP header + payload size was increased from 64 octets to
   500 octets, there was a delay increase observed.

   Mean Delays in us

   NetProbe
   Payload       s1       s2       sA       sB
   500       190893   191179   190892   190971
   64        189642   189785   189747   189467
   Diff        1251     1394     1145     1505

   Perfas
   Payload       p1       p2       p3       p4
   500       190908   190911   191126   190709
   64        189706   189752   189763   190220
   Diff        1202     1159     1363      489

   Serialization tests, all values in microseconds

   The typical delay increase when the larger packets were used was 1.1
   to 1.5 ms (with one outlier).  The typical measurements indicate
   that a link with approximately 3 Mbit/s capacity is present on the
   path.

   Through investigation of the facilities involved, it was determined
   that the lowest-speed link was approximately 45 Mbit/s, and
   therefore the estimated difference should be about 0.077 ms.  The
   observed differences are much higher.
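   The serialization estimate above follows from simple arithmetic,
   sketched here in R with the numbers of this section:

      bits <- (500 - 64) * 8   # size difference: 3488 bits
      bits / 45e6              # expected: ~77.5e-06 s on a 45 Mbit/s link
      bits / 1.2e-3            # observed ~1.2 ms implies ~2.9e+06 bit/s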
   The unexpectedly large delay difference was also the outcome when
   testing serialization times in a lab environment, using the NIST Net
   emulator and NetProbe [I-D.morton-ippm-advance-metrics].

6.3.2.  Conclusions for Serialization

   Since it was not possible to confirm the estimated serialization
   time increases in field tests, we resort to examination of the
   implementations to determine compliance.

   NetProbe performs all time stamping above the IP layer, accepting
   that some compromises must be made to achieve extreme portability
   and measurement scale.  Therefore, the first-to-last-bit convention
   is supported because the serialization time is included in the one-
   way delay measurement, enabling comparison with other
   implementations.

   Perfas >>>>>>>>>>>>>>> TBD

6.4.  One-way Delay, Difference Sample Metric (Lab)

   This test determines if implementations register the same relative
   increase in delay from one measurement to another under different
   delay conditions.  This test tends to cancel the sources of error
   which may be present in an implementation.

   This test is intended to evaluate measurements in sections 3 and 4
   of [RFC2679].

   1.  Configure an L2TPv3 path between test sites, and each pair of
       measurement devices to operate tests in their designated pair
       of VLANs.

   2.  Measure (average) one-way delay with 2 or more implementations,
       using identical options.

   3.  Configure the path with X+Y ms one-way delay.

   4.  Repeat measurements.

   5.  Observe that the (average) increase measured in steps 2 and 4
       is ~Y ms for each implementation.  Most of the measurement
       errors in each system should cancel, if they are stationary.

   In this test, X=1000 ms and Y=1000 ms.

   The common parameters used for tests in this section are:

   o  IP header + payload = 64 octets

   o  Poisson sampling at lambda = 1 packet per second

   o  Test duration = 900 seconds total (March 21)

   The netem emulator was set to add constant delays as specified in
   the procedure above.

6.4.1.  NetProbe results for Differential Delay

   Average pre-increase delay, microseconds     1089868.0
   Average post 1s additional, microseconds     2089686.0
   Difference (should be ~= Y = 1s)              999818.0

   Average delays before/after 1 second increase

   The NetProbe implementation observed a 1 second increase with a 182
   microsecond error (assuming that the netem emulated delay difference
   is exact).

   We note that this differential delay test has been run under lab
   conditions and published in prior work
   [I-D.morton-ippm-advance-metrics].  The error was 6 microseconds.

6.4.2.  Perfas results for Differential Delay

   Average pre-increase delay, microseconds     1089794.0
   Average post 1s additional, microseconds     2089801.0
   Difference (should be ~= Y = 1s)             1000007.0

   Average delays before/after 1 second increase

   The Perfas implementation observed a 1 second increase with a 7
   microsecond error.

6.4.3.  Conclusions for Differential Delay

   Again, the live network conditions appear to have influenced the
   results, but both implementations measured the same delay increase
   within their calibration accuracy.

6.5.  Implementation of Statistics for One-way Delay

   The ADK tests the extent to which the sample distributions of one-
   way delay singletons from two implementations of [RFC2679] appear to
   be from the same overall distribution.  By testing this way, we
   economize on the number of comparisons, because comparing a set of
   individual summary statistics (as defined in Section 5 of [RFC2679])
   would require another set of individual evaluations of equivalence.
   Instead, we can simply check which statistics were implemented, and
   report on those facts, noting that Section 5 of [RFC2679] does not
   specify the calculations exactly, and gives only some illustrative
   examples.

                                                   NetProbe   Perfas

   5.1.  Type-P-One-way-Delay-Percentile              yes       no

   5.2.  Type-P-One-way-Delay-Median                  yes       no

   5.3.  Type-P-One-way-Delay-Minimum                 yes       yes

   5.4.  Type-P-One-way-Delay-Inverse-Percentile      no        no

           Implementation of Section 5 Statistics
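   For reference, the implemented statistics can be computed from a
   finished sample with standard R functions (a sketch; since Section 5
   of [RFC2679] does not specify the calculations exactly, the quantile
   method shown is one implementation choice among several):

      d <- dT[!is.na(dT)]          # defined singletons only
      quantile(d, 0.95, type = 1)  # 5.1 Percentile (inverse empirical CDF)
      median(d)                    # 5.2 Median
      min(d)                       # 5.3 Minimum
      ecdf(d)(0.1)                 # 5.4 Inverse-Percentile of 0.1 s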
7.  Security Considerations

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656] and
   [RFC5357].

8.  IANA Considerations

   This memo makes no requests of IANA, and hopes that IANA will be as
   accepting of our new computer overlords as the authors intend to be.

9.  Acknowledgements

   The authors thank Lars Eggert for his continued encouragement to
   advance the IPPM metrics during his tenure as AD Advisor.

   Nicole Kowalski supplied the needed CPE router for the NetProbe side
   of the test set-up, and graciously managed her testing in spite of
   issues caused by dual use of the router.  Thanks Nicole!

   The "NetProbe Team" also acknowledges many useful discussions with
   Ganga Maguluri.

10.  References

10.1.  Normative References

   [I-D.ietf-ippm-metrictest]
              Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IPPM
              standard advancement testing",
              draft-ietf-ippm-metrictest-02 (work in progress),
              March 2011.

   [RFC2026]  Bradner, S., "The Internet Standards Process -- Revision
              3", BCP 9, RFC 2026, October 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Delay Metric for IPPM", RFC 2679, September 1999.

   [RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Packet Loss Metric for IPPM", RFC 2680, September 1999.

   [RFC3432]  Raisanen, V., Grotefeld, G., and A. Morton, "Network
              performance measurement with periodic streams",
              RFC 3432, November 2002.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and
              M. Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [RFC4814]  Newman, D. and T. Player, "Hash and Stuffing: Overlooked
              Factors in Network Device Benchmarking", RFC 4814,
              March 2007.

   [RFC5226]  Narten, T. and H. Alvestrand, "Guidelines for Writing an
              IANA Considerations Section in RFCs", BCP 26, RFC 5226,
              May 2008.

   [RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J.
              Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)",
              RFC 5357, October 2008.

   [RFC5657]  Dusseault, L. and R. Sparks, "Guidance on Interoperation
              and Implementation Reports for Advancement to Draft
              Standard", BCP 9, RFC 5657, September 2009.

10.2.  Informative References

   [I-D.morton-ippm-advance-metrics]
              Morton, A., "Lab Test Results for Advancing Metrics on
              the Standards Track",
              draft-morton-ippm-advance-metrics-02 (work in progress),
              October 2010.

   [RFC3931]  Lau, J., Townsley, M., and I. Goyret, "Layer Two
              Tunneling Protocol - Version 3 (L2TPv3)", RFC 3931,
              March 2005.

Authors' Addresses

   Len Ciavattone
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1239
   Email: lencia@att.com

   Ruediger Geib
   Deutsche Telekom
   Heinrich Hertz Str. 3-7
   Darmstadt, 64295
   Germany

   Phone: +49 6151 58 12747
   Email: Ruediger.Geib@telekom.de

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/

   Matthias Wieser
   University of Applied Sciences Darmstadt
   Birkenweg 8, Department EIT
   Darmstadt, 64295
   Germany

   Email: matthias.wieser@stud.h-da.de