idnits 2.17.1

draft-ietf-bmwg-2544-as-06.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------

  == There are 1 instance of lines with non-RFC6890-compliant IPv4
     addresses in the document.  If these are example addresses, they
     should be changed.

  -- The draft header indicates that this document updates RFC2544, but
     the abstract doesn't seem to directly say this.  It does mention
     RFC2544 though, so this could be OK.

  Miscellaneous warnings:
  ----------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does
     not match the current year

  == Line 323 has weird spacing: '...Bradner  serv...'

  == Line 325 has weird spacing: '... Dubray  ser...'

  (Using the creation date from RFC2544, updated by this document, for
  RFC5378 checks: 1999-03-01)

  -- The document seems to lack a disclaimer for pre-RFC5378 work, but may
     have content which was first submitted before 10 November 2008.  If
     you have contacted all the original authors and they are all willing
     to grant the BCP78 rights to the IETF Trust, then this is fine, and
     you can ignore this comment.  If not, you may need to add the
     pre-RFC5378 disclaimer.  (See the Legal Provisions document at
     https://trustee.ietf.org/license-info for more information.)

  -- The document date (September 4, 2012) is 4224 days in the past.  Is
     this intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------

     No issues found here.

  Summary: 0 errors (**), 0 flaws (~~), 4 warnings (==), 3 comments (--).

  Run idnits with the --verbose option for more detailed information about
  the items above.

------------------------------------------------------------------------

Network Working Group                                        S. Bradner
Internet-Draft                                        Harvard University
Updates: 2544 (if approved)                                    K. Dubray
Intended status: Informational                          Juniper Networks
Expires: March 8, 2013                                        J. McQuaid
                                                            Turnip Video
                                                               A. Morton
                                                               AT&T Labs
                                                       September 4, 2012


                    RFC 2544 Applicability Statement:
              Use on Production Networks Considered Harmful
                       draft-ietf-bmwg-2544-as-06

Abstract

   The Benchmarking Methodology Working Group (BMWG) has been developing
   key performance metrics and laboratory test methods since 1990, and
   continues this work at present.  The methods described in RFC 2544
   are intended to generate traffic that overloads network device
   resources in order to assess their capacity.  Overload of shared
   resources would likely be harmful to user traffic performance on a
   production network, and there are further negative consequences
   identified with production application of the methods.  This memo
   clarifies the scope of RFC 2544 and other IETF BMWG benchmarking work
   for isolated test environments only, and encourages new standards
   activity for measurement methods applicable outside that scope.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 8, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Scope and Goals
   3.  The Concept of an Isolated Test Environment
   4.  Why RFC 2544 Methods are intended only for ITE
     4.1.  Experimental Control and Accuracy
     4.2.  Containing Damage
   5.  Advisory on RFC 2544 Methods in Production Networks
   6.  Considering Performance Testing in Production Networks
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   This memo clarifies the scope and use of IETF Benchmarking
   Methodology Working Group (BMWG) tests including [RFC2544], which
   discusses and defines several tests that may be used to characterize
   the performance of a network interconnecting device.

   Benchmarking methodologies (beginning with [RFC2544]) have always
   relied on test conditions that can only be produced and replicated
   reliably in the laboratory.  These methodologies are not appropriate
   for inclusion in wider specifications such as:

   1.  Validation of telecommunication service configuration, such as
       the Committed Information Rate (CIR).

   2.  Validation of performance metrics in a telecommunication Service
       Level Agreement (SLA), such as frame loss and latency.

   3.  Telecommunication service activation testing, where traffic that
       shares network resources with the test might be adversely
       affected.
   Above, we distinguish "telecommunication service" (where a network
   service provider contracts with a customer to transfer information
   between specified interfaces at different geographic locations) from
   the generic term "service".  Below, we use the adjective "production"
   to refer to networks carrying live user traffic.  [RFC2544] used the
   term "real-world" to refer to production networks and to
   differentiate them from test networks.

   Although RFC 2544 has been held up as the standard reference for such
   testing, we believe that the actual methods used vary from [RFC2544]
   in significant ways.  Since the only citation is to [RFC2544], the
   modifications are opaque to the standards community and to users in
   general.

   Because applying the test traffic and methods described in [RFC2544]
   to a production network risks causing overload of shared resources,
   there is a direct risk of harming user traffic if the methods are
   misused in this way.  Therefore, the IETF BMWG developed this
   Applicability Statement for [RFC2544] to address the situation
   directly.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2.  Scope and Goals

   This memo clarifies the scope of [RFC2544], with the goal of
   providing guidance to the industry on its applicability, which is
   limited to laboratory testing.

3.  The Concept of an Isolated Test Environment

   An Isolated Test Environment (ITE) used with [RFC2544] methods (as
   illustrated in Figures 1 through 3 of [RFC2544]) has the ability to:

   o  contain the test streams to paths within the desired set-up

   o  prevent non-test traffic from traversing the test set-up

   These features allow unfettered experimentation, while at the same
   time protecting lab equipment management/control LANs and other
   production networks from the unwanted effects of the test traffic.

4.  Why RFC 2544 Methods are intended only for ITE

   The following sections discuss some of the reasons why [RFC2544]
   methods are applicable only for isolated laboratory use, and the
   consequences of applying these methods outside the lab environment.

4.1.  Experimental Control and Accuracy

   All of the tests described in RFC 2544 require that the tester and
   the device under test are the only devices on the networks that are
   transmitting data.  The presence of other traffic (unwanted on the
   ITE network) would mean that the specified test conditions have not
   been achieved, and flawed results are a likely consequence.

   If any other traffic appears and the amount varies over time, the
   repeatability of any test result will likely depend to some degree on
   the amount and variation of the other traffic.

   The presence of other traffic makes accurate, repeatable, and
   consistent measurement of the performance of the device under test
   very unlikely, since the complete details of the test conditions
   (including the uncontrolled traffic) will not be known or reported.

   For example, the RFC 2544 Throughput test attempts to characterize
   the maximum reliable load, so the procedure necessarily includes
   trials above that maximum, which cause packet/frame loss.  Any other
   source of traffic on the network will cause packet loss to occur at a
   tester data rate lower than the rate that would be achieved without
   the extra traffic.
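   To make the overload behavior concrete, the following non-normative
   sketch (in Python) shows one common way a tester implements the
   Throughput search: a binary search on the offered rate that keeps
   probing above the highest zero-loss rate found so far.  The binary-
   search strategy and the send_at_rate() tester function are
   illustrative assumptions, not part of [RFC2544], which only requires
   raising the offered rate after a loss-free trial and reducing it
   after a lossy one.

      # Non-normative sketch of an RFC 2544-style Throughput search.
      # send_at_rate(rate_fps, frames) is a hypothetical tester function:
      # it offers 'frames' test frames at 'rate_fps' frames per second
      # and returns how many of them the DUT forwarded.

      def throughput_search(send_at_rate, line_rate_fps,
                            frames=1_000_000, resolution_fps=100.0):
          """Return the highest zero-loss rate found (RFC 1242 Throughput)."""
          lo, hi = 0.0, float(line_rate_fps)   # current search window (fps)
          best = 0.0
          while hi - lo > resolution_fps:
              rate = (lo + hi) / 2.0
              received = send_at_rate(rate, frames)
              if received == frames:     # no loss: remember rate, search higher
                  best, lo = rate, rate
              else:                      # loss observed: search lower
                  hi = rate
          return best

   Every trial at a rate above the eventual result deliberately drives
   the device under test into loss, and any unwanted traffic sharing the
   path makes received fall short of frames at rates below the true
   value; the search then converges on an understated number, which is
   precisely the accuracy and repeatability problem described above.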
4.2.  Containing Damage

   [RFC2544] methods, specifically those used to determine Throughput as
   defined in [RFC1242] and other benchmarks, may overload the resources
   of the device under test and may trigger failure modes in it.  Since
   such failures can become the root cause of more widespread failure,
   it is clearly desirable to contain all test traffic within the ITE.

   In addition, such testing can have a negative effect on any traffic
   that shares resources with the test stream(s) since, in most cases,
   the traffic load will be close to the capacity of the network links.

   Appendix C.2.2 of [RFC2544] (as adjusted by errata) gives the IPv4
   address range assigned for testing:

      "...The network addresses 198.18.0.0 through 198.19.255.255 have
      been assigned to the BMWG by the IANA for this purpose.  This
      assignment was made to minimize the chance of conflict in case a
      testing device were to be accidentally connected to part of the
      Internet.  The specific use of the addresses is detailed below."

   In other words, devices operating on the Internet may be configured
   to discard any traffic they observe in this address range, as it is
   intended for laboratory ITE use only.  Thus, if testers using the
   assigned testing address range are connected to the Internet and test
   packets are forwarded across the Internet, it is likely that the
   packets will be discarded and the test will not work.

   We note that a range of IPv6 addresses has been assigned to the BMWG
   for laboratory test purposes in [RFC5180] (as amended by errata).
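   The dedicated address ranges described above lend themselves to a
   simple safeguard: a tester, or a script that drives one, can refuse
   to generate traffic unless its configured addresses fall inside the
   benchmarking ranges.  The following non-normative Python sketch
   illustrates such a check using the standard ipaddress module; the
   guard itself is an illustration of the containment principle above,
   not a procedure defined in [RFC2544] or [RFC5180].

      # Non-normative sketch: verify that test addresses lie inside the
      # ranges IANA assigned to the BMWG for isolated testing:
      # 198.18.0.0/15 (IPv4, RFC 2544 Appendix C.2.2 as adjusted by
      # errata) and 2001:2::/48 (IPv6, RFC 5180 as amended by errata).
      import ipaddress

      BENCHMARK_NETS = (
          ipaddress.ip_network("198.18.0.0/15"),
          ipaddress.ip_network("2001:2::/48"),
      )

      def in_benchmark_range(addr):
          """True if addr is inside a dedicated benchmarking range."""
          ip = ipaddress.ip_address(addr)
          return any(ip in net for net in BENCHMARK_NETS
                     if net.version == ip.version)

      # A tester could abort start-up when this prints False for any
      # configured source or destination address.
      for a in ("198.18.0.10", "2001:2::1", "192.0.2.1"):
          print(a, in_benchmark_range(a))    # True, True, False

   A complementary protection, consistent with the quoted text above, is
   to filter or discard these ranges at any boundary between the ITE and
   a production network, for example in an access list on a device that
   connects to both.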
   See the Security Considerations section below for further
   considerations on containing damage.

5.  Advisory on RFC 2544 Methods in Production Networks

   The tests in [RFC2544] were designed to measure the performance of
   network devices, not of networks, and certainly not of production
   networks carrying user traffic on shared resources.  There will be
   undesirable consequences when applying these methods outside the
   isolated test environment.

   One negative consequence stems from reliance on frame loss as an
   indicator of resource exhaustion in [RFC2544] methods.  In practice,
   link-layer and physical-layer errors prevent production networks from
   operating loss-free.  The [RFC2544] methods will not correctly assess
   Throughput when loss from uncontrolled sources is present.  Frame
   loss occurring at the SLA levels of some networks could affect every
   iteration of Throughput testing (when each step includes sufficient
   packets to experience facility-related loss).  Flawed results waste
   the time and resources of the testing service user and of the service
   provider, who may be called on to dispute the measurement.  These are
   additional examples of harm that compliance with this advisory should
   help to avoid.

   The methods described in [RFC2544] are intended to generate traffic
   that overloads network device resources in order to assess their
   capacity.  Overload of shared resources would likely be harmful to
   user traffic performance on a production network.  These tests MUST
   NOT be used on production networks; as discussed above, the tests
   will not produce a reliable or accurate benchmarking result on a
   production network.

   [RFC2544] methods have never been validated on a network path, even
   when that path is not part of a production network and is carrying no
   other traffic.  It is unknown whether the tests can be used to
   measure valid and reliable performance of a multi-device, multi-
   network path.  It is possible that some of the tests may prove valid
   in some path scenarios, but that work has not been done or has not
   been shared with the IETF community.  Thus, such testing is
   contraindicated by the BMWG.

6.  Considering Performance Testing in Production Networks

   The IETF has addressed the problem of production network performance
   measurement by chartering a different working group: IP Performance
   Metrics (IPPM).  This working group has developed a set of standard
   metrics to assess the quality, performance, and reliability of
   Internet packet transfer services.  These metrics can be measured by
   network operators, end users, or independent testing groups.  We note
   that some IPPM metrics differ from RFC 2544 metrics with similar
   names, and there is likely to be confusion if the details are
   ignored.

   IPPM has not yet standardized methods for raw capacity measurement of
   Internet paths.  Such testing needs to adequately consider the strong
   possibility that congestion will degrade any other traffic that may
   be present.  There are no specific methods proposed for activation of
   a packet transfer service in IPPM at this time.  Thus, individuals
   who need to conduct capacity tests on production networks should
   actively participate in standards development to ensure that their
   methods receive appropriate industry review and agreement, whether in
   the IETF or in alternate standards development organizations.

   Other standards may help to fill gaps in telecommunication service
   testing.  For example, the IETF has many standards intended to assist
   with network operation, administration, and maintenance (OAM), and
   ITU-T Study Group 12 has a Recommendation on service activation test
   methodology [Y.1564].

   The world will not spin off its axis while waiting for appropriate
   and standardized methods to emerge from the consensus process.

7.  Security Considerations

   This Applicability Statement intends to help preserve the security of
   the Internet by clarifying that the scope of [RFC2544] and other BMWG
   memos is limited to testing in a laboratory ITE, thus avoiding
   accidental Denial of Service attacks or congestion due to high-
   traffic-volume test streams.

   All benchmarking activities are limited to technology
   characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the other constraints
   of [RFC2544].

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the device under test /
   system under test (DUT/SUT).

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
   benchmarking purposes.  Any implications for network security arising
   from the DUT/SUT SHOULD be identical in the lab and in production
   networks.

8.  IANA Considerations

   This memo makes no requests of IANA.

9.  Acknowledgements

   Thanks to Matt Zekauskas, Bill Cerveny, Barry Constantine, Curtis
   Villamizar, David Newman, and Adrian Farrel for suggesting
   improvements to this memo.
   Specifically, Al Morton would like to thank his co-authors, who
   constitute the complete set of Chairmen-Emeritus of the BMWG, for
   returning from other pursuits to develop this statement and see it
   through to approval.  This has been a rare privilege, one that likely
   will not be matched in the IETF again:

      Scott Bradner  served as Chairman from 1990 to 1993
      Jim McQuaid    served as Chairman from 1993 to 1995
      Kevin Dubray   served as Chairman from 1995 to 2006

   It's all about the band.

10.  References

10.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
              Dugatkin, "IPv6 Benchmarking Methodology for Network
              Interconnect Devices", RFC 5180, May 2008.

10.2.  Informative References

   [Y.1564]   ITU-T Recommendation Y.1564, "Ethernet Service Activation
              Test Methodology", March 2011.

Authors' Addresses

   Scott Bradner
   Harvard University
   29 Oxford St.
   Cambridge, MA  02138
   USA

   Phone: +1 617 495 3864
   Email: sob@harvard.edu
   URI:   http://www.sobco.com

   Kevin Dubray
   Juniper Networks

   Email: kdubray@juniper.net

   Jim McQuaid
   Turnip Video
   6 Cobbleridge Court
   Durham, North Carolina  27713
   USA

   Phone: +1 919-619-3220
   Email: jim@turnipvideo.com
   URI:   www.turnipvideo.com

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ  07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/