Network Working Group                                         S. Bradner
Internet-Draft                                         Harvard University
Intended status: Informational                                  K. Dubray
Expires: April 22, 2012                                  Juniper Networks
                                                               J. McQuaid
                                                             Turnip Video
                                                                A. Morton
                                                                AT&T Labs
                                                         October 20, 2011

     RFC 2544 Applicability Statement: Use on Production Networks
                           Considered Harmful
                       draft-ietf-bmwg-2544-as-01

Abstract

   The Benchmarking Methodology Working Group (BMWG) has been
   developing key performance metrics and laboratory test methods
   since 1990, and continues this work at present.  Recent application
   of the methods beyond their intended scope is cause for concern.
   This memo clarifies the scope of RFC 2544 and other benchmarking
   work for the IETF community.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 22, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Scope and Goals
   3.  The Concept of an Isolated Test Environment
   4.  Why RFC 2544 Methods are intended for ITE
     4.1.  Experimental Control, Repeatability, and Accuracy
     4.2.  Containment of Implementation Failure Impact
   5.  Advisory on RFC 2544 Methods in Real-world Networks
   6.  What to do without RFC 2544?
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   This memo clarifies the scope of RFC 2544 [RFC2544] and other
   benchmarking work for the IETF community.

   Benchmarking methodologies (beginning with [RFC2544]) have always
   relied on test conditions that can only be reliably produced in the
   laboratory.  Thus, it was surprising to find that this foundation
   methodology was being cited in several unintended applications, such
   as:

   1.  Validation of telecommunication service configuration, such as
       the Committed Information Rate (CIR).

   2.  Validation of performance metrics in a telecommunication Service
       Level Agreement (SLA), such as frame loss and latency.

   3.  As an integral part of telecommunication service activation
       testing, where traffic that shares network resources with the
       test might be adversely affected.

   Above, we distinguish "telecommunication service" (where a network
   service provider contracts with a customer to transfer information
   between specified interfaces at different geographic locations in
   the real world) from the generic term "service".  Also, we use the
   adjective "production" to refer to networks carrying live user
   traffic.  [RFC2544] used the term "real-world" to refer to
   production networks and to differentiate them from test networks.

   Although RFC 2544 is held up as the standard reference for such
   testing, we believe that the actual methods used vary from RFC 2544
   in significant ways.  Since the only citation is to RFC 2544, the
   modifications are opaque to the standards community and to users in
   general (an undesirable situation).

   To directly address this situation, the past and present Chairs of
   the IETF Benchmarking Methodology Working Group (BMWG) have prepared
   this Applicability Statement for RFC 2544.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].
2.  Scope and Goals

   This memo clarifies the scope of [RFC2544], with the goal of
   providing guidance to the community on its applicability, which is
   limited to laboratory testing.

3.  The Concept of an Isolated Test Environment

   An Isolated Test Environment (ITE) used with [RFC2544] methods (as
   illustrated in Figures 1 through 3 of [RFC2544]) has the ability to:

   o  contain the test streams to paths within the desired set-up

   o  prevent non-test traffic from traversing the test set-up

   These features allow unfettered experimentation, while at the same
   time protecting equipment management LANs and other production
   networks from the unwanted effects of the test traffic.

4.  Why RFC 2544 Methods are intended for ITE

   The following sections discuss some of the reasons why RFC 2544
   [RFC2544] methods were intended only for isolated laboratory use,
   and the difficulties of applying these methods outside the lab
   environment.

4.1.  Experimental Control, Repeatability, and Accuracy

   All of the tests described in RFC 2544 assume that the tester and
   the device under test are the only devices on the networks that are
   transmitting data.  The presence of other, unwanted traffic on the
   network would mean that the specified test conditions have not been
   achieved.

   Assuming that the unwanted traffic appears in variable amounts over
   time, the repeatability of any test result will likely depend to
   some degree on the unwanted traffic.

   The presence of unwanted or unknown traffic makes accurate,
   repeatable, and consistent measurements of the performance of the
   device under test very unlikely, since the actual test conditions
   will not be reported.

   For example, the RFC 2544 Throughput Test attempts to characterize a
   maximum reliable load; thus, there will be testing above that
   maximum which causes packet/frame loss.  Any other sources of
   traffic on the network will cause packet loss to occur at a tester
   data rate lower than the rate that would be achieved without the
   extra traffic.

4.2.  Containment of Implementation Failure Impact

   RFC 2544 methods, specifically those used to determine Throughput as
   defined in [RFC1242] and other benchmarks, may overload the
   resources of the device under test and may cause failure modes in
   the device under test.  Since failures can become the root cause of
   more widespread failure, it is clearly desirable to contain all DUT
   traffic within the ITE.

   In addition, such testing can have a negative effect on any traffic
   that shares resources with the test stream(s) since, in most cases,
   the traffic load will be close to the capacity of the network links.

   Appendix C.2.2 of [RFC2544] (as adjusted by errata) gives the IPv4
   address range specially assigned for testing:

      "...The network addresses 198.18.0.0 through 198.19.255.255 have
      been assigned to the BMWG by the IANA for this purpose.  This
      assignment was made to minimize the chance of conflict in case a
      testing device were to be accidentally connected to part of the
      Internet.  The specific use of the addresses is detailed below."

   In other words, devices operating on the Internet may be configured
   to discard any traffic they observe in this address range, as it is
   intended for laboratory ITE use only.  Thus, testers using the
   assigned testing address ranges MUST NOT be connected to the
   Internet.
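   As an illustration only (this is not part of [RFC2544] or any BMWG
   methodology), the assigned range 198.18.0.0 through 198.19.255.255
   corresponds to the prefix 198.18.0.0/15, and a test tool or script
   could apply a pre-flight guard that refuses to generate test traffic
   when its configured addresses fall outside that range.  The short
   Python sketch below shows such a hypothetical guard; the function
   name and the guard itself are assumptions made for illustration.

      # Hypothetical pre-flight guard (illustration only): allow test
      # traffic generation only when every configured address lies in
      # the IPv4 range assigned to the BMWG for isolated testing,
      # 198.18.0.0/15 (i.e., 198.18.0.0 through 198.19.255.255).
      import ipaddress

      BMWG_TEST_RANGE = ipaddress.ip_network("198.18.0.0/15")

      def addresses_are_in_test_range(addresses):
          """Return True only if every address is in 198.18.0.0/15."""
          return all(ipaddress.ip_address(a) in BMWG_TEST_RANGE
                     for a in addresses)

      # 198.18.0.1 and 198.19.255.254 are inside the assigned range;
      # 192.0.2.1 (a documentation address) is not.
      assert addresses_are_in_test_range(["198.18.0.1", "198.19.255.254"])
      assert not addresses_are_in_test_range(["192.0.2.1"])

   A complementary safeguard, consistent with the text quoted above, is
   for operators to discard traffic in this address range at production
   network borders.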
   We note that a range of IPv6 addresses has been assigned to the BMWG
   for laboratory test purposes in [RFC5180].  Also, the strong
   statements in the Security Considerations section of this memo make
   the scope even more clear; this is now a standard fixture of all
   BMWG memos.

5.  Advisory on RFC 2544 Methods in Real-world Networks

   The tests in [RFC2544] were designed to measure the performance of
   network devices, not of networks, and certainly not production
   networks carrying user traffic on shared resources.  There will be
   unanticipated difficulties when applying these methods outside the
   lab environment.

   Operating test equipment on production networks according to the
   methods described in [RFC2544], where overload is a possible
   outcome, would no doubt be harmful to user traffic performance.
   These tests MUST NOT be used on active networks.  And, as discussed
   above, the tests will never produce a reliable or accurate
   benchmarking result in the presence of user traffic.

   [RFC2544] methods have never been validated on a network path, even
   when that path is not part of a production network and carries no
   other traffic.  It is unknown whether the tests can be used to
   measure valid and reliable performance of a multi-device, multi-
   network path.  It is possible that some of the tests may prove to be
   valid in some path scenarios, but that work has not been done or has
   not been shared with the IETF community.  Thus, such testing is
   contra-indicated by the BMWG.

6.  What to do without RFC 2544?

   The IETF has addressed the problem of production network performance
   measurement by chartering a different working group: IP Performance
   Metrics (IPPM).  This working group has developed a set of standard
   metrics to assess the quality, performance, and reliability of
   Internet packet transfer services, such as the One-way Delay metric
   [RFC2679].  These metrics can be measured by network operators, end
   users, or independent testing groups.  We note that some IPPM
   metrics differ from RFC 2544 metrics with similar names, and there
   is likely to be confusion if the details are ignored.

   IPPM has not standardized methods for raw capacity measurement of
   Internet paths.  Such testing needs to adequately consider the
   strong possibility of degrading, through congestion, any other
   traffic that may be present.  There are no specific methods proposed
   for activation of a packet transfer service in IPPM.

   Other standards may help to fill gaps in telecommunication service
   testing.  For example, the IETF has many standards intended to
   assist with network operation, administration, and maintenance
   (OAM), and ITU-T Study Group 12 has a Recommendation on service
   activation test methodology.

   The world will not spin off axis while waiting for appropriate and
   standardized methods to emerge from the consensus process.

7.  Security Considerations

   This Applicability Statement is also intended to help preserve the
   security of the Internet by clarifying that the scope of [RFC2544]
   and other BMWG memos is limited to testing in a laboratory ITE, thus
   avoiding accidental Denial-of-Service attacks or congestion due to
   high-volume test traffic streams.

   All benchmarking activities are limited to technology
   characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the other constraints
   specified in [RFC2544].
   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the device under test
   / system under test (DUT/SUT).

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

8.  IANA Considerations

   This memo makes no requests of IANA, and hopes that IANA will leave
   it alone as well.

9.  Acknowledgements

   Thanks to Matt Zekauskas, Bill Cerveny, Barry Constantine, and
   Curtis Villamizar for reading and suggesting improvements for this
   memo.

10.  References

10.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
              Dugatkin, "IPv6 Benchmarking Methodology for Network
              Interconnect Devices", RFC 5180, May 2008.

10.2.  Informative References

   [RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Delay Metric for IPPM", RFC 2679, September 1999.

Authors' Addresses

   Scott Bradner
   Harvard University
   29 Oxford St.
   Cambridge, MA 02138
   USA

   Phone: +1 617 495 3864
   Email: sob@harvard.edu
   URI:   http://www.sobco.com

   Kevin Dubray
   Juniper Networks

   Email: kdubray@juniper.net

   Jim McQuaid
   Turnip Video
   6 Cobbleridge Court
   Durham, North Carolina 27713
   USA

   Phone: +1 919-619-3220
   Email: jim@turnipvideo.com
   URI:   www.turnipvideo.com

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/