Internet Engineering Task Force                              M. Hamilton
Internet-Draft                                     BreakingPoint Systems
Intended status: Informational                                  S. Banks
Expires: September 8, 2011                                 Cisco Systems
                                                           March 7, 2011


      Benchmarking Terminology for Content-Aware Network Devices
                  draft-hamilton-bmwg-ca-bench-term-00

Abstract

   This document defines the terminology necessary to follow and
   implement "Benchmarking Methodology for Content-Aware Network
   Devices".  The relevant terms are defined and discussed in order to
   ensure a common understanding of that methodology.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 8, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Scope
   3.  Definitions
     3.1.  Application Flow
     3.2.  Application Throughput
     3.3.  Average Time to TCP Session Establishment
     3.4.  Content-Aware Device
     3.5.  Deep Packet Inspection
     3.6.  Network 5-Tuple
     3.7.  Session Establishment Rate
     3.8.  Session Establishment Time
     3.9.  Simultaneous TCP Sessions
     3.10. Time To SYN
   4.  IANA Considerations
   5.  Security Considerations
   6.  References
     6.1.  Normative References
     6.2.  Informative References
   Authors' Addresses

1.  Introduction

   Content-aware and deep packet inspection (DPI) device penetration
   has grown significantly over the last decade.  No longer do devices
   simply use Ethernet headers and IP headers to make forwarding
   decisions.  Devices that could historically be classified as
   'stateless' or raw forwarding devices now include more DPI
   functionality.  Devices such as core and edge routers are now being
   developed with DPI functionality to make more intelligent routing
   and forwarding decisions.

   The Benchmarking Working Group (BMWG) has historically produced
   Internet-Drafts and Requests for Comments that focus on creating
   output metrics derived from a very specific and well-defined set of
   input parameters that are completely and unequivocally reproducible
   from testbed to testbed.  The end goal of such methodologies is, in
   the words of the BMWG charter, to "reduce specmanship" by network
   equipment manufacturers (NEMs).  Existing BMWG work has certainly
   met this stated goal.

   Today, device sophistication has expanded beyond existing
   methodologies, allowing vendors to reengage in specmanship.  In
   order to achieve the stated BMWG goals, the methodologies designed
   to hold vendors accountable must evolve with the enhanced device
   functionality.

   The BMWG has historically avoided the use of the term "realistic"
   throughout its drafts and RFCs.  While this document will not
   explicitly use the term either, the end goal of the terminology and
   methodology is to generate performance metrics that are as close as
   possible to equivalent metrics in a production environment.  It
   should be further noted that any metrics acquired from a production
   network MUST be captured according to the policies and procedures of
   the IPPM or PMOL working groups.
   An explicit non-goal of this document is to replace existing
   methodology/terminology pairs such as RFC 2544 [1]/RFC 1242 [2] or
   RFC 3511 [3]/RFC 2647 [4].  The explicit goal of this document is to
   create a methodology and terminology pair that is better suited to
   modern devices while complementing the data acquired using existing
   BMWG methodologies.  Existing BMWG work generally revolves around
   completely repeatable input stimulus, expecting fully repeatable
   output.  This document departs from that approach due to the nature
   of modern traffic and focuses on output repeatability rather than
   static input stimulus.

   Some of the terms used throughout this document have previously been
   defined in "Benchmarking Terminology for Firewall Performance", RFC
   2647 [4].  RFC 2647 SHOULD be consulted prior to using this
   document.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [5].

2.  Scope

   Content-aware devices take many forms, shapes and architectures.
   These devices are advanced network interconnect devices that inspect
   deep into the application payload of network data packets to perform
   classification.  They may be as simple as a firewall that uses
   application data inspection for rule set enforcement, or they may
   have advanced functionality such as protocol decoding and
   validation, anti-virus, anti-spam and even application exploit
   filtering.

   This document is strictly focused on examining performance and
   robustness across a focused set of metrics that may be used to more
   accurately predict device performance when deployed in modern
   networks.  These metrics will be implementation independent.

   It should also be noted that the purpose of this document is not to
   perform functional testing of the potential features in the Device/
   System Under Test (DUT/SUT) [4] nor to specify the configurations
   that should be tested.  Various definitions of proper operation and
   configuration may be appropriate within different contexts.  While
   the definition of these parameters is outside the scope of this
   document, the specific configuration of both the DUT and tester
   SHOULD be published with the test results for repeatability and
   comparison purposes.

   While a list of devices that fall under this category will quickly
   become obsolete, an initial list of devices that would be well
   served by this type of methodology should prove useful.  Devices
   such as firewalls, intrusion detection and prevention devices,
   application delivery controllers, deep packet inspection devices,
   and unified threat management systems generally fall into the
   content-aware category.

3.  Definitions

3.1.  Application Flow

   Definition:
      An application flow is the virtual connection between two network
      hosts that is used to exchange user data above the transport
      layer.

   Discussion:
      Content-aware devices may potentially proxy session-layer
      connections, acting as a virtual server to the client and a
      virtual client to the server.  In this mode, the SUT/DUT may
      modify members of the network 5-tuple or act on their behalf,
      thus each end host is actually disconnected at the session layer.
      Application flows are virtual connections between the two hosts,
      irrespective of the nature of the session layer semantics.

   Unit of Measurement:
      N/A

   Issues:
      N/A

   See Also:
      Network 5-Tuple
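   As a non-normative illustration, the following Python sketch models
   a single application flow that spans two distinct session-layer
   connections because the DUT/SUT proxies the TCP session.  The
   addresses (RFC 5737 documentation ranges), field names, and
   structure are hypothetical and are not part of any methodology.

      # Illustrative sketch: one application flow carried over two
      # session-layer connections, as seen when a DUT/SUT proxies the
      # TCP session.  All addresses and names are hypothetical.
      from collections import namedtuple

      FiveTuple = namedtuple(
          "FiveTuple",
          ["src_ip", "src_port", "dst_ip", "dst_port", "protocol"])

      # Client-side connection: the client talks to the DUT/SUT acting
      # as a virtual server.
      client_side = FiveTuple("192.0.2.10", 49152,
                              "198.51.100.1", 80, "TCP")

      # Server-side connection: the DUT/SUT, acting as a virtual
      # client, originates its own connection to the real server, so
      # the 5-tuple differs from the client-side connection.
      server_side = FiveTuple("203.0.113.5", 33001,
                              "198.51.100.1", 80, "TCP")

      # The application flow is the end-to-end exchange of user data
      # above the transport layer, irrespective of which 5-tuples
      # carry it on either side of the DUT/SUT.
      application_flow = {
          "endpoints": ("192.0.2.10", "198.51.100.1"),
          "connections": (client_side, server_side),
      }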
3.2.  Application Throughput

   Definition:
      The rate at which data associated with an application flow is
      transmitted through the SUT/DUT.

   Discussion:
      Throughput metrics may be calculated at various layers in the
      network protocol stack, and each layer carries the overhead
      necessary to maintain that layer.  Application throughput is the
      number of bits transmitted through a SUT/DUT per unit of time,
      not including the overhead associated with lower layer protocols.
      Measurements should be taken at the receiver side to minimize the
      impact of session layer retransmissions.

   Unit of Measurement:
      Bits per second.

   Issues:
      Some applications may not rely on session layer reliability
      mechanisms.  This definition does not cover the case where an
      application utilizes its own specific reliability/retransmission
      algorithm.

   See Also:
      N/A

3.3.  Average Time to TCP Session Establishment

   Definition:
      The average time that a SUT/DUT requires to complete the TCP
      session establishment process.

   Discussion:
      The average time to TCP session establishment is calculated by
      taking the sum of all "TCP Session Establishment Time" values
      acquired in the specified time frame and dividing by the total
      number of sessions established within that time frame.  The time
      frame over which the average is taken will depend on the
      methodology itself and on what is to be measured.

   Unit of Measurement:
      Seconds.

   Issues:
      Depending on how the DUT/SUT handles TCP session establishment,
      the client and server may have different values for the same TCP
      session.  A client-side session may be established prior to the
      server-side session being established.

   See Also:
      Session Establishment Time

3.4.  Content-Aware Device

   Definition:
      A networking device which performs deep packet inspection.

   Discussion:
      For a more detailed discussion, please see "Deep Packet
      Inspection".

   Unit of Measurement:
      Not Applicable.

   Issues:
      Not Applicable.

   See Also:
      Deep Packet Inspection

3.5.  Deep Packet Inspection

   Definition:
      The process by which a network device inspects layer 7 payload as
      well as protocol headers when making processing decisions.

   Discussion:
      Deep packet inspection (DPI) has grown from a feature reserved
      for Intrusion Prevention Devices into functionality that is
      shared across many next generation networking devices.  Devices
      traditionally classified as firewalls now examine layer 7
      payloads to make decisions, whether for classification, rate-
      shaping, or determining whether a flow is allowed at all.  Many
      deep packet inspection devices utilize proxy behavior as a
      functional choice for performing inspection.

   Unit of Measurement:
      Not Applicable.

   Issues:
      Not Applicable.

   See Also:
      Content-Aware Device

3.6.  Network 5-Tuple

   Definition:
      The set of five values which distinguish two session layer
      connections from each other.

   Discussion:
      When discussing data transfer between hosts, a network 5-tuple is
      typically used to differentiate between multiple session layer
      connections.  Source and destination IP addresses, source and
      destination session-layer ports, and the session layer protocol
      make up the network 5-tuple.  The session layer protocol is
      typically TCP or UDP, but may be SCTP or another session layer
      protocol.

   Unit of Measurement:
      N/A

   Issues:
      N/A
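   As a non-normative illustration, the following Python sketch uses
   the network 5-tuple as a key to demultiplex packets into session-
   layer connections.  The packet attribute names are assumptions made
   for this sketch only; for brevity, the sketch also keys each
   direction of a connection separately, whereas a complete
   implementation would normalize the tuple so that both directions
   map to the same connection.

      # Illustrative sketch: demultiplexing packets into session-layer
      # connections by network 5-tuple.  The packet attribute names
      # (src_ip, src_port, ...) are hypothetical.
      def five_tuple(pkt):
          # Source/destination IP address, source/destination session-
          # layer port, and session layer protocol (e.g., TCP or UDP).
          return (pkt.src_ip, pkt.src_port,
                  pkt.dst_ip, pkt.dst_port, pkt.protocol)

      connections = {}

      def classify(pkt):
          # Packets sharing a 5-tuple belong to the same session-layer
          # connection.
          connections.setdefault(five_tuple(pkt), []).append(pkt)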
3.7.  Session Establishment Rate

   Definition:
      The rate at which TCP sessions may be established through a given
      DUT/SUT.

   Discussion:
      The session establishment rate is a measurement of how many TCP
      sessions the DUT/SUT is able to establish in a given unit of
      time.  If within a 1 second time interval the tester is able to
      establish 10,000 sessions, the rate is measured as 10,000
      sessions per second.  Each session must be established in
      accordance with the policy set forth in "Session Establishment
      Time".

   Unit of Measurement:
      TCP sessions per second.

   Issues:
      N/A

   See Also:
      Session Establishment Time

3.8.  Session Establishment Time

   Definition:
      The time between the transmission of the first TCP SYN packet by
      the client and the arrival of the TCP ACK packet at the server
      interface.

   Discussion:
      This metric is calculated between the time the first bit of the
      TCP SYN packet is sent from the client and the time the last bit
      of the TCP ACK packet arrives at the server interface.

   Unit of Measurement:
      Seconds.

   Issues:
      Depending on how the DUT/SUT handles TCP session establishment,
      the client and server may have different values for the same
      logical TCP session.  A client-side session may be established
      prior to the server-side session being established.

   See Also:
      Session Establishment Rate

3.9.  Simultaneous TCP Sessions

   Definition:
      The number of TCP sessions which are in the ESTABLISHED state as
      defined by RFC 793 [6].

   Discussion:
      This measurement counts the number of TCP sessions which are in
      the ESTABLISHED state.  Sessions which are in this state must be
      able to maintain bidirectional data transfer between client and
      server.

   Unit of Measurement:
      Sessions.

   Issues:
      Depending on the nature of the SUT/DUT, the number of
      simultaneous sessions may instantaneously differ when counted
      from the client and server sides of the SUT/DUT.

   See Also:
      N/A

3.10.  Time To SYN

   Definition:
      The Time to SYN is a one-way metric: the difference between the
      time at which the first TCP SYN packet is sent by the client and
      the time at which the server receives that TCP SYN packet.

   Discussion:
      This metric is particularly important with content-aware devices
      due to potential proxying behavior.  Content-aware devices may
      proxy a TCP session on behalf of the server.  Many times, the
      client will receive the SYN/ACK from the DUT/SUT and complete the
      TCP handshake before the SYN has been forwarded to the server.
      This measurement is actually a proxy measure for client-side
      session establishment time through the DUT/SUT, if the session is
      in fact proxied.

   Unit of Measurement:
      Seconds.

   See Also:
      Session Establishment Time
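   The arithmetic relationships among the timing metrics defined in
   Sections 3.3, 3.7, and 3.8 can be summarized in the following non-
   normative Python sketch.  All function and variable names are
   hypothetical; timestamps are assumed to be recorded by the tester
   in seconds.

      # Non-normative sketch of the arithmetic behind the timing
      # metrics in Sections 3.3, 3.7, and 3.8.  Names are hypothetical.

      def establishment_time(first_syn_bit_sent, last_ack_bit_received):
          # Session Establishment Time (Section 3.8): from the first
          # bit of the client's TCP SYN to the last bit of the TCP ACK
          # arriving at the server interface.
          return last_ack_bit_received - first_syn_bit_sent

      def average_establishment_time(times):
          # Average Time to TCP Session Establishment (Section 3.3):
          # the sum of all establishment times observed in the
          # measurement time frame divided by the number of sessions
          # established within that time frame.
          return sum(times) / len(times)

      def establishment_rate(sessions_established, interval):
          # Session Establishment Rate (Section 3.7): e.g., 10,000
          # sessions established within a 1 second interval yields a
          # rate of 10,000 sessions per second.
          return sessions_established / interval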
4.  IANA Considerations

   This memo includes no request to IANA.

   All drafts are required to have an IANA Considerations section (see
   the update of RFC 2434 [9] for a guide).  If the draft does not
   require IANA to do anything, the section contains an explicit
   statement that this is the case (as above).  If there are no
   requirements for IANA, the section will be removed during conversion
   into an RFC by the RFC Editor.

5.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the other constraints
   of RFC 2544 [1].

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

6.  References

6.1.  Normative References

   [1]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
        Network Interconnect Devices", RFC 2544, March 1999.

   [2]  Bradner, S., "Benchmarking terminology for network
        interconnection devices", RFC 1242, July 1991.

   [3]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
        "Benchmarking Methodology for Firewall Performance", RFC 3511,
        April 2003.

   [4]  Newman, D., "Benchmarking Terminology for Firewall
        Performance", RFC 2647, August 1999.

   [5]  Bradner, S., "Key words for use in RFCs to Indicate Requirement
        Levels", BCP 14, RFC 2119, March 1997.

   [6]  Postel, J., "Transmission Control Protocol", STD 7, RFC 793,
        September 1981.

   [7]  Popoviciu, C., Hamza, A., Van de Velde, G., and D. Dugatkin,
        "IPv6 Benchmarking Methodology for Network Interconnect
        Devices", RFC 5180, May 2008.

   [8]  Brownlee, N., Mills, C., and G. Ruth, "Traffic Flow
        Measurement: Architecture", RFC 2722, October 1999.

6.2.  Informative References

   [9]  Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
        Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.

Authors' Addresses

   Mike Hamilton
   BreakingPoint Systems
   Austin, TX  78717
   US

   Phone: +1 512 636 2303
   Email: mhamilton@breakingpoint.com

   Sarah Banks
   Cisco Systems
   San Jose, CA  95134
   US

   Email: sabanks@cisco.com