INTERNET-DRAFT                                    K. Green, T. Alexander
Intended Status: Informational                                    (Ixia)
Expires: Aug 1, 2012                                          March 2012

    Benchmarking Methodology for Evaluating the Security Effectiveness
                       of Content Aware Devices
                 draft-green-bmwg-seceff-bench-meth-01

Abstract

   This document defines a methodology for evaluating the ability of
   content-aware network devices to correctly detect and block
   malicious or administratively disallowed traffic flows.  This
   benchmark addresses the issue of classification accuracy under
   well-defined conditions.  It is not concerned with measuring
   forwarding performance, which is covered by other BMWG documents.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on Aug 1, 2012.

Copyright and License Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1 Introduction
     1.1 Requirements Language
   2 Terminology
     2.1 Existing Terminology
         2.1.1 Illegal Traffic
     2.2 New Terminology
         2.2.1 Attack
         2.2.2 Legal Traffic
   3 Test Setup
     3.1 Application Traffic Mix
   4 Benchmarking Tests
     4.1 Attack-only Blocking Rate
     4.2 Error-free Attack Blocking Rate
     4.3 Attack Blocking Effectiveness
   5 Security Considerations
   6 IANA Considerations
   7 Acknowledgements
   8 References
     8.1 Normative References
     8.2 Informative References
   Authors' Addresses

1 Introduction

   Networks of the 21st century exist in an environment flooded with
   complex and highly sophisticated security threats.
   There is an intense and enduring arms race under way between those
   developing and distributing attack technology and those developing
   and supplying defense technology.

   In addition, there is a growing need to limit access by users
   inside private or corporate networks to Internet sites or services
   deemed undesirable, and to ensure that intellectual property and
   other private information is not allowed to pass freely from inside
   the protected network to the outside world.

   In response to this dynamic and constantly expanding range of
   security threats and privacy requirements, there is a growing
   diversity of network devices that provide a variety of defensive
   services, including but not limited to firewall, intrusion
   detection, intrusion prevention, anti-virus, anti-malware,
   anti-spam, anti-DoS, anti-DDoS, unified threat management, data
   leakage prevention, and more.  These content-aware devices use a
   mixture of stateless and stateful L3 to L7 technologies, including
   deep packet inspection (DPI), to categorize traffic flows.

   What all of these defensive solutions have in common is the
   requirement that they reliably and accurately distinguish between
   legal (benign and allowed) traffic and illegal (malicious or
   disallowed) traffic.

   Categorization of traffic as either legal or illegal is fundamental
   to the operation of these devices, since it is a prerequisite to
   all security functions.

   Security Effectiveness is a measure of how accurately the device
   under test (DUT) categorizes traffic:

   o  No false negatives = correctly blocks all illegal traffic

   o  No false positives = never blocks legal traffic

   In contrast, Security Performance is the characterization of the
   DUT's forwarding performance while under attack.  Security
   Performance measures how well the device forwards legal traffic
   with security features enabled and in the presence of illegal
   traffic.
   This is addressed in [HAMILTON].

   Security Effectiveness is orthogonal to Security Performance.

1.1 Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

2 Terminology

2.1 Existing Terminology

2.1.1 Illegal Traffic

   Definition:
      Illegal traffic is defined by [RFC2647] as "Packets specified
      for rejection in the rule set of the DUT/SUT".

   Discussion:
      That definition is interpreted in this context to mean any
      illicit traffic flow which should be blocked by the DUT.  That
      is, illegal traffic is either malicious (i.e., part of a
      deliberate attack) or administratively banned (i.e., disallowed
      from passing into or out of the protected network due to its
      content and/or destination).

   Unit of measurement:
      not applicable

   Issues:

2.2 New Terminology

2.2.1 Attack

   Definition:
      An attack is an attempt to transmit illegal traffic across the
      DUT.

   Discussion:
      Attacks can be classified as follows:

      Attempted = injected into the DUT

      Blocked = dropped by the DUT

      Successful = passed through the DUT

      A successful attack indicates a failure by the DUT to recognize
      and block the illegal traffic.

   Unit of measurement:
      not applicable

   Issues:

   See also:
      legal traffic

2.2.2 Legal Traffic

   Definition:
      Legal traffic is any traffic flow which should not be blocked by
      the DUT.

   Discussion:
      Legal traffic is implicitly benign and allowed.

   Unit of measurement:
      not applicable

   Issues:

   See also:
      illegal traffic

3 Test Setup

3.1 Application Traffic Mix

   Some test cases require the test equipment to inject legal traffic
   mixed with the illegal traffic.
   The purpose of the legal traffic is to force the DUT to distinguish
   between legal and illegal traffic; it is not used to quantify the
   forwarding performance of the DUT from an application perspective.

   Given this purpose, and in order to protect the integrity and
   repeatability of the benchmark, a single fixed definition of the
   legal traffic application mix is provided.  No attempt is made to
   accurately model any particular mix of application traffic such as
   might be seen in an operational network.

   Rather, the traffic mix includes an appropriate mix of traffic
   types to ensure that the security engine cannot blindly assume that
   every packet is either legal or illegal, and so deliver
   unrealistically high performance or otherwise undermine the
   benchmark.

   In those test scenarios where application traffic is specified, the
   following mix MUST be used:

   *** TBD but likely to include at least UDP, TCP, HTTP ***

4 Benchmarking Tests

4.1 Attack-only Blocking Rate

   Attack-only Blocking Rate (attacks/second) is defined as the
   largest number of attacks per second at which 100% of attacks are
   blocked with no application traffic present.

4.2 Error-free Attack Blocking Rate

   Error-free Attack Blocking Rate (attacks/second) is defined as the
   largest number of attacks per second at which 100% of attacks are
   blocked in the presence of legal traffic and 0% of the legal
   traffic is blocked or dropped.

4.3 Attack Blocking Effectiveness

   Attack Blocking Effectiveness (percentage) is the ratio of blocked
   attacks to attempted attacks, counted over the total number of
   different attack types, in the presence of legal traffic and with
   0% of the legal traffic dropped or blocked.
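   The three benchmarks above reduce to simple arithmetic over the
   tester's attack and legal-traffic counters.  As an illustration
   only (the function and counter names below are hypothetical and
   not part of this methodology; a real test would read the counters
   from the test equipment), the calculations can be sketched as:

```python
# Illustrative sketch of the Section 4 calculations.  Counter names
# (attempted, blocked, legal_dropped) are hypothetical placeholders.

def attack_blocking_effectiveness(attempted, blocked, legal_dropped):
    """Attack Blocking Effectiveness (Section 4.3): blocked attacks as
    a percentage of attempted attacks, valid only when 0% of the legal
    traffic was dropped or blocked."""
    if legal_dropped > 0:
        raise ValueError("result invalid: legal traffic was dropped")
    return 100.0 * blocked / attempted

def qualifies_as_error_free_rate(attempted, blocked, legal_dropped):
    """Error-free Attack Blocking Rate (Section 4.2): a candidate
    attack rate qualifies only if 100% of attacks were blocked and 0%
    of the legal traffic was blocked or dropped."""
    return blocked == attempted and legal_dropped == 0

# Example run: 1000 attacks attempted, 990 blocked, no legal loss.
print(attack_blocking_effectiveness(1000, 990, 0))   # -> 99.0
print(qualifies_as_error_free_rate(1000, 990, 0))    # -> False
print(qualifies_as_error_free_rate(1000, 1000, 0))   # -> True
```

   The benchmark in Section 4.1 follows the same qualifying condition
   as Section 4.2, applied with no legal traffic present.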
5 Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the other
   constraints defined in [RFC2544].

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

6 IANA Considerations

   This memo includes no request to IANA.

7 Acknowledgements

   Thanks to X, Y & Z for their review and comments.

8 References

8.1 Normative References

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, July 1991.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2647]  Newman, D., "Benchmarking Terminology for Firewall
              Performance", RFC 2647, August 1999.

8.2 Informative References

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544, March 1999.

   [RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
              "Benchmarking Methodology for Firewall Performance",
              RFC 3511, April 2003.

   [HAMILTON] "Benchmarking Methodology for Content Aware Network
              Devices", draft-hamilton-bmwg-ca-bench-07 (work in
              progress).

Authors' Addresses

   Kenneth Green
   Ixia
   Australia

   EMail: kgreen@ixiacom.com

   Tom Alexander
   Ixia
   USA

   EMail: talexander@ixiacom.com