Benchmarking Working Group                               Brooks Hickman
Internet-Draft                                   Spirent Communications
Expiration Date: March 2003                                David Newman
                                                           Network Test
                                                        Saldju Tadjudin
                                                 Spirent Communications
                                                           Terry Martin
                                                    GVNW Consulting Inc
                                                         September 2002

           Benchmarking Methodology for Firewall Performance

14 Status of this Memo 16 This document is an Internet-Draft and is in full conformance with 17 all provisions of Section 10 of RFC2026. 19 Internet-Drafts are working documents of the Internet Engineering 20 Task Force (IETF), its areas, and its working groups. Note that 21 other groups may also distribute working documents as Internet-Drafts. 24 Internet-Drafts are draft documents valid for a maximum of six 25 months and may be updated, replaced, or obsoleted by other documents 26 at any time. It is inappropriate to use Internet-Drafts as 27 reference material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html. 35 Copyright Notice 37 Copyright (C) The Internet Society (2002). All Rights Reserved.
39 Abstract 41 This document discusses and defines a number of tests that may be 42 used to describe the performance characteristics of firewalls. In 43 addition to defining the tests, this document also describes 44 specific formats for reporting the results of the tests. 46 This document is a product of the Benchmarking Methodology Working 47 Group (BMWG) of the Internet Engineering Task Force (IETF). 49 Table of Contents 51 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 52 2. Requirements . . . . . . . . . . . . . . . . . . . . . . . . 2 53 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 54 4. Test setup . . . . . . . . . . . . . . . . . . . . . . . . . 3 55 4.1 Test Considerations . . . . . . . . . . . . . . . . . . 4 56 4.2 Virtual Clients/Servers . . . . . . . . . . . . . . . . 4 57 4.3 Test Traffic Requirements . . . . . . . . . . . . . . . . 4 58 4.4 DUT/SUT Traffic Flows . . . . . . . . . . . . . . . . . . 4 59 4.5 Multiple Client/Server Testing . . . . . . . . . . . . . 5 60 4.6 Network Address Translation(NAT) . . . . . . . . . . . . 5 61 4.7 Rule Sets . . . . . . . . . . . . . . . . . . . . . . . . 5 62 4.8 Web Caching . . . . . . . . . . . . . . . . . . . . . . . 6 63 4.9 Authentication . . . . . . . . . . . . . . . . . . . . . 6 64 4.10 TCP Stack Considerations . . . . . . . . . . . . . . . . 6 65 5. Benchmarking Tests . . . . . . . . . . . . . . . . . . . . . 6 66 5.1 IP throughput . . . . . . . . . . . . . . . . . . . . . . 6 67 5.2 Concurrent TCP Connection Capacity . . . . . . . . . . . 8 68 5.3 Maximum TCP Connection Establishment Rate . . . . . . . . 10 69 5.4 Maximum TCP Connection Tear Down Rate . . . . . . . . . . 12 70 5.5 Denial Of Service Handling . . . . . . . . . . . . . . . 14 71 5.6 HTTP Transfer Rate . . . . . . . . . . . . . . . . . . . 15 72 5.7 Maximum HTTP Transaction Rate . . . . . . . . . . . . . . 18 73 5.8 Illegal Traffic Handling . . . . . . . . . . . . . . . .
20 74 5.9 IP Fragmentation Handling . . . . . . . . . . . . . . . . 21 75 5.10 Latency . . . . . . . . . . . . . . . . . . . . . . . . 22 76 6. References . . . . . . . . . . . . . . . . . . . . . . . . . 25 77 7. Security Considerations . . . . . . . . . . . . . . . . . . . 25 78 8. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . 25 79 9. Authors' Addresses . . . . . . . . . . . . . . . . . . . . . 26 80 Appendix A - HyperText Transfer Protocol(HTTP) . . . . . . . . 27 81 Appendix B - Connection Establishment Time Measurements . . . . 27 82 Appendix C - Connection Tear Down Time Measurements . . . . . . 28 83 Full Copyright Statement . . . . . . . . . . . . . . . . . . . . 29 85 1. Introduction 87 This document provides methodologies for the performance 88 benchmarking of firewalls. It provides methodologies in four areas: 89 forwarding, connection, latency and filtering. In addition to 90 defining the tests, this document also describes specific formats 91 for reporting the results of the tests. 93 A previous document, "Benchmarking Terminology for Firewall 94 Performance" [1], defines many of the terms that are used in this 95 document. The terminology document SHOULD be consulted before 96 attempting to make use of this document. 98 2. Requirements 100 In this document, the words that are used to define the significance 101 of each particular requirement are capitalized. These words are: 103 * "MUST" This word, or the words "REQUIRED" and "SHALL" mean that 104 the item is an absolute requirement of the specification. 106 * "SHOULD" This word or the adjective "RECOMMENDED" means that 107 there may exist valid reasons in particular circumstances to 108 ignore this item, but the full implications should be understood 109 and the case carefully weighed before choosing a different course. 111 * "MAY" This word or the adjective "OPTIONAL" means that this item 112 is truly optional.
One vendor may choose to include the item 113 because a particular marketplace requires it or because it 114 enhances the product, for example; another vendor may omit the 115 same item. 117 An implementation is not compliant if it fails to satisfy one or more 118 of the MUST requirements. An implementation that satisfies all the 119 MUST and all the SHOULD requirements is said to be "unconditionally 120 compliant"; one that satisfies all the MUST requirements but not all 121 the SHOULD requirements is said to be "conditionally compliant". 123 3. Scope 125 Firewalls can provide a single point of defense between networks. 126 Usually, a firewall protects private networks from the public or 127 shared networks to which it is connected. A firewall can be as 128 simple as a device that filters different packets or as complex 129 as a group of devices that combine packet filtering and 130 application-level proxy or network address translation services. This document 131 will focus on developing benchmark testing of DUT/SUTs, wherever 132 possible, independent of their implementation. 134 4. Test Setup 136 Test configurations defined in this document will be confined to 137 dual-homed and tri-homed as shown in figure 1 and figure 2 138 respectively. 140 Firewalls employing dual-homed configurations connect two networks. 141 One interface of the firewall is attached to the unprotected 142 network[1], typically the public network(Internet). The other 143 interface is connected to the protected network[1], typically the 144 internal LAN. 146 In the case of dual-homed configurations, servers which are made 147 accessible to the public(Unprotected) network are attached to the 148 private(Protected) network.
 +----------+                             +----------+
 |          |    |    +----------+    |   |          |
 | Servers/ |----|    |          |    |---| Servers/ |
 | Clients  |    |    |          |    |   | Clients  |
 |          |    |----| DUT/SUT  |----|   |          |
 +----------+    |    |          |    |   +----------+
   Protected     |    +----------+    |    Unprotected
    Network      |                    |      Network

                 Figure 1 (Dual-Homed)

160 Tri-homed[1] configurations employ a third segment called a 161 Demilitarized Zone(DMZ). With tri-homed configurations, servers 162 accessible to the public network are attached to the DMZ. Tri-Homed 163 configurations offer additional security by separating server(s) 164 accessible to the public network from internal hosts.

 +----------+                             +----------+
 |          |    |    +----------+    |   |          |
 | Clients  |----|    |          |    |---| Servers/ |
 |          |    |    |          |    |   | Clients  |
 +----------+    |----| DUT/SUT  |----|   |          |
                 |    |          |    |   +----------+
                 |    +----------+    |
   Protected     |         |          |    Unprotected
    Network      |         |          |      Network
                           |
                           |
                           -----------------
                           |             DMZ
                           |
                           |
                     +-----------+
                     |           |
                     |  Servers  |
                     |           |
                     +-----------+

                 Figure 2 (Tri-Homed)

189 4.1 Test Considerations 191 4.2 Virtual Clients/Servers 193 Since firewall testing may involve data sources which emulate 194 multiple users or hosts, the methodology uses the terms virtual 195 clients/servers. For these firewall tests, virtual clients/servers 196 specify application layer entities which may not be associated with 197 a unique physical interface. For example, four virtual clients may 198 originate from the same data source[1]. The test report MUST 199 indicate the number of virtual clients and virtual servers 200 participating in the test. 202 4.3 Test Traffic Requirements 204 While the function of a firewall is to enforce access control 205 policies, the criteria by which those policies are defined vary 206 depending on the implementation. Firewalls may use network layer, 207 transport layer or, in many cases, application-layer criteria to 208 make access-control decisions.
210 For the purposes of benchmarking firewall performance, this document 211 references HTTP 1.1 or higher as the application layer entity. The 212 methodologies MAY be used as a template for benchmarking with other 213 applications. Since testing may involve proxy based DUT/SUTs, HTTP 214 version considerations are discussed in appendix A. 216 4.4 DUT/SUT Traffic Flows 218 Since the number of interfaces is not fixed, the traffic flows will 219 be dependent upon the configuration used in benchmarking the 220 DUT/SUT. Note that the term "traffic flows" is associated with 221 client-to-server requests. 223 For Dual-Homed configurations, there are two unique traffic flows:

        Client                    Server
        ------                    ------
        Protected       ->        Unprotected
        Unprotected     ->        Protected

230 For Tri-Homed configurations, there are three unique traffic flows:

        Client                    Server
        ------                    ------
        Protected       ->        Unprotected
        Protected       ->        DMZ
        Unprotected     ->        DMZ

238 4.5 Multiple Client/Server Testing 240 One or more clients may target multiple servers for a given 241 application. Each virtual client MUST initiate connections in a 242 round-robin fashion. For example, if the test consisted of six 243 virtual clients targeting three servers, the pattern would be as 244 follows:

        Client     Target Server(In order of request)
        #1         1 2 3 1...
        #2         2 3 1 2...
        #3         3 1 2 3...
        #4         1 2 3 1...
        #5         2 3 1 2...
        #6         3 1 2 3...

254 4.6 Network Address Translation(NAT) 256 Many firewalls implement network address translation(NAT)[1], a 257 function which translates internal host IP addresses attached to 258 the protected network to a virtual IP address for communicating 259 across the unprotected network(Internet). This involves additional 260 processing on the part of the DUT/SUT and may impact performance. 261 Therefore, tests SHOULD be run with NAT disabled and NAT enabled 262 to determine the performance differentials.
The test report MUST 263 indicate whether NAT was enabled or disabled. 265 4.7 Rule Sets 267 Rule sets[1] are a collection of access control policies that 268 determine which packets the DUT/SUT will forward and which it will 269 reject[1]. Since the criteria by which these access control policies may 270 be defined will vary depending on the capabilities of the DUT/SUT, 271 the following is limited to providing guidelines for configuring 272 rule sets when benchmarking the performance of the DUT/SUT. 274 It is RECOMMENDED that a rule be entered for each host(Virtual 275 client). In addition, testing SHOULD be performed using different 276 size rule sets to determine their impact on the performance of the 277 DUT/SUT. Rule sets MUST be configured such that rules 278 associated with actual test traffic are placed at the end of the 279 rule set, not at the beginning. 281 The DUT/SUT SHOULD be configured to deny access to all traffic 282 which was not previously defined in the rule set. The test report 283 SHOULD include the DUT/SUT configured rule set(s). 285 4.8 Web Caching 287 Some firewalls include caching agents to reduce network load. When 288 making a request through a caching agent, the caching agent attempts 289 to service the response from its internal memory. The cache itself 290 saves responses it receives, such as responses for HTTP GET 291 requests. Testing SHOULD be performed with any caching agents on the 292 DUT/SUT disabled. 294 4.9 Authentication 296 Access control may involve authentication processes such as user, 297 client or session authentication. Authentication is usually 298 performed by devices external to the firewall itself, such as an 299 authentication server(s), and may add to the latency of the system. 300 Any authentication processes MUST be included as part of the connection 301 setup process.
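The round-robin target-selection pattern required in section 4.5 can be sketched as follows. This is an illustration only, not part of the methodology; the function name and parameters are hypothetical:

```python
def target_sequence(client_id, num_servers, num_requests):
    """Return the ordered list of server numbers (1-based) that a
    virtual client targets, per the round-robin pattern of section 4.5.

    Client #1 starts at server 1, client #2 at server 2, and so on,
    wrapping around the server list.
    """
    start = (client_id - 1) % num_servers
    return [((start + r) % num_servers) + 1 for r in range(num_requests)]
```

With six virtual clients and three servers this reproduces the example table of section 4.5, e.g. client #2 requests from servers 2, 3, 1, 2, ...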
303 4.10 TCP Stack Considerations 305 Some test instruments allow configuration of one or more TCP stack 306 parameters, thereby influencing the traffic flows which will be 307 offered and impacting performance measurements. While this document 308 does not attempt to specify which TCP parameters should be 309 configurable, any such TCP parameter(s) MUST be noted in the test 310 report. In addition, when comparing multiple DUT/SUTs, the same TCP 311 parameters MUST be used. 313 5. Benchmarking Tests 315 5.1 IP Throughput 317 5.1.1 Objective 319 To determine the throughput of network-layer data traversing 320 the DUT/SUT, as defined in RFC1242[1]. Note that while RFC1242 321 uses the term frames, which is associated with the link layer, the 322 procedure uses the term packets, since it is referencing the 323 network layer. 325 5.1.2 Setup Parameters 327 The following parameters MUST be defined: 329 Packet size - Number of bytes in the IP packet, exclusive of any 330 link layer header or checksums. 332 Test Duration - Duration of the test, expressed in seconds. 334 5.1.3 Procedure 336 The tester will offer client/server traffic to the DUT/SUT, 337 consisting of unicast IP packets. The tester MUST offer the packets 338 at a constant rate. The test MAY consist of either bi-directional or 339 unidirectional traffic, with the client offering a unicast stream of 340 packets to the server for the latter. 342 The test MAY employ an iterative search algorithm. Each iteration 343 will involve the tester varying the intended load until the maximum 344 rate at which no packet loss occurs is found. Since backpressure 345 mechanisms may be employed, resulting in the intended load and 346 offered load being different, the test SHOULD be performed in either 347 a packet-based or time-based manner as described in RFC2889[7]. As 348 with RFC1242, the term packet is used in place of frame. The 349 duration of the test portion of each trial MUST be at least 30 seconds.
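The iterative no-loss search permitted in 5.1.3 is commonly implemented as a binary search over offered load. The sketch below is illustrative only, under the assumption that the tester exposes some hook (here the hypothetical `measure_loss` callable) that runs one trial and reports lost packets, and that `lo` is a rate already known to pass:

```python
def find_throughput(measure_loss, lo, hi, resolution):
    """Binary search for the highest load (packets/s) at which
    measure_loss(load) reports zero packet loss.

    measure_loss -- hypothetical tester hook: runs one trial at the
                    given load and returns the number of lost packets.
    lo, hi       -- bounds of the search, lo assumed loss-free.
    resolution   -- stop when the bracket is this narrow.
    """
    best = lo
    while hi - lo > resolution:
        load = (lo + hi) // 2
        if measure_loss(load) == 0:   # no loss: try a higher rate
            best = load
            lo = load
        else:                         # loss observed: back off
            hi = load
    return best
```

A real tester would additionally enforce the 30-second minimum trial duration and log each iteration as described in 5.1.4.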
351 It is RECOMMENDED to perform the throughput measurements with 352 different packet sizes. When testing with different packet sizes, the 353 DUT/SUT configuration MUST remain the same. 355 5.1.4 Measurement 357 5.1.4.1 Network Layer 359 Throughput - Maximum offered load, expressed in either bits per 360 second or packets per second, at which no packet loss is detected. 362 Forwarding Rate - Forwarding rate, expressed in either bits per 363 second or packets per second, the device is observed to 364 successfully forward to the correct destination interface in 365 response to a specified offered load. 367 5.1.5 Reporting Format 369 The test report MUST note the packet size(s), test duration, 370 throughput and forwarding rate. If the test involved offering 371 packets which target more than one segment(Protected, Unprotected 372 or DMZ), the report MUST identify the results as an aggregate 373 throughput measurement. 375 The throughput results SHOULD be reported in the format of a table 376 with a row for each of the tested packet sizes. There SHOULD be 377 columns for the packet size, the intended load, the offered load, 378 resultant throughput and forwarding rate for each test. 380 The intermediate results of the search algorithm MAY be saved in a 381 log file which includes the packet size, test duration and for 382 each iteration:

        - Step Iteration
        - Pass/Fail Status
        - Total packets offered
        - Total packets forwarded
        - Intended load
        - Offered load(If applicable)
        - Forwarding rate

392 5.2 Concurrent TCP Connection Capacity 394 5.2.1 Objective 396 To determine the maximum number of concurrent TCP connections 397 supported through or with the DUT/SUT, as defined in RFC2647[1]. 398 This test is intended to find the maximum number of entries 399 the DUT/SUT can store in its connection table.
401 5.2.2 Setup Parameters 403 The following parameters MUST be defined for all tests: 405 5.2.2.1 Transport-Layer Setup Parameters 407 Connection Attempt Rate - The aggregate rate, expressed in 408 connections per second, at which TCP connection requests are 409 attempted. The rate SHOULD be set at or lower than the maximum 410 rate at which the DUT/SUT can accept connection requests. 412 Aging Time - The time, expressed in seconds, the DUT/SUT will keep a 413 connection in its connection table after receiving a TCP FIN or RST 414 packet. 416 5.2.2.2 Application-Layer Setup Parameters 418 Validation Method - HTTP 1.1 or higher MUST be used for this test. 420 Object Size - Defines the number of bytes, excluding any bytes 421 associated with the HTTP header, to be transferred in response to an 422 HTTP 1.1 or higher GET request. 424 5.2.3 Procedure 426 An iterative search algorithm MAY be used to determine the maximum 427 number of concurrent TCP connections supported through or with the 428 DUT/SUT. 430 For each iteration, the aggregate number of concurrent TCP 431 connections attempted by the virtual client(s) will be varied. The 432 destination address will be that of the server or that of the NAT 433 proxy. The aggregate rate will be defined by the connection attempt 434 rate, and connections will be attempted in a round-robin fashion(See 4.5). 436 To validate all connections, the virtual client(s) MUST request an 437 object using an HTTP 1.1 or higher GET request. The requests MUST be 438 initiated on each connection after all of the TCP connections have 439 been established. 441 When testing proxy-based DUT/SUTs, the virtual client(s) MUST 442 request two objects using HTTP 1.1 or higher GET requests. The first 443 GET request is required for connection establishment time[1] 444 measurements as specified in appendix B. The second request is used 445 for validation as previously mentioned.
When comparing proxy and 446 non-proxy based DUT/SUTs, the test MUST be performed in the same 447 manner. 449 Between each iteration, it is RECOMMENDED that the tester issue a 450 TCP RST referencing each connection attempted for the previous 451 iteration, regardless of whether or not the connection attempt was 452 successful. The tester will wait for aging time before continuing to 453 the next iteration. 455 5.2.4 Measurements 457 5.2.4.1 Application-Layer measurements 459 Number of objects requested 461 Number of objects returned 463 5.2.4.2 Transport-Layer measurements 465 Maximum concurrent connections - Total number of TCP connections 466 open for the last successful iteration performed in the search 467 algorithm. 469 Minimum connection establishment time - Lowest TCP connection 470 establishment time measured as defined in appendix B. 472 Maximum connection establishment time - Highest TCP connection 473 establishment time measured as defined in appendix B. 475 Average connection establishment time - The mean of all measurements 476 of connection establishment times. 478 Aggregate connection establishment time - The total of all 479 measurements of connection establishment times. 481 5.2.5 Reporting Format 483 5.2.5.1 Application-Layer Reporting: 485 The test report MUST note the object size, number of completed 486 requests and number of completed responses. 488 The intermediate results of the search algorithm MAY be reported 489 in a tabular format with a column for each iteration. There SHOULD 490 be rows for the number of requests attempted, number and percentage 491 of requests completed, number of responses attempted, number and 492 percentage of responses completed. The table MAY be combined with 493 the transport-layer reporting, provided that the table identifies this 494 as an application layer measurement. 496 Version information: 498 The test report MUST note the version of HTTP client(s) and 499 server(s).
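The four transport-layer summary statistics defined in section 5.2.4.2 reduce to simple arithmetic over the per-connection establishment times. A minimal sketch (the function name and dictionary keys are illustrative, not part of the reporting format):

```python
def establishment_stats(times):
    """Summarize per-connection TCP establishment times (seconds)
    per section 5.2.4.2: minimum, maximum, average (arithmetic mean)
    and aggregate (sum over all measured connections)."""
    return {
        "minimum":   min(times),
        "maximum":   max(times),
        "average":   sum(times) / len(times),
        "aggregate": sum(times),
    }
```

Only connections that actually completed the handshake (per appendix B) would feed into `times`.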
501 5.2.5.2 Transport-Layer Reporting: 503 The test report MUST note the connection attempt rate, aging time, 504 minimum TCP connection establishment time, maximum TCP connection 505 establishment time, average connection establishment time, aggregate 506 connection establishment time and maximum concurrent connections 507 measured. 509 The intermediate results of the search algorithm MAY be reported 510 in the format of a table with a column for each iteration. There 511 SHOULD be rows for the total number of TCP connections attempted, 512 number and percentage of TCP connections completed, minimum TCP 513 connection establishment time, maximum TCP connection establishment 514 time, average connection establishment time and the aggregate 515 connection establishment time. 517 5.3 Maximum TCP Connection Establishment Rate 519 5.3.1 Objective 521 To determine the maximum TCP connection establishment rate through 522 or with the DUT/SUT, as defined by RFC2647[1]. This test is intended 523 to find the maximum rate at which the DUT/SUT can update its connection 524 table. 526 5.3.2 Setup Parameters 528 The following parameters MUST be defined for all tests: 530 5.3.2.1 Transport-Layer Setup Parameters 532 Number of Connections - Defines the aggregate number of TCP 533 connections that must be established. 535 Aging Time - The time, expressed in seconds, the DUT/SUT will keep a 536 connection in its state table after receiving a TCP FIN or RST 537 packet. 539 5.3.2.2 Application-Layer Setup Parameters 541 Validation Method - HTTP 1.1 or higher MUST be used for this test. 543 Object Size - Defines the number of bytes, excluding any bytes 544 associated with the HTTP header, to be transferred in response to an 545 HTTP 1.1 or higher GET request. 547 5.3.3 Procedure 549 An iterative search algorithm MAY be used to determine the maximum 550 rate at which the DUT/SUT can accept TCP connection requests.
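Each iteration of this search offers connection attempts at a fixed aggregate rate. A crude pacing loop, offered purely as a sketch (real testers schedule attempts with far more precise timing; `attempt` is a hypothetical per-connection callback):

```python
import time

def paced_attempts(attempt, total, rate):
    """Issue `total` TCP connection attempts at an aggregate `rate`
    (connections per second), calling attempt(i) for each one.

    A simple sleep-based pacer; it illustrates the iteration of
    section 5.3.3, not a production scheduler.
    """
    interval = 1.0 / rate
    for i in range(total):
        attempt(i)
        time.sleep(interval)  # crude pacing between attempts
```

Between iterations the tester would then reset all attempted connections and wait for the aging time, as the procedure requires.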
552 For each iteration, the aggregate rate at which TCP connection 553 requests are attempted by the virtual client(s) will be varied. The 554 destination address will be that of the server or that of the NAT 555 proxy. The aggregate number of connections, defined by number of 556 connections, will be attempted in a round-robin fashion(See 4.5). 558 The same application-layer object transfers required for validation 559 and establishment time measurements as described in the concurrent 560 TCP connection capacity test MUST be performed. 562 Between each iteration, it is RECOMMENDED that the tester issue a 563 TCP RST referencing each connection attempted for the previous 564 iteration, regardless of whether or not the connection attempt was 565 successful. The tester will wait for aging time before continuing to 566 the next iteration. 568 5.3.4 Measurements 570 5.3.4.1 Application-Layer measurements 572 Number of objects requested 574 Number of objects returned 576 5.3.4.2 Transport-Layer measurements 578 Highest connection rate - Highest rate, in connections per second, 579 for which all connections successfully opened in the search 580 algorithm. 582 Minimum connection establishment time - Lowest TCP connection 583 establishment time measured as defined in appendix B. 585 Maximum connection establishment time - Highest TCP connection 586 establishment time measured as defined in appendix B. 588 Average connection establishment time - The mean of all measurements 589 of connection establishment times. 591 Aggregate connection establishment time - The total of all 592 measurements of connection establishment times. 594 5.3.5 Reporting Format 596 5.3.5.1 Application-Layer Reporting: 598 The test report MUST note object size(s), number of completed 599 requests and number of completed responses. 601 The intermediate results of the search algorithm MAY be reported 602 in a tabular format with a column for each iteration. 
There SHOULD 603 be rows for the number of requests attempted, number and percentage 604 of requests completed, number of responses attempted, number and 605 percentage of responses completed. The table MAY be combined with 606 the transport-layer reporting, provided that the table identifies this 607 as an application layer measurement. 609 Version information: 611 The test report MUST note the version of HTTP client(s) and 612 server(s). 614 5.3.5.2 Transport-Layer Reporting: 616 The test report MUST note the number of connections, aging time, 617 minimum TCP connection establishment time, maximum TCP connection 618 establishment time, average connection establishment time, aggregate 619 connection establishment time and highest connection rate measured. 621 The intermediate results of the search algorithm MAY be reported 622 in the format of a table with a column for each iteration. There 623 SHOULD be rows for the connection attempt rate, total number of 624 TCP connections attempted, total number of TCP connections 625 completed, minimum TCP connection establishment time, maximum TCP 626 connection establishment time, average connection establishment time 627 and the aggregate connection establishment time. 629 5.4 Maximum TCP Connection Tear Down Rate 631 5.4.1 Objective 633 To determine the maximum TCP connection tear down rate through or 634 with the DUT/SUT, as defined by RFC2647[1]. 636 5.4.2 Setup Parameters 638 Number of Connections - Defines the number of TCP connections to be 639 torn down. 641 Aging Time - The time, expressed in seconds, the DUT/SUT will keep a 642 connection in its state table after receiving a TCP FIN or RST 643 packet. 645 Close Method - Defines the method for closing TCP connections. The test 646 MUST be performed with either a three-way or four-way handshake. In 647 a four-way handshake, each side sends separate FIN and ACK messages.
649 In a three-way handshake, one side sends a combined FIN/ACK message 650 upon receipt of a FIN. 652 Close Direction - Defines whether closing of connections is to be 653 initiated from the client or from the server. 655 5.4.3 Procedure 657 An iterative search algorithm MAY be used to determine the maximum 658 TCP connection tear down rate. The test iterates through different 659 TCP connection tear down rates with a fixed number of TCP 660 connections. 662 In the case of proxy based DUT/SUTs, the DUT/SUT will itself receive 663 the ACK in response to issuing a FIN packet to close its side of the 664 TCP connection. For validation purposes, the virtual client or 665 server, whichever is applicable, MAY verify that the DUT/SUT 666 received the final ACK by re-transmitting the final ACK. A TCP RST 667 should be received in response to the retransmitted ACK. 669 Between each iteration, it is RECOMMENDED that the virtual client(s) 670 or server(s), whichever is applicable, issue a TCP RST referencing 671 each connection which was attempted to be torn down, regardless of 672 whether or not the connection tear down attempt was successful. The 673 test will wait for aging time before continuing to the next 674 iteration. 676 5.4.4 Measurements 678 Highest connection tear down rate - Highest rate, in connections per 679 second, for which all TCP connections were successfully torn down in 680 the search algorithm. 682 The following tear down time[1] measurements MUST only include 683 connections for which both sides of the connection were successfully 684 torn down. For example, tear down times for connections which are 685 left in a FIN-WAIT-2[8] state should not be included: 687 Minimum connection tear down time - Lowest TCP connection tear down 688 time measured as defined in appendix C. 690 Maximum connection tear down time - Highest TCP connection tear down 691 time measured as defined in appendix C.
Average connection tear down time - The mean of all measurements of
connection tear down times.

Aggregate connection tear down time - The total of all measurements
of connection tear down times.

5.4.5 Reporting Format

The test report MUST note the number of connections, aging time,
close method, close direction, minimum TCP connection tear down time,
maximum TCP connection tear down time, average TCP connection tear
down time, aggregate TCP connection tear down time and highest
connection tear down rate measured.

The intermediate results of the search algorithm MAY be reported in
the format of a table with a column for each iteration. There SHOULD
be rows for the number of TCP tear downs attempted, number and
percentage of TCP connection tear downs completed, minimum TCP
connection tear down time, maximum TCP connection tear down time,
average TCP connection tear down time, aggregate TCP connection tear
down time and validation failures, if required.

5.5 Denial Of Service Handling

5.5.1 Objective

To determine the effect of a denial of service attack on DUT/SUT TCP
connection establishment and/or HTTP transfer rates. The denial of
service handling test MUST be run after obtaining baseline
measurements from sections 5.3 and/or 5.6.

The TCP SYN flood attack exploits TCP's three-way handshake mechanism
by having an attacking source host generate TCP SYN packets with
random source addresses towards a victim host, thereby consuming that
host's resources.

5.5.2 Setup Parameters

Use the same setup parameters as defined in section 5.3.2 or 5.6.2,
depending on whether testing against the baseline TCP connection
establishment rate test or the HTTP transfer rate test, respectively.

In addition, the following setup parameters MUST be defined.
SYN attack rate - Rate, expressed in packets per second, at which the
server(s) or NAT proxy address is targeted with TCP SYN packets.

5.5.3 Procedure

Use the same procedure as defined in section 5.3.3 or 5.6.3, depending
on whether testing against the baseline TCP connection establishment
rate test or the HTTP transfer rate test, respectively. In addition,
the tester will generate TCP SYN packets targeting the server(s) IP
address or NAT proxy address at the rate defined by the SYN attack
rate.

The tester originating the TCP SYN attack MUST be attached to the
unprotected network. In addition, the tester MUST NOT respond to the
SYN/ACK packets sent by the target server or NAT proxy in response to
the SYN packet.

Some firewalls employ mechanisms to guard against SYN attacks. If
such mechanisms exist on the DUT/SUT, tests SHOULD be run with these
mechanisms enabled and disabled to determine how well the DUT/SUT can
maintain, under such attacks, the baseline connection establishment
rates and HTTP transfer rates determined in section 5.3 and section
5.6, respectively.

5.5.4 Measurements

Perform the same measurements as defined in section 5.3.4 or 5.6.4,
depending on whether testing against the baseline TCP connection
establishment rate test or the HTTP transfer rate test, respectively.

In addition, the tester SHOULD track TCP SYN packets associated with
the SYN attack which the DUT/SUT forwards on the protected or DMZ
interface(s).

5.5.5 Reporting Format

The test SHOULD use the same reporting format as described in section
5.3.5 or 5.6.5, depending on whether testing against the baseline TCP
connection establishment rate test or the HTTP transfer rate test,
respectively.

In addition, the report MUST indicate that this is a denial of
service handling test and note the SYN attack rate, the number of TCP
SYN attack packets transmitted and the number of TCP SYN attack
packets forwarded by the DUT/SUT.
The report MUST indicate whether or not the DUT has any SYN attack
protection mechanisms enabled.

5.6 HTTP Transfer Rate

5.6.1 Objective

To determine the transfer rate of HTTP requested objects traversing
the DUT/SUT.

5.6.2 Setup Parameters

The following parameters MUST be defined for all tests:

5.6.2.1 Transport-Layer Setup Parameters

Number of connections - Defines the aggregate number of connections
attempted. The number SHOULD be a multiple of the number of virtual
clients participating in the test.

Close Method - Defines the method for closing TCP connections. The
test MUST be performed with either a three-way or four-way handshake.
In a four-way handshake, each side sends separate FIN and ACK
messages. In a three-way handshake, one side sends a combined FIN/ACK
message upon receipt of a FIN.

Close Direction - Defines whether closing of connections is to be
initiated from the client or from the server.

5.6.2.2 Application-Layer Setup Parameters

Session Type - The virtual clients/servers MUST use HTTP 1.1 or
higher.

GET requests per connection - Defines the number of HTTP 1.1 or
higher GET requests attempted per connection.

Object Size - Defines the number of bytes, excluding any bytes
associated with the HTTP header, to be transferred in response to an
HTTP 1.1 or higher GET request.

5.6.3 Procedure

Each HTTP 1.1 or higher virtual client will request one or more
objects from an HTTP 1.1 or higher server using one or more HTTP GET
requests over each connection. The aggregate number of connections
attempted, defined by the number of connections, MUST be evenly
divided among all of the participating virtual clients.

If the virtual client(s) make multiple HTTP GET requests per
connection, they MUST request the same object size for each GET
request. Multiple iterations of this test may be run with objects of
different sizes.
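The per-connection behavior described in this procedure -- several
same-sized GET requests issued over one persistent HTTP 1.1
connection -- can be sketched in Python. The loopback server, object
path and object size below are illustrative stand-ins for a real test
bed, not part of the methodology:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

OBJECT_SIZE = 1024  # payload bytes per GET response, excluding headers


class ObjectHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent connections by default

    def do_GET(self):
        body = b"x" * OBJECT_SIZE
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass


def fetch_objects(host, port, gets_per_connection):
    """One virtual client: issue several same-sized GETs over a single
    persistent HTTP/1.1 connection; return the bytes received per GET."""
    conn = http.client.HTTPConnection(host, port)
    sizes = []
    for _ in range(gets_per_connection):
        conn.request("GET", "/object")
        sizes.append(len(conn.getresponse().read()))
    conn.close()
    return sizes


# Stand-in server on an ephemeral loopback port.
server = HTTPServer(("127.0.0.1", 0), ObjectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
sizes = fetch_objects("127.0.0.1", server.server_address[1], 3)
server.shutdown()
```

A real tester would distribute the aggregate connection count evenly
across many such virtual clients rather than running a single one.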
5.6.4 Measurements

5.6.4.1 Application-Layer Measurements

Average Transfer Rate - The average transfer rate of the DUT/SUT MUST
be measured and shall be referenced to the requested object(s). The
measurement will start on transmission of the first bit of the first
requested object and end on transmission of the last bit of the last
requested object. The average transfer rate, in bits per second, will
be calculated using the following formula:

                            OBJECTS * OBJECTSIZE * 8
   TRANSFER RATE(bit/s) =  --------------------------
                                   DURATION

OBJECTS - Total number of objects successfully transferred across all
connections.

OBJECTSIZE - Object size in bytes.

DURATION - Aggregate transfer time based on the aforementioned time
references.

5.6.4.2 Measurements at or below the Transport-Layer

The following measurements SHOULD be performed for each connection-
oriented protocol:

Goodput[1] - Goodput as defined in section 3.17 of RFC 2647.

Measurements MUST only reference the protocol payload, excluding any
of the protocol header. In addition, the tester MUST exclude any bits
associated with connection establishment, connection tear down,
security associations[1] or connection maintenance[1]. Since
connection-oriented protocols require that data be acknowledged, the
offered load[6] will vary. Therefore, the tester should measure the
average forwarding rate over the duration of the test. Measurement
should start on transmission of the first bit of the payload of the
first datagram and end on transmission of the last bit of the payload
of the last datagram.

Number of bytes transferred - Total payload bytes transferred.

Number of Timeouts - Total number of timeout events.

Retransmitted bytes - Total number of retransmitted bytes.
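The transfer-rate formula above translates directly into code; the
example values in the comment are illustrative only:

```python
def transfer_rate_bps(objects, object_size, duration):
    """Average transfer rate in bits per second:
    OBJECTS * OBJECTSIZE * 8 / DURATION, with OBJECTSIZE in bytes
    and DURATION in seconds."""
    if duration <= 0:
        raise ValueError("duration must be positive")
    return objects * object_size * 8 / duration


# e.g. 100 objects of 10,000 bytes transferred in 4 s:
# 100 * 10000 * 8 / 4 = 2,000,000 bit/s
rate = transfer_rate_bps(100, 10_000, 4.0)
```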
5.6.5 Reporting Format

5.6.5.1 Application-Layer Reporting

The test report MUST note the number of GET requests per connection
and the object size(s).

The transfer rate results SHOULD be reported in tabular form with a
column for each of the object sizes tested. There SHOULD be a row for
the object size, number and percentage of completed requests, number
and percentage of completed responses, and the resultant transfer
rate for each iteration of the test.

Failure analysis:

The test report SHOULD indicate the number and percentage of HTTP GET
requests and responses that failed to complete.

Version information:

The test report MUST note the version of HTTP client(s) and
server(s).

5.6.5.2 Transport-Layer and Below Reporting

The test report MUST note the number of connections, close method,
close direction and the protocol for which the measurement was made.

The results SHOULD be reported in tabular form for each of the HTTP
object sizes tested. There SHOULD be a row for the HTTP object size,
resultant goodput, total timeouts, total retransmitted bytes and
total bytes transferred. Note that total bytes refers to total
datagram payload bytes transferred. The table MAY be combined with
the application-layer reporting, provided the table clearly
identifies the protocol for which the measurement was made.

Failure analysis:

The test report SHOULD indicate the number and percentage of
connection establishment failures as well as the number and
percentage of TCP tear down failures.

It is RECOMMENDED that the report include a graph plotting the
distribution of both connection establishment failures and connection
tear down failures. The x coordinate SHOULD be the elapsed test time,
and the y coordinate SHOULD be the number of failures for a given
sampling period.
There SHOULD be two lines on the graph, one for connection failures
and one for tear down failures. The graph MUST note the sampling
period.

5.7 Maximum HTTP Transaction Rate

5.7.1 Objective

To determine the maximum transaction rate the DUT/SUT can sustain.
This test is intended to find the maximum rate at which users can
access objects.

5.7.2 Setup Parameters

5.7.2.1 Transport-Layer Setup Parameters

Close Method - Defines the method for closing TCP connections. The
test MUST be performed with either a three-way or four-way handshake.
In a four-way handshake, each side sends separate FIN and ACK
messages. In a three-way handshake, one side sends a combined FIN/ACK
message upon receipt of a FIN.

Close Direction - Defines whether closing of connections is to be
initiated from the client or from the server.

5.7.2.2 Application-Layer Setup Parameters

Session Type - HTTP 1.1 or higher MUST be used for this test.

Test Duration - Time, expressed in seconds, for which the virtual
client(s) will sustain the attempted GET request rate. It is
RECOMMENDED that the duration be at least 30 seconds.

Requests per connection - Number of object requests per connection.

Object Size - Defines the number of bytes, excluding any bytes
associated with the HTTP header, to be transferred in response to an
HTTP 1.1 or higher GET request.

5.7.3 Procedure

An iterative search algorithm MAY be used to determine the maximum
transaction rate that the DUT/SUT can sustain.

For each iteration, HTTP 1.1 or higher virtual client(s) will vary
the aggregate GET request rate offered to HTTP 1.1 or higher
server(s). The virtual client(s) will maintain the offered request
rate for the defined test duration.

If the virtual client(s) make multiple HTTP GET requests per
connection, they MUST request the same object size for each GET
request.
Multiple tests MAY be performed with different object sizes.

5.7.4 Measurements

Maximum Transaction Rate - The maximum rate at which all transactions
-- that is, all request/response cycles -- are completed.

Transaction Time - The tester SHOULD measure the minimum, maximum and
average transaction times. The transaction time will start when the
virtual client issues the GET request and end when the requesting
virtual client receives the last bit of the requested object.

5.7.5 Reporting Format

5.7.5.1 Application-Layer Reporting

The test report MUST note the test duration, object size, requests
per connection, the maximum transaction rate and the measured
minimum, maximum and average transaction times.

The intermediate results of the search algorithm MAY be reported in a
table format with a column for each iteration. There SHOULD be rows
for the GET request attempt rate, number of requests attempted,
number and percentage of requests completed, number of responses
attempted, number and percentage of responses completed, minimum
transaction time, average transaction time and maximum transaction
time.

Version information:

The test report MUST note the version of HTTP client(s) and
server(s).

5.7.5.2 Transport-Layer Reporting

The test report MUST note the close method, close direction, number
of connections established and number of connections torn down.

The intermediate results of the search algorithm MAY be reported in a
table format with a column for each iteration. There SHOULD be rows
for the number of connections attempted, number and percentage of
connections completed, and number and percentage of connection tear
downs completed. The table MAY be combined with the application-layer
reporting, provided the table identifies this as a transport-layer
measurement.
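The iterative search referred to in the procedure is deliberately left
unspecified. One possible realization is a binary search over the
attempt rate, sketched below with a stand-in trial function in place
of a real DUT/SUT iteration:

```python
def find_max_rate(trial, low, high, resolution=1.0):
    """Binary search for the highest offered rate at which a trial passes.

    `trial(rate)` runs one test iteration at the given attempt rate and
    returns True when every transaction completed. `low` is a known-good
    rate, `high` a known-failing one; the bracket is narrowed until it
    is tighter than `resolution`.
    """
    while high - low > resolution:
        mid = (low + high) / 2
        if trial(mid):
            low = mid    # rate sustained: search higher
        else:
            high = mid   # failures observed: search lower
    return low


# Illustrative stand-in: pretend the device sustains anything up to
# 1500 transactions per second.
max_rate = find_max_rate(lambda rate: rate <= 1500, low=0, high=4000)
```

The same search skeleton applies to the connection establishment and
tear down rate tests in sections 5.3 and 5.4, with a different trial
body.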
5.8 Illegal Traffic Handling

5.8.1 Objective

To characterize the behavior of the DUT/SUT when presented with a
combination of both legal and Illegal[1] traffic. Note that Illegal
traffic does not refer to an attack, but to traffic which has been
explicitly defined by a rule(s) to be dropped.

5.8.2 Setup Parameters

Setup parameters will use the same parameters as specified in the
HTTP transfer rate test (Section 5.6.2). In addition, the following
setup parameters MUST be defined:

Illegal traffic percentage - Percentage of HTTP 1.1 or higher
connections which have been explicitly defined in a rule(s) to be
dropped.

5.8.3 Procedure

Each HTTP 1.1 or higher client will request one or more objects from
an HTTP 1.1 or higher server using one or more HTTP GET requests over
each connection. The aggregate number of connections attempted,
defined by the number of connections, MUST be evenly divided among
all of the participating virtual clients.

The virtual client(s) MUST offer the connection requests, both legal
and illegal, in an evenly distributed manner. Many firewalls have the
capability to filter on different traffic criteria (IP addresses,
port numbers, etc.). Multiple iterations of this test MAY be run with
the DUT/SUT configured to filter on different traffic criteria.

5.8.4 Measurements

The same measurements as defined in the HTTP transfer rate test
(Section 5.6.4) SHOULD be performed. Any forwarding rate measurements
MUST only include bits which are associated with legal traffic.

5.8.5 Reporting Format

The test reporting format SHOULD be the same as specified in the HTTP
transfer rate test (Section 5.6.5).

In addition, the report MUST note the percentage of illegal HTTP
connections.

Failure analysis:

The test report MUST note the number and percentage of illegal
connections that were allowed by the DUT/SUT.
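The even distribution of legal and illegal connection requests can be
realized in several ways; the integer error-accumulation scheme below
is one illustrative choice, not mandated by this document:

```python
def interleave_traffic(total_connections, illegal_percentage):
    """Build a schedule of 'legal'/'illegal' connection labels in which
    the illegal connections are spread uniformly through the offered
    load rather than bunched together.

    Uses integer error accumulation (Bresenham-style) so the result is
    exact and deterministic."""
    illegal = total_connections * illegal_percentage // 100
    schedule = []
    acc = 0
    for _ in range(total_connections):
        acc += illegal
        if acc >= total_connections:
            schedule.append("illegal")
            acc -= total_connections
        else:
            schedule.append("legal")
    return schedule


# 10 connections at 20% illegal: every fifth connection is illegal.
schedule = interleave_traffic(10, 20)
```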
5.9 IP Fragmentation Handling

5.9.1 Objective

To determine the performance impact when the DUT/SUT is presented
with IP fragmented[5] traffic. IP packets which have been fragmented,
due to crossing a network that supports a smaller MTU (Maximum
Transmission Unit) than the actual IP packet, may require the
firewall to perform re-assembly prior to the rule set being applied.

While IP fragmentation is a common form of attack, either on the
firewall itself or on internal hosts, this test focuses on
determining the impact that the additional processing associated with
re-assembly of the packets has on the forwarding rate of the DUT/SUT.
RFC 1858 addresses some fragmentation attacks that get around IP
filtering processes used in routers and hosts.

5.9.2 Setup Parameters

The following parameters MUST be defined.

5.9.2.1 Non-Fragmented Traffic Parameters

Setup parameters will be the same as defined in the HTTP transfer
rate test (Sections 5.6.2.1 and 5.6.2.2).

5.9.2.2 Fragmented Traffic Parameters

Packet size - Number of bytes in the IP/UDP packet, exclusive of
link-layer headers and checksums, prior to fragmentation.

MTU - Maximum transmission unit, expressed in bytes. For testing
purposes, this MAY be configured to values smaller than the MTU
supported by the link layer.

Intended Load - Intended load, expressed as a percentage of media
utilization.

5.9.3 Procedure

Each HTTP 1.1 or higher client will request one or more objects from
an HTTP 1.1 or higher server using one or more HTTP GET requests over
each connection. The aggregate number of connections attempted,
defined by the number of connections, MUST be evenly divided among
all of the participating virtual clients. If the virtual client(s)
make multiple HTTP GET requests per connection, they MUST request the
same object size for each GET request.
A tester attached to the unprotected side of the network will offer a
unidirectional stream of unicast fragmented IP/UDP traffic, targeting
a server attached to either the protected or DMZ segment. The tester
MUST offer the unidirectional stream over the duration of the test --
that is, the duration over which the HTTP traffic is being offered.

Baseline measurements SHOULD be performed with IP filtering deny
rule(s) to filter the fragmented traffic. If the DUT/SUT has logging
capability, the log SHOULD be checked to determine if it contains the
correct information regarding the fragmented traffic.

The test SHOULD be repeated with the DUT/SUT rule set changed to
allow the fragmented traffic through. When running multiple
iterations of the test, it is RECOMMENDED to vary the MTU while
keeping all other parameters constant.

Then set up the DUT/SUT with the policy or rule set that the
manufacturer requires to be defined to protect against fragmentation
attacks, and repeat the measurements outlined in the baseline
procedures.

5.9.4 Measurements

The tester SHOULD perform the same measurements as defined in the
HTTP transfer rate test (Section 5.6.4). In addition:

Transmitted UDP/IP Packets - Number of UDP/IP packets transmitted by
the client.

Received UDP/IP Packets - Number of UDP/IP packets received by the
server.

5.9.5 Reporting Format

5.9.5.1 Non-Fragmented Traffic

The test report SHOULD be the same as described in section 5.6.5.
Note that any forwarding rate measurements for the HTTP traffic
exclude any bits associated with the fragmented traffic which may be
forwarded by the DUT/SUT.

5.9.5.2 Fragmented Traffic

The test report MUST note the packet size, MTU size, intended load,
number of UDP/IP packets transmitted and number of UDP/IP packets
forwarded. The test report SHOULD also note whether or not the
DUT/SUT forwarded the offered UDP/IP traffic fragmented.
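How an offered IP/UDP packet larger than the configured MTU splits
into fragments can be sketched as follows. A 20-byte IP header with
no options is assumed, and every fragment except the last must carry
a payload that is a multiple of 8 bytes, as IP fragmentation
requires:

```python
IP_HEADER = 20  # bytes, assuming no IP options


def fragment_sizes(packet_size, mtu):
    """Return the IP payload length of each fragment of a packet.

    `packet_size` includes the 20-byte IP header. Each fragment gets
    its own IP header on the wire, so each fragment's payload is at
    most (mtu - 20) bytes, rounded down to a multiple of 8 for all
    fragments but the last."""
    payload = packet_size - IP_HEADER
    max_frag = (mtu - IP_HEADER) // 8 * 8  # largest 8-byte-aligned payload
    sizes = []
    while payload > max_frag:
        sizes.append(max_frag)
        payload -= max_frag
    sizes.append(payload)
    return sizes


# A 1420-byte packet crossing a 576-byte MTU link splits into three
# fragments carrying 552, 552 and 296 payload bytes.
frags = fragment_sizes(1420, 576)
```

Varying the MTU parameter, as the procedure recommends, changes both
the fragment count and the per-fragment reassembly work the DUT/SUT
must perform.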
5.10 Latency

5.10.1 Objective

To determine the latency of network-layer or application-layer data
traversing the DUT/SUT. RFC 1242 [3] defines latency.

5.10.2 Setup Parameters

The following parameters MUST be defined:

5.10.2.1 Network-Layer Measurements

Packet size, expressed as the number of bytes in the IP packet,
exclusive of link-layer headers and checksums.

Intended load, expressed as a percentage of media utilization.

Test duration, expressed in seconds.

Test instruments MUST generate packets with unique timestamp
signatures.

5.10.2.2 Application-Layer Measurements

Object Size - Defines the number of bytes, excluding any bytes
associated with the HTTP header, to be transferred in response to an
HTTP 1.1 or higher GET request. Testers SHOULD use the minimum object
size supported by the media, but MAY use other object sizes as well.

Connection type. The tester MUST use one HTTP 1.1 or higher
connection for latency measurements.

Number of objects requested.

Number of objects transferred.

Test duration, expressed in seconds.

Test instruments MUST generate packets with unique timestamp
signatures.

5.10.3 Network-Layer Procedure

A client will offer a unidirectional stream of unicast packets to a
server. The packets MUST use a connectionless protocol like IP or
UDP/IP.

The tester MUST offer packets in a steady state. As noted in the
latency discussion in RFC 2544 [4], latency measurements MUST be
taken at the throughput level -- that is, at the highest offered load
with zero packet loss. Measurements taken at the throughput level are
the only ones that can legitimately be termed latency.

It is RECOMMENDED that implementers use offered loads not only at the
throughput level, but also at load levels that are less than or
greater than the throughput level.
To avoid confusion with existing terminology, measurements from such
tests MUST be labeled as delay rather than latency.

It is RECOMMENDED to perform the latency measurements with different
packet sizes. When testing with different packet sizes, the DUT/SUT
configuration MUST remain the same.

If desired, the tester MAY use a step test in which offered loads
increment or decrement through a range of load levels.

The duration of the test portion of each trial MUST be at least 30
seconds.

5.10.4 Application-Layer Procedure

An HTTP 1.1 or higher client will request one or more objects from an
HTTP 1.1 or higher server using one or more HTTP GET requests. If the
tester makes multiple HTTP GET requests, it MUST request the
same-sized object each time. Testers may run multiple iterations of
this test with objects of different sizes.

Implementers MAY configure the tester to run for a fixed duration. In
this case, the tester MUST report the number of objects requested and
returned for the duration of the test. For fixed-duration tests it is
RECOMMENDED that the duration be at least 30 seconds.

5.10.5 Measurements

Minimum delay - The smallest delay incurred by data traversing the
DUT/SUT at the network layer or application layer, as appropriate.

Maximum delay - The largest delay incurred by data traversing the
DUT/SUT at the network layer or application layer, as appropriate.

Average delay - The mean of all measurements of delay incurred by
data traversing the DUT/SUT at the network layer or application
layer, as appropriate.

Delay distribution - A set of histograms of all delay measurements
observed for data traversing the DUT/SUT at the network layer or
application layer, as appropriate.

5.10.6 Network-Layer Reporting Format

The test report MUST note the packet size(s), offered load(s) and
test duration used.
The latency results SHOULD be reported in the format of a table with
a row for each of the tested packet sizes. There SHOULD be columns
for the packet size, the intended rate, the offered rate, and the
resultant latency or delay values for each test.

5.10.7 Application-Layer Reporting Format

The test report MUST note the object size(s) and the number of
requests and responses completed. The report MUST note the test
duration if a fixed duration was used.

The latency results SHOULD be reported in the format of a table with
a row for each of the object sizes. There SHOULD be columns for the
object size, the number of completed requests, the number of
completed responses, and the resultant latency or delay values for
each test.

Failure analysis:

The test report SHOULD indicate the number and percentage of HTTP GET
requests or responses that failed to complete within the test
duration.

Version information:

The test report MUST note the version of HTTP client and server.

6. References

[1] Newman, D., "Benchmarking Terminology for Firewall Devices",
    RFC 2647, August 1999.

[2] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L.,
    Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol --
    HTTP/1.1", RFC 2616, June 1999.

[3] Bradner, S., Editor, "Benchmarking Terminology for Network
    Interconnection Devices", RFC 1242, July 1991.

[4] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network
    Interconnect Devices", RFC 2544, March 1999.

[5] Clark, D., "IP Datagram Reassembly Algorithm", RFC 815,
    July 1982.

[6] Mandeville, R., "Benchmarking Terminology for LAN Switching
    Devices", RFC 2285, February 1998.

[7] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN
    Switching Devices", RFC 2889, August 2000.

[8] Postel, J.
(ed.), "Internet Protocol - DARPA Internet Program Protocol
    Specification", RFC 793, USC/Information Sciences Institute,
    September 1981.

7. Security Considerations

The primary goal of this document is to provide methodologies for
benchmarking firewall performance. While there is some overlap
between performance and security issues, assessment of firewall
security is outside the scope of this document.

8. Acknowledgement

Funding for the RFC Editor function is currently provided by the
Internet Society.

9. Authors' Addresses

Brooks Hickman
Spirent Communications
26750 Agoura Road
Calabasas, CA 91302
USA

Phone: + 1 818 676 2412
Email: brooks.hickman@spirentcom.com

David Newman
Network Test Inc.
31324 Via Colinas, Suite 113
Westlake Village, CA 91362-6761
USA

Phone: + 1 818 889-0011
Email: dnewman@networktest.com

Saldju Tadjudin
Spirent Communications
26750 Agoura Road
Calabasas, CA 91302
USA

Phone: + 1 818 676 2468
Email: saldju.Tadjudin@spirentcom.com

Terry Martin
GVNW Consulting Inc.
8050 SW Warm Springs Road
Tualatin, OR 97062
USA

Phone: + 1 503 612 4422
Email: tmartin@gvnw.com

APPENDIX A: HTTP (HyperText Transfer Protocol)

The most common versions of HTTP in use today are HTTP/1.0 and
HTTP/1.1, with the main difference being in regard to persistent
connections. HTTP 1.0, by default, does not support persistent
connections: a separate TCP connection is opened for each GET request
the client wants to initiate and closed after the requested object
transfer is completed. While some HTTP/1.0 implementations support
persistence through the use of a keep-alive mechanism, there is no
official specification for how the keep-alive operates.
In addition, HTTP 1.0 proxies do not support persistent connections,
as they do not recognize the Connection header.

HTTP/1.1, by default, does support persistent connections and is
therefore the version that is referenced in this methodology. Proxy
based DUT/SUTs may monitor the TCP connection and, after a timeout,
close the connection if no activity is detected. The duration of this
timeout is not defined in the HTTP/1.1 specification and will vary
between DUT/SUTs. If the DUT/SUT closes inactive connections, the
aging timer on the DUT SHOULD be configured for a duration that
exceeds the test time.

While this document cannot foresee future changes to HTTP and their
impact on the methodologies defined herein, such changes should be
accommodated so that newer versions of HTTP may be used in
benchmarking firewall performance.

APPENDIX B: Connection Establishment Time Measurements

Some connection oriented protocols, such as TCP, involve an odd
number of messages when establishing a connection. In the case of
proxy based DUT/SUTs, the DUT/SUT will terminate the connection,
setting up a separate connection to the server. Since, in such cases,
the tester does not own both sides of the connection, measurements
will be made in two different ways. While the following describes the
measurements with reference to TCP, the methodology may be used with
other connection oriented protocols which involve an odd number of
messages.

When testing non-proxy based DUT/SUTs, the establishment time shall
be directly measured and is considered to be from the time the first
bit of the first SYN packet is transmitted by the client to the time
the last bit of the final ACK in the three-way handshake is received
by the target server.
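For the non-proxy case, a tester can approximate this interval by
timing the client's connect() call, as in the sketch below. A
loopback listener stands in for the target server; note that
connect() returns when the client sends the final ACK, marginally
before the server receives it, so this is an approximation of the
interval defined above:

```python
import socket
import time

# Listener standing in for the target server behind the DUT/SUT.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

start = time.perf_counter()              # ~ first bit of the SYN
client = socket.socket()
client.connect(listener.getsockname())   # returns once the handshake
                                         # completes on the client side
establishment_time = time.perf_counter() - start

server_side, _ = listener.accept()
server_side.close()
client.close()
listener.close()
```

A production tester would instead capture the SYN and final ACK on
the wire so that the interval ends when the ACK reaches the server,
as the definition above requires.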
If the DUT/SUT is proxy based, the connection establishment time is
considered to be from the time the first bit of the first SYN packet
is transmitted by the client to the time the client transmits the
first bit of the first acknowledged TCP datagram (t4-t0 in the
following timeline).

t0: Client sends a SYN.
t1: Proxy sends a SYN/ACK.
t2: Client sends the final ACK.
t3: Proxy establishes a separate connection with the server.
t4: Client sends a TCP datagram to the server.
*t5: Proxy sends an ACK of the datagram to the client.

* While t5 is not considered part of the TCP connection
establishment, the acknowledgement of t4 must be received for the
connection to be considered successful.

APPENDIX C: Connection Tear Down Time Measurements

While TCP connections are full duplex, tearing down of such
connections is performed in a simplex fashion -- that is, FIN
segments are sent by each host/device terminating each side of the
TCP connection.

When making connection tear down time measurements, such measurements
will be made from the perspective of the entity -- that is, the
virtual client or server -- initiating the connection tear down
request. In addition, the measurement will be performed in the same
manner, independent of whether or not the DUT/SUT is proxy-based. The
connection tear down time will be considered the interval between the
transmission of the first bit of the first TCP FIN packet transmitted
by the virtual client or server, whichever is applicable, requesting
a connection tear down to receipt of the last bit of the
corresponding ACK packet on the same virtual client/server interface.

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph are
included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.

The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.