1 Benchmarking Working Group Brooks Hickman 2 Internet-Draft Spirent Communications 3 Expiration Date: December 2002 David Newman 4 Network Test 5 Saldju Tadjudin 6 Spirent Communications 7 Terry Martin 8 GVNW Consulting Inc 9 June 2002 11 Benchmarking Methodology for Firewall Performance 12 14 Status of this Memo 16 This document is an Internet-Draft and is in full conformance with 17 all provisions of Section 10 of RFC2026. 19 Internet-Drafts are working documents of the Internet Engineering 20 Task Force (IETF), its areas, and its working groups. Note that 21 other groups may also distribute working documents as Internet- 22 Drafts.
24 Internet-Drafts are draft documents valid for a maximum of six 25 months and may be updated, replaced, or obsoleted by other documents 26 at any time. It is inappropriate to use Internet-Drafts as 27 reference material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html. 35 Copyright Notice 37 Copyright (C) The Internet Society (2002). All Rights Reserved. 39 Abstract 41 This document discusses and defines a number of tests that may be 42 used to describe the performance characteristics of firewalls. In 43 addition to defining the tests, this document also describes specific 44 formats for reporting the results of the tests. 46 This document is a product of the Benchmarking Methodology Working 47 Group (BMWG) of the Internet Engineering Task Force (IETF). 49 Table of Contents 51 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 52 2. Requirements . . . . . . . . . . . . . . . . . . . . . . . . 2 53 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 54 4. Test setup . . . . . . . . . . . . . . . . . . . . . . . . . 3 55 4.1 Test Considerations . . . . . . . . . . . . . . . . . . 4 56 4.2 Virtual Client/Servers . . . . . . . . . . . . . . . . . 4 57 4.3 Test Traffic Requirements . . . . . . . . . . . . . . . . 4 58 4.4 DUT/SUT Traffic Flows . . . . . . . . . . . . . . . . . . 5 59 4.5 Multiple Client/Server Testing . . . . . . . . . . . . . 5 60 4.6 NAT(Network Address Translation) . . . . . . . . . . . . 5 61 4.7 Rule Sets . . . . . . . . . . . . . . . . . . . . . . . . 6 62 4.8 Web Caching . . . . . . . . . . . . . . . . . . . . . . . 6 63 4.9 Authentication . . . . . . . . . . . . . . . . . . . . . 6 64 5. Benchmarking Tests . . . . . . . . . . . . . . . . . . . . . 6 65 5.1 IP throughput . . . . . . . . . . . . . . . . . . . . . .
6 66 5.2 Concurrent TCP Connection Capacity . . . . . . . . . . . 8 67 5.3 Maximum TCP Connection Establishment Rate . . . . . . . . 10 68 5.4 Maximum TCP Connection Tear Down Rate . . . . . . . . . . 12 69 5.5 Denial Of Service Handling . . . . . . . . . . . . . . . 14 70 5.6 HTTP Transfer Rate . . . . . . . . . . . . . . . . . . . 15 71 5.7 HTTP Concurrent Transaction Capacity . . . . . . . . . . 17 72 5.8 HTTP Transaction Rate . . . . . . . . . . . . . . . . . . 18 73 5.9 Illegal Traffic Handling . . . . . . . . . . . . . . . . 20 74 5.10 IP Fragmentation Handling . . . . . . . . . . . . . . . 21 75 5.11 Latency . . . . . . . . . . . . . . . . . . . . . . . . 23 76 6. References . . . . . . . . . . . . . . . . . . . . . . . . . 25 77 7. Security Considerations . . . . . . . . . . . . . . . . . . 26 78 8. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . 26 79 9. Authors' Addresses . . . . . . . . . . . . . . . . . . . . . 26 80 Appendix A - HyperText Transfer Protocol(HTTP) . . . . . . . . 27 81 Appendix B - Connection Establishment Time Measurements . . . . 27 82 Appendix C - Connection Tear Down Time Measurements . . . . . . 28 83 Full Copyright Statement . . . . . . . . . . . . . . . . . . . 28 85 1. Introduction 87 This document provides methodologies for the performance 88 benchmarking of firewalls. It provides methodologies in four areas: 89 forwarding, connection, latency and filtering. In addition to 90 defining the tests, this document also describes specific formats 91 for reporting the results of the tests. 93 A previous document, "Benchmarking Terminology for Firewall 94 Performance" [1], defines many of the terms that are used in this 95 document. The terminology document SHOULD be consulted before 96 attempting to make use of this document. 98 2. Requirements 100 In this document, the words that are used to define the significance 101 of each particular requirement are capitalized.
These words are: 103 * "MUST" This word, or the words "REQUIRED" and "SHALL" mean that 104 the item is an absolute requirement of the specification. 106 * "SHOULD" This word or the adjective "RECOMMENDED" means that there 107 may exist valid reasons in particular circumstances to ignore this 108 item, but the full implications should be understood and the case 109 carefully weighed before choosing a different course. 111 * "MAY" This word or the adjective "OPTIONAL" means that this item 112 is truly optional. One vendor may choose to include the item 113 because a particular marketplace requires it or because it 114 enhances the product, for example; another vendor may omit the 115 same item. 117 An implementation is not compliant if it fails to satisfy one or more 118 of the MUST requirements for the protocols it implements. An 119 implementation that satisfies all the MUST and all the SHOULD 120 requirements for its protocols is said to be "unconditionally 121 compliant"; one that satisfies all the MUST requirements but not all 122 the SHOULD requirements for its protocols is said to be 123 "conditionally compliant". 125 3. Scope 127 Firewalls can provide a single point of defense between networks. 128 Usually, a firewall protects private networks from the public or 129 shared networks to which it is connected. A firewall can be as 130 simple as a device that filters different packets or as complex 131 as a group of devices that combine packet filtering and 132 application-level proxy or network translation services. This document 133 will focus on developing benchmark testing of DUT/SUTs, wherever 134 possible, independent of their implementation. 136 4. Test Setup 138 Test configurations defined in this document will be confined to 139 dual-homed and tri-homed as shown in figure 1 and figure 2 140 respectively. 142 Firewalls employing dual-homed configurations connect two networks.
143 One interface of the firewall is attached to the unprotected 144 network, typically the public network(Internet). The other interface 145 is connected to the protected network, typically the internal LAN. 147 In the case of dual-homed configurations, servers which are made 148 accessible to the public(Unprotected) network are attached to the 149 private(Protected) network. 151 +----------+ +----------+ 152 | | | +----------+ | | | 153 | Servers/ |----| | | |------| Servers/ | 154 | Clients | | | | | | Clients | 155 | | |-------| DUT/SUT |--------| | | 156 +----------+ | | | | +----------+ 157 Protected | +----------+ | Unprotected 158 Network | | Network 159 Figure 1(Dual-Homed) 161 Tri-homed[1] configurations employ a third segment called a 162 Demilitarized Zone(DMZ). With tri-homed configurations, servers 163 accessible to the public network are attached to the DMZ. Tri-Homed 164 configurations offer additional security by separating server(s) 165 accessible to the public network from internal hosts. 167 +----------+ +----------+ 168 | | | +----------+ | | | 169 | Clients |----| | | |------| Servers/ | 170 | | | | | | | Clients | 171 +----------+ |-------| DUT/SUT |--------| | | 172 | | | | +----------+ 173 | +----------+ | 174 Protected | | | Unprotected 175 Network | Network 176 | 177 | 178 ----------------- 179 | DMZ 180 | 181 | 182 +-----------+ 183 | | 184 | Servers | 185 | | 186 +-----------+ 188 Figure 2(Tri-Homed) 190 4.1 Test Considerations 192 4.2 Virtual Clients/Servers 194 Since firewall testing may involve data sources which emulate 195 multiple users or hosts, the methodology uses the terms virtual 196 clients/servers. For these firewall tests, virtual clients/servers 197 specify application layer entities which may not be associated with 198 a unique physical interface. For example, four virtual clients may 199 originate from the same data source[1]. 
The test report SHOULD 200 indicate the number of virtual clients and virtual servers 201 participating in the test. 203 Testers MUST synchronize all data sources participating in a test. 205 4.3 Test Traffic Requirements 207 While the function of a firewall is to enforce access control 208 policies, the criteria by which those policies are defined vary 209 depending on the implementation. Firewalls may use network layer, 210 transport layer or, in many cases, application-layer criteria to 211 make access-control decisions. 213 For the purposes of benchmarking firewall performance, this document 214 references HTTP 1.1 or higher as the application layer entity, 215 although the methodologies may be used as a template for 216 benchmarking with other applications. Since testing may involve 217 proxy based DUT/SUTs, HTTP version considerations are discussed in 218 appendix A. 220 4.4 DUT/SUT Traffic Flows 222 Since the number of interfaces is not fixed, the traffic flows will 223 be dependent upon the configuration used in benchmarking the 224 DUT/SUT. Note that the term "traffic flows" is associated with 225 client-to-server requests. 227 For Dual-Homed configurations, there are two unique traffic flows: 229 Client Server 230 ------ ------ 231 Protected -> Unprotected 232 Unprotected -> Protected 234 For Tri-Homed configurations, there are three unique traffic flows: 236 Client Server 237 ------ ------ 238 Protected -> Unprotected 239 Protected -> DMZ 240 Unprotected -> DMZ 242 4.5 Multiple Client/Server Testing 244 One or more clients may target multiple servers for a given 245 application. Each virtual client MUST initiate connections in a 246 round-robin fashion. For example, if the test consisted of six 247 virtual clients targeting three servers, the pattern would be as 248 follows: 250 Client Target Server(In order of request) 251 #1 1 2 3 1... 252 #2 2 3 1 2... 253 #3 3 1 2 3... 254 #4 1 2 3 1... 255 #5 2 3 1 2... 256 #6 3 1 2 3...
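The round-robin targeting pattern shown in the table above can be generated mechanically. A minimal sketch (the function name and signature are illustrative, not part of this methodology):

```python
def target_pattern(num_clients, num_servers, num_requests):
    """Return, per virtual client, the ordered list of target servers.

    Client i starts at server ((i - 1) mod num_servers) + 1 and advances
    round-robin, reproducing the six-client/three-server table above.
    """
    pattern = {}
    for client in range(1, num_clients + 1):
        start = (client - 1) % num_servers
        pattern[client] = [
            (start + k) % num_servers + 1 for k in range(num_requests)
        ]
    return pattern
```

For six virtual clients and three servers, `target_pattern(6, 3, 4)` yields the same per-client request order as the table.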
258 4.6 Network Address Translation(NAT) 260 Many firewalls implement network address translation(NAT), a 261 function which translates internal host IP addresses attached to 262 the protected network to a virtual IP address for communicating 263 across the unprotected network(Internet). This involves additional 264 processing on the part of the DUT/SUT and may impact performance. 265 Therefore, tests SHOULD be run with NAT disabled and NAT enabled 266 to determine the performance differentials. The test report MUST 267 indicate whether NAT was enabled or disabled. 269 4.7 Rule Sets 271 Rule sets[1] are a collection of access control policies that 272 determine which packets the DUT/SUT will forward and which it will 273 reject[1]. Since the criteria by which these access control policies 274 may be defined will vary depending on the capabilities of the DUT/SUT, 275 the following is limited to providing guidelines for configuring 276 rule sets when benchmarking the performance of the DUT/SUT. 278 It is RECOMMENDED that a rule be entered for each host(Virtual 279 client). In addition, testing SHOULD be performed using different 280 size rule sets to determine their impact on the performance of the 281 DUT/SUT. Rule sets MUST be configured such that rules 282 associated with actual test traffic are configured at the end of the 283 rule set and not at the beginning. 285 The DUT/SUT SHOULD be configured to deny access to all traffic 286 which was not previously defined in the rule set. The test report 287 SHOULD include the DUT/SUT configured rule set(s). 289 4.8 Web Caching 291 Some firewalls include caching agents to reduce network load. When 292 making a request through a caching agent, the caching agent attempts 293 to service the response from its internal memory. The cache itself 294 saves responses it receives, such as responses for HTTP GET 295 requests. Testing SHOULD be performed with any caching agents on the 296 DUT/SUT disabled.
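The rule set guidance above (one rule per virtual client, test-traffic rules at the end, default deny) can be sketched as follows. The tuple representation and field names are hypothetical; real DUT/SUT rule syntax varies by vendor:

```python
def build_rule_set(virtual_clients, filler_rules):
    """Order rules as recommended: filler rules that never match the
    test traffic come first, one permit rule per virtual client comes
    at the end, followed by an explicit default deny."""
    rules = list(filler_rules)                 # non-matching rules first
    for client_ip in virtual_clients:          # one rule per host
        rules.append(("permit", client_ip, "any"))
    rules.append(("deny", "any", "any"))       # deny all other traffic
    return rules
```

Varying the length of `filler_rules` between runs exercises the impact of rule set size on DUT/SUT performance.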
298 4.9 Authentication 300 Access control may involve authentication processes such as user, 301 client or session authentication. Authentication is usually 302 performed by devices external to the firewall itself, such as an 303 authentication server(s), and may add to the latency of the system. 304 Any authentication processes MUST be included as part of the connection 305 setup process. 307 5. Benchmarking Tests 309 5.1 IP Throughput 311 5.1.1 Objective 313 To determine the throughput of network-layer data traversing 314 the DUT/SUT, as defined in RFC1242[1]. Note that while RFC1242 315 uses the term frames, which is associated with the link layer, the 316 procedure uses the term packets, since it is referencing the 317 network layer. This test is intended to baseline the ability of 318 the DUT/SUT to forward packets at the network layer. 320 5.1.2 Setup Parameters 322 The following parameters MUST be defined: 324 Packet size - Number of bytes in the IP packet, exclusive of any 325 link layer header or checksums. 327 Test Duration - Duration of the test, expressed in seconds. 329 5.1.3 Procedure 331 The tester will offer client/server traffic to the DUT/SUT, 332 consisting of unicast IP packets. The tester MUST offer the packets 333 at a constant rate. The test MAY consist of either bi-directional or 334 unidirectional traffic, with the client offering a unicast stream of 335 packets to the server for the latter. 337 The test MAY employ an iterative search algorithm. Each iteration 338 will involve the tester varying the intended load until the maximum 339 rate, at which no packet loss occurs, is found. Since backpressure 340 mechanisms may be employed, resulting in the intended load and 341 offered load being different, the test SHOULD be performed in either 342 a packet based or time based manner as described in RFC2889[7]. As 343 with RFC1242, the term packet is used in place of frame.
The 344 duration of the test portion of each trial MUST be at least 30 345 seconds. 347 When comparing DUT/SUTs with different MTUs, it is RECOMMENDED to 348 limit the maximum IP packet size tested to the maximum MTU supported 349 by all of the DUT/SUTs. 351 5.1.4 Measurement 353 5.1.4.1 Network Layer 355 Throughput - Maximum offered load, expressed in either bits per 356 second or packets per second, at which no packet loss is detected. 358 Forwarding Rate - Forwarding rate, expressed in either bits per 359 second or packets per second, the device is observed to 360 successfully forward to the correct destination interface in 361 response to a specified offered load. 363 5.1.5 Reporting Format 365 The test report MUST note the packet size(s), test duration, 366 throughput and forwarding rate. If the test involved offering 367 packets which target more than one segment(Protected, Unprotected 368 or DMZ), the report MUST identify the results as an aggregate 369 throughput measurement. 371 The throughput results SHOULD be reported in the format of a table 372 with a row for each of the tested packet sizes. There SHOULD be 373 columns for the packet size, the intended load, the offered load, 374 resultant throughput and forwarding rate for each test. 376 A log file MAY be generated which includes the packet size, test 377 duration and for each iteration: 379 - Step Iteration 380 - Pass/Fail Status 381 - Total packets offered 382 - Total packets forwarded 383 - Intended load 384 - Offered load(If applicable) 385 - Forwarding rate 387 5.2 Concurrent TCP Connection Capacity 389 5.2.1 Objective 391 To determine the maximum number of concurrent TCP connections 392 supported through or with the DUT/SUT, as defined in RFC2647[1].
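The iterative search procedure used in 5.1.3, and reused by the capacity and rate tests that follow, is commonly implemented as a binary search over the varied parameter. A minimal sketch, where `run_trial` is a hypothetical tester hook returning the number of packets (or connections) lost at a given intended load:

```python
def iterative_search(run_trial, low, high, resolution=1.0):
    """Binary-search the highest load at which run_trial reports zero
    loss.  run_trial(load) -> units lost at that intended load."""
    best = 0.0
    while high - low > resolution:
        load = (low + high) / 2.0
        if run_trial(load) == 0:
            best, low = load, load      # trial passed: search higher
        else:
            high = load                 # trial failed: search lower
    return best
```

Each call to `run_trial` would correspond to one full trial (at least 30 seconds of test traffic, plus any age-time wait between iterations).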
394 5.2.2 Setup Parameters 396 The following parameters MUST be defined for all tests: 398 5.2.2.1 Transport-Layer Setup Parameters 400 Connection Attempt Rate - The aggregate rate, expressed in 401 connections per second, at which new TCP connection requests are 402 attempted. The rate SHOULD be set at or lower than the maximum 403 rate at which the DUT/SUT can accept connection requests. 405 Age Time - The time, expressed in seconds, the DUT/SUT will keep a 406 connection in its connection table after receiving a TCP FIN or RST 407 packet. 409 5.2.2.2 Application-Layer Setup Parameters 411 Validation Method - HTTP 1.1 or higher MUST be used for this test. 413 Object Size - Defines the number of bytes, excluding any bytes 414 associated with the HTTP header, to be transferred in response to an 415 HTTP 1.1 or higher GET request. 417 5.2.3 Procedure 419 An iterative search algorithm MAY be used to determine the maximum 420 number of concurrent TCP connections supported through or with the 421 DUT/SUT. 423 For each iteration, the aggregate number of concurrent TCP 424 connections attempted by the virtual client(s) will be varied. The 425 destination address will be that of the server or that of the NAT 426 proxy. The aggregate rate will be defined by connection attempt 427 rate, and will be attempted in a round-robin fashion(See 4.5). 429 To validate all connections, the virtual client(s) MUST request an 430 object using an HTTP 1.1 or higher GET request. The requests MUST be 431 initiated on each connection after all of the TCP connections have 432 been established. 434 When testing proxy-based DUT/SUTs, the virtual client(s) MUST 435 request two objects using HTTP 1.1 or higher GET requests. The first 436 GET request is required for connection establishment time 437 measurements as specified in appendix B. The second request is used 438 for validation as previously mentioned.
When comparing proxy and 439 non-proxy based DUT/SUTs, the test MUST be performed in the same 440 manner. 442 Between each iteration, it is RECOMMENDED that the tester issue a 443 TCP RST referencing all connections attempted for the previous 444 iteration, regardless of whether or not the connection attempt was 445 successful. The tester will wait for age time before continuing to 446 the next iteration. 448 5.2.4 Measurements 450 5.2.4.1 Application-Layer measurements 452 Number of objects requested 454 Number of objects returned 456 5.2.4.2 Transport-Layer measurements 458 Maximum concurrent connections - Total number of TCP connections 459 open for the last successful iteration performed in the search 460 algorithm. 462 The following measurements SHOULD be performed on a per iteration 463 basis: 465 Minimum connection establishment time - Lowest TCP connection 466 establishment time measured as defined in appendix B. 468 Maximum connection establishment time - Highest TCP connection 469 establishment time measured as defined in appendix B. 471 Average connection establishment time - The mean of all measurements 472 of connection establishment times. 474 Aggregate connection establishment time - The total of all 475 measurements of connection establishment times. 477 5.2.5 Reporting Format 479 5.2.5.1 Application-Layer Reporting: 481 The test report MUST note the object size, number of completed 482 requests and number of completed responses. 484 The intermediate results of the search algorithm MAY be reported 485 in a table format with a column for each iteration. There SHOULD be 486 rows for the number of requests attempted, number of requests 487 completed, number of responses attempted and number of responses 488 completed. The table MAY be combined with the transport-layer 489 reporting, provided that the table identifies this as an application 490 layer measurement.
492 Version information: 494 The test report MUST note the version of HTTP client(s) and 495 server(s). 497 5.2.5.2 Transport-Layer Reporting: 499 The test report MUST note the connection attempt rate, age time and 500 maximum concurrent connections measured. 502 The intermediate results of the search algorithm MAY be reported 503 in the format of a table with a column for each iteration. There 504 SHOULD be rows for the total number of TCP connections attempted, 505 total number of TCP connections completed, minimum TCP connection 506 establishment time, maximum TCP connection establishment time, 507 average connection establishment time and the aggregate connection 508 establishment time. 510 5.3 Maximum TCP Connection Establishment Rate 512 5.3.1 Objective 514 To determine the maximum TCP connection establishment rate through 515 or with the DUT/SUT, as defined by RFC2647[1]. 517 5.3.2 Setup Parameters 519 The following parameters MUST be defined for all tests: 521 5.3.2.1 Transport-Layer Setup Parameters 523 Number of Connections - Defines the aggregate number of TCP 524 connections that must be established. 526 Age Time - The time, expressed in seconds, the DUT/SUT will keep a 527 connection in its state table after receiving a TCP FIN or RST 528 packet. 530 5.3.2.2 Application-Layer Setup Parameters 532 Validation Method - HTTP 1.1 or higher MUST be used for this test. 534 Object Size - Defines the number of bytes, excluding any bytes 535 associated with the HTTP header, to be transferred in response to an 536 HTTP 1.1 or higher GET request. 538 5.3.3 Procedure 540 An iterative search algorithm MAY be used to determine the maximum 541 rate at which the DUT/SUT can accept TCP connection requests. 543 For each iteration, the aggregate rate at which TCP connection 544 requests are attempted by the virtual client(s) will be varied. The 545 destination address will be that of the server or that of the NAT 546 proxy.
The aggregate number of connections, defined by number of 547 connections, will be attempted in a round-robin fashion(See 4.5). 549 The same application-layer object transfers required for validation 550 and establishment time measurements as described in the concurrent 551 TCP connection capacity test MUST be performed. 553 Between each iteration, it is RECOMMENDED that the tester issue a 554 TCP RST referencing all connections attempted for the previous 555 iteration, regardless of whether or not the connection attempt was 556 successful. The tester will wait for age time before continuing to 557 the next iteration. 559 5.3.4 Measurements 561 5.3.4.1 Application-Layer measurements 563 Number of objects requested 565 Number of objects returned 567 5.3.4.2 Transport-Layer measurements 569 Highest connection rate - Highest rate, in connections per second, 570 for which the search algorithm passed. 572 The following measurements SHOULD be performed on a per iteration 573 basis: 575 Minimum connection establishment time - Lowest TCP connection 576 establishment time measured as defined in appendix B. 578 Maximum connection establishment time - Highest TCP connection 579 establishment time measured as defined in appendix B. 581 Average connection establishment time - The mean of all measurements 582 of connection establishment times. 584 Aggregate connection establishment time - The total of all 585 measurements of connection establishment times. 587 5.3.5 Reporting Format 589 5.3.5.1 Application-Layer Reporting: 591 The test report MUST note object size(s), number of completed 592 requests and number of completed responses. 594 The intermediate results of the search algorithm MAY be reported 595 in a table format with a column for each iteration. There SHOULD be 596 rows for the number of requests and responses completed. The table 597 MAY be combined with the transport-layer reporting, provided that 598 the table identifies this as an application layer measurement.
600 Version information: 602 The test report MUST note the version of HTTP client(s) and server(s). 604 5.3.5.2 Transport-Layer Reporting: 606 The test report MUST note the number of connections, age time and 607 highest connection rate measured. 609 The intermediate results of the search algorithm MAY be reported 610 in the format of a table with a column for each iteration. There 611 SHOULD be rows for the connection attempt rate, total number of 613 TCP connections attempted, total number of TCP connections 614 completed, minimum TCP connection establishment time, maximum TCP 615 connection establishment time, average connection establishment time 616 and the aggregate connection establishment time. 618 5.4 Maximum TCP Connection Tear Down Rate 620 5.4.1 Objective 622 To determine the maximum TCP connection tear down rate through or 623 with the DUT/SUT, as defined by RFC2647[1]. 625 5.4.2 Setup Parameters 627 Number of Connections - Defines the number of TCP connections that 628 the tester will attempt to tear down. 630 Age Time - The time, expressed in seconds, the DUT/SUT will keep a 631 connection in its state table after receiving a TCP FIN or RST 632 packet. 634 5.4.3 Procedure 636 An iterative search algorithm MAY be used to determine the maximum 637 TCP connection tear down rate. The test iterates through different 643 TCP connection tear down rates with a fixed number of TCP 644 connections. 646 The virtual client(s) will initialize the test by establishing TCP 647 connections defined by number of connections. The virtual client(s) 648 will then attempt to tear down all of the TCP connections, at a rate 649 defined by tear down attempt rate. For benchmarking purposes, the 650 tester MUST use a TCP FIN when initiating the connection tear down.
652 In the case of proxy based DUT/SUTs, the DUT/SUT will itself receive 653 the final ACK in the three-way handshake when a connection is being 654 torn down. For validation purposes, the virtual client(s) MAY 655 verify that the DUT/SUT received the final ACK in the connection tear 656 down exchange for all connections by transmitting a TCP segment 657 referencing the previously torn down connection. A TCP RST should be 658 received in response to the TCP segment. 660 5.4.4 Measurements 662 Highest connection tear down rate - Highest rate, in connections per 663 second, for which all TCP connections were successfully torn down. 665 The following measurements SHOULD be performed on a per iteration 666 basis. The tester MUST only include such measurements for which both 667 sides of the connection were successfully torn down. For example, 668 tear down times for connections which are left in a FINWAIT-2[8] 669 state should not be included: 671 Minimum connection tear down time - Lowest TCP connection tear down 672 time measured as defined in appendix C. 674 Maximum connection tear down time - Highest TCP connection tear down 675 time measured as defined in appendix C. 677 Average connection tear down time - The mean of all measurements of 678 connection tear down times. 680 Aggregate connection tear down time - The total of all measurements 681 of connection tear down times. 683 5.4.5 Reporting Format 685 The test report MUST note the number of connections, age time and 686 highest connection tear down rate measured. 688 The intermediate results of the search algorithm SHOULD be reported 689 in the format of a table with a column for each iteration. There 690 SHOULD be rows for the number of TCP tear downs attempted, number 691 of TCP connection tear downs completed, minimum TCP connection tear 692 down time, maximum TCP connection tear down time, average TCP 693 connection tear down time and the aggregate TCP connection tear down 694 time.
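The minimum/maximum/average/aggregate statistics required by sections 5.2.4, 5.3.4 and 5.4.4 are the same reduction applied to per-connection timing samples. A minimal sketch (the function name is illustrative):

```python
def timing_stats(samples):
    """Reduce per-connection establishment or tear down times
    (in seconds) to the four reported statistics."""
    if not samples:
        raise ValueError("no successfully measured connections")
    return {
        "minimum": min(samples),
        "maximum": max(samples),
        "average": sum(samples) / len(samples),
        "aggregate": sum(samples),
    }
```

Per the measurement rules above, only connections for which both sides completed the exchange would contribute samples to this reduction.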
696 5.5 Denial Of Service Handling 698 5.5.1 Objective 700 To determine the effect of a denial of service attack on a DUT/SUT 701 TCP connection establishment and/or HTTP transfer rates. The denial 702 of service handling test MUST be run after obtaining baseline 703 measurements from sections 5.3 and/or 5.6. 705 The TCP SYN flood attack exploits TCP's three-way handshake 706 mechanism by having an attacking source host generate TCP SYN 707 packets with random source addresses towards a victim host, thereby 708 consuming that host's resources. 710 5.5.2 Setup Parameters 712 Use the same setup parameters as defined in section 5.3.2 or 5.6.2, 713 depending on whether testing against the baseline TCP connection 714 establishment rate test or HTTP transfer rate test, respectively. 716 In addition, the following setup parameters MUST be defined. 718 SYN attack rate - Rate, expressed in packets per second, at which 719 the server(s) or NAT proxy address is targeted with TCP SYN packets. 721 5.5.3 Procedure 723 Use the same procedure as defined in section 5.3.3 or 5.6.3, 724 depending on whether testing against the baseline TCP connection 725 establishment rate or HTTP transfer rate test, respectively. In 726 addition, the tester will generate TCP SYN packets targeting the 727 server(s) IP address or NAT proxy address at a rate defined by SYN 728 attack rate. 730 The tester originating the TCP SYN attack MUST be attached to the 731 unprotected network. In addition, the tester MUST NOT respond to the 732 SYN/ACK packets sent by the target server or NAT proxy in response to 733 the SYN packets. 735 Some firewalls employ mechanisms to guard against SYN attacks. If 736 such mechanisms exist on the DUT/SUT, tests SHOULD be run with these 737 mechanisms enabled to determine how well the DUT/SUT can maintain, 738 under such attacks, the baseline connection establishment rates and 739 HTTP transfer rates determined in section 5.3 and section 5.6, 740 respectively.
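One way to drive the attack traffic is a fixed inter-packet gap of 1/rate with a fresh random source address per SYN. The sketch below only builds the send plan; it does not emit packets, and the address pool drawn from 198.18.0.0/15 (the range reserved for benchmarking) is an illustrative assumption, not a requirement of this document.

```python
import random

def syn_attack_plan(syn_attack_rate, duration, seed=0):
    """Build (send_offset_seconds, source_ip) pairs for a TCP SYN
    flood offered at `syn_attack_rate` packets per second for
    `duration` seconds, with random sources from 198.18.0.0/15.
    The tester must never answer the resulting SYN/ACKs."""
    rng = random.Random(seed)
    gap = 1.0 / syn_attack_rate
    plan = []
    for i in range(int(syn_attack_rate * duration)):
        host = rng.randrange(1, 2 ** 17)  # host part within the /15
        ip = "198.%d.%d.%d" % (18 + (host >> 16),
                               (host >> 8) & 0xFF, host & 0xFF)
        plan.append((i * gap, ip))
    return plan
```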
742 5.5.4 Measurements 744 Perform the same measurements as defined in section 5.3.4 or 5.6.4, 745 depending on whether testing against the baseline TCP connection 746 establishment rate test or HTTP transfer rate test, respectively. 748 In addition, the tester SHOULD track TCP SYN packets associated with 749 the SYN attack which the DUT/SUT forwards on the protected or DMZ 750 interface(s). 752 5.5.5 Reporting Format 754 The test SHOULD use the same reporting format as described in 755 section 5.3.5 or 5.6.5, depending on whether testing against the 756 baseline TCP connection establishment rate test or HTTP transfer rate 757 test, respectively. 759 In addition, the report MUST indicate a denial of service handling 760 test, the SYN attack rate, the number of TCP SYN attack packets 761 transmitted and the number of TCP SYN attack packets forwarded by the 762 DUT/SUT. The report MUST indicate whether or not the DUT has any SYN 763 attack mechanisms enabled. 765 5.6 HTTP Transfer Rate 767 5.6.1 Objective 769 To determine the transfer rate of the requested HTTP object(s) 770 traversing the DUT/SUT. 772 5.6.2 Setup Parameters 774 The following parameters MUST be defined for all tests: 776 5.6.2.1 Transport-Layer Setup Parameters 778 Number of connections - Defines the aggregate number of connections 779 attempted. The number SHOULD be a multiple of the number of virtual 780 clients participating in the test. 782 5.6.2.2 Application-Layer Setup Parameters 784 Session type - The virtual clients/servers MUST use HTTP 1.1 or 785 higher. 787 GET requests per connection - Defines the number of HTTP 1.1 or 788 higher GET requests attempted per connection. 790 Object Size - Defines the number of bytes, excluding any bytes 791 associated with the HTTP header, to be transferred in response to an 792 HTTP 1.1 or higher GET request. 794 5.6.3 Procedure 796 Each HTTP 1.1 or higher client will request one or more objects from 797 an HTTP 1.1 or higher server using one or more HTTP GET requests.
798 The aggregate number of connections attempted, defined by number of 799 connections, MUST be evenly divided among all of the participating 800 virtual clients. 802 If the virtual client(s) make multiple HTTP GET requests per 803 connection, it MUST request the same object size for each GET 804 request. Multiple iterations of this test SHOULD be run using 805 different object sizes. 807 5.6.4 Measurements 809 5.6.4.1 Application-Layer measurements 811 Average Transfer Rate - The average transfer rate of the DUT/SUT 812 MUST be measured and shall be referenced to the requested object(s). 813 The measurement will start on transmission of the first bit of the 814 first requested object and end on transmission of the last bit of 815 the last requested object. The average transfer rate, in bits per 816 second, will be calculated using the following formula: 818 OBJECTS * OBJECTSIZE * 8 819 TRANSFER RATE(bit/s) = -------------------------- 820 DURATION 822 OBJECTS - Total number of objects successfully transferred across 823 all connections. 825 OBJECTSIZE - Object size in bytes. 827 DURATION - Aggregate transfer time based on the aforementioned time 828 references. 830 5.6.4.2 Measurements at or below the Transport-Layer 832 The tester SHOULD make goodput[1] measurements for connection- 833 oriented protocols at or below the transport layer. Goodput 834 measurements MUST only reference the protocol's payload, excluding 835 any of the protocol's headers. In addition, the tester MUST exclude 836 any bits associated with connection establishment, connection 837 tear down, security associations or connection maintenance. 839 Since connection-oriented protocols require that data be 840 acknowledged, the offered load[6] will vary over the duration of the 841 test. When performing forwarding rate measurements, the tester 842 should measure the average forwarding rate over the duration of the 843 test.
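The formula above, together with the goodput exclusions of 5.6.4.2, can be written directly in code. This is an illustrative sketch, not part of the methodology; the goodput form shown (payload bits minus retransmitted bits over the duration) is an assumption consistent with the listed exclusions, not a normative definition.

```python
def transfer_rate(objects, object_size, duration):
    """TRANSFER RATE (bit/s) = OBJECTS * OBJECTSIZE * 8 / DURATION."""
    return objects * object_size * 8 / duration

def goodput(payload_bits, retransmitted_bits, duration):
    """Goodput sketch: payload bits only (headers, connection
    establishment/tear down and maintenance already excluded by the
    caller), less retransmitted bits, averaged over the duration."""
    return (payload_bits - retransmitted_bits) / duration

# 100 objects of 10,000 bytes served in 10 s -> 800,000 bit/s
```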
845 5.6.5 Reporting Format 847 5.6.5.1 Application-Layer reporting 849 The test report MUST note number of GET requests per connection and 850 object size. 852 The transfer rate results SHOULD be reported in tabular form with a 853 row for each of the object sizes. There SHOULD be a column for the 854 object size, the number of completed requests, the number of 855 completed responses, and the transfer rate results for each test. 857 Failure analysis: 859 The test report SHOULD indicate the number and percentage of HTTP 860 GET requests or responses that failed to complete. 862 Version information: 864 The test report MUST note the version of HTTP client(s) and 865 server(s). 867 5.6.5.2 Transport-Layer and below reporting 869 The test report MUST note the aggregate number of connections. In 870 addition, the report MUST identify the protocol for which the 871 measurement was made. 873 The results SHOULD be in tabular form with a column for each 874 iteration of the test. There should be columns for transmitted bits, 875 retransmitted bits and the measured goodput. 877 Failure analysis: 879 The test report SHOULD indicate the number and percentage of 880 connections that failed to complete. 882 5.7 HTTP Concurrent Transaction Capacity 884 5.7.1 Objective 886 Determine the maximum number of concurrent or simultaneous HTTP 887 transactions the DUT/SUT can support. This test is intended to 888 find the maximum number of users that can simultaneously access 889 web objects. 891 5.7.2 Setup Parameters 893 GET request rate - The aggregate rate, expressed in requests per 894 second, at which HTTP 1.1 or higher GET requests are offered by the 895 virtual client(s). 897 Session type - The virtual clients/servers MUST use HTTP 1.1 or 898 higher. 900 5.7.3 Procedure 902 An iterative search algorithm MAY be used to determine the maximum 903 HTTP concurrent transaction capacity.
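One concrete shape for this search is a binary search over the transaction count. This is an illustrative sketch only; the methodology does not mandate any particular algorithm, and `all_transactions_active` is a hypothetical callback standing for running one full iteration and validating, per 5.7.3, that every transaction is still active at its end.

```python
def max_concurrent_capacity(lower, upper, all_transactions_active):
    """Binary-search the largest transaction count in [lower, upper]
    for which `all_transactions_active(n)` holds; returns 0 if no
    iteration succeeds."""
    best = 0
    while lower <= upper:
        mid = (lower + upper) // 2
        if all_transactions_active(mid):
            best = mid          # iteration passed: try a larger load
            lower = mid + 1
        else:
            upper = mid - 1     # some transactions died: back off
    return best
```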
905 For each iteration, the virtual client(s) will vary the number of 906 concurrent or simultaneous HTTP transactions -- that is, on-going 907 GET requests. The HTTP 1.1 or higher virtual client(s) will request 908 one object, across each connection, from an HTTP 1.1 or higher 909 server using one HTTP GET request. The aggregate rate at which the 910 virtual client(s) will offer the requests will be defined by GET 911 request rate. 913 The object size requested MUST be large enough such that the 914 transaction -- that is, the request/response cycle -- will exist for 915 the duration of the test. At the end of each iteration, the tester 916 MUST validate that all transactions are still active. After all of 917 the transactions are checked, the transactions MAY be aborted. 919 5.7.4 Measurements 921 Maximum concurrent transactions - Total number of concurrent HTTP 922 transactions active for the last successful iteration performed in 923 the search algorithm. 925 5.7.5 Reporting Format 927 5.7.5.1 Application-Layer reporting 929 The test report MUST note the GET request rate and the maximum 930 concurrent transactions measured. 932 The intermediate results of the search algorithm MAY be reported 933 in a table format with a column for each iteration. There SHOULD be 934 rows for the number of concurrent transactions attempted, GET 935 request rate, number of aborted transactions and number of 936 transactions active at the end of the test iteration. 938 Version information: 940 The test report MUST note the version of HTTP client(s) and 941 server(s). 943 5.8 Maximum HTTP Transaction Rate 945 5.8.1 Objective 947 Determine the maximum HTTP transaction rate that a DUT/SUT can 948 sustain. 950 5.8.2 Setup Parameters 952 Session Type - HTTP 1.1 or higher MUST be used for this test. 954 Test Duration - Time, expressed in seconds, for which the 955 virtual client(s) will sustain the attempted GET request rate.
956 It is RECOMMENDED that the duration be at least 30 seconds. 958 Requests per connection - Number of object requests per connection. 960 Object Size - Defines the number of bytes, excluding any bytes 961 associated with the HTTP header, to be transferred in response to an 962 HTTP 1.1 or higher GET request. 964 5.8.3 Procedure 966 An iterative search algorithm MAY be used to determine the maximum 967 transaction rate that the DUT/SUT can sustain. 969 For each iteration, HTTP 1.1 or higher virtual client(s) will 970 vary the aggregate GET request rate offered to HTTP 1.1 or higher 971 server(s). The virtual client(s) will maintain the offered request 972 rate for the defined test duration. 974 If the tester makes multiple HTTP GET requests per connection, it 975 MUST request the same object size for each GET request. 976 Multiple iterations of this test MAY be performed with objects of 977 different sizes. 979 5.8.4 Measurements 981 Maximum Transaction Rate - The maximum rate at which all 982 transactions -- that is, all request/response cycles -- are 983 completed. 985 Transaction Time - The tester SHOULD measure minimum, maximum and 986 average transaction times. The transaction time will start when the 987 virtual client issues the GET request and end when the requesting 988 virtual client receives the last bit of the requested object. 990 5.8.5 Reporting Format 992 The test report MUST note the test duration, object size, requests 993 per connection, the measured maximum transaction rate and the 994 minimum, maximum and average transaction times. 996 The intermediate results of the search algorithm MAY be reported 997 in a table format with a column for each iteration. There SHOULD be 998 rows for the GET request attempt rate, number of requests attempted, 999 number and percentage of requests completed, number of responses 1000 attempted, number and percentage of responses completed, minimum 1001 transaction time, average transaction time and maximum transaction 1002 time.
1004 Version information: 1006 The test report MUST note the version of HTTP client(s) and 1007 server(s). 1009 5.9 Illegal Traffic Handling 1011 5.9.1 Objective 1013 To determine the behavior of the DUT/SUT when presented with a 1014 combination of both legal and illegal traffic. Note that illegal 1015 traffic does not refer to an attack, but to traffic which has been 1016 explicitly defined by a rule(s) to be dropped. 1018 5.9.2 Setup Parameters 1020 Setup parameters will use the same parameters as specified in the 1021 HTTP transfer rate test (Section 5.6.2). In addition, the following 1022 setup parameters MUST be defined: 1024 Illegal traffic percentage - Percentage of HTTP 1.1 or higher 1025 connections which have been explicitly defined in a rule(s) to be dropped. 1027 5.9.3 Procedure 1029 Each HTTP 1.1 or higher client will request one or more objects from 1030 an HTTP 1.1 or higher server using one or more HTTP GET requests. 1031 The aggregate number of connections attempted, defined by number of 1032 connections, MUST be evenly divided among all of the participating 1033 virtual clients. 1035 The virtual client(s) MUST offer the connection requests, both legal 1036 and illegal, in an evenly distributed manner. Many firewalls have 1037 the capability to filter on different traffic criteria (IP 1038 addresses, port numbers, etc.). Testers may run multiple 1039 iterations of this test with the DUT/SUT configured to filter 1040 on different traffic criteria. 1042 5.9.4 Measurements 1044 The tester SHOULD perform the same measurements as defined in the HTTP 1045 transfer rate test (Section 5.6.4). Unlike the HTTP transfer rate 1046 test, the tester MUST NOT include any bits which are associated 1047 with illegal traffic in its forwarding rate measurements. 1049 5.9.5 Reporting Format 1051 The test report SHOULD be the same as specified in the HTTP transfer 1052 rate test (Section 5.6.5). 1054 In addition, the report MUST note the percentage of illegal HTTP 1055 connections.
1057 Failure analysis: 1059 The test report MUST note the number and percentage of illegal 1060 connections that were allowed by the DUT/SUT. 1062 5.10 IP Fragmentation Handling 1064 5.10.1 Objective 1066 To determine the performance impact when the DUT/SUT is presented 1067 with IP fragmented[5] traffic. IP packets which have been 1068 fragmented, due to crossing a network that supports a smaller 1069 MTU (Maximum Transmission Unit) than the actual IP packet, may 1070 require the firewall to perform re-assembly prior to the rule set 1071 being applied. 1073 While IP fragmentation is a common form of attack, either on the 1074 firewall itself or on internal hosts, this test will focus on 1075 determining the effect that the additional processing associated 1076 with re-assembly of the packets has on the forwarding rate of the 1077 DUT/SUT. RFC 1858 addresses some fragmentation attacks that 1078 get around IP filtering processes used in routers and hosts. 1080 5.10.2 Setup Parameters 1082 The following parameters MUST be defined. 1084 5.10.2.1 Non-Fragmented Traffic Parameters 1086 Setup parameters will be the same as defined in the HTTP transfer 1087 rate test (Sections 5.6.2.1 and 5.6.2.2). 1089 5.10.2.2 Fragmented Traffic Parameters 1091 Packet size - Number of bytes in the IP/UDP packet, exclusive of 1092 link-layer headers and checksums, prior to fragmentation. 1094 MTU - Maximum transmission unit, expressed in bytes. For testing 1095 purposes, this MAY be configured to values smaller than the MTU 1096 supported by the link layer. 1098 Intended Load - Intended load, expressed as percentage of media 1099 utilization. 1101 5.10.3 Procedure 1103 Each HTTP 1.1 or higher client will request one or more objects from 1104 an HTTP 1.1 or higher server using one or more HTTP GET requests. 1105 The aggregate number of connections attempted, defined by number of 1106 connections, MUST be evenly divided among all of the participating 1107 virtual clients.
If the virtual client(s) make multiple HTTP GET 1108 requests per connection, it MUST request the same object size for 1109 each GET request. 1111 A tester attached to the unprotected side of the network will offer 1112 a unidirectional stream of unicast fragmented IP/UDP traffic, 1113 targeting a server attached to either the protected or DMZ segment. 1114 The tester MUST offer the unidirectional stream over the duration of 1115 the test -- that is, the duration over which the HTTP traffic is being 1116 offered. 1118 Baseline measurements SHOULD be performed with IP filtering deny 1119 rule(s) to filter fragmented traffic. If the DUT/SUT has logging 1120 capability, the log SHOULD be checked to determine if it contains 1121 the correct information regarding the fragmented traffic. 1123 The test SHOULD be repeated with the DUT/SUT rule set changed to 1124 allow the fragmented traffic through. When running multiple 1125 iterations of the test, it is RECOMMENDED to vary the MTU while 1126 keeping all other parameters constant. 1128 Then set up the DUT/SUT with the policy or rule set the manufacturer 1129 requires to be defined to protect against fragmentation attacks, and 1130 repeat the measurements outlined in the baseline procedures. 1132 5.10.4 Measurements 1134 The tester SHOULD perform the same measurements as defined in the HTTP 1135 transfer rate test (Section 5.6.4). In addition: 1137 Transmitted UDP/IP Packets - Number of UDP/IP packets transmitted by 1138 the client. 1140 Received UDP/IP Packets - Number of UDP/IP packets received by the 1141 server. 1143 5.10.5 Reporting Format 1145 5.10.5.1 Non-Fragmented Traffic 1147 The test report SHOULD be the same as described in section 5.6.5. 1148 Note that any forwarding rate measurements for the HTTP traffic 1149 exclude any bits associated with the fragmented traffic which 1150 may be forwarded by the DUT/SUT.
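As a cross-check on the fragmented-traffic parameters, the number of fragments each offered packet becomes follows from the packet size and MTU. This sketch assumes IPv4 with a 20-byte header, no IP options, and the 8-octet alignment that fragment offsets require; it is illustrative, not part of the methodology.

```python
import math

IP_HEADER = 20  # bytes; assumes no IP options

def fragment_count(packet_size, mtu):
    """Number of IPv4 fragments produced for a packet of `packet_size`
    bytes (IP header included, link layer excluded) crossing a link
    with the given MTU.  Non-final fragment payloads are multiples
    of 8 bytes."""
    if packet_size <= mtu:
        return 1
    payload = packet_size - IP_HEADER
    per_fragment = (mtu - IP_HEADER) // 8 * 8
    return math.ceil(payload / per_fragment)

# e.g. a 1500-byte packet over a 576-byte MTU yields 3 fragments
```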
1152 5.10.5.2 Fragmented Traffic 1154 The test report MUST note the packet size, MTU size, intended load, 1155 number of UDP/IP packets transmitted and number of UDP/IP packets 1156 forwarded. The test report SHOULD also note whether or not the 1157 DUT/SUT forwarded the offered UDP/IP traffic still fragmented. 1159 5.11 Latency 1161 5.11.1 Objective 1163 To determine the latency of network-layer or application-layer data 1164 traversing the DUT/SUT. RFC 1242 [3] defines latency. 1166 5.11.2 Setup Parameters 1168 The following parameters MUST be defined: 1170 5.11.2.1 Network-layer Measurements 1172 Packet size, expressed as the number of bytes in the IP packet, 1173 exclusive of link-layer headers and checksums. 1175 Intended load, expressed as percentage of media utilization. 1177 Test duration, expressed in seconds. 1179 Test instruments MUST generate packets with unique timestamp 1180 signatures. 1182 5.11.2.2 Application-layer Measurements 1184 Object Size - Defines the number of bytes, excluding any bytes 1185 associated with the HTTP header, to be transferred in response to 1186 an HTTP 1.1 or higher GET request. Testers SHOULD use the minimum 1187 object size supported by the media, but MAY use other object 1188 sizes as well. 1190 Connection type. The tester MUST use one HTTP 1.1 or higher 1191 connection for latency measurements. 1193 Number of objects requested. 1195 Number of objects transferred. 1197 Test duration, expressed in seconds. 1199 Test instruments MUST generate packets with unique timestamp 1200 signatures. 1202 5.11.3 Network-layer procedure 1204 A client will offer a unidirectional stream of unicast packets to a 1205 server. The packets MUST use a connectionless protocol like IP or 1206 UDP/IP. 1208 The tester MUST offer packets in a steady state. As noted in the 1209 latency discussion in RFC 2544 [4], latency measurements MUST be 1210 taken at the throughput level -- that is, at the highest offered 1211 load with zero packet loss.
Measurements taken at the throughput 1212 level are the only ones that can legitimately be termed latency. 1214 It is RECOMMENDED that implementers use offered loads not only at 1215 the throughput level, but also at load levels that are less than 1216 or greater than the throughput level. To avoid confusion with 1217 existing terminology, measurements from such tests MUST be labeled 1218 as delay rather than latency. 1220 If desired, the tester MAY use a step test in which offered loads 1221 increment or decrement through a range of load levels. 1223 The duration of the test portion of each trial MUST be at least 30 1224 seconds. 1226 5.11.4 Application layer procedure 1228 An HTTP 1.1 or higher client will request one or more objects from 1229 an HTTP 1.1 or higher server using one or more HTTP GET requests. If 1230 the tester makes multiple HTTP GET requests, it MUST request the 1231 same-sized object each time. Testers may run multiple iterations of 1232 this test with objects of different sizes. 1234 Implementers MAY configure the tester to run for a fixed duration. 1235 In this case, the tester MUST report the number of objects requested 1236 and returned for the duration of the test. For fixed-duration tests 1237 it is RECOMMENDED that the duration be at least 30 seconds. 1239 5.11.5 Measurements 1241 Minimum delay - The smallest delay incurred by data traversing the 1242 DUT/SUT at the network layer or application layer, as appropriate. 1244 Maximum delay - The largest delay incurred by data traversing the 1245 DUT/SUT at the network layer or application layer, as appropriate. 1247 Average delay - The mean of all measurements of delay incurred by 1248 data traversing the DUT/SUT at the network layer or application 1249 layer, as appropriate. 1251 Delay distribution - A set of histograms of all delay measurements 1252 observed for data traversing the DUT/SUT at the network layer or 1253 application layer, as appropriate.
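The four measurements above can be reduced from raw per-packet (or per-object) delay samples. In this sketch the bucket width of the delay distribution histogram is a reporting choice left to the tester; nothing here is mandated by the methodology.

```python
def delay_summary(delays, bucket_width):
    """Return (minimum, maximum, average, histogram) for a list of
    delay samples.  The histogram maps each bucket's start value to
    its sample count, with buckets [k*width, (k+1)*width) determined
    by the caller's bucket_width."""
    histogram = {}
    for d in delays:
        start = int(d // bucket_width) * bucket_width
        histogram[start] = histogram.get(start, 0) + 1
    return min(delays), max(delays), sum(delays) / len(delays), histogram

# e.g. delays of 120, 130, 145 and 260 us in 100-us buckets
#      -> (120, 260, 163.75, {100: 3, 200: 1})
```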
1255 5.11.6 Network-layer reporting format 1257 The test report MUST note the packet size(s), offered load(s) and 1258 test duration used. 1260 The latency results SHOULD be reported in the format of a table with 1261 a row for each of the tested packet sizes. There SHOULD be columns 1262 for the packet size, the intended rate, the offered rate, and the 1263 resultant latency or delay values for each test. 1265 5.11.7 Application-layer reporting format 1267 The test report MUST note the object size(s) and number of requests 1268 and responses completed. If applicable, the report MUST note the 1269 test duration if a fixed duration was used. 1271 The latency results SHOULD be reported in the format of a table with 1272 a row for each of the object sizes. There SHOULD be columns for the 1273 object size, the number of completed requests, the number of 1274 completed responses, and the resultant latency or delay values for 1275 each test. 1277 Failure analysis: 1279 The test report SHOULD indicate the number and percentage of HTTP 1280 GET requests or responses that failed to complete within the test 1281 duration. 1283 Version information: 1285 The test report MUST note the version of HTTP client and server. 1287 6. References 1289 [1] Newman, D., "Benchmarking Terminology for Firewall Devices", 1290 RFC 2647, August 1999. 1292 [2] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., 1293 Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol -- 1294 HTTP/1.1", RFC 2616, June 1999. 1296 [3] Bradner, S., editor, "Benchmarking Terminology for Network 1297 Interconnection Devices", RFC 1242, July 1991. 1299 [4] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network 1300 Interconnect Devices", RFC 2544, March 1999. 1302 [5] Clark, D., "IP Datagram Reassembly Algorithms", RFC 815, 1303 July 1982. 1305 [6] Mandeville, R., "Benchmarking Terminology for LAN Switching 1306 Devices", RFC 2285, February 1998.
1308 [7] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN 1309 Switching Devices", RFC 2889, August 2000. 1311 [8] Postel, J., editor, "Transmission Control Protocol - DARPA 
Internet Program 1312 Protocol Specification", RFC 793, USC/Information Sciences 1313 Institute, September 1981. 1315 7. Security Considerations 1317 The primary goal of this document is to provide methodologies for 1318 benchmarking firewall performance. While there is some overlap 1319 between performance and security issues, assessment of firewall 1320 security is outside the scope of this document. 1322 8. Acknowledgement 1324 Funding for the RFC Editor function is currently provided by the 1325 Internet Society. 1327 9. Authors' Addresses 1329 Brooks Hickman 1330 Spirent Communications 1331 26750 Agoura Road 1332 Calabasas, CA 91302 1333 USA 1335 Phone: + 1 818 676 2412 1336 Email: brooks.hickman@spirentcom.com 1338 David Newman 1339 Network Test Inc. 1340 31324 Via Colinas, Suite 113 1341 Westlake Village, CA 91362-6761 1342 USA 1344 Phone: + 1 818 889-0011 1345 Email: dnewman@networktest.com 1347 Saldju Tadjudin 1348 Spirent Communications 1349 26750 Agoura Road 1350 Calabasas, CA 91302 1351 USA 1353 Phone: + 1 818 676 2468 1354 Email: saldju.tadjudin@spirentcom.com 1356 Terry Martin 1357 GVNW Consulting Inc. 1358 8050 SW Warm Springs Road 1359 Tualatin, OR 97062 1360 USA 1362 Phone: + 1 503 612 4422 1363 Email: tmartin@gvnw.com 1365 APPENDIX A: HTTP (HyperText Transfer Protocol) 1367 The most common versions of HTTP in use today are HTTP/1.0 and 1368 HTTP/1.1, with the main difference being in regard to persistent 1369 connections. HTTP 1.0, by default, does not support persistent 1370 connections. A separate TCP connection is opened up for each 1371 GET request the client wants to initiate and closed after the 1372 requested object transfer is completed.
While some implementations of 1373 HTTP/1.0 support persistence through the use of a keep-alive, 1374 there is no official specification for how the keep-alive operates. 1375 In addition, HTTP 1.0 proxies do not support persistent connections, as 1376 they do not recognize the Connection header. 1378 HTTP/1.1, by default, does support persistent connections and 1379 is therefore the version that is referenced in this methodology. 1380 Proxy based DUT/SUTs may monitor the TCP connection and, after a 1381 timeout, close the connection if no activity is detected. The 1382 duration of this timeout is not defined in the HTTP/1.1 1383 specification and will vary between DUT/SUTs. If the DUT/SUT 1384 closes inactive connections, the aging timer on the DUT SHOULD 1385 be configured for a duration that exceeds the test time. 1387 While this document cannot foresee future changes to HTTP 1388 and their impact on the methodologies defined herein, such 1389 changes should be accommodated so that newer versions of 1390 HTTP may be used in benchmarking firewall performance. 1392 APPENDIX B: Connection Establishment Time Measurements 1394 Some connection oriented protocols, such as TCP, involve an odd 1395 number of messages when establishing a connection. In the case of 1396 proxy based DUT/SUTs, the DUT/SUT will terminate the connection, 1397 setting up a separate connection to the server. Since, in such 1398 cases, the tester does not own both sides of the connection, 1399 measurements will be made in two different ways. While the following 1400 describes the measurements with reference to TCP, the methodology 1401 may be used with other connection oriented protocols which involve 1402 an odd number of messages.
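The two measurement modes described next can be sketched from per-connection event timestamps. The `events` record and its key names are hypothetical scaffolding for illustration, not part of the methodology.

```python
def establishment_time(events, proxy_based):
    """Connection establishment time per Appendix B.  For non-proxy
    DUT/SUTs: first bit of the client's SYN to the target server
    receiving the final ACK of the three-way handshake.  For proxy
    based DUT/SUTs: first bit of the client's SYN (t0) to the client
    transmitting the first bit of its first acknowledged TCP
    datagram (t4)."""
    if proxy_based:
        return events["first_datagram_sent"] - events["syn_sent"]
    return events["final_ack_received"] - events["syn_sent"]
```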
1404 When testing non-proxy based DUT/SUTs, the establishment time shall 1405 be directly measured and is considered to be from the time the first 1406 bit of the first SYN packet is transmitted by the client to the 1407 time the last bit of the final ACK in the three-way handshake is 1408 received by the target server. 1410 If the DUT/SUT is proxy based, the connection establishment time is 1411 considered to be from the time the first bit of the first SYN packet 1412 is transmitted by the client to the time the client transmits the 1413 first bit of the first acknowledged TCP datagram (t4-t0 in the 1414 following timeline). 1416 t0: Client sends a SYN. 1417 t1: Proxy sends a SYN/ACK. 1418 t2: Client sends the final ACK. 1419 t3: Proxy establishes separate connection with server. 1420 t4: Client sends TCP datagram to server. 1421 *t5: Proxy sends ACK of the datagram to client. 1423 * While t5 is not considered part of the TCP connection 1424 establishment, acknowledgement of t4 must be received for the 1425 connection to be considered successful. 1427 APPENDIX C: Connection Tear Down Time Measurements 1429 While TCP connections are full duplex, tearing down of such connections 1430 is performed in a simplex fashion -- that is, FIN segments are sent by 1431 each host/device terminating each side of the TCP connection. 1433 When making connection tear down time measurements, such measurements 1434 will be made from the perspective of the client and will be performed 1435 in the same manner, independent of whether or not the DUT/SUT is 1436 proxy-based. The connection tear down will be considered the interval 1437 from the transmission of the first bit of the first TCP FIN packet 1438 transmitted by the tester requesting a connection tear down to receipt 1439 of the last bit of the corresponding ACK packet on the same tester 1440 interface. 1442 Full Copyright Statement 1444 Copyright (C) The Internet Society (2002). All Rights Reserved.
1446 This document and translations of it may be copied and furnished to 1447 others, and derivative works that comment on or otherwise explain it 1448 or assist in its implementation may be prepared, copied, published 1449 and distributed, in whole or in part, without restriction of any 1450 kind, provided that the above copyright notice and this paragraph 1451 are included on all such copies and derivative works. However, this 1452 document itself may not be modified in any way, such as by removing 1453 the copyright notice or references to the Internet Society or other 1454 Internet organizations, except as needed for the purpose of 1455 developing Internet standards in which case the procedures for 1456 copyrights defined in the Internet Standards process must be 1457 followed, or as required to translate it into languages other than 1458 english. The limited permissions granted above are perpetual and 1459 will not be revoked by the Internet Society or its successors or 1460 assigns. This document and the information contained herein is 1461 provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE 1462 INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR 1463 IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1464 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1465 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.