Network Working Group                                     Brooks Hickman
Internet-Draft                                    Spirent Communications
Expiration Date: December 2001                              David Newman
                                                            Network Test
                                                            Terry Martin
                                                           M2networx INC
                                                               June 2001

          Benchmarking Methodology for Firewall Performance

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Table of Contents

   1.  Introduction
   2.  Requirements
   3.  Scope
   4.  Test Setup
       4.1  Test Considerations
       4.2  Virtual Clients/Servers
       4.3  Test Traffic Requirements
       4.4  DUT/SUT Traffic Flows
       4.5  Multiple Client/Server Testing
       4.6  NAT (Network Address Translation)
       4.7  Rule Sets
       4.8  Web Caching
       4.9  Authentication
   5.  Benchmarking Tests
       5.1  Concurrent Connection Capacity
       5.2  Maximum Connection Setup Rate
       5.3  Connection Establishment Time
       5.4  Connection Teardown Time
       5.5  Denial Of Service Handling
       5.6  HTTP
       5.7  IP Fragmentation Handling
       5.8  Illegal Traffic Handling
       5.9  Latency
   Appendices
       A.  HyperText Transfer Protocol (HTTP)
       B.  References

1. Introduction

   This document provides methodologies for the performance
   benchmarking of firewalls.  It provides methodologies in four
   areas: forwarding, connection, latency and filtering.  In addition
   to defining the tests, this document also describes specific
   formats for reporting the results of the tests.

   A previous document, "Benchmarking Terminology for Firewall
   Performance" [1], defines many of the terms that are used in this
   document.  The terminology document SHOULD be consulted before
   attempting to make use of this document.

2. Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119.

3. Scope

   Firewalls can provide a single point of defense between networks.
   Usually, a firewall protects private networks from the public or
   shared networks to which it is connected.
A firewall can be as
   simple as a device that filters different packets or as complex as
   a group of devices that combine packet filtering and
   application-level proxy or network address translation services.
   This document will focus on developing benchmark testing of
   DUT/SUTs, wherever possible, independent of their implementation.

4. Test Setup

   Test configurations defined in this document will be confined to
   dual-homed and tri-homed as shown in figure 1 and figure 2
   respectively.

   Firewalls employing dual-homed configurations connect two networks.
   One interface of the firewall is attached to the unprotected
   network, typically the public network (Internet).  The other
   interface is connected to the protected network, typically the
   internal LAN.

   In the case of dual-homed configurations, servers which are made
   accessible to the public (unprotected) network are attached to the
   private (protected) network.

      +----------+         +----------+         +----------+
      |          |         |          |         |          |
      | Servers/ |---------| DUT/SUT  |---------| Servers/ |
      | Clients  |         |          |         | Clients  |
      +----------+         +----------+         +----------+
        Protected                                Unprotected
         Network                                   Network

                       Figure 1 (Dual-Homed)

   Tri-homed [1] configurations employ a third segment called a DMZ.
   With tri-homed configurations, servers accessible to the public
   network are attached to the DMZ.  Tri-homed configurations offer
   additional security by separating servers accessible to the public
   network from internal hosts.
      +----------+         +----------+         +----------+
      |          |         |          |         |          |
      | Clients  |---------| DUT/SUT  |---------| Servers/ |
      |          |         |          |         | Clients  |
      +----------+         +----------+         +----------+
        Protected               |                Unprotected
         Network                |                  Network
                                |
                                |  DMZ
                                |
                          +-----------+
                          |           |
                          |  Servers  |
                          |           |
                          +-----------+

                       Figure 2 (Tri-Homed)

4.1 Test Considerations

4.2 Virtual Clients/Servers

   Since firewall testing may involve data sources which emulate
   multiple users or hosts, the methodology uses the terms virtual
   clients/servers.  For these firewall tests, virtual clients/servers
   specify application layer entities which may not be associated with
   a unique physical interface.  For example, four virtual clients may
   originate from the same data source [1].  The test report SHOULD
   indicate the number of virtual clients and virtual servers
   participating in the test on a per-interface basis (see 4.1.3).

   Testers MUST synchronize all data sources participating in a test.

4.3 Test Traffic Requirements

   While the function of a firewall is to enforce access control
   policies, the criteria by which those policies are defined vary
   depending on the implementation.  Firewalls may use network layer,
   transport layer or, in many cases, application-layer criteria to
   make access-control decisions.  Therefore, the test equipment used
   to benchmark the DUT/SUT performance MUST consist of real clients
   and servers generating legitimate layer seven conversations.

   For the purposes of benchmarking firewall performance, HTTP 1.1
   will be referenced in this document, although the methodologies may
   be used as a template for benchmarking with other applications.
   Since testing may involve proxy-based DUT/SUTs, HTTP version
   considerations are discussed in appendix A.
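The "legitimate layer seven conversation" requirement above can be
illustrated with a minimal sketch.  This is not part of the
methodology: it simply stands in a real HTTP 1.1 server and client for
the tester's data sources, using Python's standard library.  The
loopback address, 64-byte object and `/object` path are assumptions
for illustration only.

```python
# Sketch of a real HTTP 1.1 exchange, as section 4.3 requires real
# clients and servers rather than blind packet blasting.  The local
# server, object size and URL are illustrative assumptions.
import http.client
import http.server
import threading

OBJECT = b"x" * 64  # hypothetical minimum object size for the media

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # persistent connection, no "close" token
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(OBJECT)))
        self.end_headers()
        self.wfile.write(OBJECT)
    def log_message(self, fmt, *args):  # keep the sketch quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/object")      # HTTP/1.1 GET, connection stays open
body = conn.getresponse().read()
conn.request("GET", "/object")      # reuses the same TCP connection
body2 = conn.getresponse().read()
conn.close()
server.shutdown()
```

Because neither side sends the close connection-token, the second GET
travels over the same TCP connection, which is the behaviour the
validation steps in section 5.1 rely on.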
4.4 DUT/SUT Traffic Flows

   Since the number of interfaces is not fixed, the traffic flows will
   be dependent upon the configuration used in benchmarking the
   DUT/SUT.  Note that the term "traffic flows" is associated with
   client-to-server requests.

   For dual-homed configurations, there are two unique traffic flows:

      Client               Server
      ------               ------
      Protected      ->    Unprotected
      Unprotected    ->    Protected

   For tri-homed configurations, there are three unique traffic flows:

      Client               Server
      ------               ------
      Protected      ->    Unprotected
      Protected      ->    DMZ
      Unprotected    ->    DMZ

4.5 Multiple Client/Server Testing

   One or more clients may target multiple servers for a given
   application.  Each virtual client MUST initiate requests
   (connections, object transfers, etc.) in a round-robin fashion.
   For example, if the test consisted of six virtual clients targeting
   three servers, the pattern would be as follows:

      Client    Target Server (in order of request)
      #1        1  2  3  1...
      #2        2  3  1  2...
      #3        3  1  2  3...
      #4        1  2  3  1...
      #5        2  3  1  2...
      #6        3  1  2  3...

4.6 NAT (Network Address Translation)

   Many firewalls implement network address translation (NAT), a
   function which translates internal host IP addresses attached to
   the protected network to a virtual IP address for communicating
   across the unprotected network (Internet).  This involves
   additional processing on the part of the DUT/SUT and may impact
   performance.  Therefore, tests SHOULD be run with NAT disabled and
   NAT enabled to determine the performance differentials.  The test
   report SHOULD indicate whether NAT was enabled or disabled.

4.7 Rule Sets

   Rule sets [1] are a collection of access control policies that
   determine which packets the DUT/SUT will forward and which it will
   reject.  The criteria by which these access control policies may be
   defined will vary depending on the capabilities of the DUT/SUT.
The 233 scope of this document is limited to how the rule sets should be 234 applied when testing the DUT/SUT. 236 The firewall monitors the incoming traffic and checks to make sure 237 that the traffic meets one of the defined rules before allowing it 238 to be forwarded. It is RECOMMENDED that a rule be entered for each 239 host(Virtual client). Although many firewalls permit groups of IP 240 addresses to be defined for a given rule, tests SHOULD be performed 241 with large rule sets, which are more stressful to the DUT/SUT. 243 The DUT/SUT SHOULD be configured to denies access to all traffic 244 which was not previously defined in the rule set. 246 4.7 Web Caching 248 Some firewalls include caching agents in order to reduce network 249 load. When making a request through a caching agent, the caching 250 agent attempts to service the response from its internal memory. 252 The cache itself saves responses it receives, such as responses 253 for HTTP GET requests. The report SHOULD indicate whether caching 254 was enabled or disabled on the DUT/SUT. 256 4.8 Authentication 258 Access control may involve authentication processes such as user, 259 client or session authentication. Authentication is usually 260 performed by devices external to the firewall itself, such as an 261 authentication servers and may add to the latency of the system. 262 Any authentication processes MUST be included as part of connection 263 setup process. 265 5. Benchmarking Tests 267 5.1 Concurrent Connection Capacity 269 5.1.1 Objective 271 To determine the maximum number of concurrent connections through 272 or with the DUT/SUT, as defined in RFC2647[1]. This test will employ 273 a step algorithm to obtain the maximum number of concurrent TCP 274 connections that the DUT/SUT can maintain. 276 5.1.2 Setup Parameters 278 The following parameters MUST be defined for all tests. 
   Connection Attempt Rate - The rate, expressed in connections per
   second, at which new TCP connection requests are attempted.  The
   rate SHOULD be set lower than the maximum rate at which the DUT/SUT
   can accept connection requests.

   Connection Step Count - Defines the number of additional TCP
   connections attempted for each iteration of the step search
   algorithm.

   Object Size - Defines the number of bytes to be transferred in
   response to an HTTP 1.1 GET request.  It is RECOMMENDED to use the
   minimum object size supported by the media.

5.1.3 Procedure

   Each virtual client will attempt to establish TCP connections to
   its target server(s), using either the target server's IP address
   or NAT proxy address, at a fixed rate in a round-robin fashion.
   Each iteration will involve the virtual clients attempting to
   establish a fixed number of additional TCP connections.  This
   search algorithm will be repeated until either:

      - One or more of the additional connection attempts fail to
        complete.
      - One or more of the previously established connections fail.

   The test MUST also include application layer data transfers in
   order to validate the TCP connections since, in the case of
   proxy-based DUT/SUTs, the tester does not own both sides of the
   connection.  For the purposes of validation, the virtual client(s)
   will request an object from its target server(s) using an HTTP 1.1
   GET request, with both the client request and server response
   excluding the connection-token close in the connection header.  In
   addition, periodic HTTP GET requests MAY be required to keep the
   underlying TCP connection open (see Appendix A).

5.1.4 Measurements

   Maximum concurrent connections - Total number of TCP connections
   open for the last successful iteration performed in the search
   algorithm.
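The search logic just described can be sketched in a few lines.  This
is a simplification, not the methodology itself: the DUT/SUT is
reduced to a bare capacity limit, and the per-connection GET
validation is assumed to succeed whenever the connection is within
that limit.  The round-robin helper reproduces the target-selection
pattern of section 4.5.

```python
# Sketch of the step search of section 5.1.3, with the DUT/SUT
# modelled as a simple capacity limit (an assumption for
# illustration only).
def target_server(client, request, num_servers):
    """Round-robin target of section 4.5; client and request are 1-based."""
    return (client - 1 + request - 1) % num_servers + 1

def concurrent_capacity(step_count, dut_capacity):
    """Attempt step_count additional connections per iteration until an
    attempt fails; report the last fully successful total."""
    established = 0
    while True:
        wanted = established + step_count
        if wanted > dut_capacity:   # one or more additional attempts failed
            return established
        established = wanted        # all new connections validated by a GET
```

For example, with a step count of 25 against a device that can hold
110 connections, the reported maximum is 100: the iteration that
attempts connections 101-125 fails, so the last successful total wins.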
5.1.5 Reporting Format

5.1.5.1 Transport-Layer Reporting:

   The test report MUST note the connection attempt rate, connection
   step count and maximum concurrent connections measured.

5.1.5.2 Application-Layer Reporting:

   The test report MUST note the object size(s) and the use of an
   HTTP 1.1 client and server.

5.1.5.3 Log Files

   A log file MAY be generated which includes the TCP connection
   attempt rate, HTTP object size and, for each iteration:

      - Step iteration.
      - Pass/fail status.
      - Total TCP connections established.
      - Number of previously established TCP connections dropped.
      - Number of the additional TCP connections that failed to
        complete.

5.2 Maximum Connection Setup Rate

5.2.1 Objective

   To determine the maximum TCP connection setup rate through or with
   the DUT/SUT, as defined by RFC 2647 [1].  This test will employ a
   search algorithm to obtain the maximum rate at which TCP
   connections can be established through or with the DUT/SUT.

5.2.2 Setup Parameters

   The following parameters MUST be defined.

   Initial Attempt Rate - The rate, expressed in connections per
   second, at which the initial TCP connection requests are attempted.

   Number of Connections - Defines the number of TCP connections that
   must be established.  The number MUST be between the number of
   participating virtual clients and the maximum number supported by
   the DUT/SUT.  It is RECOMMENDED not to exceed the concurrent
   connection capacity found in section 5.1.

   Connection Teardown Rate - The rate, expressed in connections per
   second, at which the tester will attempt to tear down TCP
   connections between each iteration.  The connection teardown rate
   SHOULD be set lower than the rate at which the DUT/SUT can tear
   down TCP connections.

   Age Time - The time, expressed in seconds, the DUT/SUT will keep a
   connection in its state table after receiving a TCP FIN or RST
   packet.
   Object Size - Defines the number of bytes to be transferred in
   response to an HTTP 1.1 GET request.  It is RECOMMENDED to use the
   minimum object size supported by the media.

5.2.3 Procedure

   An iterative search algorithm will be used to determine the maximum
   connection rate.  This test iterates through different connection
   rates with a fixed number of connections attempted by the virtual
   clients to their associated server(s).

   Each iteration will use the same connection establishment and
   connection validation algorithms defined in the concurrent capacity
   test (see section 5.1).

   Between each iteration of the test, the tester must close all
   connections completed for the previous iteration.  In addition, it
   is RECOMMENDED to abort all unsuccessful connection attempts.  The
   tester will wait for the period of time specified by age time
   before continuing to the next iteration.

5.2.4 Measurements

   Highest connection rate - Highest rate, in connections per second,
   for which all TCP connections completed successfully.

5.2.5 Reporting Format

5.2.5.1 Transport-Layer Reporting:

   The test report MUST note the number of connections attempted,
   connection teardown rate, age time, and highest connection rate
   measured.

5.2.5.2 Application-Layer Reporting:

   The test report MUST note the object size(s) and the use of an
   HTTP 1.1 client and server.

5.2.5.3 Log Files

   A log file MAY be generated which includes the total TCP
   connections attempted, TCP connection teardown rate, age time, HTTP
   object size and, for each iteration:

      - Step iteration.
      - Pass/fail status.
      - Total TCP connections established.
      - Number of TCP connections that failed to complete.

5.3 Connection Establishment Time

5.3.1 Objective

   To determine the connection establishment times [1] through or with
   the DUT/SUT as a function of the number of open connections.
   A connection for a client/server application is not atomic, in that
   it not only involves transactions at the application layer, but
   also involves first establishing a connection using one or more
   underlying connection oriented protocols (TCP, ATM, etc.).
   Therefore, it is encouraged to make separate measurements for each
   connection oriented protocol required in order to perform the
   application layer transaction.

5.3.2 Setup Parameters

   The following parameters MUST be defined.

   Connection Attempt Rate - The rate, expressed in connections per
   second, at which new TCP connection requests are attempted.  It is
   RECOMMENDED not to exceed the maximum connection rate found in
   section 5.2.

   Connection Attempt Step Count - Defines the number of additional
   TCP connections attempted for each iteration of the step algorithm.

   Maximum Attempt Connection Count - Defines the maximum number of
   TCP connections attempted in the test.  It is RECOMMENDED not to
   exceed the concurrent connection capacity found in section 5.1.

   Object Size - Defines the number of bytes to be transferred in
   response to an HTTP 1.1 GET request.

   Number of Requests - Defines the number of HTTP 1.1 GET requests
   per connection.  Note that connection, in this case, refers to the
   underlying transport protocol.

5.3.3 Procedure

   Each virtual client will attempt to establish TCP connections to
   its target server(s) at a fixed rate in a round-robin fashion.
   Each iteration will involve the virtual clients attempting to
   establish a fixed number of additional connections until the
   maximum attempt connection count is reached.

   As with the concurrent capacity tests, application layer data
   transfers will be performed.
Each virtual client will request
   one or more objects from its target server(s) using one or more
   HTTP 1.1 GET requests, with both the client request and server
   response excluding the connection-token close in the connection
   header.  In addition, periodic HTTP GET requests MAY be required to
   keep the underlying TCP connection open (see appendix A).

   Since testing may involve proxy-based DUT/SUTs, which terminate the
   TCP connection, making a direct measurement of the TCP connection
   establishment time is not possible, since the protocol involves an
   odd number of messages in establishing a connection.  Therefore,
   when testing with proxy-based firewalls, the datagram following the
   final ACK of the three-way handshake will be used in determining
   the connection setup time.

   The following shows the timeline for the TCP connection setup
   involving a proxy DUT/SUT and is referenced in the measurement
   section.  Note that this method may be applied when measuring other
   connection oriented protocols involving an odd number of messages
   in establishing a connection.

      t0:  Client sends a SYN.
      t1:  Proxy sends a SYN/ACK.
      t2:  Client sends the final ACK.
      t3:  Proxy establishes separate connection with server.
      t4:  Client sends TCP datagram to server.
     *t5:  Proxy sends ACK of the datagram to client.

   * While t5 is not considered part of the TCP connection
   establishment, acknowledgement of t4 must be received for the
   connection to be considered successful.

5.3.4 Measurements

   For each iteration of the test, the tester MUST measure the
   minimum, maximum and average TCP connection establishment times.
   Measurement of TCP connection establishment times will be made in
   two different ways, depending on whether or not the DUT/SUT is
   proxy based.
If proxy
   based, the connection establishment time is considered to be from
   the time the first bit of the SYN packet is transmitted by the
   client to the time the client transmits the first bit of the TCP
   datagram, provided that the TCP datagram gets acknowledged (t4-t0
   in the above timeline).  For DUT/SUTs that are not proxy based, the
   establishment time shall be directly measured and is considered to
   be from the time the first bit of the SYN packet is transmitted by
   the client to the time the last bit of the final ACK in the
   three-way handshake is received by the target server.

   In addition, the tester SHOULD measure the minimum, maximum and
   average connection establishment times for all other underlying
   connection oriented protocols which are required to be established
   for the client/server application to transfer an object.  Each
   connection oriented protocol has its own set of transactions
   required for establishing a connection between two hosts or a host
   and the DUT/SUT.  For purposes of benchmarking firewall
   performance, the connection establishment time will be considered
   the interval between the transmission of the first bit of the first
   octet of the packet carrying the connection request and receipt of
   the last bit of the last octet of the last packet of the connection
   setup traffic received on the client or server, depending on
   whether a given connection requires an even or odd number of
   messages, respectively.

5.3.5 Reporting Format

   The test report MUST note the TCP connection attempt rate, TCP
   connection attempt step count, maximum TCP connections attempted,
   HTTP object size and number of requests per connection.

   For each connection oriented protocol the tester measured, the
   connection establishment time results SHOULD be in tabular form,
   with a row for each iteration of the test.
There SHOULD be a
   column for the iteration count, minimum connection establishment
   time, average connection establishment time, maximum connection
   establishment time, attempted connections completed, and attempted
   connections failed.

5.4 Connection Teardown Time

5.4.1 Objective

   To determine the connection teardown time [1] through or with the
   DUT/SUT as a function of the number of open connections.  As with
   the connection establishment time, separate measurements will be
   taken for each connection oriented protocol involved in closing a
   connection.

5.4.2 Setup Parameters

   The following parameters MUST be defined.  Each parameter is
   configured with the following considerations.

   Initial Connections - Defines the number of TCP connections with
   which to initialize the test.  It is RECOMMENDED not to exceed the
   concurrent connection capacity found in section 5.1.

   Initial Connection Rate - Defines the rate, in connections per
   second, at which the initial TCP connections are attempted.  It is
   RECOMMENDED not to exceed the maximum connection setup rate found
   in section 5.2.

   Teardown Attempt Rate - The rate at which the tester will attempt
   to tear down TCP connections.

   Teardown Step Count - Defines the number of TCP connections the
   tester will attempt to tear down for each iteration of the step
   algorithm.

   Object Size - Defines the number of bytes to be transferred across
   each connection in response to an HTTP 1.1 GET request during the
   initialization phase of the test, as well as for periodic GET
   requests, if required.

5.4.3 Procedure

   Prior to beginning the step algorithm, the tester will initialize
   the test by establishing the number of connections defined by
   initial connections.  The test will use the same algorithm for
   establishing the connections as described in the connection
   capacity test (section 5.1).
   For each iteration of the step algorithm, the tester will attempt
   to tear down the number of connections defined by teardown step
   count at a rate defined by teardown attempt rate.  This will be
   repeated until the tester has attempted to tear down all of the
   connections.

5.4.4 Measurements

   For each iteration of the test, the tester MUST measure the
   minimum, average and maximum connection teardown times.  As with
   the connection establishment time test, the tester SHOULD measure
   all connection oriented protocols which are being torn down.

5.4.5 Reporting Format

   The test report MUST note the initial connections, initial
   connection rate, teardown attempt rate, teardown step count and
   object size.

   For each connection oriented protocol the tester measured, the
   connection teardown time results SHOULD be in tabular form, with a
   row for each iteration of the test.  There SHOULD be a column for
   the iteration count, minimum connection teardown time, average
   connection teardown time, maximum connection teardown time,
   attempted teardowns completed, and attempted teardowns failed.

5.5 Denial Of Service Handling

5.5.1 Objective

   To determine the effect of a denial of service attack on the
   DUT/SUT's connection establishment rates and/or goodput.  The
   Denial Of Service Handling test MUST be run after obtaining
   baseline measurements from sections 5.2 and/or 5.6.

   The TCP SYN flood attack exploits TCP's three-way handshake
   mechanism by having an attacking source host generate TCP SYN
   packets with random source addresses towards a victim host, thereby
   consuming that host's resources.

   Some firewalls employ mechanisms to guard against SYN attacks.
If such
   mechanisms exist on the DUT/SUT, tests SHOULD be run with these
   mechanisms enabled to determine how well the DUT/SUT can maintain,
   under such attacks, the baseline connection rates and goodput
   determined in section 5.2 and section 5.6, respectively.

5.5.2 Setup Parameters

   Use the same setup parameters as defined in section 5.2.2 or 5.6.2,
   depending on whether testing against the baseline connection setup
   rate test or goodput test, respectively.

   In addition, the following setup parameter MUST be defined.

   SYN Attack Rate - Defines the rate, in packets per second, at which
   the server(s) are targeted with TCP SYN packets.

5.5.3 Procedure

   Use the same procedure as defined in section 5.2.3 or 5.6.3,
   depending on whether testing against the baseline connection setup
   rate test or goodput test, respectively.  In addition, the tester
   will generate TCP SYN packets targeting the server(s) IP address or
   NAT proxy address at a rate defined by SYN attack rate.

   The tester originating the TCP SYN attack MUST be attached to the
   unprotected network.  In addition, the tester MUST NOT respond to
   the SYN/ACK packets sent by the target server in response to the
   SYN packet.

5.5.4 Measurements

   Perform the same measurements as defined in section 5.2.4 or 5.6.4,
   depending on whether testing against the baseline connection setup
   rate test or goodput test, respectively.

   In addition, the tester SHOULD track SYN packets associated with
   the SYN attack which the DUT/SUT forwards on the protected or DMZ
   interface(s).

5.5.5 Reporting Format

   The test SHOULD use the same reporting format as described in
   section 5.2.5 or 5.6.5, depending on whether testing against
   baseline throughput rates or goodput, respectively.
In addition, the report MUST indicate a denial of service handling
test, the SYN attack rate, the number of SYN attack packets
transmitted, the number of SYN attack packets received and whether
or not the DUT/SUT has any SYN attack mechanisms enabled.

5.6 HTTP

5.6.1 Objective

To determine the goodput, as defined by RFC 2647 [1], of the DUT/SUT
when presented with HTTP traffic flows. The goodput measurement will
be based on HTTP objects forwarded to the correct destination
interface of the DUT/SUT.

5.6.2 Setup Parameters

The following parameters MUST be defined.

Number of sessions - Defines the number of HTTP 1.1 sessions to be
attempted for transferring an HTTP object(s). The number MUST be
equal to or greater than the number of virtual clients participating
in the test. The number SHOULD be a multiple of the number of
virtual clients participating in the test. Note that each session
will use one underlying transport layer connection.

Session rate - Defines the rate, in sessions per second, at which
the HTTP sessions are attempted.

Requests per session - Defines the number of HTTP GET requests per
session.

Object size - Defines the number of bytes to be transferred in
response to an HTTP GET request.

5.6.3 HTTP Procedure

Each HTTP 1.1 virtual client will attempt to establish sessions to
its HTTP 1.1 target server(s), using either the target server's IP
address or NAT proxy address, at a fixed rate in a round-robin
fashion.

Baseline measurements SHOULD be performed using a single GET request
per HTTP session with the minimal object size supported by the
media. If the tester makes multiple HTTP GET requests per session,
it MUST request the same-sized object each time. Testers may run
multiple iterations of this test with objects of different sizes.
See appendix A regarding HTTP version considerations when testing a
proxy-based DUT/SUT.
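The fixed-rate, round-robin session scheduling described above can
be sketched as follows. The function name and parameters are
illustrative (not defined by this methodology), and the addresses
are documentation addresses:

```python
from typing import List, Tuple

def schedule_sessions(num_sessions: int, session_rate: float,
                      servers: List[str]) -> List[Tuple[float, str]]:
    """Assign each HTTP 1.1 session attempt a start time and target.

    Sessions are attempted at a fixed rate (sessions per second) and
    target servers are selected in round-robin order; each session
    maps to one underlying transport layer connection.
    """
    schedule = []
    for i in range(num_sessions):
        start_time = i / session_rate        # fixed attempt rate
        target = servers[i % len(servers)]   # round-robin selection
        schedule.append((start_time, target))
    return schedule

# 4 sessions at 2 sessions/s across two target servers
plan = schedule_sessions(4, 2.0, ["192.0.2.10", "192.0.2.11"])
```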
5.6.4 Measurements

Aggregate goodput - The aggregate bit forwarding rate of the
requested HTTP objects. The measurement will start on receipt of the
first bit of the first packet containing a requested object which
has been successfully transferred and end on receipt of the last bit
of the last packet containing the last requested object that has
been successfully transferred. The goodput, in bits per second, can
be calculated using the following formula:

              OBJECTS * OBJECTSIZE * 8
    Goodput = ------------------------
                      DURATION

    OBJECTS    - Objects successfully transferred

    OBJECTSIZE - Object size in bytes

    DURATION   - Aggregate transfer time based on the aforementioned
                 time references

5.6.5 Reporting Format

The test report MUST note the object size(s), number of sessions,
session rate and requests per session.

The goodput results SHOULD be reported in tabular form with a row
for each of the object sizes. There SHOULD be columns for the object
size, measured goodput and number of successfully transferred
objects.

Failure analysis:

The test report SHOULD indicate the number and percentage of HTTP
sessions that failed to complete the requested number of
transactions, with a transaction being the GET request and the
successfully returned object.

Version information:

The test report MUST note the use of an HTTP 1.1 client and server.

5.7 IP Fragmentation

5.7.1 Objective

To determine the performance impact when the DUT/SUT is presented
with fragmented IP traffic [5]. IP datagrams which have been
fragmented, due to crossing a network that supports a smaller MTU
(Maximum Transmission Unit) than the actual datagram, may require
the firewall to perform reassembly prior to the datagram being
applied to the rule set.
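The reassembly burden placed on the firewall can be illustrated with
a simplified sketch in the spirit of RFC 815 [5]. This is not a full
implementation: real reassembly must also handle overlapping
fragments, the more-fragments flag and reassembly timers.

```python
def reassemble(fragments):
    """Reassemble a datagram from (offset, data) fragments.

    Offsets are in 8-octet units, as carried in the IPv4 header's
    fragment offset field. This sketch assumes all fragments have
    arrived and do not overlap.
    """
    buf = bytearray()
    for offset, data in sorted(fragments):
        byte_offset = offset * 8
        if byte_offset != len(buf):   # gap found: datagram incomplete
            return None
        buf.extend(data)
    return bytes(buf)

# Split a 24-byte payload into three 8-byte fragments, out of order
payload = bytes(range(24))
frags = [(2, payload[16:24]), (0, payload[0:8]), (1, payload[8:16])]
```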
While IP fragmentation is a common form of attack, either on the
firewall itself or on internal hosts, this test will focus on
determining the effect that the additional processing associated
with the reassembly of the datagrams has on the goodput of the
DUT/SUT.

5.7.2 Setup Parameters

The following parameters MUST be defined.

Trial duration - Trial duration SHOULD be set for 30 seconds.

5.7.2.1 Non-Fragmented Traffic Parameters

Session rate - Defines the rate, in sessions per second, at which
the HTTP sessions are attempted.

Requests per session - Defines the number of HTTP GET requests per
session.

Object size - Defines the number of bytes to be transferred in
response to an HTTP GET request.

5.7.2.2 Fragmented Traffic Parameters

Packet size - Expressed as the number of bytes in the UDP/IP packet,
exclusive of link-layer headers and checksums.

Fragmentation length - Defines the length of the data portion of the
IP datagram and MUST be a multiple of 8. Testers SHOULD use the
minimum value, but MAY use other sizes as well.

Intended load - Intended load, expressed as a percentage of media
utilization.

5.7.3 Procedure

Each HTTP 1.1 virtual client will attempt to establish sessions to
its HTTP 1.1 target server(s), using either the target server's IP
address or NAT proxy address, at a fixed rate in a round-robin
fashion. At the same time, a client attached to the unprotected side
of the network will offer a unidirectional stream of unicast UDP/IP
packets to a server connected to the protected side of the network.
The tester MUST offer the UDP/IP packets in a steady state.

Baseline measurements SHOULD be performed with a deny rule(s) that
filters the fragmented traffic. If the DUT/SUT has logging
capability, the log SHOULD be checked to determine if it contains
the correct information regarding the fragmented traffic.
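The fragmented UDP/IP stream offered above can be modeled by
splitting each datagram's data into pieces of the configured
fragmentation length. A sketch of the offset and flag bookkeeping
(function name illustrative, not part of the methodology):

```python
def fragment(payload: bytes, frag_len: int):
    """Split a datagram payload into IP fragments.

    frag_len is the fragmentation length and MUST be a multiple of 8,
    since the IPv4 fragment offset field counts 8-octet units.
    Returns (offset_in_8_octet_units, more_fragments_flag, data)
    tuples.
    """
    assert frag_len % 8 == 0, "fragmentation length MUST be a multiple of 8"
    pieces = []
    for start in range(0, len(payload), frag_len):
        data = payload[start:start + frag_len]
        more = (start + frag_len) < len(payload)  # MF set on all but last
        pieces.append((start // 8, more, data))
    return pieces

# 48-byte fragments of a 100-byte payload
frags = fragment(bytes(100), 48)
```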
The test SHOULD be repeated with the DUT/SUT rule set changed to
allow the fragmented traffic through. When running multiple
iterations of the test, it is RECOMMENDED to vary the fragmentation
length while keeping all other parameters constant.

5.7.4 Measurements

Aggregate goodput - The aggregate bit forwarding rate of the
requested HTTP objects (see section 5.6). Only objects which have
successfully completed transferring within the trial duration are to
be included in the goodput measurement.

Transmitted UDP/IP packets - Number of UDP/IP packets transmitted by
the client.

Received UDP/IP packets - Number of UDP/IP packets received by the
server.

5.7.5 Reporting Format

The test report MUST note the trial duration.

The test report MUST note the packet size(s), offered load(s) and IP
fragmentation length of the UDP/IP traffic. It SHOULD also note
whether or not the DUT/SUT egresses the offered UDP/IP traffic
fragmented.

The test report MUST note the object size(s), session rate and
requests per session.

The results SHOULD be reported in the format of a table with a row
for each of the fragmentation lengths. There SHOULD be columns for
the fragmentation length, UDP/IP packets transmitted by the client,
UDP/IP packets received by the server, HTTP object size, and
measured goodput.

5.8 Illegal Traffic Handling

5.8.1 Objective

To determine the behavior of the DUT/SUT when presented with a
combination of both legal and illegal traffic flows. Note that
illegal traffic does not refer to an attack, but to traffic which
has been explicitly defined by a rule(s) to be dropped.

5.8.2 Setup Parameters

The following parameters MUST be defined.

Number of sessions - Defines the number of HTTP 1.1 sessions to be
attempted for transferring an HTTP object(s). The number MUST be
equal to or greater than the number of virtual clients participating
in the test.
The number SHOULD be a multiple of the number of virtual clients
participating in the test. Note that each session will use one
underlying transport layer connection.

Session rate - Defines the rate, in sessions per second, at which
the HTTP sessions are attempted.

Requests per session - Defines the number of HTTP GET requests per
session.

Object size - Defines the number of bytes to be transferred in
response to an HTTP GET request.

Illegal traffic percentage - Percentage of HTTP 1.1 sessions which
have been explicitly defined in a rule(s) to be dropped.

5.8.3 Procedure

Each HTTP 1.1 virtual client will attempt to establish sessions to
its HTTP 1.1 target server(s), using either the target server's IP
address or NAT proxy address, at a fixed rate in a round-robin
fashion.

The tester MUST present the connection requests, both legal and
illegal, in an evenly distributed manner. Many firewalls have the
capability to filter on different traffic criteria (IP addresses,
port numbers, etc.). Testers may run multiple iterations of this
test with the DUT/SUT configured to filter on different traffic
criteria.

5.8.4 Measurements

Legal sessions allowed - Number and percentage of legal HTTP
sessions which completed.

Illegal sessions allowed - Number and percentage of illegal HTTP
sessions which completed.

5.8.5 Reporting Format

The test report MUST note the number of sessions, session rate,
requests per session, percentage of illegal sessions and measurement
results. The results SHOULD be reported in the form of a table with
a row for each of the object sizes. There SHOULD be columns for the
object size, number of legal sessions attempted, number of legal
sessions successful, number of illegal sessions attempted and number
of illegal sessions successful.
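The even distribution of legal and illegal session attempts required
by the procedure in section 5.8.3 can be sketched as follows (the
function name and marker strings are illustrative, not part of the
methodology):

```python
def interleave_sessions(num_sessions: int, illegal_fraction: float):
    """Mark each session attempt legal or illegal so that illegal
    sessions are spread evenly through the run rather than bunched.

    Uses an error accumulator so the running illegal share tracks
    illegal_fraction (e.g. 0.25 for an illegal traffic percentage
    of 25%) at every point in the sequence.
    """
    marks, acc = [], 0.0
    for _ in range(num_sessions):
        acc += illegal_fraction
        if acc >= 1.0:
            marks.append("illegal")
            acc -= 1.0
        else:
            marks.append("legal")
    return marks

# With 25% illegal traffic, every fourth session attempt is illegal
marks = interleave_sessions(8, 0.25)
```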
5.9 Latency

5.9.1 Objective

To determine the latency of network-layer or application-layer data
traversing the DUT/SUT. RFC 1242 [3] defines latency.

5.9.2 Setup Parameters

The following parameters MUST be defined:

5.9.2.1 Network-layer Measurements

Packet size, expressed as the number of bytes in the IP packet,
exclusive of link-layer headers and checksums.

Intended load, expressed as a percentage of media utilization.

Offered load, expressed as a percentage of media utilization.

Test duration, expressed in seconds.

Test instruments MUST generate packets with unique timestamp
signatures.

5.9.2.2 Application-layer Measurements

Object size, expressed as the number of bytes to be transferred
across a connection in response to an HTTP GET request. Testers
SHOULD use the minimum object size supported by the media, but MAY
use other object sizes as well.

Connection type. The tester MUST use one HTTP 1.1 connection for
latency measurements.

Number of objects requested.

Number of objects transferred.

Test duration, expressed in seconds.

Test instruments MUST generate packets with unique timestamp
signatures.

5.9.3 Network-layer Procedure

A client will offer a unidirectional stream of unicast packets to a
server. The packets MUST use a connectionless protocol such as IP or
UDP/IP.

The tester MUST offer packets in a steady state. As noted in the
latency discussion in RFC 2544 [4], latency measurements MUST be
taken at the throughput level -- that is, at the highest offered
load with zero packet loss. Measurements taken at the throughput
level are the only ones that can legitimately be termed latency.

It is RECOMMENDED that implementers use offered loads not only at
the throughput level, but also at load levels that are less than or
greater than the throughput level.
To avoid confusion with existing terminology, measurements from such
tests MUST be labeled as delay rather than latency. If desired, the
tester MAY use a step test in which offered loads increment or
decrement through a range of load levels.

The duration of the test portion of each trial MUST be at least 30
seconds.

5.9.4 Application-layer Procedure

An HTTP 1.1 client will request one or more objects from an HTTP 1.1
server using one or more HTTP GET requests. If the tester makes
multiple HTTP GET requests, it MUST request the same-sized object
each time. Testers may run multiple iterations of this test with
objects of different sizes.

Implementers MAY configure the tester to run for a fixed duration.
In this case, the tester MUST report the number of objects requested
and returned for the duration of the test. For fixed-duration tests
it is RECOMMENDED that the duration be at least 30 seconds.

5.9.5 Measurements

Minimum delay - The smallest delay incurred by data traversing the
DUT/SUT at the network layer or application layer, as appropriate.

Maximum delay - The largest delay incurred by data traversing the
DUT/SUT at the network layer or application layer, as appropriate.

Average delay - The mean of all measurements of delay incurred by
data traversing the DUT/SUT at the network layer or application
layer, as appropriate.

Delay distribution - A set of histograms of all delay measurements
observed for data traversing the DUT/SUT at the network layer or
application layer, as appropriate.

5.9.6 Network-layer Reporting Format

The test report MUST note the packet size(s), offered load(s) and
test duration used.

The latency results SHOULD be reported in the format of a table with
a row for each of the tested packet sizes.
There SHOULD be columns for the packet size, the intended rate, the
offered rate, and the resultant latency or delay values for each
test.

5.9.7 Application-layer Reporting Format

The test report MUST note the object size(s) and the number of
requests and responses completed. If applicable, the report MUST
note the test duration if a fixed duration was used.

The latency results SHOULD be reported in the format of a table with
a row for each of the object sizes. There SHOULD be columns for the
object size, the number of completed requests, the number of
completed responses, and the resultant latency or delay values for
each test.

Failure analysis:

The test report SHOULD indicate the number and percentage of HTTP
GET requests or responses that failed to complete within the test
duration.

Version information:

The test report MUST note the use of an HTTP 1.1 client and server.

APPENDICES

APPENDIX A: HTTP (HyperText Transfer Protocol)

The most common versions of HTTP in use today are HTTP/1.0 and
HTTP/1.1, with the main difference being in regard to persistent
connections. HTTP/1.0, by default, does not support persistent
connections. A separate TCP connection is opened for each GET
request the client wants to initiate and closed after the requested
object transfer is completed. Some implementations of HTTP/1.0
support persistence by adding an additional header to the
request/response:

    Connection: Keep-Alive

However, under HTTP/1.0, there is no official specification for how
the keep-alive operates. In addition, HTTP/1.0 proxies do not
support persistent connections, as they do not recognize the
connection header.

HTTP/1.1, by default, does support persistent connections and is
therefore the version that is referenced in this methodology.
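The default persistence behavior of the two versions can be
summarized in a small sketch. This is a simplification of the full
HTTP/1.1 rules (function name illustrative; it ignores proxy
behavior and multi-token Connection headers):

```python
from typing import Optional

def connection_persists(http_version: str,
                        connection_header: Optional[str]) -> bool:
    """Decide whether the transport connection stays open after a
    transaction, per the default behavior of each HTTP version.

    HTTP/1.1 is persistent unless "Connection: close" is present;
    HTTP/1.0 is non-persistent unless "Connection: Keep-Alive" is
    sent (an extension HTTP/1.0 proxies do not honor).
    """
    token = (connection_header or "").strip().lower()
    if http_version == "HTTP/1.1":
        return token != "close"
    if http_version == "HTTP/1.0":
        return token == "keep-alive"
    return False
```

For example, an HTTP/1.1 response with no Connection header leaves
the connection open for the next GET request on the same session.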
When HTTP/1.1 entities want the underlying transport layer
connection closed after a transaction has completed, the
request/response will include a connection-token "close" in the
connection header:

    Connection: close

If no such connection-token is present, the connection remains open
after the transaction is completed. In addition, proxy-based
DUT/SUTs may monitor the TCP connection and, after a timeout, close
the connection if no activity is detected. The duration of this
timeout is not defined in the HTTP/1.1 specification and will vary
between DUT/SUTs. When performing concurrent connection testing, GET
requests MAY need to be issued at a periodic rate so that the proxy
does not close the TCP connection.

While this document cannot foresee future changes to HTTP and their
impact on the methodologies defined herein, such changes should be
accommodated so that newer versions of HTTP may be used in
benchmarking firewall performance.

Appendix B. References

[1] Newman, D., "Benchmarking Terminology for Firewall Devices",
    RFC 2647, August 1999.

[2] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L.,
    Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol --
    HTTP/1.1", RFC 2616, June 1999.

[3] Bradner, S., Editor, "Benchmarking Terminology for Network
    Interconnection Devices", RFC 1242, July 1991.

[4] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
    Network Interconnect Devices", RFC 2544, March 1999.

[5] Clark, D., "IP Datagram Reassembly Algorithms", RFC 815,
    July 1982.